Computer technology trends for 2016

The PC market seems to be stabilizing in 2016; I expect it to shrink only slightly. While mobile devices have been named as culprits for the fall in PC shipments, IDC said that other factors may be in play. It is still pretty hard to make any decent profit building PC hardware unless you are one of the biggest players – so Lenovo, HP, and Dell will again increase their collective dominance of the PC market like they did in 2015. I expect changes like spin-offs and maybe some mergers involving smaller players like Fujitsu, Toshiba and Sony. The EMEA server market looks to be a two-horse race between Hewlett Packard Enterprise and Dell, according to Gartner. HPE, Dell and Cisco “all benefited” from Lenovo’s acquisition of IBM’s EMEA x86 server organisation.

The tablet market is no longer a high-growth market – tablet sales have started to decline, and the decline continues in 2016 as owners hold onto their existing devices for more than three years. iPad sales are set to continue declining, and the iPad Air 3, to be released in the first half of 2016, does not change that. IDC predicts that the detachable tablet market is set for growth in 2016 as more people turn to hybrid devices. Two-in-one tablets have been popularized by offerings like the Microsoft Surface, with options ranging dramatically in price and specs. I am not myself convinced that the growth will be as strong as IDC forecasts, even though companies have started to purchase tablets for workers in jobs such as retail sales or field work (Apple iPads, Windows and Android tablets managed by the company). Combined volume shipments of PCs, tablets and smartphones are expected to increase only in the single digits.

All your consumer tech gear should be cheaper come July, as there will be lower import tariffs on IT products: a World Trade Organization (WTO) deal agrees that tariffs on imports of consumer electronics will be phased out over seven years starting in July 2016. The agreement affects around 10 percent of world trade in information and communications technology products and will eliminate around $50 billion in tariffs annually.


In 2015 storage was rocked to its foundations, and those innovations will be taken into wider use in 2016. The storage market in 2015 went through strategic, foundation-shaking turmoil as the external shared disk array playbook was torn to shreds: the all-flash data centre idea has definitely taken off as an achievable vision, with primary data stored in flash and the rest held in cheap-and-deep storage. Flash drives largely solve the disk drive latency problem, so there is much less need for hybrid drives. There is conviction that storage should be located as close to servers as possible (virtual SANs, hyper-converged infrastructure appliances and NVMe fabrics). The hybrid cloud concept was adopted and supported by everybody. Flash started out in 2-bits/cell MLC form, which rapidly became standard, and TLC (3-bits/cell, or triple-level cell) had started appearing. Industry-standard NVMe drivers for PCIe flash cards appeared. Intel and Micron blew non-volatile memory preconceptions out of the water in the second half of the year with their joint 3D XPoint memory announcement. Boring old disk tech got shingled magnetic recording (SMR) and helium-filled drive technology; the drive industry is focused on capacity-optimizing its drives. We got key:value store disk drives with an Ethernet NIC on board, and basic GET and PUT object storage facilities came into being. The tape industry developed the 15TB (compressed) LTO-7 format.

The use of SSDs will increase and their price will drop. SSDs were in more than 25% of new laptops sold in 2015, are expected to be in 31% of new consumer laptops in 2016, and in more than 40% by 2017. The prices of mainstream consumer SSDs have fallen dramatically every year over the past three years, while HDD prices have not changed much. SSD prices will decline to 24 cents per gigabyte in 2016, and in 2017 they are expected to drop to 11-17 cents per gigabyte (meaning a 1TB SSD would on average retail for $170 or less).
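To make the arithmetic behind those price points explicit, here is a trivial back-of-the-envelope check in Python (a sketch assuming a simple linear price, capacity times price per gigabyte):

    # Sanity check of the quoted SSD price points (assumes linear $/GB pricing)
    def ssd_retail_price(capacity_gb, cents_per_gb):
        return capacity_gb * cents_per_gb / 100.0

    print(ssd_retail_price(1000, 24))  # 2016: 1TB at 24 c/GB -> $240
    print(ssd_retail_price(1000, 17))  # 2017 high end: $170
    print(ssd_retail_price(1000, 11))  # 2017 low end: $110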

Hard disk sales will decrease, but the technology is not dead. Sales of hard disk drives have been decreasing for several years now (118 million units in the third quarter of 2015), but according to Seagate, hard disk drives (HDDs) are set to stay relevant for at least 15 to 20 years. HDDs remain the most popular data storage technology as they are the cheapest in terms of per-gigabyte cost. While SSDs are generally getting more affordable, high-capacity solid-state drives are not going to become as inexpensive as hard drives any time soon.

Because all-flash storage systems with homogeneous flash media are still too expensive to serve as a solution for every enterprise application workload, enterprises will increasingly turn to performance-optimized storage solutions that use a combination of multiple media types to deliver cost-effective performance. The speed advantage of Fibre Channel over Ethernet has evaporated. Enterprises will also start to seek alternatives to snapshots that are simpler and easier to manage, and that allow data and application recovery to a point just before a data error or logical corruption occurred.

Local storage and the cloud finally make peace in 2016, as decision-makers across the industry have now acknowledged the potential for enterprise storage and the cloud to work in tandem. Over 40 percent of data worldwide is expected to live on or move through the cloud by 2020, according to IDC.


Open standards for data center development are now a reality thanks to advances in cloud technology, with Facebook’s Open Compute Project serving as the industry’s leader in this regard. This allows more consolidation for those that want it. Consolidation used to refer to companies moving all of their infrastructure to the same facility, but some experts have begun to question this strategy, as the rapid increase in data quantities and apps in the data center has made centralized facilities more difficult to operate than ever before. Server virtualization, more powerful servers and an increasing number of enterprise applications will continue to drive higher I/O requirements in the datacenter.

Cloud consolidation starts in earnest in 2016: the number of options for general infrastructure-as-a-service (IaaS) cloud services and cloud management software will be much smaller at the end of 2016 than at the beginning. The major public cloud providers will gain strength, with Amazon, IBM SoftLayer, and Microsoft capturing a greater share of the business cloud services market. Lock-in is a real concern for cloud users, because PaaS players have the ancient imperative to find ways to tie customers to their platforms and aren’t afraid to use them, so advanced users want to establish reliable portability across PaaS products in a multi-vendor, multi-cloud environment.

2016 will be harder for legacy IT providers than 2015 was. In its report, IDC states that “By 2020, More than 30 percent of the IT Vendors Will Not Exist as We Know Them Today.” Many enterprises are turning away from traditional vendors and toward cloud providers. They are increasingly leveraging open source. In short, they are becoming software companies. The best companies will build cultures of performance and doing the right thing, and will make data and the processes around it self-service for all their employees. Design thinking will guide companies that want to change the lives of their customers and employees. 2016 will see a lot more work in trying to manage services that simply aren’t designed to work together or even to be managed – for example, getting whatever-as-a-service cloud systems to play nicely with existing legacy systems. Competent developers are therefore the scarce commodity. Some companies are starting to see cloud as a form of outsourcing that is fast burning up in-house IT ops jobs, with varying success.

There are still too many old-fashioned companies that just can’t understand what digitalization will mean to their business. In 2016, some companies’ boards still think the web is just for brochures and porn and don’t believe their business models can be disrupted. It gets worse for many traditional companies: for example, Amazon is a retailer both on the web and, increasingly, for things like food deliveries. Amazon and others are playing to win. Digital disruption has happened and will continue.

More Windows 10 is coming in 2016. If 2015 was a year of revolution, 2016 promises to be a year of consolidation for Microsoft’s operating system. I expect Windows 10 adoption in companies to start in 2016. Windows 10 is likely to be a success in the enterprise, but I expect that the word from heavyweights like Gartner, Forrester and Spiceworks, suggesting that half of enterprise users plan to switch to Windows 10 in 2016, is more than a bit optimistic. Windows 10 will also be used in China, as Microsoft played the game better with it than with Windows 8, which was banned there.

Windows is now delivered “as a service”, meaning incremental updates with new features as well as security patches, but Microsoft still seems to work internally to a schedule of milestone releases. Next up is Redstone, rumoured to arrive around the anniversary of Windows 10, midway through 2016. Windows servers will also get an update: 2016 should see the release of Windows Server 2016, which includes updates to the Hyper-V virtualisation platform, support for Docker-style containers, and a new cut-down edition called Nano Server.

Windows 10 will get some of the features promised but not delivered in 2015. Windows 10 was promised for PCs and mobile devices in 2015 to deliver a unified user experience. Continuum is a new, adaptive user experience in Windows 10 that optimizes the look and behavior of apps and the Windows shell for the physical form factor and the customer’s usage preferences. The promise was the same unified interface for PCs, tablets and smartphones – but in 2015 it was delivered only for PCs and some tablets. Mobile Windows 10 for smartphones is expected to finally arrive in 2016, and its release may be the last roll of the dice for Microsoft’s struggling mobile platform. Microsoft’s Plan A is to get as many apps and as much activity as it can on Windows on every form factor with the Universal Windows Platform (UWP), which enables the same Windows 10 code to run on phone and desktop. Despite a steady inflow of new well-known apps, it remains unclear whether the Universal Windows Platform can maintain momentum with developers. Can Microsoft keep the developer momentum going? I am not sure. There are also plans for tools for porting iOS apps and an Android runtime, so expect delivery of some or all of the Windows Bridges (iOS, web app, desktop app, Android) announced at the April 2015 Build conference, in the hope of getting more apps into the unified Windows 10 app store. Windows 10 does hold out some promise for Windows Phone, but it’s not going to make an enormous difference. Losing the battle for the web and mobile computing is a brutal loss for Microsoft: when you consider the size of those two markets combined, the desktop market seems like a stagnant backwater.

Older Windows versions will not die in 2016 as fast as Microsoft and security people would like. Expect Windows 7 diehards to continue holding out in 2016 and beyond. And there are still many companies that run their critical systems on Windows XP, as “there are some people who don’t have an option to change.” Often the OS is running automation and process-control systems behind business- and mission-critical operations, in both private-sector and government enterprises. For example, the US Navy is using the obsolete Microsoft Windows XP to run critical tasks. It all comes down to money and resources, but being obliged to keep something running on an obsolete system is completely the wrong approach to information security.


Virtual reality has grown immensely over the past few years, but 2016 looks like the most important year yet: it will be the first time that consumers can get their hands on a number of powerful headsets for viewing alternate realities in immersive 3-D. Virtual reality will become mainstream when Sony, Samsung and Oculus bring consumer products to market in 2016. The whole virtual reality hype could be rebooted as early builds of the final Oculus Rift hardware start shipping to devs. Maybe HTC’s and Valve’s Vive VR headset will suffer in the next few months. Expect a banner year for virtual reality.

GPU and FPGA acceleration will be widely used in high-performance computing. Both Intel and AMD have products with a CPU and GPU on the same chip, and there is software support for using the GPU (learn CUDA and/or OpenCL). Many mobile processors also have the CPU and GPU on the same chip. FPGAs are circuits that can be baked into a specific application, but can also be reprogrammed later. There was lots of interest in 2015 in using FPGAs for accelerating computations as the next step after GPUs, and I expect that interest to grow even more in 2016. FPGAs are not quite as efficient as a dedicated ASIC, but they are about as close as you can get without translating the actual source code directly into a circuit. Intel bought Altera (a big FPGA company) in 2015 and plans to begin selling products with a Xeon chip and an Altera FPGA in a single package, possibly available in early 2016.
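To give a concrete taste of what “learn CUDA and/or OpenCL” means in practice, below is a minimal sketch of offloading a vector addition to a GPU using the PyOpenCL bindings. It assumes the pyopencl and numpy packages and an OpenCL-capable device; it is an illustration, not a statement about any particular vendor’s toolchain:

    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1_000_000).astype(np.float32)
    b = np.random.rand(1_000_000).astype(np.float32)

    ctx = cl.create_some_context()            # pick an OpenCL device
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel is compiled at runtime for whatever device the context chose
    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    """).build()

    prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)
    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_buf)      # copy the result back to the host
    assert np.allclose(out, a + b)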

Artificial intelligence, machine learning and deep learning will be talked about a lot in 2016. Neural networks, which were academic exercises (but little more) for decades, are increasingly becoming mainstream success stories: heavy (and growing) investment in the technology, which enables the identification of objects in still and video images, words in audio streams, and the like after an initial training phase, comes from the formidable likes of Amazon, Baidu, Facebook, Google, Microsoft, and others. So-called “deep learning” has been enabled by the combination of the evolution of traditional neural network techniques, the steadily increasing processing “muscle” of CPUs (aided by algorithm acceleration via FPGAs, GPUs, and, more recently, dedicated co-processors), and the steadily decreasing cost of system memory and storage. There were many interesting releases in this area at the end of 2015: Facebook released portions of its Torch software, Alphabet’s Google division open-sourced parts of its TensorFlow system, and IBM turned up the heat under the competition by making SystemML freely available to share and modify through the Apache Software Foundation. So I expect that 2016 will be the year these tools are tried in practice, and that deep learning will be hot at CES 2016. Several respected scientists issued a letter warning about the dangers of artificial intelligence (AI) in 2015, but I don’t worry about a rogue AI exterminating mankind; I worry about an inadequate AI being given control over things it is not ready for. How will machine learning affect your business? MIT has a good free intro to AI and ML.
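For readers wondering what that “initial training phase” actually involves, here is a minimal sketch in plain NumPy: a tiny two-layer network learning XOR by gradient descent. Frameworks like Torch and TensorFlow automate and accelerate exactly this kind of loop; the layer sizes and learning rate below are arbitrary illustrative choices:

    import numpy as np

    rng = np.random.RandomState(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

    W1, b1 = rng.randn(2, 8), np.zeros(8)   # input -> hidden layer
    W2, b2 = rng.randn(8, 1), np.zeros(1)   # hidden -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(10000):
        h = sigmoid(X.dot(W1) + b1)           # forward pass
        out = sigmoid(h.dot(W2) + b2)
        d_out = (out - y) * out * (1 - out)   # backpropagate the error
        d_h = d_out.dot(W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T.dot(d_out); b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T.dot(d_h);   b1 -= 0.5 * d_h.sum(axis=0)

    print(np.round(out).ravel())              # expect [0. 1. 1. 0.]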

Computers, which excel at big data analysis, can help doctors deliver more personalized care. Can machines outperform doctors? Not yet. But in some areas of medicine they can make the care doctors deliver better. Humans repeatedly fail where computers – or humans behaving a little more like computers – can help. Computers excel at searching and combining vastly more data than a human can, so algorithms can be put to good use in certain areas of medicine. There are also things that can slow down development in 2016: to many patients, the very idea of receiving a medical diagnosis or treatment from a machine is probably off-putting.

The Internet of Things (IoT) was talked about a lot in 2015, and it will be a hot topic for IT departments in 2016 as well. Many companies will notice how important security issues are in it. The newest wearable technology – smart watches and other smart devices – responds to voice commands and interprets the data we produce: it learns from its users and generates appropriate responses in real time. Interest in the Internet of Things will also bring interest in real-time business systems: not only real-time analytics, but real-time everything. This will start in earnest in 2016, but the trend will take years to play out.

Connectivity and networking will be hot, and it is not just about IoT. CES will focus on how connectivity is proliferating in everything from cars to homes, realigning diverse markets. The interest will affect job markets: network jobs are hot, and salaries are expected to rise in 2016 as wireless network engineers, network admins, and network security pros can expect above-average pay gains.

Linux will stay big in the network server market in 2016. The web server marketplace is one arena where Linux has had the greatest impact: today the majority of web servers are Linux boxes, including most of the world’s busiest sites. Linux also runs many parts of the Internet infrastructure that moves the bits from server to user. Linux will continue to rule the smartphone market as the core of Android. New IoT solutions will most likely be built with Linux in many parts of the system.

Microsoft and Linux are not the enemies they were a few years ago. Common sense says that Microsoft and the FOSS movement should be perpetual enemies, but it looks like Microsoft is waking up to the fact that Linux is here to stay. Microsoft cannot feasibly wipe it out, so it has to embrace it. Microsoft is already partnering with Linux companies to bring popular distros to its Azure platform. In fact, Microsoft has even gone so far as to create its own Linux distro for its Azure data center.


Web browsers are increasingly going 64-bit: Firefox started the 64-bit era on Windows, and Google is killing Chrome for 32-bit Linux. At the same time web browsers are losing old legacy features like NPAPI and Silverlight. Who will miss them? The venerable NPAPI plugin standard, which dates back to the days of Netscape, is showing its age and causing more problems than it solves, and will see native support removed from Firefox by the end of 2016. It was already removed from Google Chrome with very little impact. The biggest issue was the lack of support for Microsoft’s Silverlight, which brought down several top streaming media sites – but they are actively switching to HTML5 in 2016. I don’t miss Silverlight. Flash will continue to be available owing to its popularity for web video.

SHA-1 will be at least partially retired in 2016. Due to recent research showing that SHA-1 is weaker than previously believed, Mozilla, Microsoft and now Google are all considering bringing the deadline for rejecting new SHA-1-signed certificates forward by six months, to July 1, 2016.
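For developers, the migration is mostly a drop-in change of hash algorithm. A minimal illustration with Python’s standard hashlib module (the input bytes are just a placeholder):

    import hashlib

    data = b"example data to be signed"
    print("SHA-1  :", hashlib.sha1(data).hexdigest())    # 160-bit digest, being retired
    print("SHA-256:", hashlib.sha256(data).hexdigest())  # the usual successor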

Adobe’s Flash has been under attack from many quarters over security as well as for slowing down web pages. If you wish Flash would finally die in 2016, you might be disappointed. Adobe seems to be trying to kill the name with a rebranding trick: Adobe Flash Professional CC is now Adobe Animate CC. In practice it probably does not mean much, but Adobe seems to acknowledge the inevitability of an HTML5 world: Adobe wants to remain a leader in interactive tools, and the pivot to HTML5 requires new messaging.

The trend of trying to use the same language and tools on both the user end and the server back-end continues. Microsoft is pushing its .NET and Azure cloud platform tools. Amazon, Google and IBM have their own sets of tools. Java is in decline. JavaScript is going strong on both the web browser and the server end with Node.js, React and many other JavaScript libraries. Apple is also trying to bend its Swift programming language, now used mainly to make iOS applications, to run on servers with the Perfect project.

Java will still stick around, but its decline as a language will accelerate as new stuff isn’t being written in Java, even if it runs on the JVM. We will not see Java 9 in 2016, as Oracle has delayed its release by six months: The Register reports that Java 9 is delayed until Thursday March 23rd, 2017, just after tea-time.

Containers will rule the world as Docker continues to develop, gains security features, and adds various forms of governance. Until now Docker has been tire-kicking, used in production only by the early-adopter crowd, but that can change as vendors start to claim that they can do proper management of big data and container farms.
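Driving Docker programmatically is already straightforward, which is part of why management vendors are circling. A minimal sketch using the Docker SDK for Python, assuming the docker package is installed and a local Docker daemon is running (the image and command are arbitrary examples):

    import docker

    client = docker.from_env()    # connect to the local Docker daemon
    # Run a throwaway container, capture its stdout, remove it on exit
    output = client.containers.run("alpine", "echo hello from a container",
                                   remove=True)
    print(output.decode())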

NoSQL databases will take hold as they are billed as “highly scalable” or “cloud-ready.” Expect 2016 to be the year when a lot of big brick-and-mortar companies publicly adopt NoSQL for critical operations. Basically NoSQL can be seen as a key:value store, and the idea has also expanded to storage systems: we got key:value store disk drives with an Ethernet NIC on board, offering basic GET and PUT object storage facilities.
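The key:value model is easy to picture: an opaque blob is written with PUT under a key and read back with GET, with no file system or SQL layer in between. A sketch against a generic HTTP object store; the host, bucket and key names are hypothetical, not any specific product’s API:

    import requests

    BASE = "http://objectstore.example.com/mybucket"

    # PUT: store a value (an opaque blob) under a key
    requests.put(BASE + "/sensor-log-0001", data=b"raw bytes of the object")

    # GET: retrieve the value by the same key
    resp = requests.get(BASE + "/sensor-log-0001")
    print(resp.status_code, len(resp.content))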

In the database world, Big Data will still be big, but it needs to be analyzed in real time. A typical big data project usually involves some semi-structured data, a bit of unstructured data (such as email), and a whole lot of structured data (stuff stored in an RDBMS). While the cost of Hadoop on a per-node basis is pretty inconsequential, the cost of understanding all of the schemas, getting them into Hadoop, and structuring them well enough to perform the analytics is still considerable. Remember that you’re not “moving” to Hadoop, you’re adding a downstream repository, so you need to worry about systems integration and latency issues. Apache Spark will also attract interest, as Spark’s multi-stage in-memory primitives provide more performance for certain applications. Big data brings with it responsibility – digital consumer confidence must be earned.
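To show what Spark’s multi-stage in-memory primitives buy you, here is a minimal PySpark sketch (assumes a local Spark installation): the dataset is cached in memory once and then reused by two different jobs without being re-read from the source:

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "cache-sketch")
    nums = sc.parallelize(range(1, 1000001)).cache()   # keep the RDD in memory

    total = nums.reduce(lambda a, b: a + b)            # first job fills the cache
    evens = nums.filter(lambda n: n % 2 == 0).count()  # second job reuses it
    print(total, evens)
    sc.stop()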

IT security continues to be a huge issue in 2016. You might be able to achieve adequate security against hackers and internal threats, but every attempt to make systems idiot-proof just means the idiots get upgraded. Firms are ever more connected to each other and to the general outside world, so in 2016 we will see even more service firms accidentally leaking critical information, and a lot more firms having their reputations scorched by incompetence-fuelled security screw-ups. Good security people are needed more and more – a joke doing the rounds among IT execs doing interviews is “if you’re a decent security bod, why do you need to look for a job?”

There will still be unexpected single points of failure in big distributed networked systems. The cloud behind the silver lining is that Amazon or any other cloud vendor can be as fault-tolerant, distributed and well supported as you like, but if a service like Akamai or Cloudflare were to die, you still stop. That’s not a single point of failure in the classical sense, but it’s really hard to manage unless you go for full cloud agnosticism – which is costly. This is hard to justify when the failure rate is so low, so the irony is that the reliability of the content delivery networks means fewer businesses work out what to do if they fail. Oh, and no one seems to test their mission-critical data centre properly, because it’s mission-critical. So they just over-specify where they can and cross their fingers (= pay twice and get half the coverage for other vulnerabilities).

For IT start-ups it seems that Silicon Valley’s cash party is coming to an end. Silicon Valley is cooling, not crashing: valuations are falling, the era of cheap money could be over, and valuation expectations are recalibrating downward. The cheap-capital party is over, and that could mean trouble for weaker startups.

 

933 Comments

  1. Tomi Engdahl says:

    Announcing Visual Studio “15” Preview 5
    https://blogs.msdn.microsoft.com/visualstudio/2016/10/05/announcing-visual-studio-15-preview-5/

    Today we released Visual Studio “15” Preview 5. With this Preview, I want to focus mostly on performance improvements, and in the coming days we’ll have some follow-up posts about the performance gains we’ve seen. I’m also going to point out some of the productivity enhancements we’ve made.

  2. Tomi Engdahl says:

    A very advanced AI-based chat-bot application:

    When Her Best Friend Died, She Rebuilt Him Using Artificial Intelligence
    https://tech.slashdot.org/story/16/10/09/0113234/when-her-best-friend-died-she-rebuilt-him-using-artificial-intelligence?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    When Roman Mazurenko died, his friend Eugenia Kuyda created a digital monument to him: an artificial intelligent bot that could “speak” as Roman using thousands of lines of texts sent to friends and family.

    Speak, Memory
    When her best friend died, she rebuilt him using artificial intelligence
    http://www.theverge.com/a/luka-artificial-intelligence-memorial-roman-mazurenko-bot

    It had been three months since Roman Mazurenko, Kuyda’s closest friend, had died.

    But ever since Mazurenko’s death, Kuyda had wanted one more chance to speak with him.

    Kuyda and Mazurenko, who by then had become close friends, came to believe that their futures lay elsewhere. Both became entrepreneurs, and served as each other’s chief adviser as they built their companies.

    Running a startup had worn him down, and he was prone to periods of melancholy

    In the weeks after Mazurenko’s death, friends debated the best way to preserve his memory. One person suggested making a coffee-table book about his life, illustrated with photography of his legendary parties. Another friend suggested a memorial website. To Kuyda, every suggestion seemed inadequate.

    As she grieved, Kuyda found herself rereading the endless text messages her friend had sent her over the years — thousands of them, from the mundane to the hilarious. She smiled at Mazurenko’s unconventional spelling

    Kuyda found herself rereading the endless text messages her friend had sent her

    For two years she had been building Luka, whose first product was a messenger app for interacting with bots. Backed by the prestigious Silicon Valley startup incubator Y Combinator, the company began with a bot for making restaurant reservations.

    Reading Mazurenko’s messages, it occurred to Kuyda that they might serve as the basis for a different kind of bot — one that mimicked an individual person’s speech patterns. Aided by a rapidly developing neural network, perhaps she could speak with her friend once again.

    She set aside for a moment the questions that were already beginning to nag at her.

    What if it didn’t sound like him?

    What if it did?

    In the summer of 2015, with Stampsy almost out of cash, Mazurenko applied for a Y Combinator fellowship proposing a new kind of cemetery that he called Taiga.
    creating what he called “memorial forests.”

    Y Combinator rejected the application. But Mazurenko had identified a genuine disconnection between the way we live today and the way we grieve. Modern life all but ensures that we leave behind vast digital archives — text messages, photos, posts on social media — and we are only beginning to consider what role they should play in mourning.

    “She said, what if we try and see if things would work out?”

    “Can we collect the data from the people Roman had been talking to, and form a model of his conversations, to see if that actually makes sense?”

    “The team building Luka are really good with natural language processing,” he said. “The question wasn’t about the technical possibility. It was: how is it going to feel emotionally?”

    Today’s bots remain imperfect mimics of their human counterparts. They do not understand language in any real sense. They respond clumsily to the most basic of questions. They have no thoughts or feelings to speak of. Any suggestion of human intelligence is an illusion based on mathematical probabilities.

    And yet recent advances in artificial intelligence have made the illusion much more powerful. Artificial neural networks

    Two weeks before Mazurenko was killed, Google released TensorFlow for free under an open-source license.

    Luka had been using TensorFlow to build neural networks for its restaurant bot.

    In February, Kuyda asked her engineers to build a neural network in Russian. At first she didn’t mention its purpose, but given that most of the team was Russian, no one asked questions. Using more than 30 million lines of Russian text, Luka built its second neural network. Meanwhile, Kuyda copied hundreds of her exchanges with Mazurenko from the app Telegram and pasted them into a file.

    the next step: training the Russian network to speak in Mazurenko’s voice.

    Only a small percentage of the Roman bot’s responses reflected his actual words. But the neural network was tuned to favor his speech whenever possible. Any time the bot could respond to a query using Mazurenko’s own words, it would. Other times it would default to the generic Russian.

    On May 24th, Kuyda announced the Roman bot’s existence in a post on Facebook.

    The Roman bot was received positively by most of the people who wrote to Kuyda, though there were exceptions.

    many of Mazurenko’s friends found the likeness uncanny. “It’s pretty weird when you open the messenger and there’s a bot of your deceased friend, who actually talks to you,” Fayfer said.

    Several users agreed to let Kuyda read anonymized logs of their chats with the bot. (She shared these logs with The Verge.) Many people write to the bot to tell Mazurenko that they miss him.

    For many users, interacting with the bot had a therapeutic effect. The tone of their chats is often confessional

    Kuyda continues to talk with the bot herself — once a week or so, often after a few drinks. “I answer a lot of questions for myself about who Roman was,”

    Someday you will die, leaving behind a lifetime of text messages, posts, and other digital ephemera. For a while, your friends and family may put these digital traces out of their minds. But new services will arrive offering to transform them — possibly into something resembling Roman Mazurenko’s bot.

  3. Tomi Engdahl says:

    Why 2016 Is a Little Like 1984
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1330581&

    The vision of artificial intelligence is becoming a reality in a different and more nuanced way than technologists once imagined.

    Wafer-thin smartphones more powerful than a 486 PC are common, and everyone assumes they can connect to one big network that holds all the world’s information. It’s amazing, and a little scary.

    At least for today, it turns out those intelligent agents people talked about are the products of a handful of companies that own giant collections of data centers. They include Apple, Amazon, Facebook, Google — and the world’s largest governments.

    Most of the agents are what spy novels call double agents. They serve two masters — a consumer like you and me and their owner with the big data center.

    Ostensibly the agents come free with an iPhone, a Facebook account or an Amazon Echo. Their real cost is they share their data with their vendor who resells it to his paying customers. As Peter Clarke, my colleague from EE Times Europe, puts it, “these days you are either selling or being sold.”

    Few expected it would turn out this way. The visionaries weren’t predicting a World Wide Web, massively parallel distributed computing or the current kerosene — an emerging family of neural networking algorithms that run on those distributed data centers.

    Today’s agents are still in their infancy.

    So in 2016, the landscape seems set for a battle among a handful of intelligent agents poised to become giants. Consumers and OEMs need to decide carefully which they will partner with and on what terms.

    Arguably science fiction writers like George Orwell saw this coming long before the PC arrived. It took folks like Edward Snowden to make it clear that 2016 is in a way 1984.

  4. Tomi Engdahl says:

    These are three of the biggest problems facing today’s AI
    Not-so-deep learning
    http://www.theverge.com/2016/10/10/13224930/ai-deep-learning-limitations-drawbacks

    Speaking to attendees at a deep learning conference in London last month, there was one particularly noteworthy recurring theme: humility, or at least, the need for it.

    While companies like Google are confidently pronouncing that we live in an “AI-first age,” with machine learning breaking new ground in areas like speech and image recognition, those at the front lines of AI research are keen to point out that there’s still a lot of work to be done. Just because we have digital assistants that sound like the talking computers in movies doesn’t mean we’re much closer to creating true artificial intelligence.

    Problems include the need for vast amounts of data to power deep learning systems; our inability to create AI that is good at more than one task; and the lack of insight we have into how these systems work in the first place. Machine learning in 2016 is creating brilliant tools, but they can be hard to explain, costly to train, and often mysterious even to their creators. Let’s take a look at these challenges in more detail:

    First you get the data, then you get the AI
    Specialization is for insects — AI needs to be able to multitask
    It’s only real intelligence if you can show your working

  5. Tomi Engdahl says:

    Microsoft’s redesigned Paint app for Windows 10 looks awesome
    http://www.theverge.com/2016/10/7/13207612/microsoft-paint-windows-10-app

    Microsoft is said to be working on software and hardware that will enhance the use of stylus, touch, and traditional inputs on a full desktop PC. Microsoft’s new Paint app for Windows 10 will play into this plan, alongside apps from third parties. Any potential Surface hardware will complement the company’s software improvements.

  6. Tomi Engdahl says:

    Linus Torvalds says ARM just doesn’t look like beating Intel
    Linux Lord also feels Internet of Things hardware is mostly doomed, like his old Sinclair QL
    http://www.theregister.co.uk/2016/10/10/linus_torvalds_says_arm_just_doesnt_look_like_beating_intel/

    Linus Torvalds believes ARM has little chance of overhauling x86, because the latter has built an open hardware ecosystem that the former just doesn’t look like replicating.

    Rusling asked Torvalds if he has a favourite architecture and Torvalds quickly responded that “x86 is still the one I favour most and that is because of the PC.”

    “The infrastructure is there and it is open in a way no other architecture is.”

    “The instruction set and the core of the CPU is not very important,” Torvalds added. “It is a factor people kind of fixate on but it does not matter in the end. What matters is the infrastructure around the instruction set. x86 has it and has it at a lot of levels.”

    Torvalds said ARM’s hardware story is strong in mobile, but that “I have been disappointed in ARM” because “as a hardware platform it is still not very pleasant to deal with.”

    “It does not have the same unified models around the instruction set as we do in the PC space, but it is getting better.”

    “Being compatible just wasn’t as big a deal in the ARM ecosystem as it was in the x86 system.”

    The evidence, he said, is there to see in the fact that development for ARM nearly always takes place on an x86 PC. While Torvalds admires the Raspberry Pi, he classed it as a “toy” and said ARM cannot win until it provides a platform developers will want to use for their primary machines.

    Torvalds said similar things about the internet of things (IoT). Asked about efforts to shrink Linux to run on very modest computing devices, he said the Linux development community won’t make the effort to do so because “most of the small devices tend to be very locked down.”

  7. Tomi Engdahl says:

    Torvalds: Often the drivers are really bad code

    Often the drivers are really bad code, but that is because the hardware implementations are really bad; the code just reflects this. So said Linus Torvalds at the Linaro Connect conference.

    - When I say that the code is bad, I mean code that is written for only one device. It cannot be finished code that works for all devices and all architectures.

    Torvalds said he was disappointed in ARM – not so much in the instruction set, but in the fact that the hardware platform is not pleasant to work with. – Compatibility has never been as big a deal for the ARM ecosystem as it has been for the PC. The result can be seen today in the fragmentation of ARM hardware.

    When code has to be developed for so many different devices, it has to be done in a different way.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=5188:torvalds-usein-ajurit-ovat-todella-huonoa-koodia&catid=13&Itemid=101

    Video:
    LAS16-500K3: Fireside Chat with David Rusling and Linus Torvalds
    https://www.youtube.com/watch?v=fuAebQvFnRI

  8. Tomi Engdahl says:

    The first embeddable FPGA

    Achronix is known as a small FPGA manufacturer whose high-speed Speedster22i circuits have been shipping since 2013. Now the company is making a revolutionary move into a new area by presenting the world’s first FPGA that can be embedded into a system-on-chip.

    When the FPGA portion is embedded into the system chip, significantly better performance is achieved than with separate chips: signals travel more than 10 percent faster, delays shrink by 10 percent, power consumption shrinks by as much as half, and manufacturing costs drop by up to 90 percent.

    - The main reason is probably the market. Altera and Xilinx wanted to focus on developing large, monolithic FPGAs, because that is what they do well.

    There is a definite reason why a lot of heavy computation is shifting to hardware-based acceleration. Microprocessor performance scaling is stalling, and many complex functions have to be divided into smaller parts. Computation that can be split across parallel cores speeds up significantly; this applies in particular to all DSP-intensive computation, such as the processing of data packets on data center line cards.

    For accelerating computation, modified graphics processors have been used, and now increasingly FPGAs thanks to their programmability. An FPGA block integrated alongside the server processor cores is a sort of ideal accelerator, because it increases performance with low power consumption and very low added cost.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=5198:ensimmainen-sulautettava-fpga-piiri&catid=13&Itemid=101

    More:
    Introducing Speedcore™ eFPGAs
    http://www.achronix.com/product/speedcore/

  9. Tomi Engdahl says:

    Make yourself presentable, upstart: We’re going out
    Not you, buddy, the product’s UI
    http://www.theregister.co.uk/2016/10/10/storage_start_up_pt_3/

    Part Three You’ve got the talent, you’ve the idea for something that resembles a product. But, as Steve Jobs said “real artists ship”, and art isn’t something that comes just like that. In this case we’re talking architecture, we’re talking tools, we’re talking interface and UI – not necessarily things you were thinking about were they?

  10. Tomi Engdahl says:

    Chelsio demonstrates 100 Gigabit iSCSI performance
    http://www.cablinginstall.com/articles/pt/2016/09/chelsio-demonstrates-100-gigabit-iscsi-performance.html

    Noting that iSCSI offload is especially suited for enterprise deployments, Chelsio says its T6 100G Unified Wire controllers enable enterprise storage systems that are purpose-built to deliver optimized storage performance for various application workloads in mission-critical virtualized, private cloud environments.

    A recent demonstration shows Chelsio T6 silicon enabling 100 Gigabits-per-second (Gbps) line-rate iSCSI performance for a cost-effective enterprise-class storage target solution built with volume, off-the-shelf hardware and software components.

    According to the company, iSCSI offload is especially suited for enterprise deployments for the following reasons:

    – It requires only software peers (that are supported in all volume operating systems) and as such can be non-disruptively deployed in existing environments.

    – The second source consists of operating system built-in software-only stacks. The enterprise customer will never be in a line-down situation because of the adapter hardware. It is therefore much easier to address dual source requirements.

    – It is supported in Linux (LIO Target, Open-iSCSI Initiator), FreeBSD (Target and Initiator), Windows (Initiator – the most popular iSCSI initiator in the industry), VMware, and other OSes and distributions.

    – It is built on the proven TCP/IP protocol and as such can scale, route and handle congestion and network loss resiliently and robustly without needing special switch features.

    – It allows the same redline performance as other storage networking protocols such as NVMe over Fabrics or iSER, and has built-in RDMA, but it does so over the proven TCP/IP protocol (removing any challenges pertaining to TCP/IP “overhead”).

    – It does not suffer from interoperability requirements of fabrics-based storage protocols such as iSER or NVMe over Fabrics.

    – It is a proven $3.6B market today without any technology adoption risks.

    – It runs at 100Gb and beyond, and will scale consistently with Ethernet evolution.

  11. Tomi Engdahl says:

    Analyst: Data center rack densities remain below 5 kW
    http://www.cablinginstall.com/articles/2016/10/data-center-rack-densities-ihs-markit.html?cmpid=Enl_CIM_DataCenters_October112016&eid=289644432&bid=1553009

    In its most recent analysis of the rack PDU market, IHS Markit found that the uptake of higher-power rack PDUs strongly outpaces the actual power density within racks in the data center. Sarah McElroy, senior research analyst for data centers and cloud with IHS Markit, pointed out, “Rack PDUs with higher power ratings, in the 5-10 kW range, accounted for 41 percent of global rack PDU revenue in 2015 compared to 38 percent in 2013, proving that a shift is occurring. While 5-10 kW rack PDUs are growing in popularity, those over 10 kW, which accounted for 18 percent of global rack PDU revenue in 2013, grew to account for 22 percent by 2015.”

    “Despite the growth in rack PDUs with higher power ratings, IHS finds that average rack power densities are not as high as the rack PDU power rating data might suggest. IHS estimates that average rack power density globally is approaching but remains below 5 kW per rack.”

    Advances in the energy efficiency of power supplies and IT equipment have increased the achievable compute per watt.

    Server virtualization allows users to run servers at higher capacity, for example 80 percent instead of 20 percent capacity. Instead of adding additional servers, users are now running current servers closer to full capacity, which generally results in less power draw than adding new servers to perform the same amount of computing.

    “While average rack densities still remain below 5 kW worldwide, densities of up to almost 50 kW have been implemented in applications such as supercomputing. It’s no longer uncommon to see rack densities of 20 to 30 kW in some applications”

  12. Tomi Engdahl says:

    Intel is shipping an ARM-based FPGA. Repeat, Intel is shipping an ARM-based FPGA
    Nobody tell Linux, okay?
    http://www.theregister.co.uk/2016/10/10/intel_stratix_10_arm_based_fpga/

    Intel’s followed up on its acquisition of Altera by baking a microprocessor into a field-programmable gate array (FPGA).

    The Stratix 10 family is part of the company’s push beyond its stagnating PC-and-servers homeland into emerging markets like high-performance computing and software-defined networking.

    Intel says the quad-core 64-bit ARM Cortex-A53 processor helps position the device for “high-end compute and data-intensive applications ranging from data centres, network infrastructure, cloud computing, and radar and imaging systems.”

    Compared to the Stratix V, Altera’s current generation before the Chipzilla slurp, Intel says the Stratix 10 has five times the density and twice the performance; 70 per cent lower power consumption at equivalent performance; 10 Tflops (single precision); and 1 TBps memory bandwidth.

    The devices will be pitched at acceleration and high-performance networking kit.

    The Stratix 10 “Hyperflex architecture” uses bypassable registers – yes, they’re called “Hyper-Registers”, which are associated with individual routing segments in the chip, and are available at the inputs of “all functional blocks” like adaptive logic modules (ALMs), embedded memory blocks, and digital signal processing (DSP) blocks.

    https://www.altera.com/content/dam/altera-www/global/en_US/pdfs/literature/wp/wp-01220-hyperflex-architecture-fpga-socs.pdf

  13. Tomi Engdahl says:

    Verizon CEO dismisses report of $1bn discount on hacked Yahoo as ‘total speculation’
    http://www.ibtimes.co.uk/verizon-ceo-dismisses-report-1bn-discount-hacked-yahoo-total-speculation-1585764?

    Verizon CEO Lowell McAdam has dismissed reports regarding his company seeking to lower the price of Yahoo following the disclosure of the massive 2014 data breach, wherein 500 million user accounts were stolen.

    McAdam also revealed that he was not shocked by the much-discussed hack that compromised sensitive user information including names, email addresses, phone numbers, birth dates and encrypted passwords of “at least” 500 million user accounts in late 2014.

    “We all live in an internet world. It’s not a question of if you’re going to get hacked but when you are going to get hacked,” McAdam said.

    “The industrial logic to doing this merger still makes a ton of sense,” he said. “I have spent a lot of time over the past weeks with folks from Yahoo and I am very impressed by their capability.”

  14. Tomi Engdahl says:

    HPE, IBM, ARM, Samsung and pals in plot to weave ‘memory fabric’
    Everyone but Intel and Cisco working together to build storage-class memory
    http://www.theregister.co.uk/2016/10/11/memory_fabric_needed_for_storageclass_memory/

    A group of suppliers have got together as a consortium to develop Gen-Z – a scalable, high-performance bus or interconnect fabric linking computers and memory.

    The Gen-Z consortium is an open, non-proprietary, transparent industry standards body. It says it believes open standards provide a level playing field to promote adoption, innovation and choice.

    Consortium members are AMD, ARM, Broadcom, Cavium Inc, Cray, Dell EMC, Hewlett Packard Enterprise (HPE), Huawei, IBM, IDT, Lenovo, Mellanox Technologies, Micron, Microsemi, Red Hat, Samsung, Seagate, SK Hynix, Western Digital Corporation and Xilinx.

    We’re told “this flexible, high-performance memory semantic fabric provides a peer-to-peer interconnect that easily accesses large volumes of data while lowering costs and avoiding today’s bottlenecks.” It’s meant to enable storage accesses to be closer to memory accesses in speed through the use of storage class memory and new programmatic and architectural ideas.

    Its background thinking is that memory tiers will become increasingly important, and rack-scale composability requires a high bandwidth, low latency fabric which must seamlessly plug into existing ecosystems without requiring OS changes.

    http://genzconsortium.org/

  15. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Facebook partners with Google, others to launch Yarn, a new open-source JavaScript package manager, which uses npm’s registry but replaces the npm client

    Facebook partners with Google, others to launch a new JavaScript package manager
    https://techcrunch.com/2016/10/11/facebook-partners-with-google-others-to-launch-a-new-javascript-package-manager/

    Facebook today launched Yarn, a new package manager for JavaScript. If you’ve ever worked with JavaScript and Node.js, chances are that you’ve used the npm package manager to find and reuse existing code (or maybe publish your own libraries, too). At Facebook’s scale, though, npm didn’t quite work for the company, and it started developing an opinionated alternative for its internal use. Over time, the team got help from developers at Google, Exponent and Tilde.

    It’s worth stressing that Yarn, which promises to even give developers that don’t work at Facebook’s scale a major performance boost, still uses the npm registry and is essentially a drop-in replacement for the npm client.

    Fast, reliable, and secure dependency management for JavaScript. https://yarnpkg.com

  16. Tomi Engdahl says:

    Ian Cutress / AnandTech:
    Samsung unveils ArtPC, a cylindrical desktop with 360-degree audio speaker, Intel Skylake-based Core i5/i7; pre-orders open now for $1,200+, ships Oct. 28 in US — For most PC enthusiasts, if you ask them to name a cylindrical machine, the Mac Pro comes immediately to mind.

    Samsung ArtPC: Cylindrical PC with 360º audio, i5/i7 plus NVMe, Preorders from $1200
    by Ian Cutress on October 10, 2016 2:15 PM EST
    http://www.anandtech.com/show/10744/samsung-artpc-cylindrical-pc-with-360-audio-i5-i7-plus-nvme-preorders-from-1200

  17. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Stack Overflow launches Developer Story, a free résumé profile page tool for programmers

    Stack Overflow puts a new spin on resumes for developers
    https://techcrunch.com/2016/10/11/stack-overflow-puts-a-new-spin-on-resumes-for-developers/

    Stack Overflow, the community site best known for providing answers for all of your random coding questions, also has a thriving jobs board and provides services to employers looking to hire developers. Today, the team is expanding the jobs side of its business with the launch of Developer Story, a new kind of resume that aims to free developers from the shackles of the traditional resume.

    the team realized that regular resumes put their emphasis on job titles, schools and degrees — but that doesn’t always work for developers. According to Stack Overflow’s latest survey, the majority of developers don’t have degrees in computer science, for example.

    “They are optimized for conveying the importance of your pedigree,” Hanlon told me. The things you’ve achieved, though, tend to be hidden in tiny bullets underneath those headers. So the idea with Developer Story is to pull out your achievements — the problems you’ve solved, the open source projects you’ve contributed to, the apps you’ve written — and highlight those. “Developers, fundamentally, are makers,” Hanlon said. “They are less like a business analyst where a title conveys authority.”

    Developer Story offers two views: a traditional resume view for employers and a more modern timeline view. It’s the timeline view that emphasizes your achievements, but even the traditional view puts its emphasis on which projects you have contributed to, which languages you’ve used, which questions you’ve answered on Stack Overflow, etc.

    “A huge percentage of developers actually never go looking for jobs,” Hanlon said. “They have people come to them. One of the things Developer Story is optimized for is these people.”

  18. Tomi Engdahl says:

    PC sales sinking almost as fast as Donald Trump’s poll numbers
    68 million units a quarter and falling as ‘Some consumers may never upgrade a PC again’
    http://www.theregister.co.uk/2016/10/12/q3_2016_pc_sales_data/

    New data from analyst outfits IDC and Gartner suggest the PC market continues to crater.

    The latter firm’s 3Q2016 data records an eighth consecutive quarter of shipment decreases, to 68.9 million units or a 5.7 per cent decline from the third quarter of 2015. IDC found “nearly 68 million units in the third quarter of 2016 (3Q16), a year-on-year decline of 3.9 per cent”, but also noted a little upside as that’s 3.2 per cent better than it expected.

    IDC reckons the less-bad-than-expected performance is a sign that PC-makers have finally started making kit capable of exciting consumers enough to buy a new machine.

    Gartner thinks the opposite, with principal analyst Mikako Kitagawa opining that the firm’s 2016 personal technology survey showed “the majority of consumers own, and use, at least three different types of devices in mature markets. Among these devices, the PC is not a high priority device for the majority of consumers, so they do not feel the need to upgrade their PCs as often as they used to. Some may never decide to upgrade to a PC again.”

  19. Tomi Engdahl says:

    Power/Performance Bits: Oct. 11
    http://semiengineering.com/powerperformance-bits-oct-11/

    Data center on chip

    Researchers from Washington State University and Carnegie Mellon University presented a preliminary design for a wireless data-center-on-a-chip at the Embedded Systems Week conference in Pittsburgh.

    Data centers are well known as energy hogs, and they consumed about 91 billion kilowatt-hours of electricity in the U.S. in 2013, which is equivalent to the output of 34 large, coal-fired power plants, according to the Natural Resources Defense Council. One of their major performance limitations stems from the multi-hop nature of data exchange.

    In recent years, the group designed a wireless network on a computer chip

    The new work expands these capabilities for a wireless data-center-on-a-chip. In particular, the researchers are moving from two-dimensional chips to a highly integrated, three-dimensional, wireless chip at the nano- and microscales that can move data more quickly and efficiently.

    The team believes they will be able to run big data applications on their wireless system three times more efficiently than the best data center servers.

    Wireless data-center-on-a-chip aims to cut energy use
    https://news.wsu.edu/2016/10/06/wireless-data-center-chip-aims-cut-energy-use/

    Personal cloud computing possibilities

    As part of their grant, the researchers will evaluate the wireless data center to increase energy efficiency while also maintaining fast, on-chip communications. The tiny chips, consisting of thousands of cores, could run data-intensive applications orders of magnitude more efficiently compared to existing platforms. Their design has the potential to achieve a comparable level of performance as a conventional data center using much less space and power.

    It could someday enable personal cloud computing possibilities, said Pande, adding that the effort would require massive integration and significant innovation at multiple levels.

    “This is a new direction in networked system design,’’ he said. “This project is redefining the foundation of on-chip communication.”

  20. Tomi Engdahl says:

    Facebook Yarn’s for your JavaScript package
    One string to bring them all and in the installation bind them
    http://www.theregister.co.uk/2016/10/12/facebook_yarns_for_your_package/

    Facebook, working with Exponent, Google, and Tilde, has released software to improve the JavaScript development experience, which can use all the help it can get.

    Yarn, introduced on Tuesday under a BSD license and without the patent clause that terminates Facebook’s React license for those involved in patent litigation against the company, is an alternative npm client. It’s not to be confused with Apache Hadoop YARN (Yet Another Resource Negotiator), which is cluster management software.

    For those not steeped in JavaScript or related technology like Node.js, npm is JavaScript’s package manager. Its command line client provides more than five million developers with access to some 300,000 packages in the npm registry, resulting in about five billion downloads every month, at least by Facebook’s measure.

    Package managers help developers by automating the installation, configuration, and management of libraries, frameworks, and other software components. They’re used in Python (pip), PHP (PEAR), Perl (CPAN), Ruby (RubyGems), and Rust (Cargo), among other programming languages.

    https://yarnpkg.com/

  21. Tomi Engdahl says:

    Chrome OS still struggles with the basics
    http://www.edn.com/electronics-blogs/brians-brain/4442821/Chrome-OS-still-struggles-with-the-basics

    As I recently mentioned, I picked up a 2014-era Toshiba Chromebook 2 a few months ago

    I thought I’d give it a bit of a whirl ahead of time to see how its Chrome OS foundation capabilities might have improved.

    Some good news, to begin; things have gotten somewhat better. At the time of my last substantial hands-on review, back in mid-2012, the OS and its associated cloud-centric app-and-storage suite were in the midst of a painful transition away from the Google Gears proprietary API and toward the HTML5-based offline support successor.

    That transition is largely complete, at least for the core Gmail, Docs, Sheets, and Slides applications and the Drive online storage nexus.

    The high-resolution screen is utterly beautiful; I wish there was some robust means of editing photos on it, but I’ll save that gripe for later.

    That’s the good news. Now for the not-so-good. Basic stuff that should work still doesn’t, nearly a decade after Chrome OS’s initial unveiling. Bookmarks, for example: they sync and auto-update just fine among my various Chrome browser instantiations, along with my Android smartphones and tablet. But here on the Chromebook, I ended up with two identically named iterations of each bookmark folder, one of them completely empty, along with bookmarks that I deleted long ago elsewhere.

    Extension syncing is also wonky, as is more general extension functionality. FlashBlock is apparently no longer available from Google’s Store, but it still sync’d just fine to the Chrome browser on the MacBook Air I recently set up.

    There’s no meaningful analogue of File Explorer (Windows) or the Finder (Mac OS X) for browsing the SSD, although hope springs eternal.

    Bottom line: once I try to expand beyond the core Google apps experience, I’m left feeling empty.

    Back in 2012, I wrote:

    “At the end of the day, Chrome OS is fundamentally nothing more than yet another data-collection avenue by which Google can insidiously learn more about you, thereby presenting you with customized ads that it can charge companies more money for. At least with Gmail and other historical Google services, the invasion of privacy came with no associated price tag to the user.”

    I suppose I’ve mellowed since then. After all, just last year I admitted that Chrome OS-based devices had validity in schools and other such places. And I fully admit that it’s possible to buy a Chromebook for around 1/5 the price of my 13” MacBook Air. But if I want cheap, I can go with a fuller-featured Windows 10-based “netbook” for about the same price.

  22. Tomi Engdahl says:

    PC Industry Is Now On a Two-Year Downslide
    https://hardware.slashdot.org/story/16/10/11/2244210/pc-industry-is-now-on-a-two-year-downslide

    According to analyst firm Gartner, PC shipments have declined for eight consecutive quarters — “the longest duration of decline in the history of the PC industry.” The company found that worldwide PC shipments totaled 68.9 million units in the third quarter of 2016, a 5.7 percent decline from the third quarter of 2015.

    Gartner Says Worldwide PC Shipments Declined 5.7 Percent in Third Quarter of 2016
    Global PC Shipments Declined for the Eighth Consecutive Quarter
    http://www.businesswire.com/news/home/20161011006705/en/Gartner-Worldwide-PC-Shipments-Declined-5.7-Percent

    “There are two fundamental issues that have impacted PC market results: the extension of the lifetime of the PC caused by the excess of consumer devices, and weak PC consumer demand in emerging markets”

    “In emerging markets, PC penetration is low, but consumers are not keen to own PCs. Consumers in emerging markets primarily use smartphones or phablets for their computing needs, and they don’t find the need to use a PC as much as consumers in mature markets.”

  23. Tomi Engdahl says:

    PC industry is now on a two-year downslide
    Its longest decline in history
    http://www.theverge.com/2016/10/11/13250172/pc-industry-shipment-decline-lenovo-hp-dell-asus-acer

    The state of the PC industry is not looking great. According to analyst firm Gartner, worldwide PC shipments fell 5.7 percent in the third quarter of 2016 to 68.9 million units. That marks “the eighth consecutive quarter of PC shipment decline, the longest duration of decline in the history of the PC industry,” Gartner writes in a press release issued today. The firm cites poor back-to-school sales and lowered demand in emerging markets. But the larger issue, as it has been for quite some time, is more existential than that.

    The threat to the PC industry is an existential one

    PC makers are feeling the pressure. HP, Dell, and Asus each had low single-digit growth, but Acer, Apple, and Lenovo all experienced declines, with Apple and Lenovo each suffering double-digit drops. Meanwhile, the rest of the PC market, which collectively ships more units per quarter than any of the big-name brands, is down more than 16 percent.

  24. Tomi Engdahl says:

    Burger barn put cloud on IT menu, burned out its developers
    Move to the cloud and you may need ‘vendor managers’ and more governance
    http://www.theregister.co.uk/2016/10/12/burger_barn_put_cloud_on_it_menu_burned_out_its_developers/

    The article tells the tale of Australian burger chain Hungry Jack’s replacing bespoke business applications with bits of Oracle’s cloud. Doing so has made for interesting news for its sub-20-person IT team, especially the team of five developers: they’re on the way out because, with the old code gone, the old coders can go too.

    In their place, the team’s boss, Nolte, said he’ll hire “vendor managers”: people skilled in maintaining relationships with vendors, keeping contracts humming along nicely and negotiating for the new stuff that Hungry Jack’s needs. Nolte thinks some of his developers have the brains to make the jump to this new role, but not the proclivity. He characterised his developers as “fiddlers and tweakers” who are unlikely to abandon their coding careers.

    Another quick lesson in Australian institutions: the nation’s dominant auto club is the National Roads and Motorists Association (NRMA).
    The NRMA’s Kotatko has just signed up for a marketing cloud and said one of the problems it has created is that it is too easy to run campaigns, because she and her team now have lots of data at their fingertips. She has therefore been surprised at the amount of governance she has to do, lest marketers go wild with campaigns that target people from the wrong lists, breaching policy or good taste along the way.

    No, software-as-a-service won’t automatically simplify operations and cut costs
    Doing SaaS right needs at least half-a-dozen add-ons
    http://www.theregister.co.uk/2016/10/11/no_softwareasaservice_wont_automatically_simplify_operations_and_cut_costs/

    The Register has been asking around about what it takes to do SaaS right and has come to believe that among the tools you’ll probably need are:

    Backup, which may seem an odd item on a SaaS shopping list given vendors’ promises of super-redundant data centres that never go down.

    Data Loss Protection (DLP) Whether your data is on-premises or in a SaaS application, you need to make sure it can’t fall into the wrong hands. Most SaaS apps don’t have native DLP, the technology that monitors data to ensure sensitive material isn’t being e-mailed to unknown parties, saved onto removable storage media or otherwise exfiltrated. DLP has become standard-issue on-premises security technology, and it’s a no-brainer for SaaS users.

    Context-aware security Imagine you work in London and that one afternoon, a few hours after you last logged in from a known good IP address, someone logs into your SaaS account from Eastern Europe with an unrecognised IP address. Context-aware tools spot exactly that sort of anomaly. (A minimal sketch of such a check appears after this list.)

    Cloud Access Security Brokers (CASBs) Now imagine you use multiple SaaS applications and that the context-sensitive logon and DLP policies described above need to be implemented in all of them. CASBs centralise that enforcement.

    Interconnect services Users hate even short delays when using software, and that doesn’t change with SaaS. On your own networks, you can control the user experience. But SaaS traffic nearly always has to traverse a big slab of the public internet … unless you pay for the interconnect services that the likes of Equinix and Digital Realty offer to pave a fast lane between you and your preferred SaaS applications.

    Mobile device management A very good reason to adopt SaaS is that most applications are ready to roll on mobile devices from day one.
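
    To make the context-aware idea concrete, here is a minimal, hypothetical sketch (my own illustration, not any vendor’s product) of an “impossible travel” check: if two consecutive logins imply a travel speed no airliner could manage, the second one gets flagged:

      # "Impossible travel" heuristic: flag a login if the implied speed
      # between it and the previous login is physically implausible.
      from math import radians, sin, cos, asin, sqrt

      def haversine_km(lat1, lon1, lat2, lon2):
          # Great-circle distance between two points on Earth, in km.
          dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
          a = sin(dlat / 2) ** 2 + \
              cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
          return 2 * 6371 * asin(sqrt(a))

      def suspicious(prev, curr, max_kmh=900):   # ~airliner cruise speed
          hours = (curr["time"] - prev["time"]) / 3600.0
          km = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
          return hours <= 0 or km / hours > max_kmh

      london = {"time": 0,        "lat": 51.51, "lon": -0.13}
      kiev   = {"time": 2 * 3600, "lat": 50.45, "lon": 30.52}  # 2 hours later
      print(suspicious(london, kiev))   # True: ~2,100 km in 2 h is ~1,050 km/h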

    Will SaaS vendors explain this stuff?

  25. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Microsoft expands availability of its $3K HoloLens to Germany, France, UK, Ireland, Australia, and New Zealand; preorders are live now, units ship next month

    Microsoft starts selling its HoloLens in Germany, France, UK, Ireland, Australia and New Zealand
    https://techcrunch.com/2016/10/12/hololens-goes-global/

    HoloLens, Microsoft’s $3,000 mixed-reality goggles (or “the world’s first self-contained holographic computer” in Microsoft’s parlance), had so far been available only in the U.S. and Canada. Today, however, the company announced that it will also start selling the device in Australia, France, Germany, Ireland, New Zealand and the United Kingdom. Preorders start today and the devices will ship in late November.

    We hear that Microsoft’s yield for producing HoloLenses is higher than it expected, so the company is able to bring the device to new regions faster than planned. What’s gating an even wider rollout, though, is that Microsoft still needs to get certifications from the international equivalents of the U.S. FCC as it enters new markets. It’s worth noting that even though it’s officially only rolling out in a few European countries, the single European market pretty much means anybody in Europe will be able to get a HoloLens now.

  26. Tomi Engdahl says:

    Neural Net Computing Explodes
    http://semiengineering.com/neural-net-computing-explodes/

    Deep-pocket companies begin customizing this approach for specific applications—and spend huge amounts of money to acquire startups.

    Neural networking with advanced parallel processing is beginning to take root in a number of markets ranging from predicting earthquakes and hurricanes to parsing MRI image datasets in order to identify and classify tumors.

    As this approach gets implemented in more places, it is being customized and parsed in ways that many experts never envisioned. And it is driving new research into how else these kinds of compute architectures can be applied.

    Fjodor van Veen, deep learning researcher at The Asimov Institute in the Netherlands, has identified 27 distinct neural net architecture types.
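
    For readers who have never looked inside one, here is a minimal sketch of the simplest member of that family, a fully connected feed-forward network; the forward pass is nothing more than matrix multiplies and nonlinearities, which is exactly the kind of work this parallel hardware accelerates:

      # Forward pass of a tiny fully connected network in plain NumPy.
      import numpy as np

      rng = np.random.default_rng(0)

      W1 = rng.standard_normal((8, 16)) * 0.1   # input -> hidden weights
      b1 = np.zeros(16)
      W2 = rng.standard_normal((16, 2)) * 0.1   # hidden -> output weights
      b2 = np.zeros(2)

      def forward(x):
          h = np.maximum(0, x @ W1 + b1)        # ReLU hidden layer
          logits = h @ W2 + b2
          e = np.exp(logits - logits.max())     # softmax over two classes
          return e / e.sum()

      print(forward(rng.standard_normal(8)))    # two class probabilities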

  27. Tomi Engdahl says:

    384 cores, 1,536 threads

    The MIPS processor architecture, now owned by Imagination Technologies, is not dead, even if it is unlikely to mount a serious challenge to ARM’s chips.

    The newest MIPS core, the I6500, packs real power and is heterogeneous in two ways: both inside a cluster and between clusters.

    At the top end, hardware-level virtualization can present as many as 384 CPU cores; this capability was introduced to MIPS cores in the previous generation, the I6400. With each core executing four threads in parallel, that adds up to 1,536 concurrent hardware threads (384 × 4).

    Imagination’s new core will be used in Mobileye’s upcoming EyeQ5 processor, which is being developed for sensor fusion in autonomous robot cars. That chip is expected to enter the market in 2020.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=5217:384-ydinta-1536-saietta&catid=13&Itemid=101

  28. Tomi Engdahl says:

    PlayStation VR Estimated to Sell More Than 2 Million Units by End of 2016
    http://www.ign.com/articles/2016/10/12/playstation-vr-estimated-to-sell-more-than-2-million-units-by-the-end-of-2016

    Market research company SuperData Research (via Game Informer) has predicted that the PlayStation VR unit will sell over 2 million units by the end of the year.

    More specifically, the company projects PSVR will sell 2,602,370 units. The headset comes out October 13.

    To get this number, SuperData considered facts like these: Sony currently has an installed base of 44 million PlayStation 4 owners, and according to the company, PlayStation VR will benefit from this.

    On the flip side, though, SuperData does point out that in the past Sony has stumbled with manufacturing appropriate amounts of new products early in their life cycles. This may be to the detriment of PlayStation VR. It also notes the peripheral doesn’t have a confirmed “killer app” to use as a showpiece game.

    Lastly, with the PlayStation 4 Pro coming out this holiday, SuperData suggests consumers may decide to spend their money on a more powerful console that costs $100 less than the $500 peripheral.

  29. Tomi Engdahl says:

    HP cuts up to 4,000 jobs worldwide
    http://www.bbc.com/news/business-37651831

    US computer company HP Inc has said it expects to cut 3,000 to 4,000 jobs over the next three years.

    The hardware business of the former Hewlett-Packard announced the plans as part of a larger restructuring effort.

    It is hoped the cuts will generate some $200m (£163m) to $300m in annual savings for the firm, but they are expected to cost up to $500m in charges.

    HP also issued a lower-than-expected earnings guidance for next year.

    The job cuts come as sales of personal computers around the world continue to decline.

    Earlier this week, research company Gartner said PC shipments declined 5.7% in the third quarter of 2016 compared with a year earlier.

  30. Tomi Engdahl says:

    Christine Wang / CNBC:
    HP Inc. to cut between 3K and 4K jobs by 2019, starting in 2017; company expects savings of $200-$300M beginning FY 2020, with $350-$500M in restructuring costs — HP Inc. said on Thursday that it expects 3,000 to 4,000 employees to exit between fiscal 2017 and fiscal 2019.

    HP Inc to cut 3,000 to 4,000 jobs over next three years
    http://www.cnbc.com/2016/10/13/hp-inc-to-cut-3000-to-4000-jobs-over-next-three-years-dj.html

  31. Tomi Engdahl says:

    All-flash storage: Tech’s ready, is it safe to move yet?
    Time for suppliers to raise their game
    http://www.theregister.co.uk/2016/10/14/allflash_storage_uncertainty_and_doubt/

    All-flash arrays are arguably coming of age, but in an early market, with lots of vendors jostling for position and making all kinds of promises, you need to be careful when evaluating options. While most of the historical challenges have been largely neutralised or at least made significantly smaller, there are still some uncertainties that need to be taken care of, so it’s important to seek the right kind of guarantees.
    So why the buzz around all-flash arrays?

    The use of flash storage in the data centre has been evolving continuously over the last decade. Manufacturers began by incorporating a flash-based cache into traditional arrays to speed up data access. As prices crept down from the “eye-wateringly extortionate” to simply the “extremely expensive”, vendors started to become more adventurous. This led to the emergence of full hybrid multi-tiered systems which elevated the role of flash to persistent storage, taking the form of a high performance tier of solid state disk sitting in the same box as a pool of high-capacity HDDs.

    Sanity check on the need

    When it comes to drivers, the appeal of all-flash solutions still largely revolves around the needs of high performance applications, with VDI being acknowledged as a specific workload driving demand for some

    Potential brakes on progress

    Some of the caution we are picking up is undoubtedly down to many not totally accepting that all of the historical concerns have been dealt with. The lingering perception of a high cost per capacity stands out prominently at the top of the list. Having said this, there is a clear acknowledgement by a sizeable proportion of our sample base that this depends on the vendor and solution, which by implication suggests that some suppliers at least have been moving in the right direction on this matter.

    Most of the other concerns we see listed relate to uncertainties about the readiness of flash to deal with classic enterprise needs for robustness, durability, and predictability in areas such as performance and capacity. Against this backdrop, the results shown suggest that some suppliers are clearly guilty of – let’s just say – extreme optimism when it comes to making claims and promises. Again, however, the evidence is that supplier behaviour varies, so the lesson is to beware of snake oil sales reps.

  32. Tomi Engdahl says:

    Google TensorFlow AI bots drafted into Ocado call centre service
    Web retailer builds a better email opener
    http://www.theregister.co.uk/2016/10/14/ocado_ai_call_center_rollout/

    Like so much in Britain, you can credit the weather for this. Ocado has rolled out AI using Google’s open-source TensorFlow to improve service at its customer call centre.

    The online food retailer built and installed a system based on machine learning in six months, using Python and C++ with TensorFlow, running on Kubernetes on Google Compute Engine.

    The cloud-based AI will do the heavy lifting on inbound customer emails: opening and scanning 2,000 messages on an ordinary day, doubling at busy times like Christmas, scanning for key words and context before prioritising and forwarding them.

    Ocado employs 250 call centre staff in different shifts and – as you’d expect – they open inbound emails.

    Customer service is key for web firms like Ocado, which depends greatly on word-of-mouth to promote its brand, and rapid response to customers is a key part of that.

    Ocado isn’t new to the idea of machine learning – the retailer’s 1,000-strong technology unit has been working on data science and intelligent APIs, employed in recommendations, instant orders and to calculate optimal driver routes for years.

    “It’s the first time we tried machine learning in customer service with natural language,” the unit’s head of technology Dan Nelson told The Reg.

    This involved one of the project’s biggest challenges: cleansing data. Emails fed into the system had to be cleansed so the neural network could learn syntax and words.

    Cleansing of data pre-dates AI and machine learning; it has bedevilled projects running back decades, all the way from today’s big data world of data lakes back through data warehouses, business analytics and humble OLAP.
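
    Ocado hasn’t published its model, but a minimal sketch of the general approach – score an email’s text for urgency with a small TensorFlow/Keras network – might look like the following; the labels, examples and layer sizes here are invented for illustration:

      # Toy email-priority classifier in TensorFlow/Keras (illustrative only;
      # a real system trains on large, carefully cleansed email corpora).
      import tensorflow as tf

      emails = ["my delivery never arrived", "please cancel my order",
                "thanks for the great service", "the driver was very helpful"]
      labels = [1.0, 1.0, 0.0, 0.0]   # 1 = urgent, 0 = routine

      vectorize = tf.keras.layers.TextVectorization(max_tokens=1000,
                                                    output_mode="tf_idf")
      vectorize.adapt(emails)

      model = tf.keras.Sequential([
          vectorize,
          tf.keras.layers.Dense(16, activation="relu"),
          tf.keras.layers.Dense(1, activation="sigmoid"),  # P(urgent)
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy")
      model.fit(tf.constant(emails), tf.constant(labels), epochs=20, verbose=0)

      print(model.predict(tf.constant(["where is my shopping?"]))[0, 0])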

  33. Tomi Engdahl says:

    Khari Johnson / VentureBeat:
    The White House open sources President Obama’s Facebook Messenger bot
    http://venturebeat.com/2016/10/14/the-white-house-open-sources-president-obamas-facebook-messenger-bot/

    The White House today shared open source code for President Obama’s Facebook Messenger bot to help other governments build their own bots.

    The White House says it’s sharing the code “with the hope that other governments and developers can build similar services — and foster similar connections with their citizens — with significantly less upfront investment,” according to a post published today by chief digital officer for the White House Jason Goldman.

    In August, the White House launched a Facebook Messenger bot to receive messages from American citizens. The messages are read alongside letters and other communique sent to the president.

    The open source Drupal module for the president’s bot is available to download on GitHub.

    “While Drupal may not be the platform others would immediately consider for building a bot, this new White House module will allow non-developers to create bot interactions (with customized language and workflows), and empower other governments and agencies who already use Drupal to power their digital experiences,” Goldman said on the White House website today.

    https://github.com/WhiteHouse/fb_messenger_bot

  34. Tomi Engdahl says:

    Patrick Moorhead / Forbes:
    Google, IBM, HPE, and others, create new OpenCAPI standard to boost data center server performance; products expected to be released by 2H 2017

    Tech Giants Create New OpenCAPI Standard For The Hottest Server-Accelerated Workloads
    http://www.forbes.com/sites/patrickmoorhead/2016/10/14/tech-giants-create-new-opencapi-standard-for-the-hottest-server-accelerator-workloads/#55381c3a6eeb

    While there has been a lot of debate on what “open” means, what everyone in technology can agree on is that open standards are one of the key drivers of industry growth and prosperity.

    Today, a bevy of tech industry giants announced a new server standard called OpenCAPI, with support from Advanced Micro Devices, Dell EMC, Google, Hewlett Packard Enterprise, IBM, Mellanox Technologies, Micron Technology, NVIDIA and Xilinx. This is big, really big.

    This announcement comes on the heels of the recently announced datacenter open standards CCIX and Gen-Z, just showing how much is at stake and in motion in the datacenter.

    OpenCAPI, a server accelerator standard for the most important workloads

    OpenCAPI is a new standard to enable very high performance accelerators like FPGAs, graphics, network and storage accelerators that perform functions the datacenter server’s general purpose CPU isn’t optimized for. Acceleration is what all the cool kids are doing. The last thing Google CEO Sundar Pichai talked about at Google I/O was the TPU and the last thing Microsoft CEO Satya Nadella talked about at Ignite was FPGAs, all accelerators. Apple said in a recent article that they have the “biggest and baddest GPU farm cranking all the time”. Intel recently bought Altera for $16.7B and Nervana Systems for an estimated $350M.

    Accelerators are required to meet the new computing demands for artificial intelligence, machine learning, big data, analytics, security and high performance computing.

    The word “CAPI” in OpenCAPI should sound familiar, as CAPI (Coherent Accelerator Processor Interface) is an accelerator standard invented by IBM and used across the OpenPOWER Consortium today.

    While CAPI was governed by IBM and metered across the OpenPOWER Consortium, OpenCAPI is completely open, governed by the OpenCAPI Consortium led by the companies listed above.

  35. Tomi Engdahl says:

    IBM Weaves New 25G Link
    OpenCAPI attracts eight partners
    http://www.eetimes.com/document.asp?doc_id=1330624&

    IBM provided more detail on a new 25 Gbit/second interconnect that will link its Power 9 processor to accelerators and next-generation memories. OpenCAPI is a new physical-layer and protocol serving the same functions as 25G interfaces announced earlier this week by the CCIX and Gen-Z groups.

    IBM claims OpenCAPI will provide wider bandwidth and lower latency than the alternatives and have a road map that carries it to even higher performance. However, so far OpenCAPI has attracted eight partners compared to about 20 each for CCIX and Gen-Z.

    OpenCAPI targets a raw bandwidth of 150-160 GBytes/s, about five times that of PCI Express Gen 4, which is the basis of CCIX. The cache-coherent interface should be able to implement load/store memory operations with a round-trip latency of about 100 nanoseconds.
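
    As a back-of-the-envelope check (my arithmetic, with an assumed lane count – the article doesn’t give one), 25 Gbit/s serial lanes only reach that figure in aggregate:

      # How many 25 Gbit/s lanes does a 150 GB/s aggregate target imply?
      LANE_GBITS = 25          # raw signalling rate per lane, Gbit/s
      TARGET_GBYTES = 150      # quoted OpenCAPI aggregate target, GB/s

      lanes = TARGET_GBYTES * 8 / LANE_GBITS
      print(f"{lanes:.0f} lanes")   # -> 48 lanes, e.g. six 8-lane links,
      # before line-encoding and protocol overhead push the count higher.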

    Backers are AMD, Dell EMC, Google, Hewlett Packard Enterprise, Mellanox, Micron, Nvidia and Xilinx. The consortium needs ARM SoC partners such as Cavium or Qualcomm and more memory vendors if it is to develop an ecosystem that will rival CCIX and Gen-Z.

    Like CCIX and Gen-Z, OpenCAPI aims for use across ARM, Power and x86 processors and any of a variety of emerging storage-class memories. However, so far the group lacks a major ARM SoC maker, and AMD has not made any commitments to use the link on x86 chips.

    All three of the new interconnects share a common motivation. They aim to create open standards at a time when concerns are rising that Intel will use proprietary interconnects to bundle its Xeon processors, which dominate the server market, with its Altera FPGAs and 3D XPoint memories.

    OpenCAPI aims to stay inside a server, linking chips on a motherboard. By contrast Gen-Z, based on Ethernet, will be used both inside and between systems.

    Given its Power 9 timeline, IBM clearly was the first mover in the new generation of 25G chip-to-chip links.

    AMD suggested in a press statement it will use OpenCAPI to link its Radeon GPUs as accelerators to Power processors. Its rival Nvidia will ride on the same set of pins but use different data, transaction and protocol layers for a link to Power9 called NVLink 2.0.

    “We created a new PHY layer and a new protocol…We needed a fresh design to get to the extreme low latencies and bandwidth we wanted,” said Brad McCredie, an IBM Fellow and vice president of Power development.

    OpenCAPI will use a similar API and software constructs as the original CAPI, reducing software rework. For its part, IBM expects it will bridge from OpenCAPI to CCIX or Gen-Z devices as needed.

  36. Tomi Engdahl says:

    Eugene Kim / Business Insider:
    Intel reports record quarterly revenue of $15.8B, up 9% YoY; Client Computing Group had revenue of $8.9B, up 5% YoY; Q4 guidance is weak at $15.7B

    Intel slips after guidance miss
    http://www.businessinsider.com/intel-earnings-q3-2016-2016-10?op=1%3fr=US&IR=T&IR=T

    Intel just reported its third quarter earnings after the bell on Tuesday.

    It’s a beat on earnings and revenue, but a miss on fourth quarter guidance. Investors aren’t too impressed, and Intel’s stock is down ~3.5% in after-hours trading.

    Intel reported record-high quarterly revenue, but gave fourth quarter revenue guidance of $15.7 billion, below analyst estimates of $15.86 billion.

    Intel’s Client Computing Group, which includes its PC and mobile business, had revenue of $8.9 billion, up 5% year-over-year, while its data center business saw revenue of $4.5 billion, up 10% from a year-ago period.

    That 10% growth in the data center is a big jump from last quarter’s 5% year-over-year growth, but still a bit disappointing given that the company had forecast “double-digit” growth for the full year.

  37. Tomi Engdahl says:

    100Gbit/s Mangstor array blows interconnect cobwebs right away
    Mangstor’s MX-Series drives enter the vSphere to soup up VM performance
    http://www.theregister.co.uk/2016/10/19/100gbits_mangstor_array/

    Mangstor has launched a faster NVMe over fabrics storage array – the NX6325 – based on an HPE ProLiant DL380 2U server platform.

    It builds on the existing NX6320 by adding optimised support for 100Gbit/s network speeds. Customers requiring 40Gbit/s Ethernet or 56Gbit/s InfiniBand can use either the NX6320 or NX6325 storage array platforms, as features and performance are equivalent.

    The NX6325 provides:

    Mellanox ConnectX-4 NICs
    Multiple InfiniBand and Ethernet ports with interconnect speeds of up to 100Gbit/s
    Up to 3 million IOPS
    Sequential read bandwidth of 12GB/sec
    Sequential write bandwidth of 9GB/sec
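
    Those headline figures are mutually consistent if you assume the customary 4 KB transfer size for IOPS ratings (my assumption, not Mangstor’s):

      # Sanity check: 3 million IOPS at 4 KB per I/O equals the quoted
      # sequential read bandwidth of roughly 12 GB/s.
      iops = 3_000_000
      block_bytes = 4096                       # assumed 4 KB I/O size
      print(iops * block_bytes / 1e9, "GB/s")  # -> 12.288 GB/s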

  38. Tomi Engdahl says:

    Think virtual reality is just about games? Think again, friend
    Why we may all be calling movies ‘flatties’ in a few years
    http://www.theregister.co.uk/2016/10/18/virtual_reality_is_not_just_about_games/

    With the launch of PlayStation’s VR headset, we are clearly entering a brave new world of virtual reality – everything from the low-end Google Daydream to the far-too-expensive Oculus Rift.

    But while interest has been focused on the gaming possibilities, an undercurrent of filmmakers has started exploring the storytelling possibilities that VR brings.

    Most famous is Iron Man director Jon Favreau, who has created a “preview” story called Gnomes & Goblins. But at the Austin Film Festival this weekend, a number of other filmmakers took the stage to tell an excited crowd about their experiments with the form, and what they had learned so far.

    First up: Deepak Chetty, a director, cinematographer and VR nerd who has won awards for his 3D short films and has been paid by the Washington Post, among others, to explore what VR can mean for real-world stories.

    Chetty drew a distinction right away between virtual reality – through which games are experienced – and “immersive” content where your perspective is fixed but the content is “non-framed.”

    And that expression – “non-framed” – is perhaps the easiest way of understanding the distinction between the world of cinema as it is now and the world of VR storytelling that people like Chetty hope to create in future.

    Cinema – the movies we watch today – is “framed.” The director decides what fits within a given rectangular space in front of you. With VR, that space is all around you. And it requires a completely different approach.

    “I like to call them ‘flatties’,” said Emily Best, the CEO of Seed&Spark, a crowdfunding site for independent filmmakers. “It also helps you think about VR as a completely different medium.”

  39. Tomi Engdahl says:

    Netlist, Inc.’s HybriDIMM Storage Class Memory
    http://www.linuxjournal.com/content/netlist-incs-hybridimm-storage-class-memory

    Netlist, Inc. has announced its new HybriDIMM Storage Class Memory (SCM), which the company describes as the industry’s first standards-based, plug-and-play SCM solution.

    Based on an industry-standard DDR4 LRDIMM interface, Netlist calls HybriDIMM the first SCM product to operate in current Intel x86 servers without BIOS and hardware changes, as well as the first unified DRAM-NAND solution that scales memory to terabyte storage capacities and accelerates storage to nanosecond memory speeds.

    Netlist adds that HybriDIMM’s breakthrough architecture combines an on-DIMM co-processor with Netlist’s PreSight technology—predictive software-defined data management—to unify memory and storage at near-DRAM speeds.

    HybriDIMM™
    Storage at Memory Speeds, Memory at Storage Capacities.
    http://www.netlist.com/products/Storage-Class-Memory/HybriDIMM/default.aspx

    Using an industry standard DDR4 LRDIMM interface, HybriDIMM is the first SCM product to operate in current Intel® x86 servers without BIOS and hardware changes, and the first unified DRAM-NAND solution that scales memory to terabyte storage capacities and accelerates storage to nanosecond memory speeds.
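
    To see why “storage at memory speeds” is a big claim, compare typical access latencies across the hierarchy (illustrative order-of-magnitude figures, not Netlist’s measurements):

      # Order-of-magnitude access latencies for common storage tiers.
      latency_ns = {
          "DRAM":       100,         # ~100 ns
          "NVMe flash": 100_000,     # ~100 microseconds
          "SATA SSD":   500_000,     # ~0.5 ms
          "Hard disk":  10_000_000,  # ~10 ms
      }
      dram = latency_ns["DRAM"]
      for tier, ns in latency_ns.items():
          print(f"{tier:10s} ~{ns:>11,} ns  ({ns // dram:>7,}x DRAM)")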

  40. Tomi Engdahl says:

    Secure Desktops with Qubes: Compartmentalization
    http://www.linuxjournal.com/content/secure-desktops-qubes-compartmentalization

    The first concept to understand with Qubes is that it groups VMs into different categories based on their use. Here are the main categories of VMs I refer to in the rest of the article:

    Disposable VM: these also are referred to as dispVMs and are designed for one-time use. All data in them is erased when the application is closed.

    Domain VM: these also often are referred to as appVMs. They are the VMs where most applications are run and where users spend most of their time.

    Service VM: service VMs are split into subcategories of netVMs and proxyVMs. These VMs typically run in the background and provide your appVMs with services (usually network access).

    Template VM: other VMs get their root filesystem template from a Template VM, and once you shut the appVM off, any changes you may have made to that root filesystem are erased (only changes in /rw, /usr/local and /home persist). Generally, Template VMs are left powered off unless you are installing or updating software.

  41. Tomi Engdahl says:

    Red Hat eye from the Ubuntu guy: Fedora – how you doin’?
    More than a mere RHEL release testbed
    http://www.theregister.co.uk/2016/10/19/fedora_facelift/

    Red Hat is the biggest – and one of the oldest – companies in the Linux world, but despite the difficulty of accurately measuring Linux usage figures, Ubuntu and its relatives seem to be the most popular Linux distributions. Red Hat isn’t sitting idle, though. Despite its focus on enterprise software, including virtualisation, storage and Java tools, it’s still aggressively developing its family of distros: RHEL, CentOS and Fedora.

    Fedora is the freebie community-supported version, with a short six-month release cycle, but it’s still important.

    It’s getting better as a distro, too, benefitting from the improving fit-and-finish of Linux and its manifold supporting components: desktops, applications and their less-obvious underpinnings. Fedora 24 is significantly more usable than it was five or six releases ago.

    Fedora’s Labs are a more versatile equivalent to Ubuntu’s handful of special-purpose editions. Labs are pre-assembled bundles of functionally related software, which can be installed as standalone distros or added into existing installations.

    As of version 24, standard Fedora now supports the Raspberry Pi. Fedora is an all-FOSS distro, with no proprietary drivers, firmware or plugins, so it doesn’t support the Pi 3’s Wi-Fi and Bluetooth or 3D acceleration, as these require binary blobs.

    There are still downsides to Fedora relative to Ubuntu. There are no long-term support releases, as that’s the role of the technologically much more conservative CentOS. Ubuntu’s more pragmatic attitude to including proprietary binaries means more hardware works out of the box, and installing the “restricted extras” package enables Flash, MP3 and so on in one easy operation. But the Red Hat family has come a long way.

  42. Tomi Engdahl says:

    Microsoft’s Continuum: Game changer or novelty?
    We have a look at the latest cut
    http://www.theregister.co.uk/2016/08/19/continuum_game_changer_or_novelty/

    Microsoft’s Continuum is one of the spookiest computing experiences you can have. Either plug a phone into a dock, or turn on a nearby wireless display and keyboard, and the phone doubles up as an ersatz Windows PC. No more lugging a laptop around.

    Back in January, we described Continuum reviewers as sharing the surprise and disdain that Samuel Johnson had for women preachers. So has much changed? Is Continuum still a limited use novelty or a transformative feature, allowing you to shed multiple devices for one very powerful, pocketable one?

    HP is betting on the latter. HP has put a lot of thought and work into making Continuum usable, and so finding a profitable business niche. It’s about to debut not just a powerful Continuum-capable phone, but static and mobile docks and a streaming app service that plugs the gaps left by the lack of native x86 support.

    The Wrap

    Continuum is clearly a work in progress, and I reckon it has leaped over the first hurdle. If the goal is to allow users to carry only one device, then I’d like to see much more focus on the UX – starting with multiple window support.

    To recap, HP has a beast of a phone – the Elite x3 – with two docks. One is a static desk stand, the other a kind of “hollowed out laptop” – the “Lap Dock”. This looks like a 12.5 inch laptop, and has an HD display and battery. The x3 phone is the “CPU and motherboard”.

    So, Continuum. A dog walking on its hind legs? An expensive way to turn your phone into a slow netbook? There’s a grain of truth to this criticism, but the potential is there, for sure.

  43. Tomi Engdahl says:

    Reactive? Serverless? Put to bed? What’s next for Java. Speak up, Oracle
    Less is more, from EE to SE
    http://www.theregister.co.uk/2016/08/10/future_of_java_ee/

    The future of Java Enterprise Edition is on many developers’ minds. After the community came to the conclusion that the platform’s progress has come to a standstill, a plethora of initiatives has arisen with the goal of encouraging Oracle to pick up the work on Java EE 8 again.

    It’s time to take inventory.

  44. Tomi Engdahl says:

    Why Your Devices Are Probably Eroding Your Productivity
    https://science.slashdot.org/story/16/10/18/2227259/why-your-devices-are-probably-eroding-your-productivity

    University of California, San Francisco neuroscientist Adam Gazzaley and California State University, Dominguez Hills professor emeritus Larry Rosen explain in their book “The Distracted Mind: Ancient Brains in a High Tech World” why people have trouble multitasking, and specifically why one’s productivity output is lowered when keeping up with emails, for example.

    All That Multitasking is Harming, Not Helping Your Productivity. Here’s Why.
    https://ww2.kqed.org/futureofyou/2016/10/17/your-devices-are-probably-lowering-your-productivity-heres-why/

    I’ll admit it. I even take my phone with me to fire off a few texts when I go to the restroom. Or I’ll scroll through my email when I leave the office for lunch. My eyes are often glued to my phone from the moment I wake up, but I often reach the end of my days wondering what I’ve accomplished.

    My productivity mystery was solved after reading “The Distracted Mind: Ancient Brains in a High Tech World,” by University of California, San Francisco neuroscientist Dr. Adam Gazzaley and California State University, Dominguez Hills professor emeritus Larry Rosen. The book explains why the brain can’t multitask, and why my near-obsessive efforts to keep up on emails is likely lowering my productive output.

    “The prefrontal cortex is the area most challenged,” Gazzaley says. “And then visual areas, auditory areas, and the hippocampus — these networks are really what’s challenged when we are constantly switching between multiple tasks that our technological world might throw at us.”

    When you engage in one task at a time, the prefrontal cortex works in harmony with other parts of the brain, but when you toss in another task it forces the left and right sides of the brain to work independently. The process of splitting our attention usually leads to mistakes.

    If you’re working on a project and you stop to answer an email, research shows, it will take you nearly a half-hour to get back on task.
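
    The arithmetic behind that claim is sobering. As a simple illustration (my numbers, not the authors’):

      # If each interruption costs ~25 minutes of refocusing time, a
      # handful of email checks eats a large slice of the working day.
      RECOVERY_MIN = 25
      for interruptions in (4, 8, 12):
          lost_hours = interruptions * RECOVERY_MIN / 60
          print(f"{interruptions:2d} interruptions -> ~{lost_hours:.1f} h of lost focus")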

    We think the mind can juggle two or three activities successfully at once, but Gazzaley says we woefully overestimate our ability to multitask.

    “An example is when you attempt to check your email while on a conference call,” says Gazzaley. “The act of doing that makes it so incredibly obvious how you can’t really parallel process two attention-demanding tasks.”

    Answering an Email Takes A Lot Longer Than You Think

    In other words, repetitively switching tasks lowers performance and productivity because your brain can only fully and efficiently focus on one thing at a time.

    Research has found that high-tech jugglers struggle to pay attention, recall information, or complete one task at a time.

    “When they’re in situations where there are multiple sources of information coming from the external world or emerging out of memory, they’re not able to filter out what’s not relevant to their current goal,” says Stanford neuroscientist Anthony Wagner. “That failure to filter means they’re slowed down by that irrelevant information.”

    But don’t worry. Gazzaley says. It’s not about opting out of technology. In fact, there’s a time and place for multitasking. If you’re in the midst of a mundane task that just has to get done, it’s probably not detrimental to have your phone nearby or a bunch of tabs open. The distractions may reduce boredom and help you stay engaged. But if you’re finishing a business plan, or a high-level writing project, then it’s a good idea to set yourself up to stay focused.

  45. Tomi Engdahl says:

    Michael Cooney / Network World:
    Microsoft AI research group says its speech recognition tech has reached parity with human-level proficiency, with an error rate of lower than 6% — Microsoft: This marks the first time that human parity has been reported for conversational speech — Microsoft researchers say they have created …

    Microsoft speech recognition technology now understands a conversation as well as a person
    http://www.networkworld.com/article/3132384/microsoft-subnet/microsoft-speech-recognition-technology-now-understands-a-conversation-as-well-as-a-person.html

    Microsoft: This marks the first time that human parity has been reported for conversational speech

    Microsoft researchers say they have created a speech recognition system that understands human conversation as well as the average person does.

    In a paper published this week the Microsoft Artificial Intelligence and Research group said its speech recognition system had attained “human parity” and made fewer errors than a human professional transcriptionist.

    “The error rate of professional transcriptionists is 5.9% for the Switchboard portion of the data, in which newly acquainted pairs of people discuss an assigned topic, and 11.3% for the CallHome portion where friends and family members have open-ended conversations. In both cases, our automated system establishes a new state-of-the-art, and edges past the human benchmark. This marks the first time that human parity has been reported for conversational speech,” the researchers wrote in their paper. Switchboard is a standard set of conversational speech and text used in speech recognition tests.
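
    Those percentages are word error rates (WER), the standard speech recognition metric: the minimum number of word substitutions, insertions and deletions needed to turn the system’s transcript into the reference, divided by the number of reference words. A minimal sketch:

      # Word error rate: edit distance between hypothesis and reference
      # word sequences, divided by the reference length.
      def wer(reference, hypothesis):
          r, h = reference.split(), hypothesis.split()
          # dp[i][j] = edits to turn first i reference words into
          # first j hypothesis words
          dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
          for i in range(len(r) + 1):
              dp[i][0] = i
          for j in range(len(h) + 1):
              dp[0][j] = j
          for i in range(1, len(r) + 1):
              for j in range(1, len(h) + 1):
                  cost = 0 if r[i - 1] == h[j - 1] else 1
                  dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                                 dp[i][j - 1] + 1,         # insertion
                                 dp[i - 1][j - 1] + cost)  # substitution
          return dp[-1][-1] / len(r)

      print(wer("switchboard is a standard test set",
                "switchboard is the standard test set"))   # 1/6 ~ 0.167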

  46. Tomi Engdahl says:

    Bloomberg:
    New report says top five US tech firms spent $49M on Washington lobbyists last year, as the five largest banks spent $19.7M

    Silicon Valley Cozies Up to Washington, Outspending Wall Street 2-1
    http://www.bloomberg.com/news/articles/2016-10-18/outspending-wall-street-2-to-1-silicon-valley-takes-washington

    Big tech is outspending banks, alumni get government jobs
    Wishlist from trade to antitrust poses challenge to regulators

    Over the Obama administration’s eight years, the technology industry has embedded itself in Washington. The president hung out with Facebook Inc.’s Mark Zuckerberg and hired the government’s first chief tech officer. At least at the lower levels of officialdom, the revolving door with companies such as Google is spinning ever faster — as it once did with Wall Street.

    Politicians have played down their connections to finance since the taxpayer bailout of 2008. No such stigma attaches to tech, for now. But as the Valley steps up its lobbying efforts, with a wish-list that ranges from immigration to rules for driverless cars, some critics warn that similar traps lie in wait: It’s not easy for the government to police an industry from which it poaches talent and solicits help with writing laws.

  47. Tomi Engdahl says:

    Make Any PC A Thousand Dollar Gaming Rig With Cloud Gaming
    http://hackaday.com/2016/10/19/make-any-pc-a-thousand-dollar-gaming-rig-with-cloud-gaming/

    The best gaming platform is a cloud server with a $4,000 graphics card you can rent when you need it.

    [Larry] has done this sort of thing before with Amazon’s EC2, but recently Microsoft has been offering beta access to some of NVIDIA’s Tesla M60 graphics cards. As long as you have a fairly beefy connection that can support 30 Mbps of streaming data, you can play just about any imaginable game at 60fps on the ultimate settings.

    Cloudy Gamer: Playing Overwatch on Azure’s new monster GPU instances
    http://lg.io/2016/10/12/cloudy-gamer-playing-overwatch-on-azures-new-monster-gpu-instances.html

    It’s no secret that I love the concept of not just streaming AAA game titles from the cloud, but playing them live from any computer – especially on the underpowered laptops I usually use for work. I’ve done it before using Amazon’s EC2 (and written a full article for how to do it), but this time, things are a little different. Microsoft’s Azure is first to give access to NVIDIA’s new M60 GPUs, completely new beasts that really set a whole new bar for framerate and image quality. They’re based on the newer Maxwell architecture, versus the Kepler cards we’ve used in the past. Hopefully one day we’ll get the fancy new Pascal cards :)

    Basically it’ll come down to this: we’re going to launch an Azure GPU instance, configure it for ultra-low latency streaming, and actually properly play Overwatch, a first-person shooter, from a server over a thousand miles away!

    And yes, it seems I always need to repeat myself when writing these articles: the latency is just fine, the resolution is amazing, it can be very cost-effective (as long as you don’t forget and leave the machine running), and it is all very practical for those of you obsessed with minimalism (like me).

    Costs

    Note that this is NV6 beta pricing – it may change when the instances become generally available. I’ll try to update the article then. Either way, remember, there’s $0 upfront cost here. This contrasts dramatically with the thousands of dollars you’d end up paying for a similarly specced gaming rig.

    NV6 Server: $0.73/hr
    Bandwidth at 10MBit/s: $0.41/hr
    HD storage: $0.003/hr

    Total at 10MBit/s: $1.14/hr
    Total at 30Mbit/s: $1.96/hr (recommended, though)
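
    Those line items compose straightforwardly; here is the same sum as a quick sketch (rates copied from the list above, rounding mine):

      # Hourly cost of the cloud gaming rig at different stream bitrates.
      SERVER = 0.73          # NV6 instance, $/hr (beta pricing)
      STORAGE = 0.003        # HD storage, $/hr
      BW_PER_10MBIT = 0.41   # egress cost per 10 Mbit/s sustained, $/hr

      def hourly_cost(mbit_per_s):
          return SERVER + STORAGE + BW_PER_10MBIT * mbit_per_s / 10

      for rate in (10, 30):
          print(f"{rate} Mbit/s: ${hourly_cost(rate):.2f}/hr")
      # 10 Mbit/s -> $1.14/hr, 30 Mbit/s -> $1.96/hr, matching the article.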

    Azure GPU machines are still in Preview

  48. Tomi Engdahl says:

    Nasdaq Selects Drupal 8
    http://www.linuxjournal.com/content/nasdaq-selects-drupal-8

    Dries Buytaert announced today that Nasdaq Corporate Solutions has selected Drupal 8 and will work with Acquia to create its Investor Relations Website Platform. In the words of Angela Byron, a.k.a “Webchick”, “This is a big freakin’ deal.”

    This move means that it won’t just be Nasdaq relying on Drupal 8’s security and scalability: Nasdaq-listed companies like Google, Facebook and Apple will have the opportunity to use the new Drupal 8 Nasdaq Investor Relations platform to power their investor sites next year.

  49. Tomi Engdahl says:

    Kyle Orland / Ars Technica:
    Nintendo unveils Switch, a console/tablet hybrid with detachable controllers that docks to connect to an HDTV, coming in March — In a three minute “Preview Trailer” released this morning (and teased last night) …

    Nintendo’s next console, Switch, is a console/tablet hybrid coming in March
    Tablet system docks to connect to HDTV, comes with detachable controllers.
    http://arstechnica.com/gaming/2016/10/nintendos-next-console-switch-is-a-consoletablet-hybrid/

  50. Tomi Engdahl says:

    2016 has been a garbage fire. But 2017′s looking up – there’ll be loads of IPOs, beams Intel
    Head of chip giant’s VC arm bullish about exits
    http://www.theregister.co.uk/2016/10/24/lots_of_tech_ipos_in_2017_says_intel/

    Pretty much everyone can agree that 2016 has been awful all round, but hey, here’s something we can look forward to come January 1: 2017 is going to be the year of new tech IPOs, according to the CEO of Intel’s venture capital arm.

    Giving the keynote at the Intel Capital Global Summit in San Diego, Wendell Brooks was bullish about what the new year will bring, arguing that the “backlog is building” of tech companies that want to access public markets.

    He put the sluggish 2016 market – which saw fewer tech IPOs than at any time in the past decade – down to two factors: a shake-out of the privately valued “unicorns” and fears brought about by the US election cycle.

    Where we’re going

    As for the next few years of tech and tech investment, Brooks highlighted four main areas: drones; automated driving; “new experiences” – largely VR; and “verticals.”

    During the presentation, a number of demonstrations highlighted new virtual reality technologies – and in particular live-streamed VR, often tied into sports. There was a lot of technical work to be done to build up the VR ecosystem, Brooks noted, and it was still in its infancy but the possibilities were enormous.

    Both high-end and low-end systems were demoed. The high-end Voke camera system captures everything at extremely high quality and could see use at big sports events, whereas the lower-end Altia Systems camera setup offers panoramic 4K video with real-time stitching of images for as little as $2,000.

    Brooks also spoke enthusiastically about the “drone economy,”

