Computer technology trends for 2016

It seems that the PC market is stabilizing in 2016; I expect it to shrink only slightly. While mobile devices have been named as the culprit for the fall in PC shipments, IDC says that other factors may be in play. It is still pretty hard to make a decent profit building PC hardware unless you are one of the biggest players, so Lenovo, HP, and Dell will keep increasing their collective dominance of the PC market as they did in 2015. I expect changes like spin-offs and maybe some mergers among smaller players like Fujitsu, Toshiba and Sony. The EMEA server market looks to be a two-horse race between Hewlett Packard Enterprise and Dell, according to Gartner. HPE, Dell and Cisco “all benefited” from Lenovo’s acquisition of IBM’s EMEA x86 server organisation.

The tablet market is no longer a high-growth market – tablet sales have started to decline, and the decline continues in 2016 as owners hold onto their existing devices for more than three years. iPad sales are set to continue declining, and the iPad Air 3 to be released in the first half of 2016 does not change that. IDC predicts that the detachable tablet market is set for growth in 2016 as more people turn to hybrid devices. Two-in-one tablets have been popularized by offerings like the Microsoft Surface, with options ranging dramatically in price and specs. I am not convinced that the growth will be as strong as IDC forecasts, even though companies have started to purchase tablets for workers in jobs such as retail sales or field work (Apple iPads, Windows and Android tablets managed by the company). Combined shipments of PCs, tablets and smartphones are expected to increase only in the single digits.

All your consumer tech gear should be cheaper come July, as there will be fewer import tariffs on IT products: a World Trade Organization (WTO) deal agrees that tariffs on imports of consumer electronics will be phased out over seven years starting in July 2016. The agreement affects around 10 percent of world trade in information and communications technology products and will eliminate around $50 billion in tariffs annually.


In 2015 storage was rocked to its foundations, and those innovations will be taken into wider use in 2016. The storage market in 2015 went through strategic, foundation-shaking turmoil as the external shared disk array playbook was torn to shreds: the all-flash data centre idea has definitely taken off as an achievable vision, with primary data stored in flash and the rest held in cheap and deep storage. Flash drives largely solve the disk drive latency problem, so there is less need for hybrid drives. There is conviction that storage should be located as close to servers as possible (virtual SANs, hyper-converged industry appliances and NVMe fabrics). The hybrid cloud concept was adopted or supported by everybody. Flash started out in 2-bits/cell MLC form, which rapidly became standard, and TLC (3-bits/cell, triple-level cell) started appearing. Industry-standard NVMe drivers for PCIe flash cards appeared. Intel and Micron blew non-volatile memory preconceptions out of the water in the second half of the year with their joint 3D XPoint memory announcement. Boring old disk tech got shingled magnetic recording (SMR) and helium-filled drive technology; the drive industry is focused on capacity-optimizing its drives. We got key:value store disk drives with an Ethernet NIC on board, and basic GET and PUT object storage facilities came into being. The tape industry developed a 15TB LTO-7 format.

The use of SSDs will increase and their price will drop. SSDs were in more than 25% of new laptops sold in 2015, are expected to be in 31% of new consumer laptops in 2016, and in more than 40% by 2017. The prices of mainstream consumer SSDs have fallen dramatically every year over the past three years while HDD prices have not changed much. SSD prices will decline to 24 cents per gigabyte in 2016. In 2017 they are expected to drop to 11-17 cents per gigabyte (meaning a 1TB SSD would on average retail for $170 or less).
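A quick back-of-the-envelope check of those per-gigabyte forecasts (a trivial sketch; the capacities and cent-per-gigabyte figures are just the ones quoted above):

    # Rough retail price estimates from the per-gigabyte forecasts above.
    def drive_price_usd(capacity_gb, cents_per_gb):
        """Approximate drive price in dollars for a given capacity and rate."""
        return capacity_gb * cents_per_gb / 100.0

    print(drive_price_usd(1000, 24))   # 1TB SSD at 24 c/GB (2016 forecast): ~$240
    print(drive_price_usd(1000, 17))   # 1TB SSD at 17 c/GB (2017, high end): ~$170
    print(drive_price_usd(1000, 11))   # 1TB SSD at 11 c/GB (2017, low end): ~$110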

Hard disk sales will decrease, but the technology is not dead. Sales of hard disk drives have been decreasing for several years now (118 million units in the third quarter of 2015), but according to Seagate hard disk drives (HDDs) are set to stay relevant for at least 15 to 20 years. HDDs remain the most popular data storage technology as they are the cheapest in terms of per-gigabyte cost. While SSDs are generally getting more affordable, high-capacity solid-state drives are not going to become as inexpensive as hard drives any time soon.

Because all-flash storage systems with homogeneous flash media are still too expensive to serve as a solution for every enterprise application workload, enterprises will increasingly turn to performance-optimized storage solutions that use a combination of media types to deliver cost-effective performance. The speed advantage of Fibre Channel over Ethernet has evaporated. Enterprises are also starting to seek alternatives to snapshots that are simpler and easier to manage, and that allow data and application recovery to a point just before a data error or logical corruption occurred.

Local storage and the cloud finally make peace in 2016, as decision-makers across the industry have now acknowledged the potential for enterprise storage and the cloud to work in tandem. Over 40 percent of data worldwide is expected to live on or move through the cloud by 2020, according to IDC.


Open standards for data center development are now a reality thanks to advances in cloud technology, with Facebook’s Open Compute Project serving as the industry’s leader in this regard. This allows more consolidation for those that want it. Consolidation used to mean companies moving all of their infrastructure to the same facility, but some experts have begun to question this strategy, as the rapid increase in data quantities and applications has made centralized facilities more difficult to operate than ever before. Server virtualization, more powerful servers and an increasing number of enterprise applications will continue to drive higher I/O requirements in the datacenter.

Cloud consolidation starts in earnest in 2016: the number of options for general infrastructure-as-a-service (IaaS) cloud services and cloud management software will be much smaller at the end of 2016 than at the beginning. The major public cloud providers will gain strength, with Amazon, IBM SoftLayer, and Microsoft capturing a greater share of the business cloud services market. Lock-in is a real concern for cloud users: PaaS players have the age-old imperative to find ways to tie customers to their platforms, and they are not afraid to use them, so advanced users want to establish reliable portability across PaaS products in a multi-vendor, multi-cloud environment.

Year 2016 will be harder for legacy IT providers than 2015. In its report, IDC states that “By 2020, More than 30 percent of the IT Vendors Will Not Exist as We Know Them Today.” Many enterprises are turning away from traditional vendors and toward cloud providers. They are increasingly leveraging open source. In short, they are becoming software companies. The best companies will build cultures of performance and doing the right thing — and will make data, and the processes around it, self-service for all their employees. Design thinking will guide companies that want to change the lives of their customers and employees. 2016 will see a lot more work in trying to manage services that simply are not designed to work together, or even to be managed – for example, getting Whatever-as-a-Service cloud systems to play nicely with existing legacy systems. Competent developers are therefore a scarce commodity. Some companies are starting to see cloud as a form of outsourcing that is fast burning up in-house IT ops jobs, with varying success.

There are still too many old-fashioned companies that just can’t understand what digitalization will mean for their business. In 2016, some companies’ boards still think the web is just for brochures and porn and don’t believe their business models can be disrupted. It gets worse for many traditional companies: Amazon, for example, is a retailer both on the web and, increasingly, for things like food deliveries. Amazon and others are playing to win. Digital disruption has happened and will continue.

More of Windows 10 is coming in 2016. If 2015 was a year of revolution, 2016 promises to be a year of consolidation for Microsoft’s operating system. I expect Windows 10 adoption in companies to start in 2016. Windows 10 is likely to be a success for the enterprise, but I expect that word from heavyweights like Gartner, Forrester and Spiceworks, suggesting that half of enterprise users plan to switch to Windows 10 in 2016, is more than a bit optimistic. Windows 10 will also be used in China, as Microsoft played the game better with it than with Windows 8, which was banned in China.

Windows is now delivered “as a service”, meaning incremental updates with new features as well as security patches, but Microsoft still seems to work internally to a schedule of milestone releases. Next up is Redstone, rumoured to arrive around the anniversary of Windows 10, midway through 2016. Windows servers will also get an update: 2016 should include the release of Windows Server 2016, which brings updates to the Hyper-V virtualisation platform, support for Docker-style containers, and a new cut-down edition called Nano Server.

Windows 10 will get some of the features promised but not delivered in 2015. Windows 10 was promised for PCs and mobile devices in 2015 to deliver a unified user experience. Continuum is a new, adaptive user experience in Windows 10 that optimizes the look and behavior of apps and the Windows shell for the physical form factor and the customer’s usage preferences. The promise was the same unified interface for PCs, tablets and smartphones – but in 2015 it was delivered only for PCs and some tablets. Mobile Windows 10 for smartphones is expected to finally arrive in 2016 – the release may be the last roll of the dice for Microsoft’s struggling mobile platform. Microsoft’s plan A is to get as many apps and as much activity as it can on Windows on all form factors with the Universal Windows Platform (UWP), which enables the same Windows 10 code to run on phone and desktop. Despite a steady inflow of new well-known apps, it remains unclear whether the Universal Windows Platform can maintain momentum with developers. Can Microsoft keep the developer momentum going? I am not sure. In addition there are plans for tools for porting iOS apps and an Android runtime, so expect delivery of some or all of the Windows Bridges (iOS, web app, desktop app, Android) announced at the April 2015 Build conference, in the hope of getting more apps into the unified Windows 10 app store. Windows 10 does hold out some promise for Windows Phone, but it is not going to make an enormous difference. Losing the battle for the web and mobile computing is a brutal loss for Microsoft: when you consider the size of those two markets combined, the desktop market seems like a stagnant backwater.

Older Windows versions will not die in 2016 as fast as Microsoft and security people would like. Expect Windows 7 diehards to continue holding out in 2016 and beyond. And there are still many companies that run their critical systems on Windows XP, because “There are some people who don’t have an option to change.” Often the OS is running in automation and process control systems behind business- and mission-critical operations, in both the private sector and government enterprises. For example, the US Navy is still using the obsolete Windows XP to run critical tasks. It all comes down to money and resources, but if someone is obliged to keep something running on an obsolete system, it is completely the wrong approach to information security.


Virtual reality has grown immensely over the past few years, but 2016 looks like the most important year yet: it will be the first time that consumers can get their hands on a number of powerful headsets for viewing alternate realities in immersive 3D. Virtual reality will move toward the mainstream when Sony, Samsung and Oculus bring consumer products to market in 2016. The whole virtual reality hype could be rebooted as early builds of the final Oculus Rift hardware start shipping to developers. Maybe HTC‘s and Valve‘s Vive VR headset will suffer in the next few months. Expect a banner year for virtual reality.

GPU and FPGA acceleration will be widely used in high-performance computing. Both Intel and AMD have products with the CPU and GPU in the same chip, and there is software support for using the GPU (learn CUDA and/or OpenCL). Many mobile processors also have the CPU and GPU on the same chip. FPGAs are circuits that can be baked into a specific application, but can also be reprogrammed later. There was a lot of interest in 2015 in using FPGAs to accelerate computations as the next step after GPUs, and I expect that interest to grow even more in 2016. FPGAs are not quite as efficient as a dedicated ASIC, but they are about as close as you can get without translating the actual source code directly into a circuit. Intel bought Altera (a big FPGA company) in 2015 and plans to begin selling products with a Xeon chip and an Altera FPGA in a single package, possibly available in early 2016.
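To give a flavour of what GPU offload looks like in practice, here is a minimal OpenCL vector-addition sketch using the PyOpenCL bindings. It assumes PyOpenCL and a working OpenCL driver are installed; it illustrates the programming model only, not any vendor-specific product mentioned above:

    # Minimal data-parallel vector addition on an OpenCL device (GPU if available).
    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1000000).astype(np.float32)
    b = np.random.rand(1000000).astype(np.float32)

    ctx = cl.create_some_context()       # pick an OpenCL platform/device
    queue = cl.CommandQueue(ctx)
    mf = cl.mem_flags

    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # Each work-item adds one pair of elements; the device runs them in parallel.
    program = cl.Program(ctx, """
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    program.add(queue, a.shape, None, a_buf, b_buf, out_buf)
    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    assert np.allclose(result, a + b)

The same kernel structure maps almost one-to-one to CUDA on NVIDIA hardware.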

Artificial intelligence, machine learning and deep learning will be talked about a lot in 2016. Neural networks, which were academic exercises (but little more) for decades, are increasingly becoming mainstream success stories: heavy (and growing) investment in the technology, which enables the identification of objects in still and video images, words in audio streams, and the like after an initial training phase, comes from the formidable likes of Amazon, Baidu, Facebook, Google, Microsoft, and others. So-called “deep learning” has been enabled by the combination of the evolution of traditional neural network techniques, the steadily increasing processing “muscle” of CPUs (aided by algorithm acceleration via FPGAs, GPUs, and, more recently, dedicated co-processors), and the steadily decreasing cost of system memory and storage. There were many interesting releases at the end of 2015: Facebook released portions of its Torch software in February, while Alphabet’s Google division open-sourced parts of its TensorFlow system later in the year. IBM also turned up the heat under the AI competition by making SystemML freely available to share and modify through the Apache Software Foundation. So I expect that 2016 will be the year these are tried in practice, and that deep learning will be hot at CES 2016. Several respected scientists issued a letter warning about the dangers of artificial intelligence (AI) in 2015, but I don’t worry about a rogue AI exterminating mankind; I worry about an inadequate AI being given control over things that it’s not ready for. How will machine learning affect your business? MIT has a good free intro to AI and ML.
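The core idea behind the neural networks mentioned above (layers of weighted sums squashed by nonlinearities and trained by gradient descent) fits in a few lines of NumPy. A toy sketch of my own for illustration, learning XOR; this is not code from any of the frameworks named:

    # Toy two-layer neural network trained on XOR with plain NumPy.
    import numpy as np

    rng = np.random.RandomState(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.randn(2, 8), np.zeros((1, 8))   # input -> hidden layer
    W2, b2 = rng.randn(8, 1), np.zeros((1, 1))   # hidden -> output layer

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: gradient of squared error through the sigmoids
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Plain gradient-descent updates
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

    print(out.round(2))   # should approach [[0], [1], [1], [0]]

Frameworks like Torch and TensorFlow automate exactly this gradient bookkeeping and run it at scale on GPUs.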

Computers, which excel at big data analysis, can help doctors deliver more personalized care. Can machines outperform doctors? Not yet. But in some areas of medicine, they can make the care doctors deliver better. Humans repeatedly fail where computers — or humans behaving a little bit more like computers — can help. Computers excel at searching and combining vastly more data than a human so algorithms can be put to good use in certain areas of medicine. There are also things that can slow down development in 2016: To many patients, the very idea of receiving a medical diagnosis or treatment from a machine is probably off-putting.

The Internet of Things (IoT) was talked about a lot in 2015, and it will be a hot topic for IT departments in 2016 as well. Many companies will notice that security issues are important in it. The newest wearable technology, smart watches and other smart devices respond to voice commands and interpret the data we produce: they learn from their users and generate appropriate responses in real time. Interest in the Internet of Things will also bring interest in real-time business systems: not only real-time analytics, but real-time everything. This will start in earnest in 2016, but the trend will take years to play out.

Connectivity and networking will be hot, and it is not just about IoT. CES will focus on how connectivity is proliferating everything from cars to homes, realigning diverse markets. The interest will affect the job market: network jobs are hot and salaries are expected to rise in 2016, as wireless network engineers, network admins, and network security pros can expect above-average pay gains.

Linux will stay big in the network server market in 2016. The web server marketplace is one arena where Linux has had the greatest impact: today, the majority of web servers are Linux boxes, including most of the world’s busiest sites. Linux also runs many parts of the Internet infrastructure that moves the bits from server to user. Linux will continue to rule the smartphone market as the core of Android. New IoT solutions will most likely be built using Linux in many parts of the system.

Microsoft and Linux are not such enemies as they were a few years ago. Common sense says that Microsoft and the FOSS movement should be perpetual enemies, but it looks like Microsoft is waking up to the fact that Linux is here to stay. Microsoft cannot feasibly wipe it out, so it has to embrace it. Microsoft is already partnering with Linux companies to bring popular distros to its Azure platform. In fact, Microsoft has even gone so far as to create its own Linux distro for its Azure data center.


Web browsers are increasingly going 64-bit: Firefox started the 64-bit era on Windows and Google is killing Chrome for 32-bit Linux. At the same time web browsers are losing old legacy features like NPAPI and Silverlight. Who will miss them? The venerable NPAPI plugin standard, which dates back to the days of Netscape, is showing its age and causing more problems than it solves, and native support will be removed from Firefox by the end of 2016. It was already removed from Google Chrome with very little impact. The biggest issue was the lack of support for Microsoft’s Silverlight, which brought down several top streaming media sites – but they are actively switching to HTML5 in 2016. I don’t miss Silverlight. Flash will continue to be available owing to its popularity for web video.

SHA-1 will be at least partially retired in 2016. Due to recent research showing that SHA-1 is weaker than previously believed, Mozilla, Microsoft and now Google are all considering bringing the deadline for dropping support for SHA-1 certificates forward by six months, to July 1, 2016.
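For a sense of what is being compared, Python’s standard hashlib module computes a SHA-1 digest and its common replacement SHA-256 in one line each (a small illustrative sketch):

    # SHA-1 versus SHA-256 digests of the same message (hashlib ships with Python).
    import hashlib

    message = b"certificate to be signed"
    print(hashlib.sha1(message).hexdigest())     # 160-bit digest, being phased out
    print(hashlib.sha256(message).hexdigest())   # 256-bit digest, the usual replacement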

Adobe’s Flash has been under attack from many quarters over security as well as for slowing down web pages. If you wish that Flash would finally die in 2016, you might be disappointed. Adobe seems to be trying to kill the name with a rebranding trick: Adobe Flash Professional CC is now Adobe Animate CC. In practice it probably does not mean much, but Adobe seems to acknowledge the inevitability of an HTML5 world: Adobe wants to remain a leader in interactive tools, and the pivot to HTML5 requires new messaging.

The trend of trying to use the same language and tools on both the user end and the server back-end continues. Microsoft is pushing its .NET and Azure cloud platform tools. Amazon, Google and IBM have their own sets of tools. Java is in decline. JavaScript is going strong on both the web browser and the server end with Node.js, React and many other JavaScript libraries. Apple is also trying to bend its Swift programming language, so far used mainly to make iOS applications, to run on servers with the Perfect project.

Java will still stick around, but Java’s decline as a language will accelerate, as new stuff isn’t being written in Java even if it runs on the JVM. We will not see Java 9 in 2016, as Oracle has delayed the release by six months. The Register reports that Java 9 is delayed until Thursday, March 23rd, 2017, just after tea-time.

Containers will rule the world as Docker continues to develop, gain security features, and add various forms of governance. Until now Docker has been tire-kicking, used in production only by the early-adopter crowd, but that can change as vendors start to claim that they can properly manage big data and container farms.
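The low barrier to entry is part of why Docker spread so fast: starting a throwaway container takes one command, which can just as easily be driven from a script. A minimal sketch, assuming Docker is installed and its daemon is running (the alpine image is pulled automatically on first use):

    # Run a throwaway Alpine Linux container and capture its output via the Docker CLI.
    import subprocess

    output = subprocess.check_output(
        ["docker", "run", "--rm", "alpine", "echo", "hello from a container"])
    print(output.decode().strip())   # -> hello from a container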

NoSQL databases will take hold as they are marketed as “highly scalable” or “cloud-ready.” Expect 2016 to be the year when a lot of big brick-and-mortar companies publicly adopt NoSQL for critical operations. At its simplest, NoSQL can be seen as a key:value store, and this idea has also expanded to storage systems: we got key:value store disk drives with an Ethernet NIC on board, and basic GET and PUT object storage facilities came into being.
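At its core, the key:value model behind both these databases and the object-style drives reduces to two operations. A toy in-memory sketch (purely illustrative, not any vendor’s actual API):

    # Toy in-memory key:value store exposing the GET/PUT model described above.
    class KeyValueStore:
        def __init__(self):
            self._data = {}

        def put(self, key, value):
            """Store a value under a key, overwriting any previous value."""
            self._data[key] = value

        def get(self, key, default=None):
            """Return the value stored under a key, or a default if absent."""
            return self._data.get(key, default)

    store = KeyValueStore()
    store.put("user:42", {"name": "Alice", "plan": "pro"})
    print(store.get("user:42"))          # {'name': 'Alice', 'plan': 'pro'}
    print(store.get("user:99", "none"))  # none

Real NoSQL systems add persistence, replication and sharding on top of this same simple interface.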

In the database world Big Data will still be big, but it needs to be analyzed in real time. A typical big data project usually involves some semi-structured data, a bit of unstructured data (such as email), and a whole lot of structured data (stuff stored in an RDBMS). While the cost of Hadoop on a per-node basis is pretty inconsequential, the cost of understanding all of the schemas, getting them into Hadoop, and structuring them well enough to perform the analytics is still considerable. Remember that you’re not “moving” to Hadoop, you’re adding a downstream repository, so you need to worry about systems integration and latency issues. Apache Spark will also attract interest, as Spark’s multi-stage in-memory primitives provide more performance for certain applications. Big data brings with it responsibility – digital consumer confidence must be earned.
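As a flavour of why Spark’s in-memory primitives appeal for this kind of analytics, the classic word-count job fits in a few lines of PySpark (a sketch assuming a Spark installation; the input path and app name are placeholders):

    # Classic word count with PySpark's RDD API (paths are placeholders).
    from pyspark import SparkContext

    sc = SparkContext(appName="wordcount-sketch")
    lines = sc.textFile("hdfs:///data/sample.txt")       # hypothetical input file
    counts = (lines.flatMap(lambda line: line.split())   # split lines into words
                   .map(lambda word: (word, 1))          # pair each word with a count of 1
                   .reduceByKey(lambda a, b: a + b))     # sum counts per word
    for word, count in counts.take(10):
        print(word, count)
    sc.stop()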

IT security continues to be a huge issue in 2016. You might be able to achieve adequate security against hackers and internal threats, but every attempt to make systems idiot-proof just means the idiots get upgraded. Firms are ever more connected to each other and to the general outside world, so in 2016 we will see even more service firms accidentally leaking critical information and a lot more firms having their reputations scorched by incompetence-fuelled security screw-ups. Good security people are needed more and more – a joke doing the rounds among IT execs doing interviews is “if you’re a decent security bod, why do you need to look for a job?”

There will still be unexpected single points of failure in big distributed networked systems. The cloud behind the silver lining is that Amazon or any other cloud vendor can be as fault tolerant, distributed and well supported as you like, but if a service like Akamai or Cloudflare were to die, you still stop. That’s not a single point of failure in the classical sense, but it’s really hard to manage unless you go for full cloud agnosticism – which is costly. This is hard to justify when their failure rate is so low, so the irony is that the reliability of the content delivery networks means fewer businesses work out what to do if they fail. Oh, and no one seems to test their mission-critical data centre properly, because it’s mission critical. So they just over-specify where they can and cross their fingers (= pay twice and get half the coverage for other vulnerabilities).

For IT start-ups it seems that Silicon Valley’s cash party is coming to an end. Silicon Valley is cooling, not crashing. Valuations are falling, the era of cheap money could be over, and valuation expectations are re-calibrating downward. The cheap capital party is over, and that could mean trouble for weaker startups.

 

933 Comments

  1. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Microsoft releases Visual Studio for Mac, which is a rebranded version of Xamarin Studio, and Visual Studio 2017 for Windows in preview — Microsoft is releasing a first Visual Studio for Mac preview, as well as a near-final Release Candidate of Visual Studio 2017 for Windows.

    Microsoft delivers test builds of Visual Studio for Mac, Visual Studio 2017 for Windows
    http://www.zdnet.com/article/microsoft-delivers-test-builds-of-visual-studio-for-mac-visual-studio-2017-for-windows/

  2. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Google announces new AI group, to be headed by former head of Stanford’s Artificial Intelligence Lab Fei-Fei Li and former head of research at Snapchat, Jia Li

    Google Cloud is launching GPU-backed VM instances early in 2017
    http://venturebeat.com/2016/11/15/google-cloud-is-launching-gpu-backed-vm-instances-early-in-2017/

    Competitors Amazon Web Services (AWS), IBM SoftLayer, and Microsoft Azure have launched GPU-backed instances in the past. Google is looking to stand out by virtue of its per-minute billing, rather than per-hour, and its variety of GPUs available: the Nvidia Tesla P100 and Tesla K80 and the AMD FirePro S9300 x2.

    This cloud infrastructure can be used for a type of artificial intelligence (AI) called deep learning. It’s in addition to Google’s custom-made tensor processing units (TPUs), which will be powering Google’s Cloud Vision application programming interface (API). The joint availability of GPUs and TPUs should send a signal that Google doesn’t see TPUs as being a one-to-one alternative to GPUs.

    Also today Google announced the formation of a new Cloud Machine Learning group. Google cloud chief Diane Greene named the two leaders of the group: Jia Li, the former head of research at Snapchat, and Fei-Fei Li, the former head of Stanford’s Artificial Intelligence Lab and also the person behind the ImageNet image recognition data set and competition. As Greene pointed out, both of the leaders are women, and also respected figures in the artificial intelligence field.

  3. Tomi Engdahl says:

    3D XPoint – Reality, Opportunity, and Competition
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1330829&

    3D XPoint debuted with big claims in 2015. However, there have been many wild guesses and speculations because details have not been shared in the public domain.

    Early this year, details about 3D XPoint came out in an EE Times interview with Guy Blalock, co-CEO of IM Flash. 3D XPoint is the well-known phase change memory and switch (PCMS).

    According to Micron, 3D XPoint has many technical and operational challenges, such as 100 new materials raising supply chain issues, cutting fab throughput by 15%, a 3x-5x increase in capital expenses compared to planar NAND, and heavy dependence on lithography tools. Therefore, 3D XPoint becomes expensive. The 2nd generation 3D XPoint with 4-layer stacking is expected to be about 5 times more expensive than planar NAND, so it seems difficult for it to become an affordable storage device. Instead, 3D NAND will serve the storage market.

    According to Intel/Micron, 3D XPoint is aiming at the high-end SSD and DDR4 NVDIMM markets. Still, 3D XPoint-based SSDs will serve a niche market because of the cost issue. The sweet spot for 3D XPoint should be the DDR4 NVDIMM because of its low read latency (~100ns).

  4. Tomi Engdahl says:

    Rick Merritt / EE Times:
    Intel details its vision for AI chips, mostly based on Nervana technology, but the strategy remains incomplete until the Movidius deal closes — AI strategy incomplete without Movidius — SAN FRANCISCO – Intel rolled out its intentions for a soup-to-nuts offering in artificial intelligence …

    Intel’s Nervana Attacks GPUs
    AI strategy incomplete without Movidius
    http://www.eetimes.com/document.asp?doc_id=1330854

    Intel rolled out its intentions for a soup-to-nuts offering in artificial intelligence, but at least one of the key dishes is not yet cooked.

    The PC giant will serve up the full range of planned products it acquired from Nervana Systems. They will take on mainly high-end jobs especially in training neural networks, an area now dominated by Nvidia’s graphics processors.

    Intel’s acquisition of Movidius has not yet closed, leaving a wide opening in computer vision and edge networks. Separately, the company announced several AI software products, services and partnerships.

  5. Tomi Engdahl says:

    Not a Bad Quarter To Be a GPU Vendor
    https://slashdot.org/story/16/11/18/2051213/not-a-bad-quarter-to-be-a-gpu-vendor

    Compared to Q2 2016, total GPU shipments including discrete and integral chips in the mobile and desktop markets increased by 20%; good but not enough to recover from the volume we saw in Q3 2015. Individually, total AMD sales increased by 15%, and Intel saw 18% boost, but it was NVIDIA that was the most successful with an impressive 39% increase.

    Not a bad quarter to be a GPU vendor, though some fared better than others
    https://www.pcper.com/news/General-Tech/Not-bad-quarter-be-GPU-vendor-though-some-fared-better-others

  6. Tomi Engdahl says:

    Julie Bort / Business Insider:
    Programmers share stories about being asked to write code for illegal or unethical purposes, and ponder the need for self-regulation

    Programmers are having a huge discussion about the unethical and illegal things they’ve been asked to do
    http://nordic.businessinsider.com/programmers-confess-unethical-illegal-tasks-asked-of-them-2016-11?op=1&r=US&IR=T

    Earlier this week, a post written by programmer and teacher Bill Sourour went viral. It’s called “Code I’m Still Ashamed Of.”

    In it he recounts a horrible story of being a young programmer who landed a job building a website for a pharmaceutical company. The whole post is worth a read, but the upshot is he was duped into helping the company skirt drug advertising laws in order to persuade young women to take a particular drug.

    He later found out the drug was known to worsen depression and at least one young woman committed suicide while taking it. He found out his sister was taking the drug and warned her off it.

    Decades later, he still feels guilty about it, he told Business Insider.

    Software developers ‘kill people’

    Martin argues in that talk that software developers better figure out how to self-regulate themselves and fast.

    “Let’s decide what it means to be a programmer,” Martin says in the video. “Civilization depends on us. Civilization doesn’t understand this yet.”

    His point is that in today’s world, everything we do like buying things, making a phone call, driving cars, flying in planes, involves software. And dozens of people have already been killed by faulty software in cars, while hundreds of people have been killed from faulty software during air travel.

    “We are killing people,” Martin says. “We did not get into this business to kill people. And this is only getting worse.”

    Martin finished with a fire-and-brimstone call to action in which he warned that one day, some software developer will do something that will cause a disaster that kills tens of thousands of people.

    Programmers confess

    Sourour’s “ashamed” post went viral on Hacker News and Reddit and it unleashed a long list of confessions from programmers about the unethical and, sometimes, illegal things they’ve been asked to do.

    Bootcamps without ethics

    A common theme among these stories was that if the developer says no to such requests, the company will just find someone else do it. That may be true for now, but it’s still a cop-out, Martin points out.

    “We rule the world,” he said. “We don’t know it yet. Other people believe they rule the world but they write down the rules and they hand them to us. And then we write the rules that go into the machines that execute everything that happens.”

    The code I’m still ashamed of
    https://medium.freecodecamp.com/the-code-im-still-ashamed-of-e4c021dff55e#.udx5c5ba1

  7. Tomi Engdahl says:

    US Sets Plan To Build Two Exascale Supercomputers
    https://hardware.slashdot.org/story/16/11/21/2225233/us-sets-plan-to-build-two-exascale-supercomputers

    The U.S. believes it will be ready to seek vendor proposals to build two exascale supercomputers — costing roughly $200 to $300 million each — by 2019. The two systems will be built at the same time and be ready for use by 2023

    U.S. sets plan to build two exascale supercomputers
    http://www.computerworld.com/article/3143551/high-performance-computing/us-sets-plan-to-build-two-exascale-supercomputers.html

    Both systems, using different architectures, will be developed simultaneously in 2019 — if the Trump administration goes along with the plan

  8. Tomi Engdahl says:

    AI is all trendy and fun – but it’s still a long way from true intelligence, Facebook boffins admit
    To be fair, the same goes for many humans, too
    http://www.theregister.co.uk/2016/11/22/facebooks_ai_paper_machines_cant_reason_yet/

    Researchers at Facebook have attempted to build a machine capable of reasoning from text – but their latest paper shows true machine intelligence still has a long way to go.

    The idea that one day AI will dominate Earth and bring humans to their knees as it becomes super-intelligent is a genuine concern right now. Not only is it a popular topic in sci-fi TV shows such as HBO’s Westworld and UK Channel 4’s Humans – it features heavily in academic research too.

    Research centers such as the University of Oxford’s Future of Humanity Institute and the recently opened Leverhulme Centre for the Future of Intelligence in Cambridge are dedicated to studying the long-term risks of developing AI.

    The key to potential risks about AI mostly stem from its intelligence. The paper, which is currently under review for 2017′s International Conference on Learning Representations, defines intelligence as the ability to predict.

    “An intelligent agent must be able to predict unobserved facts about their environment from limited percepts (visual, auditory, textual, or otherwise), combined with their knowledge of the past”

    Although EntNet shows machines are far from developing automated reasoning, and can’t take over the world yet, it is a pretty nifty way of introducing memory into a neural network.
    Machines are still pretty dumb

    https://openreview.net/forum?id=rJTKKKqeg

  9. Tomi Engdahl says:

    Intel’s Nervana Attacks GPUs
    AI strategy incomplete without Movidius
    http://www.eetimes.com/document.asp?doc_id=1330854

    Intel rolled out its intentions for a soup-to-nuts offering in artificial intelligence, but at least one of the key dishes is not yet cooked.

    The PC giant will serve up the full range of planned products it acquired from Nervana Systems. They will take on mainly high-end jobs especially in training neural networks, an area now dominated by Nvidia’s graphics processors.

    Intel’s acquisition of Movidius has not yet closed, leaving a wide opening in computer vision and edge networks. Separately, the company announced several AI software products, services and partnerships.

    Movidius’ chief executive made a brief appearance in a break-out session at an Intel AI event here, but could not say when the acquisition will close or what hurdles lay ahead. “We look forward to joining the family,” he said, after sketching out his plans for low-power inference chips for cars, drones, security cameras and other products.

    “AI will transform most industries we know today, so we want to be the trusted leader and developer of it,” said Intel chief executive Brian Krzanich in a keynote launching the half-day event.

  10. Tomi Engdahl says:

    Constructing The Pillars Of The ARM HPC Ecosystem
    http://semiengineering.com/constructing-the-pillars-of-the-arm-hpc-ecosystem/

    Alternative HPC architectures will only happen if a strong supporting software ecosystem is in place.

    Efforts were already underway from ARM to build up our HPC software ecosystem and we immediately saw that OpenHPC aligned well with those efforts. In June, we were officially announced as a founding member of OpenHPC and less than six months later, I’m pleased to announce that ARMv8-A will be the first alternative architecture with OpenHPC support. The initial baseline release of OpenHPC for ARMv8-A will be available as part of the forthcoming OpenHPC v1.2 release at SC16. This is yet another milestone that levels the playing field for the ARM server ecosystem and will accelerate choice within the HPC community.

    Community building blocks for HPC systems
    http://www.openhpc.community/

    Welcome to the OpenHPC site. OpenHPC is a collaborative, community effort that initiated from a desire to aggregate a number of common ingredients required to deploy and manage High Performance Computing (HPC) Linux clusters including provisioning tools, resource management, I/O clients, development tools, and a variety of scientific libraries. Packages provided by OpenHPC have been pre-built with HPC integration in mind with a goal to provide re-usable building blocks for the HPC community. Over time, the community also plans to identify and develop abstraction interfaces between key components to further enhance modularity and interchangeability. The community includes representation from a variety of sources including software vendors, equipment manufacturers, research institutions, supercomputing sites, and others.

  11. Tomi Engdahl says:

    An Easier Path To Faster C With FPGAs
    http://semiengineering.com/an-easier-path-to-faster-c-with-fpgas/

    Using FPGAs to optimize high-performance computing, without specialized knowledge.

    For most scientists, what is inside a high-performance computing platform is a mystery. All they usually want to know is that a platform will run an advanced algorithm thrown at it. What happens when a subject matter expert creates a powerful model for an algorithm that in turn automatically generates C code that runs too slowly? FPGA experts have created an answer.

    A more promising approach for workload optimization using considerably less power is hardware acceleration using FPGAs. Much as in the early days of FPGAs where they found homes in reconfigurable compute engines for signal processing tasks, technology is coming full circle and the premise is again gaining favor. The challenge with FPGA technology in the HPC community has always been how the scientist with little to no hardware background translates their favorite algorithm into a reconfigurable platform.

    Most subject-matter experts today are first working out system-level algorithms in a modeling tool such as MATLAB. It’s a wonderful thing to be able to grab high-level block diagrams, state diagrams, and code fragments, and piece together a data flow architecture that runs like a charm in simulation. Using MATLAB Coder, C code can be generated directly from the model, and even brought back into the model as MEX functions to speed up simulation in many cases. The folks at the MathWorks have been diligently working to optimize auto-generated code for particular processors, such as leveraging Intel Integrated Performance Primitives.

    While some algorithms vectorize well, many simply don’t, and more processor cores may not help at all unless a careful multi-threading exercise is undertaken. Parallel GPU programming is also not for the faint of heart.

    Moving from the familiar territory of MATLAB models and C code to the unfamiliar regions of LUTs, RTL, AXI, and PCI Express surrounding an FPGA is a lot to ask of most scientists. Fortunately, other experts have been solving the tool stack issues surrounding Xilinx technology, facilitating a move from unaided C code to FPGA-accelerated C code.

    The Xilinx Virtex-7 FPGA offers an environment that addresses the challenges of integrating FPGA hardware with an HPC host platform.

    A typical acceleration flow partitions code into a host application running on the HPC platform, and a section of C code for acceleration in an FPGA. Partitioning is based on code profiling, identifying areas of code that deliver maximum benefit when dropped into executable FPGA hardware. The two platforms are connected via PCI Express, but a communication link is only part of the solution

    To keep the two platforms synchronized, AXI messages can be used from end-to-end. Over a PCI Express x8 interface, AXI throughput between a host and an acceleration board exceeds 2GB/sec. Since AXI is a common protocol used in most Virtex-7 intellectual property (IP) blocks, it forms a natural high-bandwidth interconnect between the host and the Virtex-7 device including the C-accelerated Compute Device block. A pair of Virtex-7 devices are also easily interconnected using AXI as shown.

    This is the same concept used in co-simulation, where event-driven simulation is divided between a software simulator and a hardware acceleration platform.

  12. Tomi Engdahl says:

    The Limits Of Parallelism
    http://semiengineering.com/the-limits-of-parallelism/

    Tools and methodologies have improved, but problems persist.

    Parallelism used to be the domain of supercomputers working on weather simulations or plutonium decay. It is now part of the architecture of most SoCs.

    But just how efficient, effective and widespread has parallelism really become? There is no simple answer to that question. Even for a dual-core implementation of a processor on a chip, results can vary greatly by software application, operating system, and use case. Tools have improved, and certain functions that can be done in parallel are better defined, but gaps remain in many areas with no simple way to close them.

    That said, there is at least a better understanding of what issues remain and how to solve them, even if that isn’t always possible or cost-effective.

    “To achieve parallelism it is necessary to represent at some level of granularity what needs to be done concurrently,”

    Concurrency and parallelism used to be almost synonymous terms when parallel architectures were first introduced.

    “If you look at how architectures have evolved, parallelism and concurrency have gone on to mean different things,”

  13. Tomi Engdahl says:

    Homogeneous And Heterogeneous Computing Collide
    http://semiengineering.com/homogeneous-and-heterogeneous-computing-collide/

    Part one in a series. Processing architectures continue to become more complex, but is the software industry getting left behind? Who will help them utilize the hardware being created?

    Eleven years ago processors stopped scaling due to diminishing returns and the breakdown of Dennard’s Law. That set in motion a chain of events from which the industry has still not fully recovered.

    The transition to homogeneous multi-core processing presented the software side with a problem that they did not know how to solve, namely how to optimize the usage of the compute capabilities made available to them. They continue to struggle with that problem even today. At the same time, many systems required the usage of processing cores with more specialized functionality. The mixing of these processing elements gave us heterogeneous computing, a problem that sidestepped many of the software problems.

  14. Tomi Engdahl says:

    2016. AI boffins picked a hell of a year to train a neural net by making it watch the news
    Lipreading software must think us humans are maniacs
    http://www.theregister.co.uk/2016/11/23/lipreading_neural_network_better_than_pros/

    LipNet, the lipreading network developed by researchers at the University of Oxford and DeepMind, can now lipread from TV shows better than professional lipreaders.

    The first LipNet paper, which is currently under review for International Conference on Learning Representations – ICLR 2017, a machine learning conference, was criticised for using a limited dataset to test LipNet’s accuracy. The GRID corpus is made up of sentences that have a strict word order and make no sense on their own.

  15. Tomi Engdahl says:

    HPE core servers and storage under pressure
    Public cloud and Huawei cited among culprits
    http://www.theregister.co.uk/2016/11/23/hpe_core_servers_and_storage_under_pressure/

    HPE’s latest results show a company emerging slimmer and fitter through diet (cost-cutting) and exercise (spin-merger deals) but facing tougher markets in servers and storage – the new normal, as CEO Meg Whitman says.

    A look at the numbers and the earnings call from the servers and storage points of view shows a company with work to do.

    The server business saw revenue of $3.5bn in the quarter, down 7 per cent year-on-year and up 5 per cent quarter-on-quarter. High-performance compute (Apollo) and SGI servers did well. Hyper-converged is growing and has more margin than the core ISS (Industry Standard Servers). Synergy and mission critical systems also did well.

    But the servers business was affected by strong pressure on the core ISS ProLiant racks, a little in the blade server business, and also low or no profitability selling Cloudline servers, the ones for cloud service providers and hyperscale customers.

    “Cloudline is a pretty big business for us. And when done correctly, we actually make money on Cloudline. But we just have to be sure every deal has to be looked at on a one-off basis, which is what’s the forward pricing going to look like? And I basically said to the team, listen, we do not want to be doing negative deals here for the most part. What’s the point in selling things at loss?”

    Although HPE’s CEO said hyper-converged was doing well, there is some way to go. Gartner ranks HPE as the leader in the hyper-converged and integrated systems magic quadrant, with EMC second and Nutanix third.

    In the all-flash array (AFA) business, HPE grew 3PAR AFA revenues 100 per cent year-on-year to a $750m annual run rate, which compares with NetApp at $1bn and Pure at $631m. Our sense is that Dell-EMC leads this market, followed by NetApp, then HPE, with Pure in fourth place.

    Comparing HPE to other AFA suppliers we see Dell EMC with five AFA products: XtremIO, DSSD, all-flash VMAX and Unity, and an all-flash Isilon product.

    Rakers thinks “HPE has $5bn+ in excess cash” and is wondering about “the company’s next move given a healthy excess net operating cash position”. Its merger and acquisitions strategy is a key focus for him.

  16. Tomi Engdahl says:

    Non-Linux FOSS: Scripts in Your Menu Bar!
    http://www.linuxjournal.com/content/non-linux-foss-scripts-your-menu-bar?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29#

    There are hundreds of applications for OS X that place information in the menu bar. Usually, I can find one that almost does what I want, but not quite. Thankfully I found BitBar, which is an open-source project that allows you to write scripts and have their output refreshed and put on the menu bar.

    You can download the binary or the source code here. There is also a huge library of user-contributed scripts so you don’t have to start from scratch.

    BitBar
    Put anything in your Mac OS X menu bar
    https://getbitbar.com/

  17. Tomi Engdahl says:

    Hal Hodson / New Scientist:
    Google’s DeepMind and University of Oxford create lipreading AI that annotated 46.8% of spoken words without error, compared to 12.4% by a professional

    Google’s DeepMind AI can lip-read TV shows better than a pro
    https://www.newscientist.com/article/2113299-googles-deepmind-ai-can-lip-read-tv-shows-better-than-a-pro/

    Artificial intelligence is getting its teeth into lip reading. A project by Google’s DeepMind and the University of Oxford applied deep learning to a huge data set of BBC programmes to create a lip-reading system that leaves professionals in the dust.

    The AI system was trained using some 5000 hours from six different TV programmes, including Newsnight, BBC Breakfast and Question Time. In total, the videos contained 118,000 sentences.

    By only looking at each speaker’s lips, the system accurately deciphered entire phrases, with examples including “We know there will be hundreds of journalists here as well” and “According to the latest figures from the Office of National Statistics”.

    The AI vastly outperformed a professional lip-reader who attempted to decipher 200 randomly selected clips from the data set.

    The professional annotated just 12.4 per cent of words without any error. But the AI annotated 46.8 per cent of all words in the March to September data set without any error.

    “We believe that machine lip readers have enormous practical potential, with applications in improved hearing aids, silent dictation in public spaces (Siri will never have to hear your voice again) and speech recognition in noisy environments,” says Assael.

  18. Tomi Engdahl says:

    Neural Network Keeps it Light
    http://hackaday.com/2016/11/24/neural-network-keeps-it-light/

    Neural networks ought to be very appealing to hackers. You can easily implement them in hardware or software and relatively simple networks can perform powerful functions. As the jobs we ask of neural networks get more complex, the networks require more artificial neurons. That’s why researchers are pursuing dense integrated neuron chips that could do for neural networks what integrated circuits did for conventional computers.

    Researchers at Princeton have announced the first photonic neural network. We recently talked about how artificial neurons work in conventional hardware and software. The artificial neurons look for inputs to reach a threshold which causes them to “fire” and trigger inputs to other neurons.

    Silicon Photonic Neural Network Unveiled
    Neural networks using light could lead to superfast computing.
    https://www.technologyreview.com/s/602938/silicon-photonic-neural-network-unveiled/

  19. Tomi Engdahl says:

    Microsoft Survey Reveals Some Users Started Drinking Because of Computer Errors
    Redmond conducts new survey to show how important it is to have a new computer that performs flawlessly
    Read more: http://news.softpedia.com/news/microsoft-survey-reveals-some-users-started-drinking-because-of-computer-errors-510463.shtml#ixzz4R1NnkxZ9

  20. Tomi Engdahl says:

    Ask Slashdot: Has Your Team Ever Succumbed To Hype Driven Development?
    https://ask.slashdot.org/story/16/11/27/2017235/ask-slashdot-has-your-team-ever-succumbed-to-hype-driven-development

    Someone reads a blog post, it’s trending on Twitter, and we just came back from a conference where there was a great talk about it. Soon after, the team starts using this new shiny technology (or software architecture design paradigm), but instead of going faster (as promised) and building a better product, they get into trouble. They slow down, get demotivated, have problems delivering the next working version to production.

    Describing behind-schedule teams that “just need a few more days to sort it all out,” he blames all the hype surrounding React.js, microservices, NoSQL, and that “Test-Driven Development Is Dead” blog post by Ruby on Rails creator David Heinemeier Hansson.

    Hype Driven Development
    https://blog.daftcode.pl/hype-driven-development-3469fc2e9b22#.syclh6uao

    Software development teams often make decisions about software architecture or technological stack based on inaccurate opinions, social media, and in general on what is considered to be “hot”, rather than solid research and any serious consideration of expected impact on their projects. I call this trend Hype Driven Development, perceive it harmful and advocate for a more professional approach I call “Solid Software Engineering”. Learn more about how it works and find out what you can do instead.

  21. Tomi Engdahl says:

    Technique developed to explain computer thinking
    http://www.controleng.com/single-article/technique-developed-to-explain-computer-thinking/61d782923ec25c819d733431767bc908.html

    MIT and CSAIL researchers have developed a training technique designed to train neural networks so they provide not only predictions and classifications but rationales for their decisions.

    In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.

    Neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque.

    At the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new way to train neural networks so that they provide not only predictions and classifications but rationales for their decisions.

    “In real-world applications, sometimes people really want to know why the model makes the predictions it does,”

  22. Tomi Engdahl says:

    Bad news: SSD drive prices are rising

    Flash chip manufacturers are currently unable to meet demand. As a result, prices are rising.

    Currently, about 30 percent of new computers use flash-based solid state drives. This year and next, the share of SSDs will grow to more than 50 per cent, leaving the old mechanical hard drive in the minority.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=5515:huonoja-uutisia-ssd-levyjen-hinnat-nousevat&catid=13&Itemid=101

  23. Tomi Engdahl says:

    Brad Smith / The Official Microsoft Blog:
    Microsoft’s acquisition of LinkedIn approved by EU, to close in the coming days as it has now received all the regulatory approvals needed — It was roughly six months ago, on June 13, that we announced that Microsoft would acquire LinkedIn. At that time, we said that we aimed to close the deal by the end of the year.

    Microsoft-LinkedIn deal cleared by regulators, opening new doors for people around the world
    Read more at http://blogs.microsoft.com/blog/2016/12/06/microsoft-linkedin-deal-cleared-regulators-opening-doors-people-around-world/#yvBrS9gEYjEURxxq.99

  24. Tomi Engdahl says:

    “Bullsh*t and spin”: Autonomy founder mocks HP’s $5B fraud suit against him
    https://techcrunch.com/2016/12/05/fraudtonomy/

    How could Dr Michael Lynch raise a $1 billion venture capital fund while being sued for $5 billion over alleged fraud in the $11 billion sale of his company Autonomy to HP? “The reality is, that doesn’t take much time” since he has a team of lawyers on the case, Lynch said on stage during TechCrunch Disrupt London.

    HP originally paid Lynch $730 million for his stake in Autonomy. Now it’s trying to recover that money and what it thinks it overpaid for the big data company. HP ended up having to write down nearly $9 billion of the $11 billion buyout after Autonomy fell apart in its arms. Lynch is countersuing for $160 million, claiming the fraud suit ruined his reputation.

    Autonomy must have been working if clients were paying it hundreds of millions in cash. “If something’s wrong with a business, people don’t pay you”

  25. Tomi Engdahl says:

    China and Russia aren’t ready to go it alone on tech, but their threats are worryingly plausible
    Vendors caught between risks and fear of missing out on growth markets
    http://www.theregister.co.uk/2016/12/07/russia_china_tech_policy/

    China and Russia are populous, wealthy nations that the technology industry has long-regarded as exceptional growth prospects.

    And then along came Edward Snowden, whose suggestions that American vendors were complicit in the United States’ surveillance efforts gave governments everywhere a reason to re-think their relationship with big technology companies.

    Russia and China both responded by citing a combination of national security concerns and a desire to grow their own technology industries as the twin motivations for policies that make it harder for foreign technology companies to access their markets.

    Both nations now operate approved vendor lists that government agencies and even business must consider when shopping for technology. Russia’s forcing web companies to store personal data on its soil. China demands to see vendors’ source code and has made the price of admission to its market a joint venture with a local firm, along with a technology transfer deal. Last month China also passed a security law requiring vendors to assist local authorities with investigations while further restricting internet freedoms.

    “If China were a smaller market there is no way the government would get away with the controls of the internet, supporting domestic industry and requiring technology transfer,” he says. “You could not get away with it and still be part of the global supply chain.”

    Reply
  26. Tomi Engdahl says:

    We grill another storage startup that’s meshing about with NVMe
    Says virtual NVMe-based SAN beats shared NVMe drive array
    http://www.theregister.co.uk/2016/12/06/excelero_meshing_about_with_nvme/

    Storage startup Excelero is supportive of NVMe drives and of NVMe over fabrics-style networking. It has a unique way of using NVMe drives to create a virtual SAN accessed by RDMA. An upcoming NASA Ames case study will describe how its NVMesh technology works in more detail.

    Reply
  27. Tomi Engdahl says:

    Broadcom quietly dismantles its ‘Vulcan’ ARM server chip project
    Avago refuses to beam 64-bit CPU aboard, sources claim
    http://www.theregister.co.uk/2016/12/07/broadcom_arm_processor_vulcan/

    Broadcom is shutting down efforts to develop its own server-class 64-bit ARM system-on-chip, multiple sources within the semiconductor industry have told The Register.

    It appears the secretive project, codenamed Vulcan, did not survive Broadcom’s acquisition by Avago and is gradually being wound down. Engineering resources have been quietly moved to other product areas and staff have left or been let go, we’re told. Vulcan’s designers have been applying for jobs at rival semiconductor giants, revealing that Broadcom’s server-grade processor dream is all but dead, it is claimed.

    All traces of Vulcan have been scrubbed from Broadcom’s website

    Meanwhile, AMD is keenly focused on its x86 Zen processor at the moment, leaving its ARM server chip plans on the shelf for now. So who’s left standing? Well, there’s Cavium and its ThunderX ARM data center silicon, and Qualcomm’s Centriq server system-on-chip that is due to start sampling this quarter and arrive on the market in the second half of 2017.

    “ARM-based servers have been hyped in the market for 6-plus years, with little to show for it in terms of real customer adoption,” Gina Longoria, a senior analyst at Moor Insights and Strategy, told The Register on Tuesday.

    Reply
  28. Tomi Engdahl says:

    HP’s new supercomputer is up to 8,000 times faster than existing PCs
    http://www.sciencealert.com/hp-s-new-supercomputer-is-up-to-8-000-times-faster-than-existing-pcs

    This week, HP unveiled a working prototype of an ambitious new computer system, dubbed ‘the Machine’, which the company claims is the world’s first demonstration of what it calls memory-driven computing.

    The idea is that the Machine – which was first announced back in 2014 – will massively outperform existing technology, by placing extra reliance on memory to perform calculations, with less dependence on computer processors.

    And while the Machine prototype we have so far is only being shown as a proof-of-concept of what the technology could ultimately be, there appears to be some truth to the performance claims.

    HP Enterprise – the business-focused side of the corporation – says its simulations show that memory-driven computing can achieve improved execution speeds up to 8,000 times faster than conventional computers.

    But before we get too excited, the Machine is likely to be years away from a commercial release, and its primary market is high-end servers that companies use to bring you things like Facebook and YouTube, not consumer PCs.

    But that doesn’t mean we shouldn’t get excited, because while the Machine is ultimately a business tool, HP says the architecture it runs – memory-driven computing – could one day find a home in consumer products, down to even the smart devices such as internet-connected cameras and lighting systems that make up the Internet of Things.

    At its core, the Machine uses photonics – the transmission of information via light, rather than the electrons of conventional PCs – to help processors access data from a massive memory pool.

    The prototype system currently uses 8 terabytes of memory in total – about 30 times the amount a conventional server might hold, and hundreds of times more memory than the amount of RAM a typical consumer computer would have.

    HP plans to eventually develop systems with hundreds of terabytes of memory

    Memory-driven computing looks like it will provide a huge performance boost when it hits – we’ll have to wait and see just when that will be.

    Reply
  29. Tomi Engdahl says:

    Will China Grab ARM Servers?
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1330920&

    China’s data center giants have become the next big hope to give traction to ARM’s server initiative.

    When Macom bought Applied Micro last week and said it would sell off its X-Gene ARM server unit, the writing was on the wall. Applied has a solid business with big U.S. data centers and in 2017 and beyond they are buying bandwidth in the form of 100-400G Ethernet — not ARM servers.

    In the wake of the news I heard multiple reports Broadcom was ending Vulcan, its plan for a beefy ARM server SoC made in a FinFET process with a custom core. The risky product was expected to be cancelled ever since penny-pinching Avago bought the company. (A former Broadcom engineer told me the company also canceled plans for a set-top processor using custom ARM cores.)

    A representative of Cavium said he is evaluating whether to bid on the Applied X-Gene 3 and Broadcom Vulcan IP, both now up for sale. He already hired some engineers let go from both programs. Cavium’s ThunderX2 is riding high expectations in this space but may not be available in volume until 2018.

    A representative from Qualcomm said he had seen some resumes from the Broadcom and Applied processor engineers. His company had already decided not to buy the ARM server IP from either company. Qualcomm is poised to soon launch its own chip, announced nearly four years ago.

    Few other companies are left driving the initiative to put a dent in Intel’s Xeon processor, which commands the majority of server sockets these days.

    Reply
  30. Tomi Engdahl says:

    Micron’s 3D NAND Hits Enterprise SSDs
    http://www.eetimes.com/document.asp?doc_id=1330926&

    Micron Technology is declaring spinning disk dead with the introduction of its first solid state drives (SSDs) using its 3D NAND for the enterprise market.

    All-flash storage array vendors such as Violin Memory have been pushing the message that hard drives are dead for a number of years now. Micron sees spinning media winding down because its new 5100 line of enterprise SATA SSDs is able to offer a lower total cost of ownership (TCO), said Scott Shadley, the company’s principal technologist for its storage business.

    In a telephone interview with EE Times, he said the launch of the 5100 series comes on the heels of the company’s success in the client segment with its 1100 series of SSDs using Micron’s 3D technology. Shadley acknowledged Micron isn’t the first to the enterprise market with 3D NAND SSDs, but said the company is looking to be strategic with its offerings.

    Even though “NVMe is the trend of the day,” he added, there isn’t enough support for it yet. Micron will be looking at offering SAS and PCIe SSDs using its 3D NAND down the road, however.

    Reply
  31. Tomi Engdahl says:

    Virtual Reality Group to Define APIs
    Specs could bridge fragmenting products
    http://www.eetimes.com/document.asp?doc_id=1330937&

    A dozen key stakeholders announced an effort to define cross-vendor, royalty-free, open application programming interfaces for virtual reality. It aims to reduce fragmentation and make it easier to write applications that run well across a growing range of VR products.

    The Khronos Group announced a call for participation in the effort it will host to specify application- and driver-level APIs. The full scope of the effort has yet to be defined. However, the work is expected to include APIs for tracking headsets, controllers, and other objects, as well as rendering content on a range of displays.

    “As well as providing application portability, a goal of the standard is to enable new and innovative sensors and devices to implement up to a standard driver interface to be used easily by VR runtimes,” said Neil Trevett, president of Khronos, in an email exchange with EE Times.
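
    To make the idea of a “standard driver interface” concrete, here is a purely hypothetical Python sketch of what a vendor-neutral tracking interface could look like. Every name and method below is invented for illustration; at the time of writing, the Khronos working group had not yet defined any actual API.

        # Hypothetical sketch of a vendor-neutral, driver-level tracking interface.
        # All names are invented for illustration; no Khronos API existed yet.
        from abc import ABC, abstractmethod
        from dataclasses import dataclass

        @dataclass
        class Pose:
            position: tuple       # (x, y, z) in metres
            orientation: tuple    # unit quaternion (w, x, y, z)

        class TrackedDevice(ABC):
            """Anything a VR runtime can track: headset, controller or other object."""

            @abstractmethod
            def device_type(self) -> str:
                """e.g. 'headset', 'controller', 'tracker'."""

            @abstractmethod
            def get_pose(self, predicted_display_time: float) -> Pose:
                """Pose predicted for the moment the frame will actually be displayed."""

        # A vendor driver would subclass TrackedDevice; the runtime and applications
        # could then work against this interface regardless of whose hardware is attached.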

    So far, the group includes VR chip and system vendors such as AMD, ARM, Google, Intel, Nvidia, Oculus, and Valve. The group also includes the developers of the open-source VR products, software developer Epic Games, and eye-tracking specialist Tobii.

    Reply
  32. Tomi Engdahl says:

    Micron Accelerates Adoption of All-Flash Data Centers With Highest-Capacity Enterprise SATA Solid State Drive
    Micron 5100 Enterprise SSD: Foundation for Big Data, Analytics and Streaming with Flexibility, Performance and Value
    http://investors.micron.com/releasedetail.cfm?ReleaseID=1002593

    Reply
  33. Tomi Engdahl says:

    Double-DIMMed XPoint wastes sockets
    It makes me cross
    http://www.theregister.co.uk/2016/11/29/double_dimmed_xpoint_wastes_sockets/

    A Xitore white paper compares coming XPoint DIMMs and Xitore’s own flash DIMMs, and claims each XPoint DIMM needs a companion DRAM cache DIMM, obviously halving XPoint DIMM density.

    The startup has its own tech to push – NVDIMM-X – but, even so, is revealing about XPoint DIMMery.

    Future 3D XPoint DIMMs may make it practical for main memory to hold terabytes – 6TB (6,000GB) is predicted. 3D XPoint DIMMs will probably have lower bandwidth than double data rate (DDR) DIMMs, perhaps with their contents cached in MCDRAM (multi-channel DRAM) or HBM memory to compensate for this. Such DDR DIMM caches could be about 10 per cent of the capacity of the main memory, so these caches could be 600GB in size – a far cry from the 4KB main memory on the machines of the early 1970s.

    If this pairing of XPoint DIMM and a DRAM cache DIMM is correct then several consequences follow:

    1. For every XPoint DIMM two DIMM slots are needed, effectively halving the potential XPoint DIMM capacity on a host.
    2. Memory bus capacity is needed to transfer data from XPoint DIMM to cache DIMM.
    3. XPoint is a backing store to a cache DIMM and effective caching algorithms can make alternative and less expensive backing stores more attractive.

    Let’s assume a 95 per cent DRAM cache hit rate over 1 million I/Os, with DRAM access speed equal to 1 time unit and XPoint access speed equal to 5 time units. Then the total access time can be calculated as:

    (950,000 x 1) + (50,000 x 5) = 950,000 + 250,000 = 1,200,000 time units.

    The average time per access is 1.2 time units.

    Let’s employ flash DIMMs instead of XPoint ones, with an access time of 50 time units, 10 times slower, and use the same DRAM caching scheme and hit rate. What is the total access time for 1 million I/Os?

    (950,000 x 1) + (50,000 x 50) = 950,000 + 2,500,000 = 3,450,000 time units.

    The average time per access is 3.45 time units. The difference from 1.2 is significant.
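
    The same cache-weighted arithmetic is easy to rerun with different assumptions; here is a minimal Python sketch of it. The 95 per cent hit rate and the 1/5/50 time-unit latencies are just the figures assumed above, not measured numbers, and the model charges a miss only the backing-store latency, exactly as the worked example does.

        # Weighted average access time for a DRAM cache in front of a slower DIMM tier.
        def avg_access_time(hit_rate, cache_latency, backing_latency, total_ios=1_000_000):
            hits = int(total_ios * hit_rate)
            misses = total_ios - hits
            total = hits * cache_latency + misses * backing_latency
            return total / total_ios

        # Figures assumed in the example above (arbitrary time units).
        print(avg_access_time(0.95, 1, 5))    # XPoint behind a DRAM cache -> 1.2
        print(avg_access_time(0.95, 1, 50))   # flash behind a DRAM cache  -> 3.45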

    Finke has this to say about XPoint cost: “It is touted to have a cost of about one-half that of DRAM, but still 5x that of NAND.”

    Will you pay five times as much for a near 3X speed boost? We imagine that any accompanying DRAM cache DIMM would cost extra, effectively putting up the XPoint DIMM cost. So you might have to pay more than 5X the NAND price.

    Comparison of the NVDIMM-X with 3D Xpoint in a DIMM Form Factor
    http://xitore.com/wp-content/uploads/2016/11/Comparison-Against-3D-Xpoint-November-16-2016.pdf

    Reply
  34. Tomi Engdahl says:

    Worldwide PC shipments are forecast to decline by 6.4% in 2016, according to IDC. This is an improvement over August’s projection for a decline of 7.2% in 2016. While IDC’s outlook for 2017 remains at minus 2.1% year-over-year growth, the absolute volumes are slightly higher based on stronger 2016 shipments.

    Source: http://semiengineering.com/the-week-in-review-manufacturing-140/

    Reply
  35. Tomi Engdahl says:

    Dave Gershgorn / Quartz:
    Leaked slides from an invitation-only event on December 6 reveal Apple’s LiDAR and AI research efforts, including more efficient neural networks — Apple has long been secretive about the research done within its Cupertino, California, labs. It’s easy to understand why.

    Inside the secret meeting where Apple revealed the state of its AI research
    http://qz.com/856546/inside-the-secret-meeting-where-apple-aapl-revealed-the-state-of-its-ai-research/

    Apple has long been secretive about the research done within its Cupertino, California, labs. It’s easy to understand why. Conjecture can spin out of even the most benign of research papers or submissions to medical journals, especially when they’re tied to the most valuable company on the planet.

    But it seems Apple is starting to open up about its work, at least in artificial intelligence.

    Reply
  36. Tomi Engdahl says:

    Shadow IT is poison for digitalisation

    Grey IT, shadow IT. The phenomenon has several names, but it is the same thing. It manifests itself in a variety of ways whenever the business has a need for which the IT organization cannot provide a solution, or getting a solution from the IT organization has been made too difficult.

    When the business has received the answer “no” to too many requests, it goes looking for a solution on its own. The result may be an Excel file circulated by e-mail around different parts of the organization, or a cloud service that satisfies the need and is taken into use behind the back of IT management.

    Optimizing the customer experience is based on knowing the company’s customers across all touchpoints and channels, which is what makes a unified customer experience possible. Shadow IT leads to fragmentation of that knowledge and to a growing number of platforms. If information is scattered as a result of shadow IT, time and energy go into integrating systems and cleaning up master data instead of into creating solutions that improve the customer experience.

    Almost every larger organization uses a bureaucratic gate process that has to be gone through before a new project receives approval. This process slows down decision-making and increases the workload of the business.

    In such a company, when the business recognizes the need for a new application, the answer from IT management is often “no”, or else the business has to follow the prescribed approval process, which causes extra work and delays solving the problem.

    The more shadow IT solutions are produced, the more data ends up in different locations. This has several disadvantages: data security is difficult to take care of in a complex environment, and information can be lost. Employees in different roles operate on partial data, which reduces efficiency and makes it harder to do the job.

    Source: http://www.tivi.fi/Kumppaniblogit/salesforce/varjo-it-on-myrkkya-digitalisaatiolle-6602931

    Reply
  37. Tomi Engdahl says:

    The first 10-nanometer server processors

    The first server processor manufactured on a 10-nanometer process contains two surprises. The first is that the processor is not based on the x86 architecture. The second is that the manufacturer is not Intel. The new product is Qualcomm’s Centriq 2400 processor, which is already sampling to customers. Commercial volumes will ship in the second half of next year.

    The Centriq 2400 family is based on an ARMv8 processor core that Qualcomm has tailored for data centers, which the company calls Falkor. A single chip can have up to 48 cores.

    Qualcomm has already demoed the new processors in a typical server application combining Linux, Java and Apache Spark.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=5544:ensimmainen-10-nanometrin-palvelinprosessori&catid=13&Itemid=101

    Reply
  38. Tomi Engdahl says:

    Google, HTC, Oculus, Samsung, Sony Join Forces To Create Global VR Association
    https://games.slashdot.org/story/16/12/07/1918233/google-htc-oculus-samsung-sony-join-forces-to-create-global-vr-association

    Google, HTC, Oculus, Samsung, Sony and Acer have teamed up to form the Global Virtual Reality Association (GVRA) in an effort to reduce fragmentation and failure in the industry. GVRA aims to “unlock and maximize VR’s potential,” but there are few details as to what this may mean for consumers.

    Google, HTC, Oculus, Samsung, Sony join forces to create Global VR Association
    https://techcrunch.com/2016/12/07/google-htc-oculus-samsung-sony-join-forces-to-create-global-vr-association/

    Reply
  39. Tomi Engdahl says:

    Talking Neural Nets
    http://hackaday.com/2016/12/03/talking-neural-nets/

    Speech synthesis is nothing new, but it has gotten better lately. It is about to get even better thanks to DeepMind’s WaveNet project. The Alphabet (or is it Google?) project uses neural networks to analyze audio data and it learns to speak by example. Unlike other text-to-speech systems, WaveNet creates sound one sample at a time and affords surprisingly human-sounding results.
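
    To illustrate what “one sample at a time” means, here is a toy Python sketch of autoregressive generation: each new sample is predicted from the previous ones and then fed back in as context for the next. This is only a stand-in for the idea; WaveNet itself uses deep stacks of dilated causal convolutions, not the made-up linear predictor below.

        import numpy as np

        # Toy autoregressive generation: each sample depends on the last few samples.
        rng = np.random.default_rng(0)
        weights = np.array([0.6, 0.25, 0.1])    # made-up predictor coefficients
        samples = [0.0, 0.1, 0.2]               # seed context

        for _ in range(16000):                  # roughly one second at 16 kHz
            context = np.array(samples[-3:][::-1])     # most recent sample first
            nxt = float(weights @ context) + rng.normal(scale=0.01)
            samples.append(nxt)                 # the new sample conditions the next one

        print(len(samples), samples[:5])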

    https://deepmind.com/blog/wavenet-generative-model-raw-audio/

    Reply
  40. Tomi Engdahl says:

    AI Will Disrupt How Developers Build Applications and the Nature of the Applications they Build
    https://developers.slashdot.org/story/16/12/08/2047214/ai-will-disrupt-how-developers-build-applications-and-the-nature-of-the-applications-they-build?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    AI will soon help programmers improve development, says Diego Lo Giudice, VP and principal analyst at Forrester, in an article published on ZDNet today. He isn’t saying that programmers will be out of jobs soon and AIs will take over. But he is making a compelling argument for how AI has already begun disrupting how developers build applications.

    Developers: Will AI run you out of your job?
    Forrester: AI will disrupt how developers build applications and the nature of the applications they build.
    http://www.zdnet.com/article/developers-will-ai-run-you-out-of-your-job/

    Much has been written about how artificial intelligence (AI) will put white-collar workers out of a job eventually. Will robots soon be able to do what programmers do best — i.e., write software programs? Actually, if you are or were a developer, you’ve probably already written or used software programs that can generate other software programs. That’s called code generation; in the past it was done with “next”-generation programming languages (second-, third-, fourth-, or even fifth-generation languages), and today it is done with what are called low-code IDEs. Java, C and C++ geeks have also been turning high-level graphical models like UML or BPML into code.
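
    As a minimal illustration of code generation in that classic, non-AI sense, here is a small Python sketch: a program that writes the source of another function from a simple declarative spec and then loads it. The spec format is invented for the example.

        # Classic (non-AI) code generation: build a function's source code from a
        # declarative spec, then compile and load it at runtime.
        spec = {"name": "total_price", "fields": ["net", "vat", "shipping"]}

        source = "def {name}({args}):\n    return {body}\n".format(
            name=spec["name"],
            args=", ".join(spec["fields"]),
            body=" + ".join(spec["fields"]),
        )

        namespace = {}
        exec(source, namespace)      # load the generated function
        print(source)
        print(namespace["total_price"](100.0, 24.0, 5.0))   # -> 129.0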

    But that’s not what I am talking about: I am talking about a robot (or bot) or AI software system that, if given a business requirement in natural language, can write the code to implement it — or even come up with its own idea and write a program for it.

    Don’t panic! This is still science fiction, but it won’t be too long before we can use AI to improve development

    We can see early signs of this: Microsoft’s Intellisense is integrated into Visual Studio and other IDEs to improve the developer experience. HPE is working on some interesting tech previews that leverage AI and machine learning to enable systems to predict key actions for participants in the application development and testing life cycle, such as managing/refining test coverage, the propensity of a code change to disrupt/break a build, or the optimal order of user story engagement.

    Interestingly, our interviewees saw testing as the most popular phase of the software delivery life cycle in which to apply AI. This makes sense: quality is of paramount importance in the age of digital acceleration; it is hard to guarantee both quality and the speed needed to keep up with continuous delivery and the growing delivery cadence of modern development teams; and achieving high quality is expensive.

    Reply
  41. Tomi Engdahl says:

    IBM’s Watson Used In Life-Saving Medical Diagnosis
    https://science.slashdot.org/story/16/12/12/0059213/ibms-watson-used-in-life-saving-medical-diagnosis?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    How IBM Watson saved the life of a woman dying from cancer, exec says
    http://www.businessinsider.co.id/how-ibm-watson-helped-cure-a-womans-cancer-2016-12/

    IBM CEO Ginni Rometty has called health care IBM’s “moonshot.”

    IBM has spent years training its super-smart, learning, reasoning computer service Watson to do things like analyze massive amounts of data to help improve patient diagnosis.

    On stage at Business Insider’s Ignition conference taking place this week in New York, David Kenny, General Manager of IBM Watson, gave one example of how Watson is changing health care.

    Reply
  42. Tomi Engdahl says:

    Cray Sets Deep Learning Milestone
    Deep Learning in Minutes
    http://www.eetimes.com/document.asp?doc_id=1330965&

    Cray, using Microsoft’s brain-like neural-network software on its XC50 supercomputer with 1,000 Nvidia Tesla P100 graphics processing units (GPUs), claimed a deep learning milestone this week, saying it can perform deep learning tasks that once took days in a matter of minutes and hours, through a collaboration with the Swiss National Supercomputing Centre.

    The Swiss National Supercomputing Centre, a high-performance computing (HPC) center located in the Swiss city of Lugano that goes by the acronym CSCS, is making the deep learning applications it used available as open source so that anyone with an XC50 can use them.

    Reply
  43. Tomi Engdahl says:

    Microsoft: Surface Hub demand is strong; product is now in stock
    http://www.zdnet.com/article/microsoft-surface-hub-demand-is-strong-product-is-now-in-stock/

    Microsoft officials say they’ve sold Surface Hub conferencing systems to more than 2,000 customers since March, and that they’ve now got their supply situation under control.

    Microsoft has not revealed any unit shipment numbers for any of its Surface-branded devices. But on December 12, the company did provide a few related data points around demand, especially around its Surface Hub conferencing systems.

    Reply
  45. Tomi Engdahl says:

    How Lean IT impacts business outcomes
    New research proves just how big an impact a Lean methodology can have on an enterprise’s bottom line.
    http://www.cio.com/article/3147850/it-industry/how-lean-it-impacts-business-outcomes.html

    In the years since Lean first revolutionized the manufacturing sector, the basic principles have also shown benefits in other industries and other departments, most notably within technology. But new research emphasizes the major impact Lean can have not just in your IT departments, but across your entire organization.

    The ultimate goal and guiding principle of Lean is creating perfect value for customers through a perfect value creation process with zero waste. In the day-to-day implementations of Lean, that translates to creating more value with fewer resources and inefficiencies.

    Reply
  46. Tomi Engdahl says:

    How to be a more ‘authentic’ IT leader
    In today’s digital workforce, it’s not enough simply to manage. Workers are looking for authentic leadership.
    http://www.cio.com/article/3128080/leadership-management/how-to-be-a-more-authentic-it-leader.html

    A funeral seems like the last place to find professional leadership lessons, but at the service celebrating her mother’s life, LaVerne Council found inspiration she brings every day in her role as assistant secretary for Information and Technology and CIO, Office of Information and Technology, U.S. Department of Veterans Affairs.

    Today’s IT organizations need authentic and bold leadership to guide their digital transformation and drive innovation and growth. But it’s also key to solving another corporate puzzle: how to attract, hire and retain talent.

    Inspiring loyalty

    In today’s job market, companies can’t promise lifelong job security, and employees don’t expect it. But what organizations can offer, and what more and more workers are looking for, is purpose, mission and shared values. That starts with authentic leadership

    Reply
  47. Tomi Engdahl says:

    Meet Hyper.is – the terminal written in HTML, JS and CSS
    Run Bash on Windows and perform other feats of command line magic
    http://www.theregister.co.uk/2016/12/13/hyper/

    Zeit, a San Francisco-based software startup, has released the 1.0 version of Hyper, a terminal emulator written in JavaScript, HTML, and CSS. Why? Well, why not?

    Hyper is based on Electron, an open source framework for creating cross-platform desktop applications using HTML and JS.

    https://hyper.is/

    Reply
  48. Tomi Engdahl says:

    Ryan Smith / AnandTech:
    AMD announces new Radeon Instinct machine learning chips for the server market to rival NVIDIA, coming in first half of 2017

    AMD Announces Radeon Instinct: GPU Accelerators for Deep Learning, Coming In 2017
    by Ryan Smith on December 12, 2016 9:00 AM EST
    http://www.anandtech.com/show/10905/amd-announces-radeon-instinct-deep-learning-2017

    With the launch of their Polaris family of GPUs earlier this year, much of AMD’s public focus in this space has been on the consumer side of matters. However now with the consumer launch behind them, AMD’s attention has been freed to focus on what comes next for their GPU families both present and future, and that is on the high-performance computing market. To that end, today AMD is taking the wraps off of their latest combined hardware and software initiative for the server market: Radeon Instinct. Aimed directly at the young-but-quickly-growing deep learning/machine learning/neural networking market, AMD is looking to grab a significant piece of what is potentially a very large and profitable market for the GPU vendors.

    Broadly speaking, while the market for HPC GPU products has been slower to evolve than first anticipated, it has at last started to arrive in the last couple of years.

    Reply
  49. Tomi Engdahl says:

    EPA begins process to improve computer server efficiency
    http://www.edn.com/electronics-blogs/eye-on-efficiency/4443147/EPA-begins-process-to-improve-computer-server-efficiency?_mc=NL_EDN_EDT_EDN_today_20161213&cid=NL_EDN_EDT_EDN_today_20161213&elqTrackId=e45918942e9e4cf0975d6ac8afcc6ff3&elq=ebad62c42ca141668eadcc8c35861e14&elqaid=35145&elqat=1&elqCampaignId=30702

    The U.S. Environmental Protection Agency (EPA) is aiming to improve the energy efficiency of future computer servers. A few months ago, the agency published Draft 1, Version 3 of its ENERGY STAR Computer Server Specification.

    In order to be eligible for the program, a server must meet all of the following criteria:

    Marketed and sold as a computer server
    Packaged and sold with at least one AC-DC or DC-DC power supply
    Designed for and listed as supporting at least one or more computer server operating systems and/or hypervisors
    Targeted to run user-installed enterprise applications
    Provide support for ECC and/or buffered memory
    Designed so all processors have access to shared system memory and are visible to a single OS or hypervisor

    Excluded products include fully fault tolerant servers, server appliances, high performance computing systems, large servers, storage products including blade storage, and network equipment.

    Reply
  50. Tomi Engdahl says:

    AI Ethics Effort Starts at IEEE
    Group targets moral issues in machine learning
    http://www.eetimes.com/document.asp?doc_id=1330978&

    Today, the IEEE kicked off a broad initiative to make ethics a part of the design process for systems using artificial intelligence. The effort, in the works for more than a year, hopes to spark conversations that lead to consensus-driven actions and has already generated three standards efforts.

    The society published a 138-page report today that outlines a smorgasbord of issues at the intersection of AI technology and values. They range from how to identify and handle privacy around personal information to how to define and audit human responsibilities for autonomous weapon systems.

    The report raises a laundry list of provocative questions, such as whether mixed-reality systems could be used for mind control or therapy. It also provides some candidate recommendations and suggests a process for getting community feedback on them.

    “The point is to empower people working on this technology to take ethics into account,”

    http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

    Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems represents the collective input of over one hundred global thought leaders from academia, science, government and corporate sectors in the fields of Artificial Intelligence, ethics, philosophy, and policy.

    Reply
