Computer technology trends for 2016

The PC market seems to be stabilizing in 2016; I expect it to shrink only slightly. While mobile devices have been named as the culprit for falling PC shipments, IDC says that other factors may be in play. It is still pretty hard to make decent profits building PC hardware unless you are one of the biggest players – so Lenovo, HP, and Dell will again increase their collective dominance of the PC market, as they did in 2015. I expect changes like spin-offs and maybe some mergers involving smaller players like Fujitsu, Toshiba and Sony. The EMEA server market looks to be a two-horse race between Hewlett Packard Enterprise and Dell, according to Gartner. HPE, Dell and Cisco “all benefited” from Lenovo’s acquisition of IBM’s EMEA x86 server organisation.

The tablet market is no longer a high-growth market – tablet sales have started to decline, and the decline continues in 2016 as owners hold onto their existing devices for more than three years. iPad sales are set to continue to decline, and the iPad Air 3 to be released in the first half of 2016 does not change that. IDC predicts that the detachable tablet market is set for growth in 2016 as more people turn to hybrid devices. Two-in-one tablets have been popularized by offerings like the Microsoft Surface, with options ranging dramatically in price and specs. I am not myself convinced that the growth will be as strong as IDC forecasts, even though companies have started to purchase tablets for workers in jobs such as retail sales or field work (Apple iPads, Windows and Android tablets managed by the company). Combined volume shipments of PCs, tablets and smartphones are expected to increase only in the single digits.

All your consumer tech gear should be cheaper come July, as there will be fewer import tariffs on IT products: a World Trade Organization (WTO) deal agrees that tariffs on imports of consumer electronics will be phased out over seven years starting in July 2016. The agreement affects around 10 percent of world trade in information and communications technology products and will eliminate around $50 billion in tariffs annually.

In 2015 storage was rocked to its foundations, and those new innovations will be taken into wider use in 2016. The storage market in 2015 went through strategic, foundation-shaking turmoil as the external shared disk array storage playbook was torn to shreds. The all-flash data centre idea has definitely taken off as an achievable vision, with primary data stored in flash and the rest held in cheap and deep storage. Flash drives largely solve the disk drive latency problem, so there is less need for hybrid drives. There is conviction that storage should be located as close to servers as possible (virtual SANs, hyper-converged infrastructure appliances and NVMe fabrics). The hybrid cloud concept was adopted and supported by everybody. Flash started out in 2-bits/cell MLC form, this rapidly became standard, and TLC (3-bits/cell, triple-level cell) has started appearing. Industry-standard NVMe drivers for PCIe flash cards appeared. Boring old disk tech got shingled magnetic recording (SMR) and helium-filled drive technology; the drive industry is focused on capacity-optimizing its drives. Intel and Micron blew non-volatile memory preconceptions out of the water in the second half of the year with their joint 3D XPoint memory announcement. We got key:value store disk drives with an Ethernet NIC on-board, and basic GET and PUT object storage facilities came into being. The tape industry developed a 15TB LTO-7 format.

The use of SSDs will increase and their prices will drop. SSDs were in more than 25% of new laptops sold in 2015, are expected to be in 31% of new consumer laptops in 2016, and in more than 40% by 2017. The prices of mainstream consumer SSDs have fallen dramatically every year over the past three years, while HDD prices have not changed much. SSD prices will decline to 24 cents per gigabyte in 2016. In 2017 they’re expected to drop to 11-17 cents per gigabyte (which means a 1TB SSD on average would retail for $170 or less).
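
As a quick sanity check of those figures, a few lines of Python turn the per-gigabyte estimates into retail prices for a 1TB drive (the numbers are the estimates quoted above, not measurements):

    # Rough 2016/2017 SSD price estimates quoted above, in dollars per gigabyte
    price_per_gb_2016 = 0.24
    price_per_gb_2017 = (0.11, 0.17)
    capacity_gb = 1000  # a 1TB consumer SSD

    print(capacity_gb * price_per_gb_2016)               # -> 240.0 (about $240 in 2016)
    print([capacity_gb * p for p in price_per_gb_2017])  # -> [110.0, 170.0] ($110-170 in 2017)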

Hard disk sales will decrease, but this technology is not dead. Sales of hard disk drives have been decreasing for several years now (118 million units in the third quarter of 2015), but according to Seagate, hard disk drives (HDDs) are set to stay relevant for at least 15 to 20 years. HDDs remain the most popular data storage technology as they are the cheapest in terms of per-gigabyte cost. While SSDs are generally getting more affordable, high-capacity solid-state drives are not going to become as inexpensive as hard drives any time soon.

Because all-flash storage systems with homogeneous flash media are still too expensive to serve as a solution for every enterprise application workload, enterprises will increasingly turn to performance-optimized storage solutions that use a combination of multiple media types to deliver cost-effective performance. The speed advantage of Fibre Channel over Ethernet has evaporated. Enterprises are also starting to seek alternatives to snapshots that are simpler and easier to manage, and that allow data and application recovery to a second before the data error or logical corruption occurred.

Local storage and the cloud finally make peace in 2016 as the decision-makers across the industry have now acknowledged the potential for enterprise storage and the cloud to work in tandem. Over 40 percent of data worldwide is expected to live on or move through the cloud by 2020 according to IDC.

Open standards for data center development are now a reality thanks to advances in cloud technology. Facebook’s Open Compute Project has served as the industry’s leader in this regard. This allows more consolidation for those that want it. Consolidation used to refer to companies moving all of their infrastructure to the same facility. However, some experts have begun to question this strategy as the rapid increase in data quantities and apps in the data center has made centralized facilities more difficult to operate than ever before. Server virtualization, more powerful servers and an increasing number of enterprise applications will continue to drive higher I/O requirements in the datacenter.

Cloud consolidation starts in earnest in 2016: the number of options for general infrastructure-as-a-service (IaaS) cloud services and cloud management software will be much smaller at the end of 2016 than at the beginning. The major public cloud providers will gain strength, with Amazon, IBM SoftLayer, and Microsoft capturing a greater share of the business cloud services market. Lock-in is a real concern for cloud users, because PaaS players have the age-old imperative to find ways to tie customers to their platforms and aren’t afraid to use them, so advanced users want to establish reliable portability across PaaS products in a multi-vendor, multi-cloud environment.

Year 2016 will be harder for legacy IT providers than 2015. In its report, IDC states that “By 2020, More than 30 percent of the IT Vendors Will Not Exist as We Know Them Today.” Many enterprises are turning away from traditional vendors and toward cloud providers. They’re increasingly leveraging open source. In short, they’re becoming software companies. The best companies will build cultures of performance and doing the right thing – and will make data and the processes around it self-service for all their employees. Design thinking will guide companies that want to change the lives of their customers and employees. 2016 will see a lot more work in trying to manage services that simply aren’t designed to work together or even be managed – for example, getting Whatever-as-a-Service cloud systems to play nicely with existing legacy systems. So competent developers are the scarce commodity. Some companies are starting to see cloud as a form of outsourcing that is fast burning up in-house IT ops jobs, with varying success.

There are still too many old-fashioned companies that just can’t understand what digitalization will mean to their business. In 2016, some companies’ boards still think the web is just for brochures and porn and don’t believe their business models can be disrupted. It gets worse for many traditional companies. For example, Amazon is a retailer both on the web and, increasingly, for things like food deliveries. Amazon and others are playing to win. Digital disruption has happened and will continue.

More Windows 10 is coming in 2016. If 2015 was a year of revolution, 2016 promises to be a year of consolidation for Microsoft’s operating system. I expect Windows 10 adoption in companies to start in 2016. Windows 10 is likely to be a success in the enterprise, but I expect that word from heavyweights like Gartner, Forrester and Spiceworks, suggesting that half of enterprise users plan to switch to Windows 10 in 2016, is more than a bit optimistic. Windows 10 will also be used in China, as Microsoft played the game better with it than with Windows 8, which was banned in China.

Windows is now delivered “as a service”, meaning incremental updates with new features as well as security patches, but Microsoft still seems to work internally to a schedule of milestone releases. Next up is Redstone, rumoured to arrive around the anniversary of Windows 10, midway through 2016. Windows servers will also get an update in 2016 with the release of Windows Server 2016. Server 2016 includes updates to the Hyper-V virtualisation platform, support for Docker-style containers, and a new cut-down edition called Nano Server.

Windows 10 will get some of the already-promised features that were not delivered in 2015. Windows 10 was promised for PCs and mobile devices in 2015 to deliver a unified user experience. Continuum is a new, adaptive user experience offered in Windows 10 that optimizes the look and behavior of apps and the Windows shell for the physical form factor and the customer’s usage preferences. The promise was the same unified interface for PCs, tablets and smartphones – but in 2015 it was delivered only for PCs and some tablets. Mobile Windows 10 for smartphones is expected to finally arrive in 2016 – the release of Microsoft’s new Windows 10 operating system may be the last roll of the dice for its struggling mobile platform. Microsoft’s Plan A is to get as many apps and as much activity as it can on Windows on all form factors with the Universal Windows Platform (UWP), which enables the same Windows 10 code to run on phone and desktop. Despite a steady inflow of new well-known apps, it remains unclear whether the Universal Windows Platform can maintain momentum with developers. Can Microsoft keep the developer momentum going? I am not sure. In addition, there are plans for tools for porting iOS apps and an Android runtime, so expect delivery of some or all of the Windows Bridges (iOS, web app, desktop app, Android) announced at the April 2015 Build conference, in the hope of getting more apps into the unified Windows 10 app store. Windows 10 does hold out some promise for Windows Phone, but it’s not going to make an enormous difference. Losing the battle for the web and mobile computing is a brutal loss for Microsoft. When you consider the size of those two markets combined, the desktop market seems like a stagnant backwater.

Older Windows versions will not die in 2016 as fast as Microsoft and security people would like. Expect Windows 7 diehards to continue holding out in 2016 and beyond. And there are still many companies that run their critical systems on Windows XP, as “there are some people who don’t have an option to change.” Many times the OS is running in automation and process control systems that run business- and mission-critical systems, both in private sector and government enterprises. For example, the US Navy is using the obsolete operating system Microsoft Windows XP to run critical tasks. It all comes down to money and resources, but if someone is obliged to keep something running on an obsolete system, it’s completely the wrong approach to information security.

Virtual reality has grown immensely over the past few years, but 2016 looks like the most important year yet: it will be the first time that consumers can get their hands on a number of powerful headsets for viewing alternate realities in immersive 3-D. Virtual reality will move toward the mainstream when Sony, Samsung and Oculus bring consumer products to market in 2016. The whole virtual reality hype could be rebooted as early builds of the final Oculus Rift hardware start shipping to devs. Maybe HTC‘s and Valve‘s Vive VR headset will suffer in the next few months. Expect a banner year for virtual reality.

GPU and FPGA acceleration will be widely used in high-performance computing. Both Intel and AMD have products with the CPU and GPU on the same chip, and there is software support for using the GPU (learn CUDA and/or OpenCL). Many mobile processors also have the CPU and GPU on the same chip. FPGAs are circuits that can be baked into a specific application but can also be reprogrammed later. There was lots of interest in 2015 in using FPGAs for accelerating computations as the next step after GPUs, and I expect that the interest will grow even more in 2016. FPGAs are not quite as efficient as a dedicated ASIC, but it’s about as close as you can get without translating the actual source code directly into a circuit. Intel bought Altera (a big FPGA company) in 2015 and plans to begin selling products with a Xeon chip and an Altera FPGA in a single package, possibly available in early 2016.
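
For readers who want to try GPU acceleration from a high-level language, here is a minimal sketch using the PyOpenCL bindings: it adds two vectors on whatever OpenCL device is available. It assumes the numpy and pyopencl packages plus a working OpenCL driver are installed, and it illustrates the programming model rather than a tuned HPC kernel.

    import numpy as np
    import pyopencl as cl

    # Host data: two random vectors to add
    a = np.random.rand(50000).astype(np.float32)
    b = np.random.rand(50000).astype(np.float32)

    ctx = cl.create_some_context()          # pick an available OpenCL device
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel itself is plain OpenCL C: one work-item per vector element
    prg = cl.Program(ctx, """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    print(np.allclose(result, a + b))       # True if the device computed the same sum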

Artificial intelligence, machine learning and deep learning will be talked about a lot in 2016. Neural networks, which have been academic exercises (but little more) for decades, are increasingly becoming mainstream success stories: heavy (and growing) investment in the technology, which enables the identification of objects in still and video images, words in audio streams, and the like after an initial training phase, comes from the formidable likes of Amazon, Baidu, Facebook, Google, Microsoft, and others. So-called “deep learning” has been enabled by the combination of the evolution of traditional neural network techniques, the steadily increasing processing “muscle” of CPUs (aided by algorithm acceleration via FPGAs, GPUs, and, more recently, dedicated co-processors), and the steadily decreasing cost of system memory and storage. There were many interesting releases on this front at the end of 2015: Facebook Inc. released portions of its Torch software in February, while Alphabet Inc.’s Google division open-sourced parts of its TensorFlow system in November. IBM also turned up the heat under competition in artificial intelligence by making SystemML freely available to share and modify through the Apache Software Foundation. So I expect that 2016 will be the year these are tried in practice, and that deep learning will be hot at CES 2016. Several respected scientists issued a letter warning about the dangers of artificial intelligence (AI) in 2015, but I don’t worry about a rogue AI exterminating mankind. I worry about an inadequate AI being given control over things that it’s not ready for. How will machine learning affect your business? MIT has a good free intro to AI and ML.
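
To make the idea concrete, the sketch below trains a tiny two-layer neural network with plain NumPy on the XOR problem. It is only a toy (frameworks like Torch and TensorFlow do the same forward/backward-propagation math at a much larger scale), and the network size, learning rate and iteration count are arbitrary choices for illustration.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # XOR training data: 4 samples with 2 inputs each
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.RandomState(42)
    W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input -> hidden weights and biases
    W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output weights and biases
    lr = 0.5

    for step in range(20000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # backward pass: gradients of the squared error
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # gradient-descent updates
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

    print(out.round(2))   # should approach [[0], [1], [1], [0]] after training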

Computers, which excel at big data analysis, can help doctors deliver more personalized care. Can machines outperform doctors? Not yet. But in some areas of medicine, they can make the care doctors deliver better. Humans repeatedly fail where computers — or humans behaving a little bit more like computers — can help. Computers excel at searching and combining vastly more data than a human so algorithms can be put to good use in certain areas of medicine. There are also things that can slow down development in 2016: To many patients, the very idea of receiving a medical diagnosis or treatment from a machine is probably off-putting.

The Internet of Things (IoT) was talked about a lot in 2015, and it will be a hot topic for IT departments in 2016 as well. Many companies will notice that security issues are important in it. The newest wearable technology, smart watches and other smart devices respond to voice commands and interpret the data we produce – they learn from their users and generate appropriate responses in real time. Interest in the Internet of Things (IoT) will also bring interest in real-time business systems: not only real-time analytics, but real-time everything. This will start in earnest in 2016, but the trend will take years to play out.

Connectivity and networking will be hot, and it is not just about IoT. CES will focus on how connectivity is proliferating in everything from cars to homes, realigning diverse markets. The interest will affect job markets: network jobs are hot, and salaries are expected to rise in 2016 as wireless network engineers, network admins, and network security pros can expect above-average pay gains.

Linux will stay big in the network server market in 2016. The web server marketplace is one arena where Linux has had the greatest impact. Today, the majority of web servers are Linux boxes, including most of the world’s busiest sites. Linux also runs many parts of our Internet infrastructure that moves the bits from server to user. Linux will also continue to rule the smartphone market as the core of Android. New IoT solutions will most likely be built mainly using Linux in many parts of the systems.

Microsoft and Linux are not such enemies as they were a few years ago. Common sense says that Microsoft and the FOSS movement should be perpetual enemies. It looks like Microsoft is waking up to the fact that Linux is here to stay. Microsoft cannot feasibly wipe it out, so it has to embrace it. Microsoft is already partnering with Linux companies to bring popular distros to its Azure platform. In fact, Microsoft has even gone so far as to create its own Linux distro for its Azure data center.

Web browsers are becoming more and more 64-bit, as Firefox started the 64-bit era on Windows and Google is killing Chrome for 32-bit Linux. At the same time web browsers are losing old legacy features like NPAPI and Silverlight. Who will miss them? The venerable NPAPI plugin standard, which dates back to the days of Netscape, is now showing its age and causing more problems than it solves, and will see native support removed from Firefox by the end of 2016. It was already removed from the Google Chrome browser with very little impact. The biggest issue was the lack of support for Microsoft’s Silverlight, which brought down several top streaming media sites – but they are actively switching to HTML5 in 2016. I don’t miss Silverlight. Flash will continue to be available owing to its popularity for web video.

SHA-1 will be at least partially retired in 2016. Due to recent research showing that SHA-1 is weaker than previously believed, Mozilla, Microsoft and now Google are all considering bringing the deadline forward by six months to July 1, 2016.
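
For developers wondering what the change means in practice, moving off SHA-1 is usually just a matter of asking for a stronger digest. A minimal illustration with Python’s standard hashlib module (certificates are the real battleground, but the API difference is the same):

    import hashlib

    data = b"example data to fingerprint"
    print(hashlib.sha1(data).hexdigest())    # 40-hex-digit SHA-1 digest, being phased out
    print(hashlib.sha256(data).hexdigest())  # 64-hex-digit SHA-256 digest, the usual replacement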

Adobe’s Flash has been under attack from many quarters over security as well as for slowing down web pages. If you wish that Flash would finally be dead in 2016, you might be disappointed. Adobe seems to be trying to kill the name with a rebranding trick: Adobe Flash Professional CC is now Adobe Animate CC. In practice it probably does not mean much, but Adobe seems to acknowledge the inevitability of an HTML5 world. Adobe wants to remain a leader in interactive tools, and the pivot to HTML5 requires new messaging.

The trend of trying to use the same language and tools on both the user end and the server back-end continues. Microsoft is pushing its .NET and Azure cloud platform tools. Amazon, Google and IBM have their own sets of tools. Java is in decline. JavaScript is going strong on both the web browser and server ends with node.js, React and many other JavaScript libraries. Apple is also trying to bend its Swift programming language, now used mainly to make iOS applications, to run on servers with the Perfect project.

Java will still stick around, but Java’s decline as a language will accelerate as new stuff isn’t being written in Java, even if it runs on the JVM. We will not see Java 9 in 2016, as Oracle has delayed its release by six months. The Register tells us that Java 9 is delayed until Thursday, March 23rd, 2017, just after tea-time.

Containers will rule the world as Docker continues to develop, gains security features, and adds various forms of governance. Until now Docker has been tire-kicking, used in production by the early-adopter crowd only, but that can change as vendors start to claim that they can do proper management of big data and container farms.

NoSQL databases will take hold as they are billed as “highly scalable” or “cloud-ready.” Expect 2016 to be the year when a lot of big brick-and-mortar companies publicly adopt NoSQL for critical operations. Basically, NoSQL can be seen as a key:value store, and this idea has also expanded to storage systems: we got key:value store disk drives with an Ethernet NIC on-board, and basic GET and PUT object storage facilities came into being.
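
As an illustration of how simple the key:value model is in practice, here is a hedged sketch using the redis-py client against a Redis server assumed to be running locally (Redis is just one example of this class of store; the host, port and key names are placeholders):

    import redis

    # Connect to a Redis instance assumed to be listening on localhost:6379
    r = redis.StrictRedis(host="localhost", port=6379, db=0)

    r.set("sensor:42:status", "ok")      # PUT-style write of a value under a key
    value = r.get("sensor:42:status")    # GET-style read of the same key
    print(value)                         # -> b'ok'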

In the database world, Big Data will still be big, but it needs to be analyzed in real time. A typical big data project usually involves some semi-structured data, a bit of unstructured data (such as email), and a whole lot of structured data (stuff stored in an RDBMS). While the cost of Hadoop on a per-node basis is pretty inconsequential, the cost of understanding all of the schemas, getting them into Hadoop, and structuring them well enough to perform the analytics is still considerable. Remember that you’re not “moving” to Hadoop, you’re adding a downstream repository, so you need to worry about systems integration and latency issues. Apache Spark will also gain interest, as Spark’s multi-stage in-memory primitives provide more performance for certain applications. Big data brings with it responsibility – digital consumer confidence must be earned.
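
As a flavour of why Spark draws interest, the classic word-count job below runs through PySpark’s in-memory RDD primitives. It is a hedged sketch assuming a local Spark installation with the pyspark package available; “data.txt” is a placeholder input file.

    from pyspark import SparkContext

    sc = SparkContext("local[2]", "WordCountExample")    # local mode, 2 worker threads

    counts = (sc.textFile("data.txt")                    # read the placeholder input file
                .flatMap(lambda line: line.split())      # split lines into words
                .map(lambda word: (word, 1))             # pair each word with a count of 1
                .reduceByKey(lambda a, b: a + b))        # sum the counts per word

    print(counts.take(10))                               # first 10 (word, count) pairs
    sc.stop()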

IT security continues to be a huge issue in 2016. You might be able to achieve adequate security against hackers and internal threats, but every attempt to make systems idiot-proof just means the idiots get upgraded. Firms are ever more connected to each other and to the general outside world. So in 2016 we will see even more service firms accidentally leaking critical information, and a lot more firms having their reputations scorched by incompetence-fuelled security screw-ups. Good security people are needed more and more – a joke doing the rounds among IT execs doing interviews is “if you’re a decent security bod, why do you need to look for a job?”

There will still be unexpected single points of failure in big distributed networked systems. The cloud behind the silver lining is that Amazon or any other cloud vendor can be as fault-tolerant, distributed and well-supported as you like, but if a service like Akamai or Cloudflare were to die, you would still stop. That’s not a single point of failure in the classical sense, but it’s really hard to manage unless you go for full cloud agnosticism – which is costly. This is hard to justify when their failure rate is so low, so the irony is that the reliability of the content delivery networks means fewer businesses work out what to do if they fail. Oh, and no one seems to test their mission-critical data centre properly, because it’s mission critical. So they just over-specify where they can and cross their fingers (= pay twice and get half the coverage for other vulnerabilities).

For IT start-ups it seems that Silicon Valley’s cash party is coming to an end. Silicon Valley is cooling, not crashing. Valuations are falling. The era of cheap money could be over and valuation expectations are re-calibrating down. The cheap capital party is over. It could mean trouble for weaker startups.

 

933 Comments

  1. Tomi Engdahl says:

    Open Sourcers Race to Build Better Versions of Slack
    http://www.wired.com/2016/03/open-source-devs-racing-build-better-versions-slack/

    Real-time chat applications have been around since the earliest days of the Internet. Yet somehow, despite the enormous number of options, the workplace chat app Slack has surged in popularity. After just two years in business, the company now boasts 675,000 paid users, 2.3 million users overall, and annual revenue of more than $64 million.

    Slack’s growth has shown that even seemingly ancient technologies like chat can still be improved, particularly when it comes to using instant messaging for work. But Slack has the limitations that all proprietary cloud apps do. Your data lives on someone else’s servers. Customization is limited. You have to trust that Slack the company will make the changes you want to Slack the app and not make changes you don’t want.

    That’s why the open source community has been racing to build better versions of Slack, even though countless open source chat apps exist already.

    Reply
  2. Tomi Engdahl says:

    Stack Overflow:
    Stack Overflow Developer Survey 2016: Mac OS overtakes Linux as primary OS, JavaScript most popular language even for back-end developers

    Developer Survey Results
    2016
    http://stackoverflow.com/research/developer-survey-2016

    Reply
  3. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Microsoft pushes back Windows 7 and 8 on Skylake support cut-off date from 2017 to 2018 — Microsoft is clarifying and softening a bit its stance on how long and thoroughly it will support Windows 7 and 8 users who want to run those operating systems on Intel Skylake-based devices.

    Microsoft pushes back Windows 7 and 8 on Skylake support cut-off date from 2017 to 2018
    http://www.zdnet.com/article/microsoft-pushes-back-windows-7-on-skylake-support-cut-off-date-from-2017-to-2018/

    Microsoft is softening its stance on how long and how completely it will continue to support Windows 7 and Windows 8.1 users running Skylake-based devices.

    In a March 18 “Windows for IT Pros” blog post, Microsoft officials outlined the updated terms and conditions.

    Instead of cutting off full, extended support for Windows 7 and Windows 8.1 on Skylake on July 17, 2017, Microsoft will now guarantee full extended support to July 17, 2018 on the set list of devices it provided in February.

    “After July 2018, all critical Windows 7 and Windows 8.1 security updates will be addressed for Skylake systems until extended support ends for Windows 7, January 14, 2020 and Windows 8.1 on January 10, 2023,” according to today’s blog post. (Again, this is only for machines on the list of Microsoft and OEM-supported devices.)

    Microsoft is clarifying and softening a bit its stance on how long and thoroughly it will support Windows 7 and 8 users who want to run those operating systems on Intel Skylake-based devices.

    Reply
  4. Tomi Engdahl says:

    5 Wide and Tall Monitors with Hacked Bezels for Wall of Awesome
    http://hackaday.com/2016/03/20/5-wide-and-tall-monitors-with-hacked-bezels-for-wall-of-awesome/

    If two is better than one, what about five? [Omnicrash] has posted a nice analysis of his monitor setup, which uses 5 portrait mounted monitors side-by-side. To minimize the bezel size between them, he removed the casing and built a custom stand that placed them all closely together for a surround viewing approach. He’s been using this setup for a couple of years and has posted a nice analysis of making it work for multiple purposes. On the upside, he says it is awesome for gaming and watching videos.

    On the downside, NVidia’s drivers and multi-monitor setup are a pain, and some tasks just didn’t work with the bezels. He couldn’t, for instance, run a standard-sized remote desktop screen anywhere without having the bezel get in the way.

    DIY custom 5-monitor setup
    https://www.omnicrash.net/2016/03/13/diy-custom-5-monitor-setup/

    I upgraded to a DIY mounted 5-monitor setup almost two years ago. Since I’m planning to switch to another setup I figured it was about time I’d share my build log, as well as the pros and cons of using an all-portrait 5400×1920 (or as I like to refer to it: 5K) setup on a daily basis for programming, browsing, gaming, video and general productivity.

    Using a few HDMI adapters my 780 Ti drives the center 3 and my TV, while the outer 2 screens are driven using the on-board GPU.

    Advantages

    Lots of screen estate
    Perfect for programming
    Amazing for browsing the web
    Great for using a lot of tools at the same time
    Ideal DPI ratio for fonts like ProFont
    Peripheral vision filling display
    Easy to lose track of mouse cursor

    Disadvantages

    Setup process
    Re-configuration process when updating, RDP sessions and resolution changes
    Low DPI compared to modern ultrawides/4K displays, though this can be an advantage
    BIOS and setup is a pain, have to hold your head sideways
    Bezels can get in the way of game/video/RDP 16:9 content
    Manual window positioning and re-positioning

    Reply
  5. Tomi Engdahl says:

    Software-defined FPGA computing with QuickPlay: Product how-to
    http://www.edn.com/design/integrated-circuit-design/4441611/Software-defined-FPGA-computing-with-QuickPlay–Product-how-to?_mc=NL_EDN_EDT_EDN_review_20160318&cid=NL_EDN_EDT_EDN_review_20160318&elqTrackId=aa98994cafb04692a3ef93ca04013218&elq=a31bf67fa4884ef18038368756f62577&elqaid=31389&elqat=1&elqCampaignId=27436

    Data-centre equipment manufacturers have long been keen to take advantage of the massive parallelism possible with FPGAs to achieve the processing performance and I/O bandwidth needed to keep pace with demand, within a highly efficient power budget. Traditionally however, implementing a hardware computing platform in an FPGA has been a complex challenge that has required designers to deal with some of the lowest levels of hardware implementation.

    Although some recent FPGA-design methodologies incorporating High-Level Synthesis (HLS) tools and software programming languages such as OpenCL, C, and C++ have simplified the task, they have not eliminated the need for specialist FPGA-design expertise. There is a need for a high-level workflow that allows software engineers to use an FPGA as a software-defined computing platform without the pain of hardware design. To satisfy this need, such a workflow should be able to:

    Create functional hardware from pure software code
    Incorporate existing hardware IP blocks if needed
    Infer and create all of the support hardware (interfaces, control, clocks, etc.)
    Support the use of commercial, off-the-shelf boards and custom platforms
    Eliminate hardware debug by ensuring generated hardware is correct by construction
    Support debug of functional blocks using standard software debug tools only

    Familiar challenges

    A software developer without specific hardware expertise could generate Kernel 1 and Kernel 2, using a high-level synthesis tool such as Vivado HLS to compile the software functions Function1() and Function2() as written in C or C++ into FPGA hardware descriptions in VHDL or Verilog. However, the non-algorithmic elements of the design, such as interfaces, control, clocks, and resets could not be generated with HLS tools. Hardware designers would be needed to create these as custom IP.

    A new approach

    PLDA Group, a developer of embedded electronic systems and IP, has created QuickPlay to allow software developers to accomplish these tasks, and hence implement applications intended for CPUs, partially or fully, on FPGA hardware. In this software-centric methodology, the designer first develops a C/C++ functional model of the hardware engine, and then verifies the functional model with standard C/C++ debug tools. The target FPGA platform and I/O interfaces (PCIe, Ethernet, DDR, QDR, etc.) are then specified, and finally the hardware engine is compiled and built.

    https://www.quickplay.io/

    QuickPlay is an Open FPGA development platform that exposes a standard C/C++ framework and API allowing developers across the board to build FPGA augmented applications in no time and with no hardware expertise.

    An Open FPGA Development Platform

    The QuickPlay development platform provides the software and hardware infrastructure that enable tight integration between design IP, FPGA hardware, application software and design software, allowing developers to build FPGA enabled applications in record time. To facilitate adoption and enable a strong ecosystem of IP vendors, board vendors, and service providers, QuickPlay was engineered as an open platform that leverages industry standards: C/C++ and HDL for kernel design, AXI4 as the interconnect architecture, YAML, IP-XACT as the component/board description formats and Python as the scripting language.

    Reply
  6. Tomi Engdahl says:

    Running Out Of Energy?
    http://semiengineering.com/running-out-of-energy/

    The anticipated and growing energy requirements for future computing needs will hit a wall in the next 24 years if the current trajectory is correct. At that point, the world will not produce enough energy for all of the devices that are expected to be drawing power.

    A report issued by the Semiconductor Industry Association and Semiconductor Research Corp., bases its conclusions on system-level energy per bit operation, which are a combination of many components such as logic circuits, memory arrays, interfaces and I/Os. Each of those contributes to the total energy budget.

    For the benchmark energy per bit, as shown in the chart below, computing will not be sustainable by 2040. This is when the energy required for computing is estimated to exceed the estimated world’s energy production. As such, significant improvement in the energy efficiency of computing is needed.

    “It’s not realistic to expect that the world’s energy production is going to be devoted 100% to computing so the question is, how do we do more with less and where are the opportunities for squeezing more out of the system?”

    “Anytime someone looks at a growth rate in a relatively young segment and extrapolates it decades into the future, you run into a problem that nothing can continue to grow at exponential rates forever,”

    Case in point: Once upon a time gasoline cost 5 cents a gallon and cars were built that got 5 or 10 miles to the gallon. If it were extrapolated to say that if the whole world was driving as much as the Americans do in their cars at 5 miles to the gallon, we are going to run out of oil by 2020. But as time goes on, technology comes along that changes the usage patterns and cars are not built to get 5 miles to the gallon.

    He noted that the core of the argument here is really a formula about the relationship between computing performance and energy, and it is primarily tracking the evolution of CPUs and capturing the correct observation that in order to go really really fast it takes more than linear increases in power and complexity.

    “To push the envelope you have to do extraordinary things and the core assumption of the whole report is that you will continue on that same curve as you ramp up computing capability further still.”

    And a key area of focus today is the interface between design and manufacturing where there is a constant need to keep focusing on how to contribute to getting more done with less power, and ultimately with less total energy consumed.

    That also requires adapting to the new hardware architectures that bring more parallelism and more compute efficiency, and then working with very large distributed systems. Given the immense challenges of lowering the energy requirements of computing in the future, it is obvious the task will be accomplished with all hands on deck. And given the impressive accomplishments of the semiconductor industry in the past 50 years, there’s no doubt even more technological advancements will emerge to hold back the threat of hitting the energy wall.

    Reply
  7. Tomi Engdahl says:

    9.7-Inch iPad Pro Is Apple’s Last Chance To Save the iPad Line
    https://hardware.slashdot.org/story/16/03/22/1930203/97-inch-ipad-pro-is-apples-last-chance-to-save-the-ipad-line

    The iPad occupies a unique place in the annals of tech history. Upon its release in 2010, Apple’s first stab at a tablet quickly set sales records. Not only did early iPad sales outpace early iPhone sales, but the iPad quickly became one of the fastest selling consumer electronics products of all time. The iPad’s once-auspicious journey, however, would eventually take an unexpected detour. In what seemed like a blink of an eye, soaring sales began to taper off

    Today, iPad sales are still slumping.

    year over year iPad sales fell by 25% while iPad related revenue dropped by 20%.

    Apple’s new 9.7-inch iPad Pro is the company’s last chance to save the iPad line
    http://bgr.com/2016/03/22/9-7-inch-ipad-pro-apple/

    “I really believe,” Tim Cook said during a January 2012 earnings conference call, “as do many others in the company believe, that there will come a day when the tablet market, in units, is larger than the PC market.”

    And for a while, it was hard not to get on board with Cook’s optimism.

    The iPad’s once-auspicious journey, however, would eventually take an unexpected detour. In what seemed like a blink of an eye, soaring sales began to taper off, even as Apple began to introduce newer and more advanced models. Today, iPad sales are still slumping. During Apple’s most recent earnings report, the company revealed that year over year iPad sales fell by 25% while iPad related revenue dropped by 20%. Hardly an aberration, iPad sales have been dropping for well over two years at this point.

    Yesterday, Apple released a new 9.7-inch iPad Pro and it stands to reason that this is Apple’s last chance to truly inject a bit of life into a faltering product line. The iPad Air 2 was a solid device, but again, consumers have made it clear that it will take a whole a lot more than a thinner and faster device to increase sales.

    As for what’s behind the ongoing and persistent drop in iPad sales, it’s hard to say.

    Reply
  8. Tomi Engdahl says:

    IBM LinuxONE: Who Needs the Cloud?
    http://www.linuxjournal.com/content/ibm-linuxone-who-needs-cloud

    IBM has long been a stalwart supporter of, and participant in the Open Source community. So IBM’s announcement of the LinuxONE platform last year should have come as a surprise to no one. The ultimate goal for LinuxONE, however, may be a bit more surprising.

    LinuxONE is a computing platform designed specifically to take optimum advantage of any or all of the major distributions of Linux; SUSE, Red Hat and starting in April Canonical’s Ubuntu as well. All models have just undergone a significant refresh, adding even more features and capabilities including faster processors, more memory and support for larger amounts of data. There are two LinuxONE models: The LinuxONE Emperor is designed primarily for large enterprises. According to IBM, it can run up to 8,000 virtual servers, over a million Docker containers and 30 billion RESTful web interactions per day supporting millions of active users. The Emperor can have up to 141 cores, 10 terabytes of shared memory, and 640 dedicated I/O (input/output) processors. The LinuxONE Rockerhopper model is a more entry-level platform aimed at mid-sized businesses. Available with up to 20 cores, running at 4.3 GHz, and 4 TBs of memory for performance and scaling advantages. It is capable of supporting nearly a thousand virtual Linux servers on a single footprint. Both LinuxONE systems support KVM (Kernel-based Virtual Machine) with the initial port being supported by SUSE’s distribution.

    Reply
  9. Tomi Engdahl says:

    Chris Williams / The Register:
    Thousands of web apps dependent on JavaScript module Left-Pad broken for a few hours after developer yanks it from NPM in protest — How one developer just broke Node, Babel and thousands of projects in 11 lines of JavaScript — left-pad pulled from NPM – which everyone was using

    How one developer just broke Node, Babel and thousands of projects in 11 lines of JavaScript
    left-pad pulled from NPM – which everyone was using
    http://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/

    Programmers were left staring at broken builds and failed installations on Tuesday after someone toppled the Jenga tower of JavaScript.

    A couple of hours ago, Azer Koçulu unpublished more than 250 of his modules from NPM, which is a popular package manager used by JavaScript projects to install dependencies.

    Koçulu yanked his source code because, we’re told, one of the modules was called Kik and that apparently attracted the attention of lawyers representing the instant-messaging app of the same name.

    According to Koçulu, Kik’s briefs told him to take down the module, he refused, so the lawyers went to NPM’s admins claiming brand infringement.

    “This situation made me realize that NPM is someone’s private land where corporate is more powerful than the people, and I do open source because Power To The People,” Koçulu blogged.

    With left-pad removed from NPM, these applications and widely used bits of open-source infrastructure were unable to obtain the dependency, and thus fell over. Thousands, worldwide. Left-pad was fetched 2,486,696 downloads in just the last month, according to NPM. It was that popular.

    To fix the internet, Laurie Voss, CTO and cofounder of NPM, took the “unprecedented” step of restoring the unpublished left-pad 0.0.3 that apps required. Normally, when a particular version is unpublished, it’s gone and cannot be restored.

    Reply
  10. Tomi Engdahl says:

    Redox OS
    http://www.redox-os.org/

    Redox is a Unix-like Operating System written in Rust, aiming to bring the innovations of Rust to a modern microkernel and full set of applications.

    Reply
  11. Tomi Engdahl says:

    Ownership is Theft: Experiences Building an Embedded OS in Rust
    http://iot.stanford.edu/pubs/levy-tock-plos15.pdf

    Reply
  12. Tomi Engdahl says:

    Continuous Lifecycle: Making a big noise about microservices
    And how to avoid alert fatigue…
    http://www.theregister.co.uk/2016/03/23/microservices_at_continuous_lifecycle/

    They may be small, but microservices are having a real impact on the way real world organisations are developing, deploying and maintaining their software.

    Microservices
    https://en.wikipedia.org/wiki/Microservices

    In computing, microservices is a software architecture style in which complex applications are composed of small, independent processes communicating with each other using language-agnostic APIs. These services are small building blocks, highly decoupled and focus on doing a small task facilitating a modular approach to system-building.[

    Properties of microservices architecture (MSA):

    The services are easy to replace
    Services are organized around capabilities, e.g., user interface front-end, recommendation, logistics, billing, etc.
    Services can be implemented using different programming languages, databases, hardware and software environment, depending on what fits best
    Architectures are symmetrical rather than hierarchical (producer – consumer)

    A microservices-based architecture

    lends itself to a continuous delivery software development process[citation needed]
    is distinct from a service-oriented architecture (SOA) in that the latter aims at integrating various (business) applications whereas several microservices belong to one application only

    Dr. Peter Rodgers introduced the term “Micro-Web-Services” during a presentation at Cloud Computing Expo in 2005. On slide #4 of the conference presentation he states that “Software components are Micro-Web-Services”.

    A workshop of software architects held near Venice in May 2011 used the term “microservice” to describe what the participants saw as a common architectural style that many of them had been recently exploring. In May 2012, the same group decided on “microservices” as the most appropriate name.

    Philosophy of microservices architecture essentially equals the Unix philosophy of “Do one thing and do it well”. It is described as follows:

    The services are small – fine-grained to perform a single function.
    The organization culture should embrace automation of deployment and testing. This eases the burden on management and operations.
    The culture and design principles should embrace failure and faults, similar to anti-fragile systems.
    Each service is elastic, resilient, composable, minimal, and complete.

    The microservices architecture is subject to criticism for a number of issue

    Reply
  13. Tomi Engdahl says:

    Ubuntu Tablet Will Be Available To Pre-Order On Monday
    http://www.omgubuntu.co.uk/2016/03/ubuntu-tablet-m10-goes-sale-monday

    The world’s first Ubuntu Tablet will go on pre-sale this coming Monday, March 28.

    The Aquaris M10 Ubuntu Edition tablet will be available to pre-order in two versions: a HD (1280 x 800) model and a high-spec FHD (1920 x 1200) model.

    Pricing will be announced on Monday. The tablet will, as with the phone, be sold direct by Bq through its international website.

    The Meizu Pro 5 Ubuntu Edition may have been met by a frosty reception during its MWC debut. but the M10 was dealt a far warmer response.

    Reply
  14. Tomi Engdahl says:

    Enterprise revenues power Red Hat past $2bn barrier
    Linux spinner claims hybrid cloud growth
    http://www.theregister.co.uk/2016/03/23/red_hat_2_billion_revenue_q4_fy_2016_results/

    Red Hat is in the enviable position of having become the first open-source firm to break the $2bn revenue barrier.

    The Linux spinner has reported full-year revenue $2.05bn, an increase of 14 per cent from subscriptions, training and services. Net income was up 10 per cent to $199m.

    For its fourth quarter Red Hat reported $543m in revenue – growing 17 per cent year on year – with net income of $53m, up 11 per cent on 2015.

    Red Hat was the first open-source firm to break the psychologically important – for software firms – $1bn barrier in its fiscal year 2011, announced in March 2012.

    Red Hat has achieved its targets by focusing squarely on the enterprise and on the server, unlike its consumer and/or desktop-obsessed rivals, and other distros.

    Red Hat has promoted independent, vendor-neutral cloud with CloudForms – its Infrastructure as a Service – and Red Hat Enterprise Linux OpenStack Platform.

    Cloud, particularly independent cloud, is a tough road in the world of AWS and Microsoft Azure, neither of which are open source, as Red Hat discovered with CloudForms. RHEL is, though, an option for Penguins on both.

    “Our revenue from private Infrastructure-as-a-Service, PaaS and cloud management technologies is growing at nearly twice as fast as our public cloud revenue did when it was at the same size,”

    Reply
  15. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Google debuts Cloud Machine Learning Platform to assist in developing pre-trained machine learning models and building new models from scratch — Google launches new machine learning platform — Google today announced a new machine learning platform for developers at its NEXT Google Cloud Platform user conference in San Francisco.

    Google launches new machine learning platform
    http://techcrunch.com/2016/03/23/google-launches-new-machine-learning-platform/

    Google today announced a new machine learning platform for developers at its NEXT Google Cloud Platform user conference in San Francisco. As Google chairman Eric Schmidt stressed during today’s keynote, Google believes machine learning is “what’s next.” With this new platform, Google will make it easier for developers to use some of the machine learning smarts Google already uses to power features like Smart Reply in Inbox.

    The service is now available in limited preview.

    “Major Google applications use Cloud Machine Learning, including Photos (image search), the Google app (voice search), Translate and Inbox (Smart Reply),” the company says. “Our platform is now available as a cloud service to bring unmatched scale and speed to your business applications.”

    Google’s Cloud Machine Learning platform basically consists of two parts: one that allows developers to build machine learning models from their own data, and another that offers developers a pre-trained model.

    TechCrunch:
    Google opens access to its speech recognition API, going head to head with Nuance
    http://techcrunch.com/2016/03/23/google-opens-access-to-its-speech-recognition-api-going-head-to-head-with-nuance/

    Google is planning to compete with Nuance and other voice recognition companies head on by opening up its speech recognition API to third-party developers. To attract developers, the app will be free at launch with pricing to be introduced at a later date.

    We’d been hearing murmurs about this service developing for weeks now. The company formally announced the service today during its NEXT cloud user conference, where it also unveiled a raft of other machine learning developments and updates, most significantly a new machine learning platform.

    The Google Cloud Speech API, which will cover over 80 languages and will work with any application in real-time streaming or batch mode, will offer full set of APIs for applications to “see, hear and translate,” Google says. It is based on the same neural network tech that powers Google’s voice search in the Google app and voice typing in Google’s Keyboard.

    Reply
  16. Tomi Engdahl says:

    Ron Miller / TechCrunch:
    Google Stackdriver helps IT get unified view across AWS and Google Cloud — Today at the GCPNext16 event in San Francisco, Google announced the launch of Google StackDriver, a tool that gives IT a unified tool for monitoring, alerting, incidents management and logging complete …

    Google Stackdriver helps IT get unified view across AWS and Google Cloud
    http://techcrunch.com/2016/03/23/google-stackdriver-helps-it-get-unified-view-across-aws-and-google-cloud/

    Today at the GCPNext16 event in San Francisco, Google announced the launch of Google StackDriver, a tool that gives IT a unified tool for monitoring, alerting, incidents management and logging complete with dashboards providing visual insights across each category.

    The logging capabilities let you search across your GCP and AWS clusters from a single interface.

    It’s trying to differentiate itself from the competition, particularly AWS, even while supporting AWS in this tool.

    Reply
  17. Tomi Engdahl says:

    Alex Kantrowitz / BuzzFeed:
    How the internet manipulated Microsoft’s AI chatbot into learning and repeating hate speech
    How The Internet Turned Microsoft’s AI Chatbot Into A Neo-Nazi
    http://www.buzzfeed.com/alexkantrowitz/how-the-internet-turned-microsofts-ai-chatbot-into-a-neo-naz#.ntZ694o2K
    “Tay” was tricked by a bunch of people exploiting a dead-simple but glaring flaw.

    Peter Lee / The Official Microsoft Blog:
    Microsoft apologizes for Tay’s hurtful tweets, says coordinated attack exploited vulnerability in Tay — Learning from Tay’s introduction — As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay …
    Learning from Tay’s introduction
    http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/

    Reply
  18. Tomi Engdahl says:

    NTT to buy Dell’s services division for $3.05 billion
    http://techcrunch.com/2016/03/28/ntt-to-buy-dells-services-division-for-3-05-billion/

    You may know Dell as a computer and server maker, but Dell also operates a substantial IT services division — at least it did until today. NTT Data, the IT services company of NTT, is acquiring Dell Systems for $3.05 billion.

    The main reason why Dell sold off its division is that the company needs cash, and quickly. When Dell acquired EMC for $67 billion, the company promised that it would find ways to help finance the debt needed for the EMC acquisition.

    Reply
  19. Tomi Engdahl says:

    Hadoop rebels unleash spec to battle the Cloudera/MapR empire
    ODPi publishes runtime spec and test suite
    http://www.theregister.co.uk/2016/03/29/hadoop_rebels_unleash_spec_to_battle_the_clouderamapr_empire/

    ODPi, the group formerly known as the Open Data Platform initiative and set up last year as an attempt to standardise Hadoop applications, has published its first runtime specification.

    Backed by Hortonworks but kicked into the corner by heavyweights MapR and Cloudera, ODPi was set up last year to try and make sure applications would work across multiple Apache Hadoop distributions.

    The ODPi technical working group says its objectives are:

    For consumers: ability to run any “ODPi-compatible” software on any “ODPi-compliant” platform and have it work.
    For ISVs: compatibility guidelines that allow them to “test once, run everywhere.”
    For Hadoop platform providers: compliance guidelines that enable ODPi-compatible software to run successfully on their solutions. But the guidelines must allow providers to patch their customers in an expeditious manner, to deal with emergencies.

    ODPi Runtime Specification: 1.0
    https://github.com/odpi/specs/blob/master/ODPi-Runtime.md

    Reply
  20. Tomi Engdahl says:

    Software automation and AI in DevOps aren’t the fast track to Skynet
    Behind every robot lies a good human
    http://www.theregister.co.uk/2016/03/29/dev_ops_robots_to_destroy_the_earth/

    Software automation is becoming intelligent, going deep into systems and is even going “autonomic” and helping create self-healing systems.

    If we engineer too much automation into our new notion of intelligently provisioned DevOps, then are we at risk of breaking the Internet and causing global nuclear meltdown?

    Maybe it’s not quite as dangerous as typing Google into Google (please don’t ever do that), but you get the point.

    Reply
  21. Tomi Engdahl says:

    Oculus Rift review-gasm round-up: The QT on VR
    We read what people had to say, so you don’t have to
    http://www.theregister.co.uk/2016/03/29/oculus_rift_review_roundup/

    The much-hyped virtual reality headset Oculus Rift is finally shipping to its first customers this week, and the Facebook-owned company dished out a few of them ahead of time to select publications.

    The embargo lifted Monday morning and we have waded through tens of thousands of words contained in nine reviews so you don’t have to in order to bring you:

    The Final Word on the Oculus Rift.

    And that word is: Wait.

    Reply
  22. Tomi Engdahl says:

    Agdubs / The npm Blog:
    npm no longer allowing developers to automatically unpublish packages over 24 hours old — changes to npm’s unpublish policy — One of Node.js’ core strengths is the community’s trust in npm’s registry. As it’s grown, the registry has filled with packages that are more and more interconnected.

    changes to npm’s unpublish policy
    http://blog.npmjs.org/post/141905368000/changes-to-npms-unpublish-policy

    One of Node.js’ core strengths is the community’s trust in npm’s registry. As it’s grown, the registry has filled with packages that are more and more interconnected.

    A byproduct of being so interdependent is that a single actor can wreak significant havoc across the ecosystem. If a publisher unpublishes a package that others depend upon, this breaks every downstream project that depends upon it, possibly thousands of projects.

    Last Tuesday’s events revealed that this danger isn’t just hypothetical, and it’s one for which we already should have been prepared. It’s our mission to help the community succeed, and by failing to protect the community, we didn’t uphold that mission.

    We’re sorry.

    Reply
  23. Tomi Engdahl says:

    Dina Bass / Bloomberg Business:
    Microsoft is building a variety of AI chat bots that manage tasks via discussion, as part of a “conversation as a platform” strategy

    Clippy’s Back: The Future of Microsoft Is Chatbots
    bloomberg.com/features/2016-microsoft-future-ai-chatbots/

    CEO Satya Nadella bets big on artificial intelligence that will be fast, smart, friendly, helpful, and (fingers crossed) not at all racist.

    Predictions about artificial intelligence tend to fall into two scenarios. Some picture a utopia of computer-augmented superhumans living lives of leisure and intellectual pursuit. Others believe it’s just a matter of time before software coheres into an army of Terminators that harvest humans for fuel. After spending some time with Tay, Microsoft’s new chatbot software, it was easy to see a third possibility: The AI future may simply be incredibly annoying.

    Satya Nadella, 48, succeeded Steve Ballmer as Microsoft’s chief executive officer two years ago. “I’m petrified to even ask it anything, because who knows what it may say,” he says.

    Bots aren’t just a novelty; unlike Tay, some of them do things. They’ll act as your interface with computers and smartphones, helping you book a trip or send a message to a colleague, and do that through a conversation instead of a mouse click or finger tap. Microsoft believes the world will soon move away from apps—where Apple and Google rule—into a phase dominated by chats with bots. “When you start early, there’s a risk you get it wrong,” Cheng said in March, in the lunch area of her lab building on Microsoft’s campus. “I know we will get it wrong. Tay is going to offend somebody.”

    Reply
  24. Tomi Engdahl says:

    Microsoft unveils Desktop App Converter, a developer tool for bringing existing Win32 apps to the Windows Store
    http://venturebeat.com/2016/03/30/microsoft-unveils-desktop-app-converter-a-developer-tool-for-bringing-existing-win32-apps-to-the-windows-store/

    Microsoft today unveiled the Desktop App Converter, which lets developers bring existing Windows applications to the Universal Windows Platform (UWP). The company is hoping to bring the 16 million existing Win32/.Net applications to the Windows Store.

    UWP allows developers to build a single app that changes based on your device and screen size. One app can work on your Windows 10 computer, Windows 10 tablet, Windows 10 Mobile smartphone, Xbox One console, and eventually HoloLens headset.

    Being able to publish the resulting app in the Windows Store means another channel for distribution and sale. It also means users can cleanly install and uninstall the app or game. Furthermore, the apps get access to all of UWP’s APIs, so developers can add more functionality that is specific to Windows 10.

    Reply
  25. Tomi Engdahl says:

    Andrew Cunningham / Ars Technica:
    Microsoft says Windows 10 is now on over 270M active devices, up from 200M in January, making it the fastest growing version of Windows

    Microsoft: Windows 10 has over 270 million active users
    http://arstechnica.com/gadgets/2016/03/microsoft-windows-10-has-over-270-million-active-users/
    Brisk adoption rate continues eight months after Windows 10’s initial launch.

    Reply
  26. Tomi Engdahl says:

    Dina Bass / Bloomberg Business:
    Microsoft is building a variety of AI chatbots that manage tasks via discussion, as part of a “conversation as a platform” strategy — Clippy’s Back: The Future of Microsoft Is Chatbots — Predictions about artificial intelligence tend to fall into two scenarios.

    Clippy’s Back: The Future of Microsoft Is Chatbots
    http://www.bloomberg.com/features/2016-microsoft-future-ai-chatbots/
    CEO Satya Nadella bets big on artificial intelligence that will be fast, smart, friendly, helpful, and (fingers crossed) not at all racist.

    Reply
  27. Tomi Engdahl says:

    Jacob Kastrenakes / The Verge:
    Skype to let you book trips, shop, and more by chatting with third-party chatbots, with conversations brokered by Cortana — Skype is getting Cortana and crazy bot messaging — You’ll soon be able to use Skype to book trips, shop, and plan your schedule, just by chatting with Cortana.

    Skype is getting Cortana and crazy bot messaging
    http://www.theverge.com/2016/3/30/11332424/skype-cortana-bot-interactions-messaging

    Reply
  28. Tomi Engdahl says:

    Microsoft launches Bot Framework to let developers build their own chatbots
    http://venturebeat.com/2016/03/30/microsoft-bot-framework/

    Reply
  29. Tomi Engdahl says:

    iFixit:
    Consumer-ready Oculus Rift teardown: two OLED displays with a combined resolution of 2160×1200 mounted to adjustable lenses, 90 FPS refresh rate — Oculus Rift CV1 Teardown — Teardown — Teardowns provide a look inside a device and should not be used as disassembly instructions.

    Oculus Rift CV1 Teardown
    https://www.ifixit.com/Teardown/Oculus+Rift+CV1+Teardown/60612

    Reply
  30. Tomi Engdahl says:

    Microsoft GitHubs BotBuilder framework behind Tay chatbot
    Hey kids! Now you can write your own bodgy bot!
    http://www.theregister.co.uk/2016/03/31/microsoft_githubs_botbuilder_framework/

    So this is what @TayAndYou was supposed to be about: Microsoft CEO Satya Nadella has used his keynote at the Build conference to launch an open source chatbot framework.

    Instead of setting the buzz before the big reveal, Microsoft’s shot at a public Twitter-bot was derailed when 4chan users worked out how to game “Tay”, turning it into a racist Nazi-sympathising troll.

    However, with the code ready to go and the BotFramework website registered, CEO Satya Nadella went ahead and unveiled the framework.

    The three-part framework comprises connectors, SDKs, and a directory of published bots, the last of which is on the “coming soon” list.

    The connectors let DIY bots respond to Skype, Slack, text messages, Office 365 e-mail, GroupMe and Telegram. Connectors handle message routing, language translation, and user state management, and there’s also a connector providing embeddable Web chat control.

    For those that want to create a bot for a not-yet-supported channel, there’s also a direct line API.

    The SDK, on Github, includes libraries, samples, and tools, and Redmond says it will eventually include a directory of bots built using the software.

    Developers who want to avoid replicating @TayAndYou can try their hand running up bots in either C# or Node.js.

    https://github.com/Microsoft/BotBuilder
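
    As a toy illustration of that connector/bot split (plain Python, not the actual Bot Framework SDK; the class names here are invented), the bot itself is just reply logic while a connector owns routing for one channel:

        class EchoBot:
            """The bot proper: nothing but reply logic."""
            def reply(self, text: str) -> str:
                return f"You said: {text}"

        class ConsoleConnector:
            """Stand-in for a channel connector (Skype, Slack, SMS, ...)."""
            def __init__(self, bot: EchoBot) -> None:
                self.bot = bot

            def route(self, incoming: str) -> str:
                # A real connector would also handle translation and user state.
                return self.bot.reply(incoming)

        print(ConsoleConnector(EchoBot()).route("hello"))  # -> You said: hello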

    Reply
  31. Tomi Engdahl says:

    Microsoft cracks open Visual Studio to Linux C++ coders
    Plug in. Go, go, go
    http://www.theregister.co.uk/2016/03/31/microsoft_visual_studio_c_plus_plus_for_linux/

    BUILD 2016 Microsoft’s love of Linux is extending to its flagship Visual Studio suite.

    Redmond has released for download an extension it has developed that lets you roll C++ code for Linux servers, desktops and devices.

    Visual Studio will copy and remote build source and launch the application with a debugger. There’s added support in the project system for architectures like ARM.

    The extension only supports remote builds, with Microsoft citing dependencies on the presence of certain tools – openssh-server, g++, gdb and gdbserver.

    The plug-in comes with three templates – Blink for IoT for devices such as Raspberry Pi, Console Application, and Empty to add source and configure.

    Visual Studio is the default IDE for the vast majority of Microsoft programmers.

    Visual C++ for Linux Development Microsoft
    Tools for Linux C++ development in Visual Studio
    https://visualstudiogallery.msdn.microsoft.com/725025cf-7067-45c2-8d01-1e0fd359ae6e

    Reply
  32. Tomi Engdahl says:

    Emil Protalinski / VentureBeat:
    Microsoft integrates Xamarin into Visual Studio for free, will open source Xamarin runtime — Microsoft today announced that Xamarin is now available for free for every Visual Studio user. This includes all editions of Visual Studio, including the free Visual Studio Community Edition …

    Microsoft integrates Xamarin into Visual Studio for free, will open source Xamarin runtime
    http://venturebeat.com/2016/03/31/microsoft-integrates-xamarin-into-visual-studio-will-open-source-xamarin-runtime/

    Microsoft today announced that Xamarin is now available for free for every Visual Studio user. This includes all editions of Visual Studio, including the free Visual Studio Community Edition, Visual Studio Professional, and Visual Studio Enterprise.

    Furthermore, Xamarin Studio for OS X is being made available for free as a community edition, and Visual Studio Enterprise subscribers will get access to Xamarin’s enterprise capabilities at no additional cost.

    The company also promised to open-source Xamarin’s SDK, including its runtime, libraries, and command line tools, as part of the .NET Foundation “in the coming months.” Both the Xamarin SDK and Mono will be available under the MIT License. Speaking of the .NET Foundation, Microsoft also announced that Unity, JetBrains, and Red Hat have all joined.

    Xamarin cofounder Miguel de Icaza was on stage at Microsoft’s Build 2016 developer conference today demoing using his tools for writing iOS and Android apps in Visual Studio. It was like a dream come true for him: “I am happy to have finally completed the longest job interview in my career.”

    Reply
  33. Tomi Engdahl says:

    Michael del Castillo / CoinDesk:
    Microsoft teams with ConsenSys to build Ethereum-based distributed apps in Visual Studio

    Microsoft Adds Ethereum to Windows Platform For Over 3 Million Developers
    http://www.coindesk.com/microsoft-ethereum-3-million-developers/

    Reply
  34. Tomi Engdahl says:

    Best NASes: Q1 2016
    by Ganesh T S on March 30, 2016 5:00 PM EST
    http://www.anandtech.com/show/9813/best-nases

    Reply
  35. Tomi Engdahl says:

    IT freely, a true tale: One night a project saved my life
    Staying human in an automated lifecycle
    http://www.theregister.co.uk/2016/03/31/anonymous_survivor/

    Everyone knows that IT is a byword for burnout. Admins, coders and hardware jocks frequently keep unsociable hours. Putting in 60-hour weeks is something of a norm. Such punishing workloads can and do push people over the edge. Everyone deals with stress in different ways.

    Some people snap and end up taking it to the extreme, as we witnessed last month when one user ended up shooting his PC. The Anxiety and Depression Association of America estimates that 72 per cent of workers in general suffer from stress that impacts their daily lives and wellbeing.

    I have always been pre-disposed to depression but my newly landed job could hardly have pushed me any further. Things started out fine; I was happy. I did what was needed and my reviews were positive and I went home feeling I had achieved something that day.

    But gradually, more and more work started being heaped upon the small team. Workload increases averaged around 100 per cent quarter on quarter.

    This was compounded by the fact that in large-scale IT environments, stress is part of the job. Losses from downtime are frequently counted in the mid five figures per hour. When it is on you to fix it, you do tend to get a bit nervous. Some outages can cost a whole lot more, especially if financial penalties are included. Just look back at what happened to RBS not so long ago to see what can go wrong.

    Quite serious failures can last for days and sometimes even weeks. Everyone working on such an event is subject to stress, on top of their normal day-to-day activities.

    Some staff cracked under the pressure. Members of the team were there one day, gone the next. A few never returned.

    Check yourself

    Typical warning signs that people are suffering from work-related stress can include lack of appetite and lack of sleep – but more people will probably relate to drinking and potentially destructive habits.

    Find something to call your own

    This is without doubt what saved me. I think a lot of people find that part of the issue is repetition and nothing ever moving forward, stuck in endless bureaucracy.

    Ignore(ish) what the boss says

    A lot of the time part of the bigger problem is that workloads and expectations can be very ill-defined. Having a pile of tasks and no real order just adds to your stress and frustration.

    Work on the priority items and fuck the rest. If something more urgent comes along, make sure management are aware of it.

    Strike a balance and stick to it

    Lack of sleep, along with reliance on stimulants or depressants, is another part of the issue that can contribute to the situation.

    Talk to your boss or HR department

    If your workload is truly too much, you need to speak to your boss. They have a legal requirement to safeguard your health and wellbeing. If nothing comes of it, I suggest visiting your doctor. It may reflect badly on you, some may say, but your mental health comes first.

    If not HR, then someone

    If you really don’t want to talk to HR or the doctor, there are alternatives. Some companies have an independent counselling service.

    Reply
  36. Tomi Engdahl says:

    First USB-Powered 8TB Drive Is as Portable as a Flash Drive
    http://gizmodo.com/first-usb-powered-8tb-drive-is-as-portable-as-a-flash-d-1767954348

    Seagate’s new Innov8 drive packs 8TB of storage into an external enclosure that doesn’t need to draw power from an outlet.

    Available sometime next month for $350, the Innov8 relies only on a USB-C connection to your computer to work.

    Seagate Launches World’s First USB-powered Desktop Hard Drive With Innov8
    http://www.seagate.com/gb/en/about-seagate/news/seagate-launches-worlds-first-usb-powered-master-pr/

    Reply
  37. Tomi Engdahl says:

    Save it, devs. Red Hat doesn’t want your $99 for RHEL
    Free pilot’s license for immutable infrastructure nuts
    http://www.theregister.co.uk/2016/03/31/red_hat_rhel_free_dev_license/

    Red Hat has cut the $99 price of its Linux developer subscription to zero, for penguins building cloud microservices using containers.

    The company today is expected to start giving away its Red Hat Enterprise Linux (RHEL) subscription for free as part of the existing Red Hat Developer Program.

    The free license runs in tandem with the existing $99 developer license for those already paying, with Red Hat assessing whether it should continue charging.

    It follows the introduction of a free developer license for the JBoss application server, FUSE, Drools and BPM suite 18 months ago.

    Like these, the RHEL license applies only to development, not production environments. Unlike these, it’s RHEL that’s turned Red Hat into a profitable Linux vendor and the first past $1bn and then, last year, $2bn in revenue.

    Reply
  38. Tomi Engdahl says:

    Intel flops out four 3D flash SSDs – and says they’re the densest ever
    SSD chip fashionistas adopt the layering system
    http://www.theregister.co.uk/2016/03/31/intels_quad_3d_nand_ssd_splurge/

    Intel has introduced its first 3D NAND SSDs, updating three planar NVMe SSDs with four new models, and claiming to have the industry’s highest density 3D NAND.

    The existing DC P3500, P3600 and P3700 products used 20nm MLC flash technology, with the P3500 and P3600 dating from June 2014 and the P3700 being introduced in September last year along with a P3608 (two P3600s in one SSD package).

    Reply
  39. Tomi Engdahl says:

    Gartner has updated its forecast for how sales of PCs and smartphones will develop over the coming years. Mobile phone sales are no longer growing, and the long-feared shrinkage of PC sales is levelling off.

    The decline in desktop and laptop sales is finally grinding to a halt: 228 million desktops and laptops are expected to ship this year and 223 million next year. At the same time, ultra-portables are selling in ever greater numbers, so the overall market grows slightly.

    In total, 284 million computers are expected to be sold this year. Next year the number grows to 296 million, and in 2018 to 306 million.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4197:pc-lasku-taittuu-kannykkakasvu-hiipuu&catid=13&Itemid=101
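
    As a quick arithmetic check on those figures (a minimal sketch, assuming the 284/296/306 million totals include ultra-portables alongside desktops and laptops, and taking “this year” as 2016):

        # Figures from the Gartner forecast above, in millions of units.
        desktops_laptops = {2016: 228, 2017: 223}
        total_computers = {2016: 284, 2017: 296, 2018: 306}

        for year in (2016, 2017):
            ultra = total_computers[year] - desktops_laptops[year]
            print(f"{year}: ~{ultra} million ultra-portables implied")
        # 2016: ~56 million, 2017: ~73 million -- the ultra-portable growth is
        # what lifts the overall computer market slightly.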

    Reply
  40. Tomi Engdahl says:

    Hyperconverged solutions will take over data centers

    Continuous change in the IT market and fast development cycles create uncertainty for customers. At the same time, the business needs supporting IT services at very short notice.

    If a company’s IT department or service provider cannot deliver them, services are often purchased through other channels.

    Traditional infrastructure has challenges enough. Complex architectures and siloed solutions for networks, servers, storage, backup and virtualization, plus the management software each of them needs, are only the beginning.

    Add to that separate automation systems and the lack of mutual integration and programmability, and the IT budget is wasted mainly on maintaining the systems: monitoring, troubleshooting, capacity management, and software installations and updates.

    Hyperconverged systems are one answer to managing this complexity. In them, servers, storage and backup are built into a single operational system.

    Their main advantages are simple, quick deployment and far simpler management compared with traditional systems. There are far fewer components to manage and upgrade, and the solution is optimized to work as a whole.

    Hyperconverged solutions cannot be applied to every application. As a rule, they are built for new-generation, agile virtualization environments.

    Client virtualization (VDI, Virtual Desktop Infrastructure) is also one of the best use cases.

    In addition to applications running on top of traditional server virtualization, the solutions are designed to support future “cloud native” applications that run, say, on a Linux container platform using object-based storage.

    In all advanced hyperconverged solutions, the network layer and its management are also integrated into the whole. By combining servers, storage, backup, networking and services with comprehensive automation and orchestration, one can already talk about a complete new-generation hybrid cloud solution.

    IDC estimates that companies using hyperconverged solutions achieve savings of more than 20 percent in ongoing IT operating costs alone.

    Source: http://www.tivi.fi/Kumppaniblogit/cisco/hyperkonvergoidut-ratkaisut-valtaavat-datakeskukset-6534115

    Reply
  41. Tomi Engdahl says:

    Alison Griswold / Quartz:
    Zero tech companies went public in the US in Q1 2016, which hasn’t happened since Q1 2009 during the recession — Zero tech companies went public in the US in Q1 2016, which hasn’t happened since Q1 2009 — The US market for tech IPOs has totally frozen over.

    The market for tech IPOs hasn’t been this awful since the Great Recession
    http://qz.com/652261/the-market-for-tech-ipos-hasnt-been-this-awful-since-the-great-recession/

    The US market for tech IPOs has totally frozen over.

    Zero Internet or tech companies went public on US exchanges in the first quarter of 2016. The last time that happened was in the first quarter of 2009, during the depths of the Great Recession, according to data from Dealogic.

    Just two years ago, the picture looked quite different.

    Startups began steering clear of IPOs last year, even as many of them continued to raise money at tremendous valuations. The billion-dollar startup club has grown to include more than 140 members, of which nearly 90 are based in the US. Uber is the biggest, with its $62.5 billion valuation, followed by Chinese electronics company Xiaomi ($46 billion) and Airbnb ($25.5 billion).

    But lately a chill has also settled over financing in Silicon Valley. Startup funding fell 30% in the fourth quarter of 2015 from the one prior, to $27.7 billion.

    Reply
  42. Tomi Engdahl says:

    Michael D. Shear / New York Times:
    The White House is undergoing its first major IT overhaul in over a decade, now has improved WiFi and color printers, allows employees to use iPhones, more

    Technology Upgrades Get White House Out of the 20th Century
    http://www.nytimes.com/2016/04/04/us/politics/technology-upgrades-get-white-house-out-of-the-20th-century.html?_r=0

    WASHINGTON — Can you run the country with spotty Wi-Fi, computers that power on and off randomly and desktop speakerphones from Radio Shack, circa 1985?

    It turns out you can. But it is not ideal, as President Obama’s staff has discovered during the past seven years. Now, as Mr. Obama prepares to leave the White House early next year, one of his legacies will be the office information technology upgrade that his staff has finally begun.

    Until very recently, West Wing aides were stuck in a sad and stunning state of technological inferiority: desktop computers from the last decade, black-and-white printers that could not do double-sided copies, aging BlackBerries (no iPhones), weak wireless Internet and desktop phones so old that few staff members knew how to program the speed-dial buttons.

    Reply
  43. Tomi Engdahl says:

    Smartphone Sales Growth Projected to Slip to 7%
    http://www.eetimes.com/document.asp?doc_id=1329335&

    Smartphone sales will grow at the lowest rate on record and PC sales will decline again in 2016, according to market research firm Gartner Inc.

    Gartner (Stamford, Conn.) said it expects smartphone sales to grow 7% this year to reach 1.5 billion units. It would mark the first time that smartphone sales grew at less than 10% in a year, according to the firm.

    “The double-digit growth era for the global smartphone market has come to an end,” said Ranjit Atwal, research director at Gartner, in a statement.

    Atwal added that worsening economic conditions have historically had a negligible impact on smartphone sales. But that is no longer the case, he said. Smartphone sales in both North America and China are forecast to be roughly flat this year.

    The total mobile phone market is forecast to reach 1.9 billion units in 2016, Gartner said.

    Combined, global sales of PCs, tablets, “ultramobiles” and mobile phones are projected to reach 2.4 billion in 2016, an increase of less than 1% from 2015, Gartner said.

    Reply
  44. Tomi Engdahl says:

    Arik Hesseldahl / Re/code:
    Shake-up at Intel as Kirk Skaugen, head of PC business, and Doug Davis, head of IoT effort, depart

    Shake-up at Intel as veteran execs Davis and Skaugen leave
    http://recode.net/2016/04/04/intel-doug-davis-kirk-skaugen-depart/

    Chipmaker Intel just announced the departure plans of two longtime executives, one heading up its business devoted to personal computers, another heading up its Internet of Things efforts.

    Kirk Skaugen, Intel’s senior VP in its Client Computing Group (Intel refers to PCs as clients), is leaving the company for a new job elsewhere, according to an internal memo released today.

    Doug Davis, the general manager of the Internet of Things unit, is also leaving after 32 years with the chip giant.

    Reply
  45. Tomi Engdahl says:

    Memory and storage boundary changes
    Two transitions starting that will radically speed up storage
    http://www.theregister.co.uk/2016/04/04/memory_and_storage_boundary_changes/

    Latency is always the storage access bête noire. No one likes to wait, least of all VMs hungry for data access in multi-threaded, multi-core, multi-socket, virtualized servers. Processors aren’t getting that much faster as Moore’s Law runs out of steam, so attention is turning to fixing IO delays as a way of getting our expensive IT to do more work.

    Two technology changes are starting to be applied and both could have massive latency reduction effects at the two main storage boundary points: between memory and storage on the one hand, and between internal and external, networked storage on the other.

    The internal/external boundary is moving because of NVMe-over-fabric (NVMeF) access. How and why is this happening?

    Internal:external storage boundary

    Internal storage is accessed over the PCIe bus and then over SAS or SATA hardware adapters and protocol stacks, whether the media be disk or solid state drives (SSDs). Direct NVMe PCIe bus access is the fastest internal storage access method, but has not yet generally replaced SAS/SATA SSD and HDD access.

    To get access to much higher-capacity and shared storage we need networked external arrays, linked to servers by Fibre Channel (block access) or Ethernet (iSCSI block and/or file access). Again, we’ll set object storage and Hadoop storage off to one side, as we’re concentrating on the generic server-external storage situation.

    Network-crossing time adds to media access time. When hard disk drives (HDDs) were the primary storage media in arrays, contributing their own seek time and rotational delays to data access time, then network delays weren’t quite so obvious. Now that primary external array storage is changing to much faster SSDs, the network transit time is more prominent.

    NVMeF access latencies are 200 microseconds or less.

    The memory:storage boundary

    The memory storage boundary occurs where DRAM, which is volatile or non-persistent – data being lost when power is turned off – meets storage, which is persistent or non-volatile, retaining its data contents when power is turned off.

    There is a speed penalty here, as access to the fastest non-volatile media, flash, is very much slower than memory access (0.2 microsecs vs 30/100 microsecs for read/write access to NVMe SSD.)

    This can be mitigated by putting the flash directly on the memory bus, using a flash DIMM (NVDIMM-N), giving us a 5 microsecond latency. This is what Diablo Technologies does, and also SanDisk with its ULLtraDIMM. But this is still 25 times slower than a memory access.
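
    As a rough sanity check on those numbers, this minimal Python sketch (latency values are the ones quoted in this article, including the NVMe-over-fabric figure above; treat them as orders of magnitude, not benchmarks) prints each access time as a multiple of a DRAM access and reproduces the “25 times slower” figure for the flash DIMM:

        # Approximate access latencies quoted above, in microseconds.
        latencies_us = {
            "DRAM access": 0.2,
            "NVDIMM-N (flash on the memory bus)": 5.0,
            "NVMe SSD read": 30.0,
            "NVMe SSD write": 100.0,
            "NVMe-over-fabric access": 200.0,
        }

        dram = latencies_us["DRAM access"]
        for name, value in latencies_us.items():
            print(f"{name:<35} {value:>7.1f} us  ({value / dram:>5.0f}x DRAM)")
        # NVDIMM-N comes out at 25x DRAM, matching the claim above.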

    Intel and Micron’s 3D XPoint technology is claimed to provide much faster non-volatile storage access than flash. The two say XPoint is up to 1,000 times faster than SSD access but not as fast as DRAM, though they have not supplied actual latency figures.

    Reply
  46. Tomi Engdahl says:

    Memory-based storage? Yes, please
    Just a few compromises first
    http://www.theregister.co.uk/2016/04/04/memorybased_storage_yes_please/

    Memory-based storage? Yes, please. And I’m not talking about flash memory here; well, not in the way we usually use flash, at least.

    I wrote about this a long time ago: in-memory storage makes sense. Not only does it make sense now, but it’s becoming a necessity. The number of applications taking advantage of large memory capacity is growing like crazy, especially in the Big Data analytics and HPC fields, and in all those cases where you need large amounts of data as close as possible to the CPU.

    Yes, memory, but not for everyone

    We could argue that any application can benefit from a larger amount of memory, but that’s not always true. Some applications are not written to take advantage of all the memory available or they simply don’t need it because of the nature of their algorithm.

    Scale-out distributed applications can benefit the most from large memory servers. In this case, the more memory they can address locally, the less they need to access remote and slower resources (another node or shared storage, for example).

    Compromises

    RAM is fast, but it doesn’t come cheap. Access speed is measured in nanoseconds but it is prohibitively expensive if you think in terms of TBs instead of GBs.

    On the other hand, flash memory (NAND) brings latency at microseconds but is way cheaper. There is another catch, however – if it is used through the wrong interface it could become even slower and less predictable.

    Leading-edge solutions

    I want to mention just two examples here: Diablo Technologies Memory1 and PlexiStor SDM.

    The first is a hardware solution; a memory DIMM which uses NAND chips. From the outside it looks like a normal DIMM, just slower – but a DIMM none the less (as close as possible to the CPU).

    A special Linux driver is used to mitigate the speed difference, through a smart tiering mechanism, between actual and “fake” RAM.

    Reply
  47. Tomi Engdahl says:

    Casey Newton / The Verge:
    Facebook announces Automatic Alternative Text on iOS, that uses AI to automatically describe images to blind users — Facebook begins using artificial intelligence to describe photos to blind users — Ask a member of Facebook’s growth team what feature played the biggest role in getting …

    Facebook begins using artificial intelligence to describe photos to blind users
    Second sight
    http://www.theverge.com/2016/4/5/11364914/facebook-automatic-alt-tags-blind-visually-impared

    Ask a member of Facebook’s growth team what feature played the biggest role in getting the company to a billion daily users, and they’ll likely tell you it was photos. The endless stream of pictures, which users have been able to upload since 2005, a year after Facebook’s launch, makes the social network irresistible to a global audience. It’s difficult to imagine Facebook without photos. Yet for millions of blind and visually impaired people, that’s been the reality for over a decade.

    Not anymore. Today Facebook will begin automatically describing the content of photos to blind and visually impaired users. Called “automatic alternative text,” the feature was created by Facebook’s 5-year-old accessibility team.

    Reply
  48. Tomi Engdahl says:

    Intel’s Xeon E5 in the Clouds
    3D NAND SSDs Sweeten Deal
    http://www.eetimes.com/document.asp?doc_id=1329339&

    Intel has completed its yearly redesign of its workhorse E5-2600 processor, now in version four, which Intel claims has hardware features that enhance its use in modern software-defined infrastructure (SDI) in the cloud. Intel also announced new ultra-large and ultra-fast three-dimensional (3D) NAND solid-state drives (SSDs) optimized for cloud deployment, plus collaborations with cloud software and service providers for turn-key availability of cloud solutions, said Lisa Spelman, vice president and general manager of Intel Xeon Processors and Data Center Marketing Group, in an advance peek at the offerings last month.

    “The more agile cloud architectures in the new Xeon E5-2600v4 make using the public, private or hybrid cloud an easy choice,” said Spelman. “Our software-defined infrastructure solution set puts enterprises on the fast track to cloud deployments.”

    Reply
  49. Tomi Engdahl says:

    Tech Firms Have An Obsession With ‘Female’ Digital Servants
    https://news.slashdot.org/story/16/04/04/2349223/tech-firms-have-an-obsession-with-female-digital-servants

    Alexa, Tay, Siri, Cortana, Xiaoice, and Google Now. These technologies all have one thing in common — they are digital servants aimed at a mass-market audience that feature a “female” voice or persona. And it’s not just the voice or persona we interact with that is biased; the results of those interactions also demonstrate male favoritism.

    Reply
  50. Tomi Engdahl says:

    Samsung kind of cracks the 10nm barrier with new 8GB DDR4 slabs
    Re-write your RAM cram plan, server scalers, there’s 128GB modules on the way
    http://www.theregister.co.uk/2016/04/05/samsung_10nm_ddr4_ram/

    Samsung Electronics has announced it’s started baking RAM using a “10-nanometer (nm) class*” process and says the 8GB chips it’s emitting are the first in the world to be manufactured in this way.

    Don’t start trying to figure out how 10nm compares to the width of a human hair or the head of a really small pin, because that asterisk up there is Samsung’s and leads to a disclaimer to the effect that “10nm-class denotes a process technology node somewhere between 10 and 19 nanometers, while 20nm-class means a process technology node somewhere between 20 and 29 nanometers.” Samsung’s not saying just how big, or small, this RAM is.

    Even if Samsung is building at 19.9999999999 nanometers, the product is impressive because it involves “quadruple patterning lithography” and “ultra-thin dielectric layer deposition”.

    Samsung’s promising 10nm RAM in modules from 4GB to 128GB.

    Reply
