Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation is coming to change all business sectors, and our daily work, even more than before. Digitalisation also changes the IT sector: traditional software packages are moving rapidly into the cloud. The need to own or rent your own IT infrastructure is dramatically reduced. Automated applications for configuration and monitoring will become truly practical. The workload of software implementation projects will be reduced significantly, as the software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key factors to change, as security threats increasingly live in the digital world. For the IT sector, digitalisation simply means: “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If it is, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The increase in productivity is promised to be stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating significant acceptance of these solutions. When properly leveraged, embracing mobility promises to increase visibility and responsiveness in the supply chain. Increased employee productivity and business process efficiencies are seen as the key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. Those two trends are connected — or should connect, at least. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken more into use. Cloud is the next generation of the IT supply chain. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011”).
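
To put those figures in perspective, here is a quick back-of-the-envelope check in Python, using only the numbers quoted above:

# Sanity-check the quoted cloud spending figures (all in $ billions).
spend_2013, spend_2014, spend_2017 = 145.2, 174.2, 235.1

# Year-on-year growth 2013 -> 2014, quoted as about 20%.
yoy = (spend_2014 / spend_2013 - 1) * 100
print("2013->2014 growth: %.1f%%" % yoy)          # ~20.0%

# Implied compound annual growth rate (CAGR) 2014 -> 2017.
cagr = ((spend_2017 / spend_2014) ** (1.0 / 3) - 1) * 100
print("2014->2017 CAGR: %.1f%%" % cagr)           # ~10.5%

So the 2013-to-2014 jump checks out at roughly 20%, and the 2017 forecast implies growth cooling to around 10% per year.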

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation, you need to combine traditional IT with agile, web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are then effectively operating two-speed IT (bimodal IT, two-speed IT, or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
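
As a minimal sketch of what such an API-centric layer can look like, here is a thin REST facade in Python that sits between a slow-moving system of record and fast-changing front ends. The endpoint path and the back-end lookup function are invented for the illustration:

# A thin "two-speed IT" API facade in front of a legacy back end,
# using only the Python standard library. legacy_backend_lookup() is
# a hypothetical stand-in for a call into the system of record.
import json
from wsgiref.simple_server import make_server

def legacy_backend_lookup(customer_id):
    # Stand-in for the slow-moving back-end system (ERP, mainframe...).
    return {"id": customer_id, "status": "active"}

def api_layer(environ, start_response):
    path = environ.get("PATH_INFO", "")
    if path.startswith("/api/v1/customers/"):
        customer_id = path.rsplit("/", 1)[-1]
        body = json.dumps(legacy_backend_lookup(customer_id)).encode("utf-8")
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

if __name__ == "__main__":
    # Agile front ends consume the stable JSON API on port 8000.
    make_server("", 8000, api_layer).serve_forever()

Front-end teams can then iterate against the stable /api/v1/ contract while the legacy system behind it changes at its own pace.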

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware – after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running, and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe some months back. This will keep system administrators busy, because there is only around half a year left, and upgrading to Windows Server 2008 or Windows Server 2012 may prove difficult. Microsoft and support companies do not seem interested in continuing Windows Server 2003 support, so for those who need it, the custom pricing can be “incredibly expensive”. At this point it seems that many organizations want a new architecture and consider moving the servers to the cloud as one option.

Windows 10 is coming to PCs and mobile devices. Just a few months back Microsoft unveiled its new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is already available. Microsoft says to expect AWESOME things of Windows 10 in January: it will share more about the Windows 10 “consumer experience” at an event on January 21 in Redmond and is expected to show the Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing the Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices, and the Windows Phone Store and Windows Store will be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for the Apple iPad and iPhone. It also has an email client for both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. Microsoft May Soon Replace Internet Explorer With a New Web Browser article says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill hard drives with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Within the next two years Intel promises 10TB+ SSDs thanks to 3D vertical NAND flash memory. SSD interfaces are also evolving beyond traditional hard disk interfaces: PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds of write latency at the DIMM level).

Hard disks will still be made in large amounts in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present, solid-state drives with extreme capacities are very expensive. I expect that in 2015 SSD prices will still be so much higher than hard disk prices that everybody who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
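
The placement rule behind such hybrid systems is simple in principle: keep the hottest data on the small, expensive SSD tier and spill everything else to cheap high-capacity disk. A toy Python sketch of the idea (the capacities and access counts are made up):

# Toy hybrid-storage placement: hottest blocks go to the SSD tier,
# everything else stays on high-capacity disk. All numbers invented.
from collections import Counter

SSD_CAPACITY_BLOCKS = 4    # deliberately tiny for the example

access_counts = Counter({
    "block-a": 120, "block-b": 95, "block-c": 60, "block-d": 44,
    "block-e": 7,   "block-f": 3,  "block-g": 1,
})

ssd_tier = {blk for blk, _ in access_counts.most_common(SSD_CAPACITY_BLOCKS)}
hdd_tier = set(access_counts) - ssd_tier

print("SSD tier:", sorted(ssd_tier))   # hot data, low latency
print("HDD tier:", sorted(hdd_tier))   # cold data, cheap $/GB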

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about mobile devices. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There are no strong reasons for growth or decline in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems to be increasingly taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. The compute stick has been likened to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and an ARM processor (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects that the stick-sized PC market will grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable REVOLUTION. Tablets and smartphones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least it is worth testing.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.”  When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems. And it is not just Linux anymore: Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
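
For those who have not tried it yet, the basic workflow is short. Here is a minimal sketch, assuming the docker CLI and daemon are installed; the image name and Dockerfile contents are just examples:

# Minimal Docker workflow: write a Dockerfile, build an image from it,
# then run the image as an isolated container sharing the host kernel.
import subprocess

dockerfile = """\
FROM ubuntu:14.04
CMD ["echo", "hello from inside a container"]
"""

with open("Dockerfile", "w") as f:
    f.write(dockerfile)

# Package the (trivial) application into a distributable image...
subprocess.check_call(["docker", "build", "-t", "example/hello", "."])

# ...then launch it as a sandboxed container and clean up afterwards.
subprocess.check_call(["docker", "run", "--rm", "example/hello"])

The same image that builds on a developer laptop can be distributed through a registry and run unchanged on any host with a compatible kernel.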

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers: China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (FreeBSD-based desktop OS), and Dell will preinstall NeoKylin on commercial PCs in China. The plan is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC (software-defined data center) comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers which abstract the control plane from the underlying physical equipment. This controller virtualizes the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.
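
The core idea, a control plane implemented in software with the equipment just carrying out whatever it is told, can be caricatured in a few lines of Python. The device names and policy format are invented for the sketch:

# Caricature of "software defined": the controller holds the control
# plane, devices only apply the configuration pushed down to them.
class Device:
    def __init__(self, name):
        self.name = name
        self.config = {}

    def apply(self, config):
        # The "data plane" just obeys whatever the controller decides.
        self.config = dict(config)
        print("%s applied %s" % (self.name, self.config))

class Controller:
    """Central control plane, abstracted from the physical equipment."""
    def __init__(self, devices):
        self.devices = devices

    def set_policy(self, policy):
        # One policy decision fans out to every switch, server, array.
        for dev in self.devices:
            dev.apply(policy)

controller = Controller([Device("switch-1"), Device("switch-2"),
                         Device("storage-array-1")])
controller.set_policy({"vlan": 42, "qos": "gold"})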

New software-defined networking apps will be delivered in 2015. So will software-defined storage. And software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard PCs from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing, and it expects that over half the chips it sells to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like AWS “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips, and customers are willing to pay a little more for a special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, comes from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price of data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Special servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits may provide a lot more computing power at much lower power consumption, but programming them has traditionally been time-consuming. This can change with the introduction of new tools (just the next step from techniques learned from GPU acceleration). Xilinx has developed its SDAccel tools to develop algorithms in the C, C++ and OpenCL languages and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
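
To give a feel for the programming model, here is a minimal OpenCL example using the pyopencl package (this assumes you have pyopencl and an OpenCL runtime installed). The point is that the same kernel source that runs today on a CPU or GPU is the kind of code tools such as SDAccel aim to compile down to FPGA logic:

# Square 1024 floats in parallel on whatever OpenCL device is present.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] * a[gid];
}
""").build()

prg.square(queue, a.shape, None, a_buf, out_buf)   # one work-item per element
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a * a)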


If there is one enduring trend from memory design in 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (as the lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working on memristor memories that are promised to be akin to RAM but can hold data without power. The memristor is also denser than DRAM, the current RAM technology used for main memory: according to HP, 64 to 128 times denser, in fact. You could very well have 512 GB of memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting with emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    Microsoft Details Windows 10 Upgrade Patches for Windows 7 and 8.1
    http://news.softpedia.com/news/Microsoft-Details-Windows-10-Upgrade-Patches-for-Windows-7-and-8-1-478663.shtml

    A few weeks ago, we came across an update that was supposedly installing on Windows 7 and 8.1 computers to prepare them for the upgrade to Windows 10, and at that time, it all seemed to be a way to nag users and more or less “force” them to switch to the new OS.

    While Microsoft hasn’t said anything about KB3035583, the update that has often been referred to as a “Windows 10 downloader,” the company has revealed that there are two more patches that make sure that your computer is ready for the new OS.

    KB2990214 for Windows 7 and KB3044374 for Windows 8.1 are both being shipped to users starting this month, and according to information provided by a company employee, you have no other option than to install them.

  2. Tomi Engdahl says:

    Ubuntu Now Has Over 20 Million Users, According to Canonical
    http://news.softpedia.com/news/Ubuntu-Now-Has-Over-20-Millions-of-Users-According-to-Canonical-478787.shtml

    In a recent article entitled “Tendering with Ubuntu,” Canonical, the commercial sponsor of Ubuntu, revealed that the world’s most popular free operating system now counts over 20 million users, and that Ubuntu Linux is being adopted by more and more people each day.

    According to Canonical, it appears that there’s a growing demand on the Ubuntu operating system from both public and private entities worldwide. As such, the company builds new tools that will help these entities get Ubuntu running on their hardware of choice.

  3. Tomi Engdahl says:

    Internet of Things: A Fancy Way of Saying “More Embedded Linux, Please”
    http://intelligentsystemssource.com/internet-of-things-a-fancy-way-of-saying-more-embedded-linux-please/

    Spanning server to deployed device, configurations of Linux have reached levels of flexibility, performance and real-time capability that enable it to function at all levels in the Internet of Things.

    To recap the past decade or so of embedded computing: Linux is the dominant operating system in embedded devices and its use is increasing, while the deployment of custom or in-house operating systems has taken a sharp downward turn, according to the 2014 Embedded Market Study by UBM Tech. This remains true even when taking Android and all of its associated “Linux or not” questions out of the equation, with the next contenders being Windows, custom OSes, and RTOSes as a distant fourth.

    Why is Linux so dominant? It’s not the smallest, fastest, or lowest energy OS. Memory requirements ranging from 2 MB to 512 MB preclude its use on many smaller devices. A custom OS that has been pared down and optimized for a given microprocessor or System-on-Chip (SoC) will probably deliver better performance.

    Embedded Linux Keeps Growing Amid IoT Disruption, Says Study
    https://www.linux.com/news/embedded-mobile/mobile-linux/818011-embedded-linux-keeps-growing-amid-iot-disruption-says-study

    A new VDC Research study projects that Linux and Android will continue to increase embedded market share through 2017 while Windows and commercial real-time operating systems (RTOSes) will lose ground. The study suggests that the fast growth of IoT is accelerating the move toward open source Linux.

    “Open source, freely, and/or publicly available” Linux will grow from 56.2 percent share of embedded unit shipments in 2012 to 64.7 percent in 2017, according to VDC’s “The Global Market for IoT and Embedded Operating Systems.” That represents a CAGR of 16.7 percent for open Linux, says VDC.

    The surging open source Linux growth more than compensates for the decline from 6.3 percent to 5.0 percent in commercial Linux shipments. In 2013, there were more than 1.7 times more shipments of embedded devices based on open source OSes, including free RTOSes, than for commercial platforms, says VDC.

    “Linux, in particular, continues to grow its developer base and support from leading vendors,” says Daniel Mandell, an analyst at VDC Research. “Linux is the primary OS for new connected device classes such as IoT gateways.”

  4. Tomi Engdahl says:

    AMD reveals Windows 10 will launch in late July
    http://www.theverge.com/2015/4/20/8456049/windows-10-launch-late-july-claims-amd

    Microsoft’s launch of Windows 10 will likely take place in late July, according to AMD. During AMD’s latest earnings call last week, president and CEO Lisa Su revealed the launch timing for Microsoft’s Windows 10 operating system. Answering a question on inventory plans, Su said “with the Windows 10 launch at the end of July, we are watching sort of the impact of that on the back-to-school season, and expect that it might have a bit of a delay to the normal back-to-school season inventory build-up.”

  5. Tomi Engdahl says:

    Microsoft Promises ‘Universal’ Office App For Phones Running Windows 10 This Month
    http://techcrunch.com/2015/04/17/microsoft-promises-universal-office-app-for-phones-running-windows-10-this-month/

    Microsoft promised this morning to release by the end of April a set of Office applications that it calls “Universal” for smartphones running Windows 10.

    The company has a two-prong productivity strategy in place for Windows: Office 2016 for desktop use, and, for all other Windows 10 experiences, its touch-focused Office Universal apps. The latter apps, according to Microsoft, will function across tablets and smartphones, dynamically changing their design to allow users to better use them based on their current screen size.

  6. Tomi Engdahl says:

    NVM Express SSDs Hit Servers, Workstations
    http://www.eetimes.com/document.asp?doc_id=1326385&

    NVM Express (NVMe) is gaining ground as vendors ship new solid-state drives (SSDs) using the specification for both servers and workstations.

    NVMe is a standardized register interface, command, and feature set for PCIe-based storage technologies such as SSDs, designed specifically for non-volatile memory. It is optimized for high performance and low latency, scaling from client to enterprise segments.

    HGST is focusing its efforts on the server side, having begun shipping its NVMe-compliant Ultrastar SN100 Series PCIe SSDs, which are particularly aimed at mission-critical data center workloads such as scale-out databases, said Jeff Sosa, director of product line management at HGST.

  7. Tomi Engdahl says:

    Is Formal Verification Artificial Intelligence?
    http://www.eetimes.com/author.asp?section_id=31&doc_id=1326394&

    Artificial intelligence or not, formal verification is a technology that has become a must-have in the modern verification flow.

    I recently started reading a book, Super Intelligence: Paths, Dangers, Strategies, by Nick Bostrom. I was surprised to find the following text in the chapter describing state-of-the-art artificial intelligence (AI) applications:

    Theorem-proving and equation-solving are by now so well established that they are hardly regarded as AI any more. Equation solvers are included in scientific computing programs such as Mathematica. Formal verification methods, including automated theorem provers, are routinely used by chip manufacturers to verify the behavior of circuit designs prior to production.

  8. Tomi Engdahl says:

    How GameStop Plans to Sell Classic Games and Hardware
    http://www.wired.com/2015/04/gamestop-classic-retro-games/

    GameStop told IGN it would send every game it takes in to its refurbishment center in Grapevine, Texas, then offer them for sale through its website. If you walk into a GameStop location, they’ll help you order classic games—drop your dough in the store and GameStop will ship the game to your door.

    What happens then? Once the games reach Grapevine, Haes says GameStop will “do thorough evaluations—testing, repair if necessary.” The testers will make sure the hardware functions, but they’ll also open up the games to check the status of the batteries. You want to be able to save your game in The Legend of Zelda, after all. If a battery needs replacing, they’ll do it there before it’s offered for sale. If something is “beyond repair,” it’ll get junked.

  9. Tomi Engdahl says:

    Flash dead end is deferred by TLC and 3D
    Behold, data centre bods, the magical power of three
    http://www.theregister.co.uk/2015/04/21/flash_dead_end_is_deferred_by_tlc_and_3d/

    The arrival of a flash dead-end is being delayed by two technologies, both involving the number three – three-level cell (TLC) flash and three-dimensional (3D) flash – with the combination promising much higher flash chip capacities.

    As ever with semi-conductor technology, users want more data in the same space and faster access to it too, please.

    Progress means making devices faster and denser: getting more transistors in flash dies, and hence more cells, with no access time penalty or shortened working life.

    Flash data access can be speeded up by using PCIe NVMe interfaces, with several lanes active simultaneously, and so going faster than SAS or SATA disk-based interfaces.

    But the core issue is flash chip capacity: how can we get denser chips and hence larger capacity SSDs?

    With flash memory this has been achieved by adding a bit to the original single-level cell (SLC) flash, and by making the process geometry smaller.

    It is currently in the 1X area, meaning cell sizes in the range of 19nm to 10nm.

    Smaller cells don’t last as long as larger cells, as they sustain fewer write cycles. With two-bit-per-cell technology, called MLC, the cell stores two bits through four levels of charge, and this adds to the process shrink problem.

    It has been managed successfully with better error detection and the use of digital signal processing techniques by the flash controllers so weaker signals can be processed successfully with 2X-class flash (29-20nm cell geometry).

    You can have another bite at the same cherry by layering existing planar – 2D – flash dies in a 3D way, stacking them one above the other to create much higher-capacity chips.

    TLC technology has been around for some years. It gives an immediate 50 per cent increase in capacity over MLC flash – so why isn’t it popular in enterprise flash storage products?

    A serious problem is detecting the level of charge in the cell. What happens is that there are eight possible levels, double the four levels of MLC flash, which is double the two levels of SLC flash.
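
    As a quick illustration of the arithmetic in the piece: n bits per cell require 2^n distinguishable charge levels, so capacity grows linearly with the bit count while the sensing problem grows exponentially. In Python:

    # Bits per cell vs. charge levels for SLC/MLC/TLC flash.
    for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
        print("%s: %d bit(s)/cell, %d charge levels" % (name, bits, 2 ** bits))

    # TLC's capacity gain over MLC: 3 bits vs 2 bits per cell.
    print("TLC over MLC: +%.0f%%" % ((3.0 / 2 - 1) * 100))   # +50%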

  10. Tomi Engdahl says:

    Thinking of following Facebook and going DIY? Think again
    Brand new data centre off the shelf? Suits you, sir
    http://www.theregister.co.uk/2015/04/21/build_v_buy_your_data_center_part_1/

    DIY vs COTS: Part 1 Microsoft is doing it, Apple is doing it – so is IBM. The giants are spending billions of dollars building fantastic data centres.

    But what about the rest of us? Do you walk in the footsteps of the giants and Do It Yourself (DIY) or buy something Commercial, Off The Shelf (COTS): it’s an ages-old debate.

    The former demands prototyping, experience in proof-of-concept design and lots of QA testing. With COTS, vendors take care of that – at a price. So who should be engaging in which approach?

    Before we go too far down the rabbit hole, here it should be mentioned that both DIY and COTS are pretty broad categories. Not only can each category encompass the efforts of businesses of all sizes, the discussion is as much about software as it is hardware, and everything is always a moving target.

    Let’s examine the argument in stages and see how it plays in different areas of the market and how the same old arguments crop up, even when designing at the level of entire data centres.

    A new generation of systems has emerged in the form of the Open Compute Project, kick-started by Facebook.

    Large companies have taken much of the legwork out of DIY data centres for me, and done so while keeping margins as cut-throat as possible. DIY or COTS? The latter is looking a lot more promising these days; it’s just a matter of learning to let go.

  11. Tomi Engdahl says:

    Airbnb Boosts Presto SQL Query Engine For Hadoop
    http://www.informationweek.com/big-data/big-data-analytics/airbnb-boosts-presto-sql-query-engine-for-hadoop/d/d-id/1319359

    Airbnb is open sourcing a home-grown data-exploration and SQL query tool for Hadoop. Will it give Facebook Presto a leg up on Cloudera Impala?

    Distributed SQL Query Engine for Big Data
    https://prestodb.io/

    Presto is an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes ranging from gigabytes to petabytes.

    Presto was designed and written from the ground up for interactive analytics and approaches the speed of commercial data warehouses while scaling to the size of organizations like Facebook.

    Presto allows querying data where it lives, including Hive, Cassandra, relational databases or even proprietary data stores. A single Presto query can combine data from multiple sources, allowing for analytics across your entire organization.

    Presto is targeted at analysts who expect response times ranging from sub-second to minutes. Presto breaks the false choice between having fast analytics using an expensive commercial solution or using a slow “free” solution that requires excessive hardware.

    Facebook uses Presto for interactive queries against several internal data stores, including their 300PB data warehouse.
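
    For the curious, here is one way to issue an interactive Presto query from Python, using the PyHive client (pip install pyhive). The host, schema and table names below are placeholders, not anything from the article:

    # Interactive Presto query from Python via the PyHive DB-API client.
    from pyhive import presto

    conn = presto.connect(host="presto-coordinator.example.com", port=8080)
    cur = conn.cursor()
    cur.execute("""
        SELECT country, count(*) AS listings
        FROM hive.default.listings
        GROUP BY country
        ORDER BY listings DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)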

  12. Tomi Engdahl says:

    EMEA PC Shipments Resume Steady Decline
    http://www.eetimes.com/document.asp?doc_id=1326400&

    PC shipments in Europe, the Middle East, and Africa (EMEA) reached 20.2 million units in the first quarter of 2015, a 7.7% decrease year on year, according to International Data Corporation (IDC). After a strong 2014, the market returned to a decline as expected, with business renewals decelerating after last year’s uplift prompted by the end of Windows XP support.

    Macro-economic improvements in Europe were dampened by currency fluctuations and political tensions in Central and Eastern Europe, Middle East and Africa (CEMA). The strong dollar led to various price increases in local currencies.

    Overall, portable PCs performed better than desktops, thanks to final shipments of the 15-inch portables with Bing in Western Europe (WE) and some parts of Central and Eastern Europe (CEE). Portable PCs declined by 3.6% and desktop PCs by 14%.

  13. Tomi Engdahl says:

    NetApp: Don’t know about the hybrid cloud? Then you’re a dummy
    There’s a book saying so – it must be true
    http://www.theregister.co.uk/2015/02/16/netapp_thinks_hybrid_cloud_is_for_dummies/

    This storage Vulture confesses to never having understood the appeal of writing books specifically for dummies, for example NetApp’s not-very-magnum opus, Hybrid Cloud for Dummies.

  14. Tomi Engdahl says:

    Excessively fat virtual worlds – come on, it’s your guilty secret
    Does my server look big in this?
    http://www.theregister.co.uk/2015/04/22/cutting_virtual_server_estates/

    Now that virtualisation is seen as a robust and mature technology, managers and administrators are looking to reduce server deployment and management costs further.

    One area of potential cost reduction is reclaiming unused or under-utilised infrastructure capacity.

    You can cut the size of your estate: I know, as I’ve seen compute footprints slashed by 60 per cent. That’s a figure to admire, because it means saving a bunch of hardware and related costs.

    In my experience, these are the five most common mistakes and factors:

    Failing to appropriately resize physical infrastructure when migrating to virtual environments
    Software vendors frequently overstate resource requirements
    Business functions demanding CPU and RAM reservations to ensure that their application never runs short of resource capacity.
    The illusion that virtual machines have no cost. This is perhaps the most dangerous type of thinking, but also appears most pervasive among a lot of management.
    Business or application owners that feel they need to maintain additional capacity for month-end or year-end type activities.

    This means that potentially massive amounts of compute are wasted for most of the year.

    On the other side of the equation, giving a virtual machine too much capacity can negatively impact performance, because the hypervisor has to manage and schedule across all the virtual CPUs the machine has been given.

    This can introduce an overhead greater than the extra resources gained by adding CPUs.

    The old fashioned way involves looking at metrics manually, which could potentially cost a small fortune when scaled up. The smart money is on tools specially designed to do the job, that are not only virtual machine-aware but incorporate intelligent tuning algorithms that will advise sensible reductions.

    Such tools include VMTurbo (free edition available) and VMware vCOPs.
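
    As a toy illustration of the kind of rule such tools automate (the thresholds and VM figures here are invented for the example):

    # Flag VMs whose peak CPU use never approaches what they were given.
    vms = {
        # name: (allocated_vcpus, peak_cpu_utilisation_fraction)
        "app-server-1": (8, 0.22),
        "db-server-1":  (16, 0.91),
        "batch-node-3": (12, 0.15),
    }

    for name, (vcpus, peak) in sorted(vms.items()):
        if peak < 0.5:
            # Size to peak demand plus ~25% headroom, at least 1 vCPU.
            suggested = max(1, int(round(vcpus * peak * 1.25)))
            print("%s: %d vCPUs -> suggest %d" % (name, vcpus, suggested))
        else:
            print("%s: leave at %d vCPUs" % (name, vcpus))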

    Be warned that collecting the metrics may take a while

    Where’s the catch?

    Initially, you may find they will be very skeptical about claims of “same performance for less money”. My advice is to start with the lowest-hanging fruit, the more over-specified the better. Put the savings in black and white and agree to test the new set-up. Be sure to tell them that if it doesn’t work out, they can go straight back to the old configuration.

    Discuss any negative effects found and deal with them, as angry customers won’t let you do it again if it isn’t running as well as it did previously.

    The big sell is the fact that most good hypervisor platforms will allow hot-add of RAM and CPU, so they can be ratcheted up without downtime. Taking away capacity, however, will require downtime.

  15. Tomi Engdahl says:

    This data-powered, insight-rich future, how do I get there exactly?
    Microsoft wants to show you how to prep yesterday’s systems for tomorrow
    http://www.theregister.co.uk/2015/04/22/this_datapowered_insightrich_future_how_do_i_get_there_exactly/

  16. Tomi Engdahl says:

    Ubuntu 15.04 to bring ‘Vivid’ updates for cloud, devices this week
    Canonical claims first-to-market with LXD, OpenStack ‘Kilo’
    http://www.theregister.co.uk/2015/04/22/ubuntu_15_04_release/

    Canonical says Ubuntu 15.04 “Vivid Vervet,” the latest version of its popular Linux distro, will ship this week, following a two-month beta period.

    Along with the desktop version – which Canonical says is “the favorite environment for Linux developers” – the release will also deliver a range of variants, including special formulations of the OS for cloud deployments, phones, and the Internet of Things (IoT).

  17. Tomi Engdahl says:

    Andrew Cunningham / Ars Technica:
    Intel Compute Stick review: its small size, energy efficiency, and low price of $149 come at the expense of barely acceptable performance and few ports

    Intel’s Compute Stick: A full PC that’s tiny in size (and performance)
    Mini-review: Atom-powered mini PC is a good streaming stick but a slow desktop.
    http://arstechnica.com/gadgets/2015/04/intels-compute-stick-a-full-pc-thats-tiny-in-size-and-performance/

    Our appreciation of mini desktop PCs is well-documented at this point. In the age of the smartphone and the two-pound laptop, the desktop PC is perhaps the least exciting of computing devices, but there are still plenty of hulking desktop towers out there, and many of them can be replaced by something you can hold in the palm of your hand.

    Intel’s new Compute Stick, available for about $150 with Windows 8.1 and $110 with Ubuntu 14.04 LTS, takes the mini desktop concept about as far as it can go. The Stick isn’t even really a “desktop” in the traditional sense, since it’s an HDMI dongle that hangs off the back of your monitor instead of sitting on your desk.

    It’s not very powerful, but the Compute Stick is one of the smallest Windows desktops you can buy right now. Let’s take a quick look at what it’s capable of.

  18. Tomi Engdahl says:

    VR developers can now apply for free HTC Vive dev kits
    Valve will start shipping to devs in Spring, though “supplies may be limited.”
    http://arstechnica.com/gaming/2015/04/vr-developers-can-now-apply-for-free-htc-vive-dev-kits/

    Developers eager to get their hands on the Valve-supported HTC Vive can now sign up to be considered for a free developer kit.

  19. Tomi Engdahl says:

    Gamers feel the glove: Student team creates feedback device for the hand for virtual environments (w/ Video)
    Read more at: http://phys.org/news/2015-04-gamers-glove-student-team-feedback.html#jCp

  20. Tomi Engdahl says:

    Beowulf Gods — rip into cloud’s coding entrails
    Slay distributed dragons with old-school skills
    http://www.theregister.co.uk/2015/04/23/old_school_distribute_computing_/

    Distributed computing is no longer something that only occurs in universities or the basements of the really frightening nerds with Beowulf clusters made of yesteryear’s recycled horrors.

    Distributed computing is sneaking back into our data centres on a number of fronts and it looks like it’s probably here to stay. The thing is, those of us hardened in the ways of Beowulf are likely to have an edge when it comes to wrangling today’s abstracted but supposedly easier-to-use approaches.

    Before I get into that, it’s worth making sure we’re all on the same page. Distributed computing is one of those terms that has evolved over the years and is used differently by different people.

    As I view it, distributed computing is when a group of individual computers (nodes) work together towards the same goal (usually the provisioning of a given service), but do not have a shared memory space. That is, each node communicates with other nodes by passing messages in some form or another, instead of directly addressing the RAM of another node.

    Computers that have a shared memory space (where each node can directly address the RAM of another node) are better talked about as parallel computing.
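
    That distinction is easy to show in code. A minimal Python sketch using the standard multiprocessing module: the first worker communicates only by passing a message through a queue (the distributed model), while the second directly mutates shared memory (the parallel model).

    from multiprocessing import Process, Queue, Value

    def message_passing_worker(q):
        # No shared address space assumed: communicate by sending a message.
        q.put("result computed locally")

    def shared_memory_worker(counter):
        # Directly addresses memory shared with the parent process.
        with counter.get_lock():
            counter.value += 1

    if __name__ == "__main__":
        q = Queue()
        p1 = Process(target=message_passing_worker, args=(q,))
        p1.start()
        print(q.get())
        p1.join()

        counter = Value("i", 0)
        p2 = Process(target=shared_memory_worker, args=(counter,))
        p2.start()
        p2.join()
        print("shared counter:", counter.value)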

  21. Tomi Engdahl says:

    Seagate to open-source Kinetics at OpenStack summit
    Object storage stack
    http://www.theregister.co.uk/2015/04/24/seagate_to_opensource_kinetics_at_openstack_summit/

    Seagate will hand over some of its Kinetic Storage platform to the world at the coming OpenStack summit in Vancouver, Canada.

    Kinetic is the object storage platform Seagate has built to make it possible to do useful work with its Ethernet-equipped disk drives. Seagate’s ambition is to cut arrays out of the loop, allowing software to talk directly to disks instead of having to do all that pfaffing about with storage area networks. By cutting arrays and file systems out of the loop, Seagate reckons it can save users some cash and also speed things up.

    Seagate already publishes some libraries and other developer resources, but appears to have come to the conclusion that radical re-architecting of infrastructure and applications isn’t the sort of thing that happens when a technology’s inventor holds all the cards.

  22. Tomi Engdahl says:

    Google Updates: Project Fi, nuked TV networks and Loch Ness monsters
    It’s been a huge week for Google
    http://www.theinquirer.net/inquirer/news/2405464/google-updates-project-fi-nuked-tv-networks-and-loch-ness-monsters

    Google has been in the news so much this week that it’s difficult to know where to start.

    So let’s talk about NPAPI plug-in and YouTube API deprecation, HTTPS everywhere for adverts, Android Wear going WiFi, the switch-off of several outdated log-in protocols, and Google’s Project Fi, and finish off with the Loch Ness Monster. No. Really.

    Google to turn off support for NPAPI plug-ins in Chrome.
    The problem is that NPAPI powers Silverlight, and Silverlight powers a number of major broadcasters including Sky Go, BT Sport, Now TV and a slew of other multimedia content and a whole bunch of proprietary software.
    So was the hate for Google that ensued justified? Yes and no. But mostly no.

    We live in an HTML5-led web now and, if Chrome aims to be the fastest browser (after all, isn’t that what every browser wants?), there needs to be a cut off point.

    As it is, there is a workaround in Chrome’s flags page, but only until September, which means that the clock is ticking on getting these services back up and running, although most are issuing official advice to switch to another browser.

    At the same time, Google also switched off the v2 API for YouTube. Version 3 was launched in 2012 and an end-of-life date has always been on the cards. It means that Flash rendering of videos is no longer standard, but it also means lights-out for many embedded YouTube applications.

    Smart TVs, some as recent as 2012, have now stopped supporting YouTube as the manufacturers haven’t upgraded the TV firmware for the new API. And it appears that they don’t intend to either, leaving buyers who forked out hundreds for a smart TV only a few years ago facing forced obsolescence.

  23. Tomi Engdahl says:

    Disk drive shipment numbers set to spin down
    Rush to flash and collapse of PC market fingered for decline
    http://www.theregister.co.uk/2015/04/24/disk_drives_spinning_down/

    Disk drive shipments have a negative CAGR, and will fall by 3.7 per cent between 2013 and 2020, said spindle motor maker Nidec.

    Within that, traditional enterprise drive shipments will decrease by 17.6 per cent, PC drives will drop 8.1 per cent, consumer electronics drives by 6.9 per cent and external drives by 0.4 per cent, but high-capacity data centre drive shipments will grow at a CAGR of 16.2 per cent.

    Stifel MD Aaron Rakers says Nidec disk motor shipments have a high correlation with overall disk drive shipments, so these Nidec estimates are reasonable.

    At the same time as the unit numbers decline, capacity per drive will increase, which will help preserve prices. But in the long term, fewer drives produced by more and more expensive manufacturing processes – think HAMR – does not paint a picture of disk drive industry health.

    The reasons for the decline are the replacement of PCs by tablets and smart phones and their use of flash storage, together with the use of flash for storing data which must be accessed quickly.

  24. Tomi Engdahl says:

    CERN turns to Seagate’s Kinetic system and says ‘it’s storage time’
    Boffins may need to expend energy on software issues
    http://www.theregister.co.uk/2015/03/17/kinetic_drives_are_nit_a_shoein_even_at_cern/

    CERN, with its extremely high-tech, bleeding-edge Big Data wizardry, is waved around like a trophy by IT suppliers these days. Now Seagate has stepped up onto the CERN stage, wanting to get its Kinetic disk drives used to store Large Hadron Collider (LHC) data.

    Seagate has gone and signed a three-year deal with CERN to scoop some of that LHC glamour and “to collaborate on the development of the Seagate Kinetic Open Storage platform”.

    The 4TB, 4-platter Kinetic drives store objects and are directly addressed using Ethernet, meaning interfacing software has to know about them. You can’t just slot them in drive arrays and expect them to work.

    The benefit is for large data stores which can get rid of storage array controllers and complex storage I/O stacks, and so simplify and speed disk I/O operations.

    The beauty of a storage array or VSAN or VSA is that familiar storage access protocols and stacks are used, so that app software doesn’t have to change.

    Not so with Kinetic drives: the whole storage I/O stack has to be rewritten and it looks like each customer has to craft its own system software to do this, what with there being no standard Kinetic disk drive array controller software block.

    On this understanding, making a success of Kinetic drives will be a difficult, multi-year undertaking for Seagate, with no guarantee of success.

  25. Tomi Engdahl says:

    In New AI Benchmark, Computer Takes On Four Top Professional Poker Players
    http://games.slashdot.org/story/15/04/26/0326235/in-new-ai-benchmark-computer-takes-on-four-top-professional-poker-players

    Stephen Jordan reports at the National Monitor that four of the world’s greatest poker players are going into battle against a computer program that researchers are calling Claudico in the “Brains Vs. Artificial Intelligence” competition at Rivers Casino in Pittsburgh. Claudico is the first machine program to play heads-up no-limit Texas Hold’em against top human players.

    “Poker is now a benchmark for artificial intelligence research, just as chess once was. It’s a game of exceeding complexity that requires a machine to make decisions based on incomplete and often misleading information, thanks to bluffing, slow play and other decoys.”

    Brains Vs. Artificial Intelligence: Carnegie Mellon Computer Faces Poker Pros in Epic No-Limit Texas Hold’Em Competition
    80,000 Hands Will Be Played in Two-week Contest at Rivers Casino in Pittsburgh
    https://www.cmu.edu/news/stories/archives/2015/april/computer-faces-poker-pros.html

  26. Tomi Engdahl says:

    DevOps and Security Mingle at RSA Conference
    http://www.securityweek.com/devops-and-security-mingle-rsa-conference

    RSA Conference 2015 — “The DevOps train is coming, and security can choose to get on board or not, but DevOps isn’t going away.”

    In his talk, Mortman and co-presenter Joshua Corman of Sonatype mentioned five ways DevOps can improve security. First is instrumenting everything (see the sketch at the end of this comment).

    “DevOps pros love data and measuring and sharing that data is a key tenet of DevOps,” Mortman said Wednesday. “DevOps folks tend to instrument to a great degree in order to have deep insight into the state of their systems. Even seemingly trivial stats such as CPU temperature or fan speed can be indicators of compromise in the right situations. As Galileo famously said, measure all that is measurable, and that which is not, make measurable.”

    Second, he advised organizations to be “mean” to their code.

    “This idea has been heavily pushed by the folks at Netflix, who built a tool called Chaos Monkey, which intentionally initiates faults to help ensure that systems are resilient and stable,” he said. “By forcibly failing in controlled ways we can build better, stronger code faster.”

    Reducing complexity and focusing on change management are third and fourth on his list.

    “DevOps orgs tend to be extremely process oriented and leverage automation whenever possible,” he said. “As a result of the use of systems like Chef and Puppet or Jenkins these orgs have also automatically created change management/change tracking systems. This not only improves security and operations but also makes auditors happier.”

    But perhaps the most important aspect of the DevOps movement is empathy, he said.

    “Only by understanding and having empathy for the needs and concerns of all the players can we effectively build software,” said Mortman. “It’s time to break down silos and talk to each other like friends instead of enemies.”
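
    A tiny taste of what “instrument everything” can look like in practice, sketched with the psutil package (pip install psutil); where the samples get shipped is left out:

    # Sample a few basic host metrics with psutil; in a real deployment
    # these samples would be sent to a metrics store, not printed.
    import time
    import psutil

    for _ in range(3):
        sample = {
            "cpu_percent": psutil.cpu_percent(interval=1),
            "mem_percent": psutil.virtual_memory().percent,
            "disk_percent": psutil.disk_usage("/").percent,
        }
        print(sample)
        time.sleep(5)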

  27. Tomi Engdahl says:

    C++ Daddy Bjarne Stroustrup outlines directions for v17
    Standards committee meeting next week should be scintillating
    http://www.theregister.co.uk/2015/04/27/c_daddy_bjarne_stroustrup_outlines_directions_for_v17/

    Calm yourselves, readers. The Spring 2015 C++ Standards Committee Meeting takes place next week in Lenexa, Kansas. At that meeting, much of the discussion is expected to consider C++17, a major revision of the programming language due in 2017.

    C++ is currently at version 14, which was released last year, but last week C++ Daddy Bjarne Stroustrup published a presentation outlining what he thinks needs to be done to create version 17.

    At first glance, Stroustrup’s suggestions appear to be concerned with ensuring C++ remains relevant to the “cloud-native” application crowd, without losing its soul along the way.

  28. Tomi Engdahl says:

    When Exxon Wanted To Be a Personal Computing Revolutionary
    http://tech.slashdot.org/story/15/04/26/2113240/when-exxon-wanted-to-be-a-personal-computing-revolutionary

    “This weekend is the anniversary of the release of the Apple IIc”

    “Standard Oil, Exxon. The oil giant had been quietly cultivating a position in the microprocessor industry since the mid-1970s via the rogue Intel engineer usually credited with developing the very first commercial microprocessor, Federico Faggin, and his startup Zilog.”

    When Exxon wanted to Be the Next Apple
    http://motherboard.vice.com/read/when-exxon-wanted-to-be-a-personal-computing-revolutionary

    Faggin had ditched Intel in 1974, after developing the 4004 four-bit CPU and its eight-bit successor, the 8008.

    Soon after leaving Intel and forming Zilog, Faggin was approached by Exxon Enterprises, the investment arm of Exxon, which began funding Zilog in 1975.

    With Exxon’s financing, Zilog created one of the most important microprocessors in computing history, the Z80. It was designed to be a backwards-compatible, improved version of Intel’s 8080 microprocessor, which it quickly replaced as the dominant CPU for powering embedded systems (cash registers, printers) and intelligent terminals (stand-alone, hard drive-less CPUs that would be sort of analogous to an Arduino board/microcontroller). Z80s are still all over the place, providing logic controllers in industrial settings while powering credit card gas pumps, a whole bunch of classic synthesizers and video games, VeriFone terminals, and breathalyzers.

    Given Exxon’s patronage of Zilog and its Z80 wunderkind, why didn’t the oil co. go on to become the next IBM or Apple? Or anything at all associated with computing?

    Ironically, the problem may have been that Exxon wanted too much to become an IBM or Apple.

    Exxon began buying up various tech firms in an effort to build a diversified IBM-like computing titan. Meanwhile, Zilog’s follow-up Z8000 processor was fizzling, a relative flop usually blamed on the CPU’s lack of backward compatibility.

    In 1981, Exxon went from being Zilog’s financier to its parent company.

    “R&D expenditure [at Zilog] topped 35 percent of revenue, while the wider range of development caused slippage in its own 16-bit Z8000 processor as Exxon’s demands and the relative managerial inexperience of Federico Faggin became exposed,” Singer wrote. The increased Exxon presence more or less doomed Zilog to failure.

    Exxon made its intentions clear, having “basically presented themselves as a challenger to IBM.”

    The result was that Zilog was shut out of future IBM collaborations, which, for a designer of microcontrollers, is bad news.

    Eventually, Zilog was bought back from Exxon by its management and employees, and Exxon … well, it still sells oil. Zilog’s Z-series microcontrollers are still developed and sold to this day.

    Reply
  29. Tomi Engdahl says:

    The new USB Type-C connector is spreading at a rapid pace. Soon it may be the only connector on laptops, as is already the case with Apple’s new MacBook. Cypress Semiconductor has now presented a solution that lets devices equipped with a USB-C interface drive DisplayPort screens.

    This is the EZ-PD solution, based on Cypress’s earlier-introduced CCG1 controller. The solution includes all the hardware and software needed to bridge the new USB interface to a DisplayPort or Mini DisplayPort screen.

    In practice, the Cypress USB controller handles DisplayPort as one of the alternative connection modes (alternate mode). Cypress has already demonstrated that the solution works with, for example, Apple’s new MacBook, Google’s Chromebook Pixel laptop and many other devices.

    Source: http://www.etn.fi/index.php?option=com_content&view=article&id=2747:usb-c-yhteys-vanhaan-nayttoon&catid=13&Itemid=101

    Reply
  30. Tomi Engdahl says:

    The Changing Face of Computer Storage
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1326459&

    The newest storage innovations will change how things are done for the logistics chain, both upstream of the drive and array makers, and downstream in channels of distribution of their products.

    For three decades, computer storage was very predictable. Capacity doubled every three years and, while there was an initial price premium, drive prices soon dropped to the level of their predecessors. The status quo took a tumble when solid-state drives (SSDs) arrived, changing the rules for measuring drives from capacity to performance metrics.

    Within just five years, SSD had the “enterprise” drive makers forming a defensive circle, trying to justify a performance-based existence in a world where hard drives were out-classed by a factor of 1,000 times in IOPS.
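
    Where does a factor like 1,000x come from? A spinning disk’s random-access rate is bounded by seek time plus rotational latency, while flash has neither. A back-of-the-envelope estimate in Python, using generic ballpark figures of my own rather than numbers from the article:

    # Rough random-IOPS estimate for a 7,200 rpm hard drive vs. an SSD.
    avg_seek_s = 0.009                  # ~9 ms average seek time (typical)
    half_rotation_s = 0.5 * 60 / 7200   # ~4.2 ms average rotational latency

    hdd_iops = 1 / (avg_seek_s + half_rotation_s)   # roughly 76 IOPS
    ssd_iops = 100000                               # common enterprise SSD figure

    print("HDD: ~%.0f IOPS, SSD: ~%d IOPS" % (hdd_iops, ssd_iops))
    print("Ratio: ~%.0fx" % (ssd_iops / hdd_iops))  # on the order of 1,000x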

    Most of the other hard drive categories are also under siege. Desktop computer drives are up against Microsoft’s stumble on Windows 8 and the (diskless) tablet trend. Attempts to market hybrid drives with a flash memory cache have not attracted high sales, and there appears to be no expectation in the industry of a turnaround in the decline, since the boost in sales from upgrades away from Windows XP looks to be over.

    Reply
  31. Tomi Engdahl says:

    Exclusive: Gaming startup OUYA needs to find a buyer quickly
    http://fortune.com/2015/04/28/exclusive-gaming-startup-ouya-needs-to-find-a-buyer-quickly/

    OUYA has been unable to restructure its debt, and now needs to quickly find a buyer.

    Gaming company OUYA is on the auction block after tripping a debt covenant, according to a confidential email sent out earlier this month from CEO Julie Uhrman to company investors and advisors.

    OUYA originally launched as a crowdfunded “microconsole” for gaming, but quickly struggled to find buyers. What it did have, however, was a large library of games. The Santa Monica, Calif.-based company last year signed an agreement to deliver its games via Xiaomi’s televisions and set-top boxes, and next month will go live on certain Alibaba set-top boxes.

    Investment bank Mesa Global — which recently managed the sale of music service Songza to Google — has been hired to manage the process. No word yet on asking price.

    OUYA originally raised $15 million in Series A funding in early 2013 from investors like Kleiner Perkins, Mayfield Fund, Occam Partners, Shasta Ventures and NVIDIA. Not too long after, it quietly secured some venture debt from TriplePoint Capital.

    It is unclear exactly how much was lent by TriplePoint, except that it must have been more than the $10 million that OUYA raised just two months ago from Alibaba. Debt restructuring negotiations were unsuccessful. In her memo, Uhrman writes: “Given our debtholder’s timeline, the process will be quick. We are looking for expressions of interest by the end of this month.”

    “Our focus now is trying to recover as much investor capital as possible,” Uhrman wrote. “We believe we’ve built something real and valuable.”

    Reply
  32. Tomi Engdahl says:

    What to expect from Microsoft’s most important event of the year
    Build starts Wednesday and it’s going to be big
    http://www.theverge.com/2015/4/27/8503035/microsoft-build-2015-developer-conference-preview

    Reply
  33. Tomi Engdahl says:

    This guy cut open his Surface Pro 3 and installed a 1TB SSD
    “This cut was really easy”
    http://www.winbeta.org/news/guy-cut-open-his-surface-pro-3-and-installed-1tb-ssd

    Microsoft’s Surface Pro 3 is the tablet that can replace your laptop, something the Redmond giant has been touting for quite some time now. If you head over to the Microsoft Store, you can purchase this device with models ranging from $799 to $1799. Unfortunately, the most expensive model only has a storage capacity of 512GB.

    What if you purchased a cheaper model and you wanted more space?

    According to Jorge, with the right tools, a lot of patience, and a schematic showing where everything is laid out internally, he was able to cut a “window” into the area of the Surface Pro 3 that housed the SSD. After successfully drilling a window around the SSD, he was able to install a Samsung 840 EVO 1TB SSD with ease. In fact, he is ready to upgrade the device to 2TB of storage space once such an SSD arrives on the market.

    Reply
  34. Tomi Engdahl says:

    PayPal adopts ARM servers, gets mightily dense
    Applied Micro trumpets accelerating cloud-scale adoption in solid Q4 results
    http://www.theregister.co.uk/2015/04/29/aookied_micro_q4_2015_results/

    Those hoping ARM-powered servers can give Intel and AMD some stiff competition in the data centre have some good news today, after Applied Micro revealed that PayPal “has deployed and validated” the company’s ARM-architected X-Gene server-on-a-chip.

    Applied Micro CEO and president Paramesh Gopi said, during the company’s Q4 earnings call today, that PayPal achieved “… an order of magnitude improvement in compute density” and added that the payments company “represents one of the many hyperscale data center customers that we are currently engaged with to drive X-Gene adoption.”

    Gopi went on to say “we expect to share additional X-Gene success stories over the next several quarters from our customers in the scientific and HPC, financial, hyperscale and networking sectors.”

    Reply
  35. Tomi Engdahl says:

    Brit Boffins EXPLODE Li-On batteries and film the MELTING COPPER
    This is why Lenovo is recalling ThinkPads
    http://www.theregister.co.uk/2015/04/29/boffins_blow_up_batteries_so_you_dont_have_to/

    Video UK boffins have taken a close-up of what happens with Li-ion batteries when they get hot under the collar, and it’s not pretty.

    As Lenovo, Boeing, Tesla, Sony and others will attest, Li-ion battery fire-safety is worth researching.

    Or, to be more scientific: when a battery breaks down exothermically, it generates a lot of heat, and since that heat can’t escape, the battery suffers a catastrophic failure.

    What the boffins spotted included pockets of gas forming and venting inside the battery. They also found that with the right internal support, a battery can remain relatively intact in the runaway process, up until around 1,000°C, when the copper melts.

    “In contrast, the battery without an internal support exploded causing the entire cap of the battery to detach and its contents to eject. Prior to thermal runaway, the tightly packed core collapsed, increasing the risk of severe internal short circuits and damage to neighbouring objects”, the university says.

    Reply
  36. Tomi Engdahl says:

    How do you really know if a storage array will perform for you?
    When dealing with multi-million storage estates, flying blind is not ideal
    http://www.theregister.co.uk/2015/04/28/how_do_you_know_really_if_a_storage_array_will_perform_for_you/

    When you have an existing storage array infrastructure with a variety of server apps about to hit the array, how do you know whether array technology upgrades or even a new array will work as well as or better than the existing kit?

    Do you trust your vendor and generalised performance data: It does 450,000 IOPS? This version is 1.3 times the performance of the previous model? Statements like these are not exactly tailored to your particular workload, are they?

    The pressures of a limited budget could cause you to under-provision the array, leading to slowed applications. Or you could prioritise performance and over-provision, spending excessive cash.

    Do you use Iometer and get involved in scripting and trying to somehow model your existing workload? It’s inexact and complicated.

    A start-up, Load DynamiX, has written storage array load-generating software that sits in an appliance, models your workload quite precisely and hammers a target array with it so that you can see if new tech or a new array will do what you need it to do.

    The stages of a validation process start with workload modelling, to simulate a production environment’s IO profile. Then an array with a particular set of technology has its performance profiled under a range of load parameters that encompass your workload using the appliances.

    Load DynamiX kit isn’t cheap, with a starting price of $60,000, but it appears to be an unrivalled tool for validating the performance of enterprise storage arrays.

    http://www.loaddynamix.com/
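
    The core idea, replaying a modelled IO profile (block size, read/write mix, random access pattern) against a target, can be sketched in a few lines of Python. The toy generator below is my own illustration, nothing like the real product in precision or scale, and against a local file it mostly exercises the OS page cache rather than the device:

    import os, random, time

    PATH = "/tmp/loadtest.bin"   # hypothetical target standing in for an array volume
    BLOCK = 8 * 1024             # workload model: 8 KiB blocks
    READ_RATIO = 0.7             # workload model: 70% reads, 30% writes
    FILE_BLOCKS = 1024           # 8 MiB test file

    # Prepare the target file.
    with open(PATH, "wb") as f:
        f.write(os.urandom(BLOCK * FILE_BLOCKS))

    ops, start = 0, time.time()
    with open(PATH, "r+b") as f:
        while time.time() - start < 5:   # hammer the target for 5 seconds
            f.seek(random.randrange(FILE_BLOCKS) * BLOCK)
            if random.random() < READ_RATIO:
                f.read(BLOCK)
            else:
                f.write(os.urandom(BLOCK))
            ops += 1

    print("~%.0f IOPS at %d KiB, %d%% reads" % (ops / 5.0, BLOCK // 1024, READ_RATIO * 100))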

    Reply
  37. Tomi Engdahl says:

    Half-Life 2 Writer on VR Gaming: We’re At Pong Level, Only Scratching the Surface
    http://hardware.slashdot.org/story/15/04/29/0129202/half-life-2-writer-on-vr-gaming-were-at-pong-level-only-scratching-the-surfac

    Chet Faliszek on virtual reality gaming: We’re at the Pong level, we’ve only scratched the surface
    http://www.ibtimes.co.uk/chet-faliszek-virtual-reality-gaming-were-pong-level-weve-only-scratched-surface-1498798

    Already hugely impressive, virtual reality is only now at the stage video games were when Pong was released in 1972, says Left 4 Dead, Portal and Half-Life writer Chet Faliszek.

    Speaking at the Slush Play virtual reality (VR) conference in Reykjavik, the Valve video game writer gave advice on what he expects to see from the technology in 2015 – and said that the honest answer is, no one really knows.

    “None of us know what the hell we are doing. We’re still just scratching the surface of VR. We still haven’t found out what VR is, and that’s fine. We’ve been making movies in pretty much the same way for 100 years, TV for 60 years and videogames for 40. VR has only really been [in development] for about a year, so we’re at Pong level.”

    “Just because a game genre has been around for 35 years doesn’t mean it’ll work with VR. How do you move around in VR? Locomotion is a real problem. Or you might find out that that genre shouldn’t exist anymore. It doesn’t work.”

    “We can get into gamers’ heads in ways we never have before. The feeling of vulnerability has never been higher. You aren’t looking at the action, you’re in it and you can’t escape it.”

    But there’s one thing that VR game developers must be careful to avoid, and that is motion sickness and nausea. “There’s one thing you can’t do and that’s make people sick,” Faliszek said. “It has to run at 90 frames per second. Any lower and people feel sick.”
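
    That 90 frames-per-second floor is really a per-frame time budget, and the arithmetic is simple enough to sanity-check in a few lines (my own illustration, not from the article):

    # Per-frame time budget implied by a fixed frame-rate target.
    for fps in (30, 60, 90):
        print("%d fps -> %.1f ms per frame" % (fps, 1000.0 / fps))
    # 90 fps leaves only ~11.1 ms to simulate and render each frame,
    # versus ~33.3 ms at a console-style 30 fps.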

    Reply
  39. Tomi Engdahl says:

    Sick of Chrome vs Firefox? Check out these 3 NEW browsers
    Opera for Opera lovers, Microsoft offering and more
    http://www.theregister.co.uk/2015/03/03/new_browsers_stagnation_breaker/

    Browsers have been making a comeback. There have been three brand new browsers released and even Firefox, which seems to be sliding further into irrelevancy every day, has released a new version aimed at developers and claims to be working on a WebKit-based version for iOS devices.

    It’s a refreshing moment. After an initial mushrooming of development and branching during the 1990s, Internet Explorer, Firefox and Opera were it for quite some time. Apple produced Safari and much later Google followed up with Chrome. Since Chrome in 2008, the browser market has been more or less stagnant.

    Of late we’ve had three new offerings to break this stagnation: Microsoft’s Project Spartan, Yandex’s still-just-a-concept browser and an odd little upstart named Vivaldi.

    Reply
  40. Tomi Engdahl says:

    Google Insiders Talk About Why Google+ Failed
    http://tech.slashdot.org/story/15/04/26/2341219/google-insiders-talk-about-why-google-failed

    Business Insider spoke with a few insiders about what happened to the network that Google believed would change the way people share their lives online. Google+ was really important to Larry Page, too — one person said he was personally involved and wanted to get the whole company behind it. The main problem with Google+, one former Googler says, is the company tried to make it too much like Facebook.

    Why Google+ failed, according to Google insiders
    http://www.businessinsider.com/what-happened-to-google-plus-2015-4

    Last month, Google announced that it’s changing up its strategy with Google+.

    In a sense, it’s giving up on pitching Google+ as a social network aimed at competing with Facebook. Instead, Google+ will become two separate pieces: Photos and Streams.

    This didn’t come as a surprise — Google+ never really caught on the same way social networks like Facebook, Twitter, or LinkedIn did.

    Technically, tons of people use Google+, since logging into it gives you access to Gmail, Google Drive, and all of Google’s other apps.

    But people aren’t actively using the social network aspect of it.

    The main problem with Google+, one former Googler says, is the company tried to make it too much like Facebook. Another former Googler agrees, saying the company was “late to market” and motivated from “a competitive standpoint.”

    Here are some other things we heard from former Google employees:

    Google+ was designed to solve the company’s own problems, rather than making a product that made it easy for its users to connect with others.

    One person also said Google didn’t move into mobile fast enough with Google+. Facebook, however, realized it was slow to move into mobile and made up for lost time — now most of Facebook’s revenue comes from mobile.

    Google+ was a “controversial” product inside Google

    When Vic Gundotra, who led Google+ and played a big role in creating it, left the company about a year ago, it came as a complete surprise.

    Although Google+ didn’t boom into a massively successful social network, that doesn’t mean it completely failed. Google made a solid platform that makes it easy for the millions of people that use its products to seamlessly log in to all of the company’s apps. It made a really useful tool for organizing your photos online.

    Reply
  41. Tomi Engdahl says:

    4 Of The Hardest Things To Change In Information Technology (IT)
    http://www.cio.com/article/2875736/it-transformation/4-of-the-hardest-things-to-change-in-information-technology-it.html

    As an information technology (IT) leader dealing with the intricacies and complexities of enterprise technology every day, I can tell you this: it’s not the technology that is the toughest thing to change in IT. It’s the people.

    1. Going global
    There’s no question that transforming your company from regional-based systems to global systems is a big job. Global applications, global processes, global networks … that takes tech expertise to the nth degree.

    2. Migrating legacy applications
    There are often – though not always – financial benefits associated with migrating legacy applications to the cloud. Unfortunately, it’s never a matter of just moving the app from Point A to Point B and shifting dollars from capex to opex. Customizations, integrations, security requirements, and concerns of protecting personally identifiable information (PII) often impact the timeline and present unexpected hurdles.

    3. Changing domains
    The fact is, changing your domain involves massive file migrations, starting over from a search engine ranking perspective, and modifying marketing materials, voice recordings, and forms.
    So what’s the challenge here? Internally, it’s making sure people track down every ramification of your new domain. Build a cross-functional team soliciting impact from every department in the organization. Communicate often and early.

    4. Building bridges
    The last change I want to talk about doesn’t involve technology directly, but is intimately connected with IT. In an era where business units often have their own technology budgets and are looking for quick solutions to address evolving business needs, IT needs to take the stance of partner – not gatekeeper. Why do you want to do this? Too often I’ve seen IT get cut out of the conversation in technology selections because they’re perceived as too dogmatic, too conservative, and too slow.

    The good news is this: when you include people as one of the key components of your technology change, even the hardest implementations won’t be as hard as you think.

    Reply
  42. Tomi Engdahl says:

    HDS turns its back to ASICs: Storage vaults to rely on x86 sans magic
    Tickling IT buyers’ sensitivities by hitting the G-spot
    http://www.theregister.co.uk/2015/04/29/hitachi_asics/

    With its greatly expanded VSP G-line of products, Hitachi Data Systems has opened a path to a single converged enterprise storage array platform – and has done so by eliminating proprietary hardware dependencies.

    The high-end VSP G1000 has a handful of ASICs for hardware acceleration and a PCIe backplane with similar custom chips. The newly announced G800, G600, G400 and G200 systems have no ASICs at all, relying just on x86 hardware, and still using PCIe.

    Reply
  43. Tomi Engdahl says:

    Lenovo adds one-litre Chromebox to ThinkCentre Tiny modular range
    It’s like Ikea for PCs
    http://www.theinquirer.net/inquirer/news/2406348/lenovo-adds-one-litre-chromebox-to-thinkcentre-tiny-modular-range

    LENOVO HAS ANNOUNCED a new addition to its ThinkCentre line with the arrival of the Chromebox Tiny, a cheap and cheerful Chrome OS device aimed at small businesses.

    Lenovo describes the Chromebox Tiny as having a one-litre capacity and weighing just 1kg. It follows the introduction of a Windows version earlier in the year.

    A full range of matching peripheral accessories is available, or it can be combined with the ThinkCentre Tiny-In-One monitor to create a 23in touchscreen-based computer.

    The overall cost of doing so isn’t competitive but, if you are buying into the Tiny ecosystem launched last autumn, which looks set to expand further, it might be worth the outlay.

    Reply
  44. Tomi Engdahl says:

    The Era of Japan’s All-Powerful Videogame Designers Is Over
    http://www.wired.com/2015/04/era-japans-powerful-videogame-designers/

    Hideo Kojima’s exit from Konami isn’t just the end of Metal Gear as we know it. It’s the end of the era of big-name directors running the show in Japan.

    It may not be a stretch to say that there will never be another Kojima, no one creator who holds such sway over a massive big-budget gaming enterprise. It’s too expensive, too risky a business to be left up to the creative whims of a single auteur. But that’s precisely what the Japanese game business was, for a long time. Kojima’s exit just puts a period on it. The era of the Legendary Game Designer producing massive triple-A games at Japanese studios is officially over.

    Most of Japan’s most famous game designers already have split from the publishers that made them famous, opening studios of their own. Capcom’s powerhouse producers Shinji Mikami (Resident Evil) and Keiji Inafune (Mega Man) are long gone. Tomonobu Itagaki (Ninja Gaiden) is no longer with Koei Tecmo. Castlevania chief Koji Igarashi left Konami last year.

    But is it too little, too late? The Japanese triple-A game looks like it’s headed towards an extinction event. The new consoles are selling poorly there, while mobile games with pseudo-gambling mechanics explode.

    Meanwhile, the console makers are finding that it’s a buyer’s market out there.

    But at independent software makers? To the extent they produce massive blockbusters at all, expect them to be designed by committee, crafted to alienate as few people as possible. If you want to be an auteur, you can do it on your own dime.

    Reply
  45. Tomi Engdahl says:

    Clorox CIO discusses the real challenge of big data
    http://www.cio.com/article/2915002/big-data/clorox-cio-discusses-the-real-challenge-of-big-data.html

    Many big companies today do a great job of collecting big data. However, the challenge remains to get insights out of the data. Clorox CIO Manjit Singh suggests taking a more agile approach.

    Where the data challenge lies

    “The challenge is not in collecting the data,” Singh says. “We’re challenged in how to get insight out of the data — what questions to ask and how to use the data to predict results in the business.”

    For Singh, the answer is to always focus on enabling strategic business objectives, to break things down into discrete tasks with measurable outcomes.

    “The goal of big data is not better data science,” Moghe says. “The goal is to be able to leverage data to achieve business objectives. Too often this idea gets overshadowed by our tendency to focus on new tools and technologies rather than business outcomes.”

    Rather than chasing hot new tools and hiring large teams of data scientists to set up the technology and manage the data, Moghe argues that CIOs should focus on finding employees with an entrepreneurial mindset who can drive agility with big data.

    “By doing so, big data will become the bridge that helps the CIO role break out of the organizational silo,” he says.

    Reply
  46. Tomi Engdahl says:

    Intel’s High-End Xeon E7v3 Debuts
    Claims it is fastest processor for data analytics
    http://www.eetimes.com/document.asp?doc_id=1326521&

    The fastest processor for data analytics, and not too shabby on engineering, scientific and other workloads either, is the new Intel Xeon processor E7-8800/4800 v3 product family (Xeon E7v3), according to Intel Corp. (Santa Clara, Calif.). Available with up to 18 cores, the Xeon E7v3 boosts performance while cutting power, putting it ahead of its main competition, IBM’s Power8, according to Edward Goldman, chief technology officer in Intel’s Data Center Group.

    “Over the last few years, the cost per server has dropped 40 percent while the market is growing to $17 billion.”

    Real-time business intelligence (BI) and analytics has become a top priority as time-to-market has shrunk from years to months, and the flood of “big data” has created a deluge overwhelming the traditional data center. “White box” or low-end processors do fine for easy tasks like serving up web pages, but for heavy-duty analytic loads all the processor makers — led by Intel and IBM — are looking for ways to harness the biggest clusters of multicore processors they can, not just to keep up, but ideally to predict where their businesses are going so they can plan to be there when it happens, rather than play catch-up after the newest trend has arrived.

    The biggest customers of BI and analytics using big data, namely healthcare, retail and telecommunications among a dozen others, are turning to in-memory computing (loading the entire application and its data into main memory rather than repeatedly spooling from mass storage) to run their big data analytics, a market which Intel claims will exceed $9.5 billion by 2018. That’s why Intel built the largest memory space available to any processor, 1.4 terabytes per socket, into the E7v3, they say, estimating that 50 percent of large installations are adopting in-memory computing for BI and real-time analytics.
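
    As a toy illustration of that definition (mine, not the article’s), the whole in-memory idea fits in a few lines of Python: load the data set into RAM once, then let every query run at memory speed with no further I/O. Real in-memory platforms are vastly more sophisticated, but the principle is the same:

    import csv, io

    # Hypothetical sales records that would normally live in mass storage.
    RAW = "region,amount\nwest,100\neast,250\nwest,75\nnorth,300\n"

    # One pass to load everything into main memory...
    table = list(csv.DictReader(io.StringIO(RAW)))

    # ...after which analytics queries run at memory speed, no further I/O.
    def total_for(region):
        return sum(int(row["amount"]) for row in table if row["region"] == region)

    print(total_for("west"))  # 175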

    Inside the E7v3
    In Intel’s famous yearly tick-tock upgrade policy — where “tick” is a process improvement while “tock” is an architectural improvement — the Xeon E7v3 is a tock, thus getting an overhaul of its architecture rather than advancing to the next process node.

    Reply
  47. Tomi Engdahl says:

    Sean Gallagher / Ars Technica:
    Microsoft’s Azure Stack will let enterprises run the cloud platform in any datacenter

    Your own personal Azure: Microsoft’s new Azure Stack for private clouds
    New Windows Server 2016 and Hyper-V previews to drop this week.
    http://arstechnica.com/information-technology/2015/05/04/your-own-personal-azure-microsofts-new-azure-stack-for-private-clouds/

    A new Windows Server 2016 technical preview drops this week, along with a host of new cloud tools based on Azure.

    Today at Microsoft’s Ignite conference in Chicago, the company’s executives will make a set of major announcements about the company’s server, management, and cloud offerings. At the top of the list is Azure Stack, a cloud infrastructure platform that packages the capabilities of Microsoft’s public Azure cloud for use by customers in private data centers and public hosting services. Microsoft also announced Microsoft Operations Management Suite, a set of Azure-based management tools that will help companies manage public and private cloud infrastructure as well as virtual and physical Windows and Linux servers. Lastly, there’s a new preview of Windows Server 2016 and the Systems Center 2016 systems management platform. The new Server 2016 preview includes the first release of Nano Server, a minimal Windows Server environment designed for “headless” cloud and virtual server applications that greatly reduces the server operating system footprint.

    Reply
  48. Tomi Engdahl says:

    Intel adds more Xeon chips for business analytics, continues with Cloudera deal
    http://www.zdnet.com/article/intel-adds-more-xeon-chips-for-business-analytics-continues-with-cloudera-deal/

    Summary: Suffice it to say, these chips are meant for massive big data sets run by large corporations and organizations relying on in-memory computing and big data to influence and shift their own global business practices.

    Marking the next stop on its ongoing Internet of Things roadmap, Intel unloaded a new family of Xeon processors designed for punching out real-time analytics.

    The Xeon processor E7-8800/4800 v3 product families come with up to 18 cores (a 20 percent increase compared to the previous generation) and up to 45MB of last level cache.

    With those figures combined, Intel touted that the Xeon family additions could deliver up to 70 percent more results per hour compared to the v2 set.

    Intel already has 17 manufacturers signed up, including Hewlett-Packard, Oracle, Cisco and Dell, among others.

    Reply
  49. Tomi Engdahl says:

    Andrew Webster / The Verge:

    GOG’s DRM-free Steam competitor is finally open to everyone
    http://www.theverge.com/2015/5/5/8524597/gog-galaxy-open-beta-launch

    GOG is best known for its library of classic games — it was originally called Good Old Games — but last year the company announced a new venture that would put it in direct competition with PC gaming behemoth Steam. Called GOG Galaxy, the service is an online gaming platform that includes Steam-like community features and tools like auto-updates to keep your games up to date without any hassle. But the key selling point is that Galaxy includes these features while still ensuring every game can be played offline with no DRM. It was originally supposed to launch last year, but today the platform is finally available in open beta so you can try it out.

    According to GOG, the service currently has a library of more than 1,000 games, with plans to expand that significantly, particularly when it comes to big-budget blockbusters

    Reply
  50. Tomi Engdahl says:

    SanDisk Adds Own Flash To Fusion-io Accelerator
    http://www.eetimes.com/document.asp?doc_id=1326524&

    SanDisk has begun the integration of Fusion-io products by updating its Fusion ioMemory PCIe application accelerators, which were announced at Interop in Las Vegas last week (April 27).

    The latest Fusion ioMemory PCIe application accelerators are made up of Virtual Storage Layer (VSL) data access acceleration software and SanDisk NAND flash, which Callaghan said helps to reduce the list price compared with predecessors; Fusion-io previously sourced NAND flash from other suppliers. “We can be very aggressive with pricing,” he said.

    More than 250,000 Fusion ioMemory PCIe application accelerators have been deployed since the technology first debuted eight years ago.

    Reply
