Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation is coming to change all business sectors and our daily work even more than before. Digitalisation also changes the IT sector: traditional software packages are moving rapidly into the cloud. The need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring become truly possible. The workload of software implementation projects will be reduced significantly, as software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key factors to change, as security threats increasingly come from the digital world. For the IT sector, digitalisation simply means “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If it is, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The promised increase in productivity is stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating significant acceptance of these solutions. When properly leveraged, mobility promises to increase visibility and responsiveness in the supply chain. Increased employee productivity and business process efficiencies are seen as the key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. The two trends are connected — or should connect, at least. If we are to trust the consultants, we are at a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken into wider use. Cloud is the next generation of supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011“).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation you need to combine traditional IT with agile and web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are now effectively operating two-speed IT (also called bimodal IT or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
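
To make the idea of an API-centric layer concrete, here is a minimal sketch (my own illustration, not from any vendor): a tiny Python/Flask service that puts a stable JSON endpoint in front of a slower back-end system, so the fast-moving web and mobile side can iterate without touching the system of record. Flask and the legacy_lookup() helper are assumptions made purely for the example.

    # Minimal sketch of an API-centric layer for two-speed IT (illustrative only).
    # Assumes Flask is installed (pip install flask); legacy_lookup() stands in
    # for a call into a slower back-end system of record (ERP, mainframe, SQL...).
    from flask import Flask, jsonify

    app = Flask(__name__)

    def legacy_lookup(customer_id):
        # Placeholder for the "slow" traditional IT side: in reality this might
        # wrap a stored procedure, a SOAP service or a nightly batch extract.
        return {"id": customer_id, "name": "Example Corp", "status": "active"}

    @app.route("/api/v1/customers/<int:customer_id>")
    def get_customer(customer_id):
        # The "fast" side (web and mobile apps) consumes a stable JSON contract
        # and can change as often as it likes without back-end modifications.
        return jsonify(legacy_lookup(customer_id))

    if __name__ == "__main__":
        app.run(port=8080)

The point is not the framework but the separation: the contract at /api/v1/… changes slowly, while everything behind it and in front of it can move at its own speed.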

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware – after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe some months back. This will keep system administrators busy, because there is only around half a year left, and upgrading to Windows Server 2008 or Windows Server 2012 may prove difficult. Microsoft and support companies do not seem interested in continuing Windows Server 2003 support, so for those who need it, custom support pricing can be “incredibly expensive”. At this point it seems that many organizations want a new architecture and consider moving those servers to the cloud as one option.

Windows 10 is coming to PCs and mobile devices. Just a few months back Microsoft unveiled a new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is already available. Microsoft says to expect AWESOME things of Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show the Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft Windows has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing the Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. The Windows Phone Store and Windows Store will be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for the Apple iPad and iPhone. It also has an email client compatible with both iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The article “Microsoft May Soon Replace Internet Explorer With a New Web Browser” says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are still primarily 256GB to 512GB). Intel and Micron will try to kill the hard drive with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Within the next two years Intel promises 10TB+ SSDs thanks to 3D vertical NAND flash memory. SSD interfaces are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large quantities in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present, solid-state drives with extreme capacities are very expensive. I expect that in 2015 SSD prices will still be so much higher than hard disk prices that anyone who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
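
As a rough illustration of why hybrid setups are attractive, here is a back-of-the-envelope calculation; the $/GB figures and the 10% “hot data” share are placeholder assumptions for the example, not numbers from any of the estimates above.

    # Back-of-the-envelope $/GB comparison for all-HDD, all-SSD and hybrid storage.
    # The price points and hot-data share below are illustrative assumptions only.
    HDD_PRICE_PER_GB = 0.04   # assumed bulk hard disk price, $/GB
    SSD_PRICE_PER_GB = 0.50   # assumed SSD price, $/GB

    total_gb = 100000         # 100 TB of data to store
    hot_fraction = 0.10       # assume ~10% of the data is frequently accessed

    all_hdd = total_gb * HDD_PRICE_PER_GB
    all_ssd = total_gb * SSD_PRICE_PER_GB
    hybrid = (total_gb * hot_fraction * SSD_PRICE_PER_GB
              + total_gb * (1 - hot_fraction) * HDD_PRICE_PER_GB)

    print("all-HDD: ${:,.0f}".format(all_hdd))   # capacity is cheap, IOPS are not
    print("all-SSD: ${:,.0f}".format(all_ssd))   # fast everywhere, but costly
    print("hybrid : ${:,.0f}".format(hybrid))    # hot data on flash, bulk on disk

Whatever the exact prices turn out to be, as long as flash stays several times more expensive per gigabyte, putting only the hot fraction on SSD captures most of the performance benefit at a fraction of the all-flash cost.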

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about the device. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There are no strong reasons for growth or decline in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems to be increasingly taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. The compute stick is likened to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and ARM processors (for example the Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects the stick-sized PC market to grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable REVOLUTION. Tablets and smartphones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least it is worth a try.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “dirty little secret”: it’s already everywhere, even in mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.” When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower; it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by a great many Linux systems. And it is not just Linux anymore, as Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
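
For readers who have not tried it yet, the basic Docker workflow is: describe an image in a Dockerfile, build it once, then run any number of sandboxed containers from it. Below is a minimal sketch driven from Python via the standard docker CLI (Docker must be installed and its daemon running; the image name demo-app and the tiny Dockerfile are made up for illustration).

    # Minimal sketch of the Docker build/run cycle, driven via the docker CLI.
    # Assumes the docker client is on the PATH and the Docker daemon is running.
    # The image name "demo-app" and the Dockerfile contents are illustrative only.
    import os
    import subprocess
    import tempfile

    DOCKERFILE = """\
    FROM python:2.7
    COPY app.py /app.py
    CMD ["python", "/app.py"]
    """

    APP_PY = 'print("hello from inside a container")\n'

    # Write a throwaway build context: a Dockerfile plus the app it packages.
    workdir = tempfile.mkdtemp()
    with open(os.path.join(workdir, "Dockerfile"), "w") as f:
        f.write(DOCKERFILE)
    with open(os.path.join(workdir, "app.py"), "w") as f:
        f.write(APP_PY)

    # Build the image: the app and its runtime are packaged into one unit.
    subprocess.check_call(["docker", "build", "-t", "demo-app", workdir])

    # Run a container from the image. It shares the host kernel but is
    # sandboxed from other containers; --rm deletes it when the process exits.
    subprocess.check_call(["docker", "run", "--rm", "demo-app"])

The same image can then be pushed to a registry and run unchanged on any Docker host, which is what makes the package/distribute/deploy story so attractive.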

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers. China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (FreeBSD based desktop O/S). Dell commercial PCs will preinstall NeoKylin in China. The plan is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers that abstract the control plane from the underlying physical equipment. These controllers virtualize the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.

New software-defined networking apps will be delivered in 2015. So will software-defined storage — and software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard servers from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing and expects that over half the chips it sells to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like AWS “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips. Customers are willing to pay a little more for the special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, come from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price of data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Special servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGAs may deliver a lot more computing power from a much lower power consumption, but programming them has traditionally been time consuming. This can change with the introduction of new tools (the next step from techniques learned from GPU acceleration). Xilinx has developed its SDAccel tools to develop algorithms in C, C++ and OpenCL and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
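
To give a feel for the programming model these tools build on, here is a minimal OpenCL example written with the PyOpenCL bindings (an assumption for illustration; a real FPGA flow would feed a similar C/OpenCL kernel through vendor tools such as SDAccel rather than run it through PyOpenCL). It offloads a simple vector addition to whatever OpenCL device is available, be it a CPU, a GPU or an FPGA board with an OpenCL runtime.

    # Minimal OpenCL vector addition using PyOpenCL (pip install pyopencl numpy).
    # The same kernel style is what GPU- and FPGA-oriented OpenCL tools compile.
    import numpy as np
    import pyopencl as cl

    a = np.random.rand(1024).astype(np.float32)
    b = np.random.rand(1024).astype(np.float32)

    ctx = cl.create_some_context()          # pick any available OpenCL device
    queue = cl.CommandQueue(ctx)

    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    # The kernel: one work-item per array element, all executed in parallel.
    program = cl.Program(ctx, """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

    result = np.empty_like(a)
    cl.enqueue_copy(queue, result, out_buf)
    assert np.allclose(result, a + b)       # the device computed a + b

The appeal of the FPGA route is that a kernel like this can be synthesized into dedicated logic, trading compile time for much better performance per watt than a general-purpose core.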


If there is one enduring trend from memory design in 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (whatever the lowest-cost DRAM may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working with memristor memories that are promised to be akin to RAM but can hold data without power.  The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP, it is 64 and 128 times denser, in fact. You could very well have a 512 GB memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting on emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    Digi content market set for vast embulgement, will be $154bn in 2019
    Physical possessions? Pah! We’ll all be sat at home in our pants playing games
    http://www.theregister.co.uk/2015/05/06/gamers_to_boost_digital_content_market_to_154bn_by_2019/

    Global digital content sales are on track to hit $154bn (£113bn) annually by 2019, up 60 per cent from 2014, according to recent analysis.

    The current market is worth $99bn (£65bn) and is expected to increase at an average annual rate of 9.4 per cent over the next five years, said a report by crystal ball gazers at Juniper Research.

    Mobile and online games will account for the largest share of sales, as gamers continue to opt for digital formats, it said.

    However, this is set to decrease from the current proportion of 44 per cent, as more users stream videos.

    Reply
  2. Tomi Engdahl says:

    Chrome-Colored Parakeets
    http://www.linuxjournal.com/content/chrome-colored-parakeets

    I personally like Google’s Chrome interface. It’s simple, fast, elegant and did I mention fast? Unfortunately, I don’t like how locked down the Chrome OS is on a Chromebook, nor do I like its total dependence on Google.

    If you like the simplicity and speed of the Chrome interface, but want a full-blown system underneath that deceptively simple GUI, I urge you to give Budgie a try. You either can download the Evolve-OS, or just install the PPA into a standard Ubuntu system.

    Then log out, and when logging in, choose the Budgie desktop instead of Unity. You’ll find a very Chrome-like interface but on top of a full-blown Linux system instead of Chrome!

    Reply
  3. Tomi Engdahl says:

    When Official Debian Support Ends, Who Will Save You?
    http://www.linuxjournal.com/content/when-official-debian-support-ends-who-will-save-you

    With a new version of Debian recently released, it’s an exciting time for users who long for newer applications and cutting-edge features. But for some users, the new release is a cause for concern. A new release means their current installation is reaching the end of its lifecycle, and for one reason or another, they can’t make the switch. And, this leaves them at risk from a variety of security risks and crippling bugs, but there is hope in the shape of an independent project.

    The Debian Long Term Support (LTS) project has been providing support for Debian version 6 (Squeeze) and will continue to do so until early next year. LTS announced that it will be supporting later editions too.

    The project provides security patches and bug fixes for the core components of the Debian system, in addition to the most popular packages. The team would like to expand the range of packages covered, but it will require additional support to make that happen.

    Reply
  4. Tomi Engdahl says:

    Hyper-convergence? I believe – just not like this
    It’s time to drop out of over-hype space
    http://www.theregister.co.uk/2015/05/04/hyperconvergence_hype_de_hyped/

    There’s a horrible, horrible thing I get asked at least three times a week: “What is hyper-convergence?” This is like an icepick into my soul, because I consult with almost all of the current hyper-convergence vendors in one form or another and the truth is, “hyper-convergence” is a meaningless marketing term as wishy-washy and pointless as “cloud”.

    Every vendor has their own specific take on what it is supposed to mean. Each has its own opinion on what are the minimum feature sets to be considered a “hyper-converged” vendor and what under no circumstances should be called thus.

    The formula for figuring it out is simple.

    Small hyper-convergence players want to use whatever buzzwords the big hyper-convergence players are using, so that they can get some free marketing by living in the afterglow of the big players. Big players want to narrow the definition to “exactly how we do things”, so that they can disassociate themselves from other players.

    Everyone, ultimately, wants you to buy their specific flavour and loudly denounce their competitors.

    At the core of hyper-convergence is the “server SAN”.
    A server SAN is a bunch of commodity servers (usually x86, but I’ve seen ARM prototypes) that are clustered in some fashion or another to collectively act as a single storage source. The goal is to take the local disks on each system and lash them all together.
    A server SAN can tolerate the loss of individual drives within a given node, or the loss of entire nodes.

    The short version of why some people exclude object storage systems from the server SAN definition is twofold. First, most operating systems and hypervisors don’t natively talk object storage. It’s something that is (currently) strictly application level and largely designed for developers, not infrastructure teams.

    The second reason for exclusion is that object storage is quite crap at running VMs. The sort of people who usually talk about server SANs and hyper-convergence are infrastructure nerds, so anything that is “outside their wheelhouse” isn’t something they want to have to constantly make exceptions for when talking about what they do.

    My definition of a server SAN is “commodity servers clustered to collectively act as a single storage source”

    Legacy convergence, hyper-convergence and data centre convergence, oh my!

    Legacy convergence is best thought of as proprietary switches married to traditional disk arrays alongside some high-end, tier-1 nameplate servers providing compute capacity.

    It is generally a “buy-by-the-rack” set-up, where a bunch of old-school hardware is sold together as a single SKU, typically for a king’s ransom, and supported by the sorts of enterprise support teams that have their own helicopters.

    Hyper-convergence does away with the expensive disk arrays (hence why many hyper-convergence vendors have “no SAN” stickers) and replaces the expensive tier-1 nameplate servers with more modest commodity servers from lower-margin vendors.

    Storage in a hyper-converged environment is provided by filling up the compute nodes with disks and creating a server SAN. This uses a part of the compute servers RAM and CPU resources. Hyper-converged solutions still rely on proprietary switches for networking.

    The argument for this trade-off is that the overall costs of the set-up are so much lower than the traditional legacy convergence stack that adding an extra node or two per cluster to make up for the lost compute power per node is still cheaper.

    Hyper-convergence might require a few extra compute nodes to fit the same number of VMs, but the overall footprint is generally smaller.

    Data centre convergence is an emerging term used by various hyper-converged companies that have integrated software-defined networking into their platform.

    Convergence is a continuing trend of commoditisation. Legacy convergence did away with expensive and time-consuming integration projects. No longer did you have to do a massive needs assessment followed by hiring network architects, implementation consultants and so on and so forth. You dialled up the convergence vendor and ordered an SKU that provided X number of VMs in racks that could handle Y amperage and had Z interconnect to the rest of your network.

    Hyper-converged solutions took the margins away from the storage and server vendors. This let the new generation of convergence players rake in fat margins while bringing the price (and the entry size) down into reach of the commercial mid-market.

    Data centre converged vendors are now taking the margins away from the switch vendors.

    Software-defined table stakes

    A critical part of this rush to commoditisation is the software. Let’s put hypervisors to one side for a moment and talk about enterprise-storage features. Ten years ago, you could become a massively major player in the storage industry by introducing a new feature, such as deduplication, for example.

    Today, snapshots, cloning, compression, deduplication, replication, continuous data protection and hybrid/tiered storage at a minimum are the table stakes. If you don’t have these features as part of the base storage offering, you’re already dead and you just don’t know it yet.

    Think about this for a moment. Multi-billion dollar companies were founded on almost every one of these features.
    Today, they are tick-box items in hyper-converged and data centre converged offerings. You cannot enter the market without them.

    As data centre convergence becomes more and more prevalent, full SDN stacks will be added to that list.

    Everyone except the high-margin hardware vendors being butchered for fun and profit agree on the hardware part of the convergence definitions. But when we start talking about the software stack, well, there will be blood in the water.

    Thus hyper-convergence is a special class of server SANs where VM workloads run alongside the storage workloads. It was conceived of to be cheaper, denser and more appealing than legacy convergence. Data centre convergence is a special class of hyper-convergence.

    Reply
  5. Tomi Engdahl says:

    Computing Needs a Reboot
    Old techniques running out of gas
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1326520&

    Engineers need to explore new computing paradigms to fuel future performance advances.

    An estimated $200 million in economic activity was lost in New York City during the blizzard of 2015. One reason the snowstorm was not predicted correctly got lost in the debates and finger pointing: computers are not getting faster.

    For years, it’s been true that about every year and a half, computers in general doubled in speed. It used to be due to Moore’s Law, the observation by Intel co-founder Gordon Moore that the semiconductor industry was increasing the density of transistors per unit area every 18 months.

    Over time, the architecture of computers started to be the limit. Around 1995, the industry started to execute programs in parallel. This was the era of instruction-level parallelism and its standard bearer, the superscalar microprocessor. This kept computers doubling in performance for the same cost every 18 months.

    These tricks hit a roadblock in 2005.

    This so-called speculative execution meant that higher performance was tied directly to higher power

    We ended up with power densities in excess of 200 watts per square centimeter, roughly the same power density as an operating nuclear reactor core! The costs shifted to cooling, but moving beyond the standard fan and heat sink cooling approaches proved to be far more expensive.

    The industry reacted by putting on the same die multiple computers, christened cores by marketing. In order to light up all of the cores, the burden shifted from hardware to the programmer.

    But things got even worse for the computer industry. The trend that Gordon Moore observed, that transistors per unit area doubled every 18 months, was coming to an end.

    And so here we are today: microprocessors are not getting faster for the same price. Building larger and larger computer systems to solve problems such as weather prediction have become exceedingly expensive. It looks as if it’s the true end of the road for computing itself: a technology that has fueled major advances in science, healthcare, drug development, engineering, entertainment, transportation…the list of computing’s impact is nearly infinite.

    In late 2012, we began an initiative under the auspices of the Institute of Electrical and Electronics Engineers.

    We called this initiative “Rebooting Computing” and held three summits, one in Washington, DC, and two near Silicon Valley.

    What emerged were several potential approaches to getting back to the historic exponential scaling of computer performance. Each of these is radical.

    For example, one approach leverages randomness and allows computers to produce approximate results rather than computing to the 100th decimal point. The human eye does this

    Another approach mimics the structures of the brain
    Such a computer is good at recognizing patterns

    A third approach is based on the observation that power in a computer is only consumed when the result is picked from a list of potential results– the longer you can keep the list around, the better the chance of not burning power needlessly.

    Each of these approaches is considered lunatic fringe by the industry. One may be the way forward, but we do not know.

    Current approaches to weather prediction use computing concepts unchanged from the early days of computers. As such, they also inherit the same limits modern computers have.

    Rebooting Computing
    http://rebootingcomputing.ieee.org/

    Reply
  6. Tomi Engdahl says:

    AMD’s 2016-2017 x86 Roadmap: Zen Is In, Skybridge Is Out
    by Ryan Smith on May 6, 2015 2:02 PM EST
    http://www.anandtech.com/show/9231/amds-20162017-x86-roadmap-zen-is-in

    AMD’s CTO Mark Papermaster just left the stage at AMD’s 2015 Financial Analyst Day, and one of the first things he covered was AMD’s CPU technology roadmap for the next couple of years.

    The big question on everyone’s mind over the last year has been AMD’s forthcoming x86 Zen CPU, developed by Jim Keller’s group, and Papermaster did not disappoint, opting to address the future of AMD’s x86 plans first and foremost. AMD is not releasing the complete details on Zen until closer to its launch in 2016, but today they are providing some basic details on the CPU’s abilities and their schedule for it.

    In terms of features, AMD once again confirmed that they’re aiming for significantly higher performance, on the order of a 40% increase in Instructions Per Clock (IPC) throughput. In a significant shift in threading for AMD’s x86 CPUs, Zen will also shift from Bulldozer’s Clustered Multithreading (CMT) to Simultaneous Multithreading (SMT, aka Intel’s Hyperthreading).

    Meanwhile AMD has confirmed that Zen will be shipping in 2016, and that it will be produced on a yet-to-be-named FinFET process. Our bet would be that AMD continues to use traditional partner (and spin-off fab) GlobalFoundries, who will be ramping up their 14nm equipment for next year as part of their licensing/partnership with Samsung to implement Samsung’s 14nm FinFET process.

    Reply
  7. Tomi Engdahl says:

    Facebook’s Open Compute could make DIY data centres feasible
    Fitting in never looked so good
    http://www.theregister.co.uk/2015/05/07/build_v_buy_your_datacenter2/

    DIY vs COTS: Part 2 Last time I looked at the PC versus console battle as a metaphor for DIY versus Commercial Off the Shelf (COTS) data centres, and touched on the horrors of trying to run a DIY data centre.

    Since 2011, however, we’ve had the Open Compute Project, initiated by Facebook. The ideal is some kind of industry-standard data centre, with OCP members agreeing open interfaces and specs.

    Does Open Compute shift the DIY data centre story back in favour of build and against buy?

    The PC-versus-console metaphor is relevant to an examination of Open Compute. Of particular note is that after the dust had cleared, the PC gaming market settled into a sense of equilibrium.

    DIY data centre types of today are fortunate. The market as a whole has ground down the margins on servers to the point that the Open Compute Project handles most of this. For those needing a little bit more vendor testing and certification, Supermicro systems with their integrated IPKVMs are such good value for dollar that you can go the DIY route but still get most of the benefits of COTS and still keep it cheap.

    The ODMs are getting in on the deal. Huawei, Lenovo, ZTE, Xiaomi, Wiwynn/Wistron, Pegatron, Compal and Lord knows how many others are now either selling directly to customers or selling on through the channel with minimal added margin.

    Recently, it has been noted that this is affecting storage. It’s only noticeable there because – unlike servers – it’s a relatively new phenomenon. Networking is next, and I wouldn’t want to be the CEO of Cisco right about now.

    DIY data centres made easy

    The selection of ultra-low-margin servers and storage is getting better and better every month. In fact, the low-margin providers are even now certifying their solutions for various hypervisors. The near universal adoption of virtualisation combined with the sheer number of people adopting these models means that finding benchmarks, quirks, foibles and driver conflicts is now a minor research project for the average SMB.

    Put simply: DIY data centres are no longer required to recreate significant chunks of the COTS vendors’ value-add, because there is an in-between.

    Anyone willing to maintain their own spares cabinet and deal with some minor supply chain issues can use Open Compute to make DIY data centres cheaply and easily. And while that’s great for an enterprise, the value of this decreases the smaller you get.

    We also had many Sys Admins working together, pooling the resources of MSPs and individual companies until collectively we had the budget of an upper-midmarket company and the manpower resources of an enterprise. Even with the advances to the DIY market, the cost of dealing with supply chain issues makes COTS the better plan.

    A very limited number of people will know what you’re talking about if you quote an Open Compute model. Only the nerdiest of spreadsheet nerds will understand what you mean if you try to use a Supermicro model name for anything. Nearly everyone knows what’s in a Dell R710 or can discuss issues with HP Gen 9 servers in depth.

    COTS servers are the consoles of the data centre. In the fullness of time, you’ll end up paying a lot more. From not getting access to BIOS updates unless you pay for support to having to pay a licence to access the IPKVM functionality of your server’s baseband management controller, COTS servers cost. They’re a lot up front and they nickel and dime you until the bitter end.

    The collapse of COTS server margins seems inevitable. Even the proudest banner waver of ultra-high-margin servers – HP – has decided to build an Open Compute solution. Win/win for everyone, surely?

    Not quite.

    Unlike the PC-versus-console wars, the DIY-versus-COTS data centre wars are just beginning. The Open Compute Project may ultimately be the stabilising influence that provides a usable equilibrium between low margin and model stability, but we’re not quite there yet.

    HP is only dipping its toe in the water. You can’t buy their Open Compute nodes unless they really like you, and you buy lots of them. It’s their way of not losing hyperscale customers, it is not here to benefit the masses. Dell, Supermicro and so forth don’t sell Open Compute nodes and we are only just now starting to see differentiation in Open Compute designs.

    Open Compute servers are where gaming notebooks were about 10 years ago.

    Storage is lagging servers here by about two years, but is experiencing greater pressures from hyper-convergence

    Software has already advanced to the point that it doesn’t really matter if all the nodes in your virtual cluster are the same.

    When the majority of the market can be served by a “sweet spot” Open Compute server that is essentially homogenous, regardless of supplier, then DIY data centre supply chain issues evaporate.

    Hardware vendors won’t survive that level of commoditisation. They need margins to keep shareholders happy, buy executive yachts and keep up the completely unrealistic double-digit annual growth that Wall Street demands. As soon as any of the hardware-sales-reliant big names start posting consistent revenue declines, they’ll enter a death spiral and evaporate.

    Selling the hardware departments to China, as IBM has done with its x86 commodity line, will only delay this for a few years. Manufacturers in China can show growth by taking customers away from the US makers, but very soon here those US suppliers will no longer be selling hardware. Then the OEMs in China will have to compete among themselves. That battle will be vicious and there will be casualties.

    Market consolidation will occur and the handful of survivors will collectively – but not together, if you know what I mean, anti-trust investigators – put up prices.

    DIY versus COTS is an old, old debate. There is no one answer to this that will apply to all businesses. It is, however, worth taking the time to think beyond this refresh cycle and beyond just the hardware.

    Reply
  8. Tomi Engdahl says:

    New Intel Platform Rich with Transformative Features
    http://www.cio.com/article/2860881/cpu-processors/new-intel-platform-rich-with-transformative-features.html

    When Intel launches a new family of processors, there is always a lot of attention paid to increases in core counts, gains in clocks speeds, and other CPU fundamentals. Those advances are, of course, extremely important to application owners and data center operators, but they are never the full story.

    With every rollout of a new CPU platform there is always a rich backstory filled with new features that will make a big difference in application performance, data center efficiency, security, and more—although they may not grab the big headlines. That’s the case with the launch of the Intel® Xeon® Processor E5-2600/1600 v3 product families.

    Along with the big performance of gains people have come to expect, the new server platform offers many innovations at the microarchitecture level that will drive notable advances in scientific and enterprise computing. Let’s look at a couple of these unsung heroes.

    Reply
  9. Tomi Engdahl says:

    C Code On GitHub Has the Most “Ugly Hacks”
    http://developers.slashdot.org/story/15/05/06/2218200/c-code-on-github-has-the-most-ugly-hacks

    An analysis of GitHub data shows that C developers are creating the most ugly hacks — or are at least the most willing to admit to it. To answer the question of which programming language produces the most ugly hacks, ITworld’s Phil Johnson first used the search feature on GitHub, looking for code files that contained the string ‘ugly hack’.

    C leads the way in “ugly hacks”
    http://www.itworld.com/article/2918583/open-source-tools/c-leads-the-way-in-ugly-hacks.html

    Reply
  10. Tomi Engdahl says:

    Is 3D memory too expensive to produce?

    PC and server storage is moving at a rapid pace towards semiconductor-based SSDs. They are faster, quieter and consume significantly less power. According to recent analyses, however, solid-state disks can be too expensive to manufacture, which will slow the spread of this technology.

    This year, manufacturing a terabyte of SSD costs 53 times as much as a terabyte of mechanical disk, and next year the figure will still be 49 times. This not only slows down SSD penetration, it also casts a shadow over the future profitability of the entire hard drive industry.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=2771:onko-3d-muisti-liian-kallis-valmistaa&catid=13&Itemid=101

    Reply
  11. Tomi Engdahl says:

    Hardware
    Russian Company Unveils Homegrown PC Chips
    http://hardware.slashdot.org/story/15/05/10/232238/russian-company-unveils-homegrown-pc-chips

    Reader WheatGrass shares the news from Russia Insider that MCST, Moscow Center of SPARC Technologies, has begun taking orders for Russian-made computer chips,

    Besides the chips, MCST unveiled a new PC, the Elbrus ARM-401 which is powered by the Elbrus-4C chip and runs its own Linux-based Elbrus operating system. MCST said that other operating systems, including Microsoft’s Windows and other Linux distributions, can be installed on the Elbrus ARM-401.

    Russia Unveils Homegrown PC Microprocessor Chips
    http://russia-insider.com/en/business/russias-mcst-unveils-homegrown-pc-microprocessor-chips/ri6603

    Moscow Center of SPARC Technologies begins taking orders for Russian-made computer chips, but one expert warns the technology lags five years behind that of western companies

    Moscow Center of SPARC Technologies (MCST) has announced it’s now taking orders for its Russian-made microprocessors from domestic computer and server manufacturers. The chip, called Elbrus-4C, was fully designed and developed in MCST’s Moscow labs. It’s claimed to be the most high-tech processor ever built in Russia, and is comparable with Intel Corp’s Core i3 and Intel Core i5 processors.

    Besides the chips, MCST unveiled a new PC, the Elbrus ARM-401 which is powered by the Elbrus-4C chip and runs its own Linux-based Elbrus operating system.

    “This chip has been designed for everything connected with the extremely critical applications, such as military, information security, governance,” said Basil Moczar, an analyst with the Russian research company ITResearch, to Kommersant. “It’s priced cheaper and offers protection of information, so I do not see any problems.”

    However not everyone was convinced Elbrus-4C was up to scratch with its U.S. made competitors. Sergei Viljanen, editor in chief of the Russian-language PCWorld website, told Kommersant the design was inferior to foreign chips.

    “Russian processor technology is still about five years behind the west,” Viljanen said. “Intel’s chips come with a 14 nanometer design, whereas the Elbrus is 65 nanometers, which means they have a much higher energy consumption.”

    Reply
  12. Tomi Engdahl says:

    Mark Bergen / Re/code:
    Google’s strategy for making mobile searches more lucrative: new mobile ad formats, options for ordering food directly from inside mobile search, more

    Google’s Mobile Search Strategy: Bake In and Take Out
    http://recode.net/2015/05/09/googles-mobile-search-strategy-bake-in-and-take-out/

    On Tuesday, Google confirmed what many had long suspected: Mobile searches now outnumber those on desktop.

    That must have been worrying news in the Googleplex, because, while the company does not share the figures, evidence suggests mobile search is much less lucrative. That’s why executives spent most of the last earnings call assuring investors that, indeed, they do have a plan for mobile.

    It’s important. On desktop, Google built a superior search engine and watched the dollars roll in. But, on mobile, it may have to fight harder. Rivals like Amazon and Pinterest* are beginning to lay the groundwork for search revenue products, which would give advertisers more than one option.

    “It’s not going to be the Google show,” one media buyer said about mobile spending.

    Reply
  13. Tomi Engdahl says:

    Linux Mint Will Continue to Provide Both Systemd and Upstart, Users Will Choose
    http://news.softpedia.com/news/Linux-Mint-Will-Continued-to-Provide-Both-Systemd-and-Upstart-Users-Will-Choose-480758.shtml

    After Debian had adopted systemd, many of the distros based on this operating system made the switch as well. Ubuntu has already implemented systemd, but Linux Mint is still providing dual options for users.

    A good chunk of the Debian community was not happy with the systemd integration, but those problems didn’t go beyond Debian. The Ubuntu transition was painless, and no one really put up a fight, and the Linux Mint team chose the middle ground. As it stands right now, Linux Mint is providing users with the possibility of running their favorite system either with systemd or upstart.

    From the looks of it, this decision was not taken lightly by the developers, and it will remain in place for the time being. The devs consider that the project will wait enough for systemd to become more stable and mature. Only after this happens, systemd and all the other components in the family will be implemented by default.

    Reply
  14. Tomi Engdahl says:

    Gartner: Dell nowhere to be seen as storage SSD sales go flat
    EMC leads the way
    http://www.theregister.co.uk/2015/05/11/gartner_says_storage_ssd_sales_flat/

    Well, here’s another nail in the coffin for traditional storage arrays; Gartner claims array SSD sales were up just one per cent year-on-year, while server flash sales grew 51 per cent.

    Total enterprise SSD revenue in 2014 was $5.77bn, up 30 per cent year-on-year.

    Within that server SSD revenue was $3.9bn, and storage SSD revenue was $1.8bn.

    Vendor shares were:

    Intel at $1.58bn — a 27.4 per cent share
    Samsung with $848.2m — 14.7 per cent
    SanDisk/Fusion-io with $848.2m — 14.7 per cent
    Western Digital/HGST with $590m — 10.2 per cent
    Micron $294m — 5.1 per cent

    What do we notice? Neither Cisco nor Dell appear, meaning their AFA revenues are less than $50m

    Reply
  15. Tomi Engdahl says:

    Automation eases the pain of software patching
    Cure your fear of updates
    http://www.theregister.co.uk/2015/05/11/how_to_ease_the_pain_of_software_patching/

    The three biggest challenges for IT managers are security, reliability and performance. Ideally, an organisation’s software will excel at all three but in practice we know that isn’t true.

    Even the best-laid software development plans let bugs through which can cause problems in all these areas. So patching the organisation’s software is key.

    Patching application and operating system software is often seen just as a way to eliminate security flaws, but it can also create a more efficient system by preventing performance bugs and memory leaks. It is a crucial part of any corporate IT security strategy, but unfortunately for IT managers it is also difficult to do.

    The Australian Government Department of Defence found that operating system and application patching could have stopped 85 per cent of all security incidents it experienced, when used alongside application whitelisting and restricting administrative privileges.

    It identified application and operating system patching as essential for security, but also ranked them as medium or high in terms of upfront cost (including equipment and technical complexity) and maintenance. Staff costs figured heavily in both these areas.

    Simply put, patch management is complex, time consuming and hard to document when carried out manually.

    Strategies to Mitigate Targeted Cyber Intrusions
    http://www.asd.gov.au/infosec/top-mitigations/mitigations-2014-table.htm

    Reply
  16. Tomi Engdahl says:

    Diablo Memory Channel Storage Architecture Gains Traction
    http://www.eetimes.com/document.asp?doc_id=1326564&

    Diablo Technologies’ Memory Channel Storage (MCS) architecture is making its way into more servers. The company recently announced its technology will be integrated into latest edition of Lenovo’s X6 servers to power its eXFlash MCU products. The news comes on the heels of Diablo announcing its legal issues with Netlist have been resolved.

    Lenovo’s new X6 rack and blade servers support up to 32 eXFlash DIMMs per system, which translates into 12.8TBs of high-performance system storage and designed for accelerating a wide range of enterprise workloads, including databases, analytics and virtualization. Lenovo acquired the x6 server product line from IBM last year.

    The MCS architecture connects NAND flash directly to the CPU through a server’s memory bus; persistent memory is essentially attached to the host processors of a server or storage array. This configuration allows for linear scalability in performance at extremely low latencies for high-demand enterprise applications, the company told EE Times in November 2013.

    Reply
  17. Tomi Engdahl says:

    Also, open source software is susceptible to new network dangers, as the Heartbleed bug showed last year. Code quality is an important factor in how many errors – and thus how many weaknesses – you have to fight. In this respect Linux stacks up very well.

    Code is normally considered high quality when there is about one defect per thousand lines of code. For the Linux kernel the defect density is 0.55, so Linux is almost twice as good as code that is usually considered the highest quality.

    This has not always been so.

    In 2007 the Linux kernel consisted of nearly 3.5 million lines of code; 425 bugs were identified and 217 of them were repaired. Now the kernel has almost 20 million lines of code.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=2803:linux-koodissa-on-vahan-virheita&catid=13&Itemid=101

    Reply
  18. Tomi Engdahl says:

    Toshiba shares go over a cliff after probe into hidden losses
    Loss-making projects from 2012 may have been swept under the carpet
    http://www.theregister.co.uk/2015/05/12/toshiba_shares_go_over_a_cliff_after_probe_into_hidden_losses/

    Toshiba’s shares have dived 22 per cent in a week after the company commenced a probe into accounting irregularities, rescinded previous profit guidance and cancelled a planned dividend payment.

    Toshiba’s a colossal company, so its information technology arms will be isolated to an extent because they’re not under suspicion

    Toshiba’s main relevance to Reg readers is solid state disks, a field in which moving slowly probably isn’t an option at present.

    Reply
  19. Tomi Engdahl says:

    Microsoft’s run Azure on Nano server since late 2013
    Redmond discovers the limits of cloud-first
    http://www.theregister.co.uk/2015/05/12/microsofts_run_azure_on_nano_server_since_late_2013/

    Microsoft’s only just announced its new Nano server, but has been using it in production on Azure since late 2013.

    So says D. Britton Johnston, CTO for Microsoft Worldwide Incubation, with whom The Reg chatted over the weekend.

    But Britt said that running the server on Azure has also taught Microsoft that what works in the cloud won’t work on-premises. In Azure bit barns, he explained, Microsoft just shifts workloads to another server in the case of any hardware glitch. While businesses will build some redundancy into their systems, Microsoft’s tweaked Nano server and the Cloud Platform System converged hardware rigs it announced last year to recognise that businesses can’t just throw hardware at a problem.

    Cloud-first, it seems, only gets you so far on-premises.

    The newly-announced Azure Stack – an on-premises version of Azure – also reflects on-premises constraints. Britt explained that Azure Stack will represent one way to do private and/or hybrid cloud in Microsoft’s new way of thinking. If you want to base your rig on Windows Server, Hyper-V, System Centre and Virtual Machine Manager, feel free to do so.

    Reply
  20. Tomi Engdahl says:

    There’s a BIG problem with Microsoft’s VDI rules
    Dedication, that’s what you need
    http://www.theregister.co.uk/2015/05/12/dedication_microsoft_virtual_desktop/

    If you’re talking virtual desktop infrastructure (or VDI) there are a few options – VMware Horizon View, Microsoft Remote Desktop Services, even smaller players like 2X Software – but chances are you’re going to plump for the biggest hitter, the company which has been doing it for the longest. You guessed it, I’m talking about Citrix.

    From its early days delivering multi-user access technologies for OS/2 and Windows NT, Citrix Systems’ core offering has always been server-based.

    From WinFrame and MetaFrame to Presentation Server and latterly Citrix XenApp, Citrix has been delivering desktop-like application access on Windows Server operating systems.

    Usually this works well, but occasionally there can be compromises in dressing up a server OS like a desktop one (and hoping nobody trips over the cracks). So when Citrix released XenDesktop it was seen as the answer to all of these issues.

    If you want to operate XenDesktop (other VDI technologies are available) in a corporate environment, you will encounter no issues around licensing, particularly if you have a Select Licence Agreement with Microsoft for easy access to desktop licences.

    If you purchased Software Assurance on top of your Select Licences, you automatically receive an entitlement to utilise your local desktop licences in a dedicated hosted environment.

    That’s great news for sysadmins in internal environments – any user licences in play automatically transfer in a 1:1 relationship with software used on your virtual desktop infrastructure

    What happens if you’re not big enough for a Select Licence with Software Assurance, though? Microsoft has you covered there, too. You can purchase a Virtual Desktop Access licence, which entitles a user with a licensed version of Windows on their local machine to use a single instance of a Windows desktop operating system on a VDI server, in that same 1:1 ratio.

    So as long as your local licence is nice and legal, and you purchase the extra VDA license for about $100 per year, then you’re covered in the virtual environment.
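
    Purely to illustrate that 1:1 VDA arithmetic (a minimal sketch; the roughly $100/user/year figure comes from the article above, while the user counts are made-up examples, not Microsoft list pricing):

        # Sketch: rough annual VDA cost for shops without Software Assurance.
        # The ~$100/user/year figure is the approximate price quoted above;
        # the fleet sizes are hypothetical examples.

        VDA_PRICE_PER_USER_PER_YEAR = 100   # USD, approximate

        def annual_vda_cost(vdi_users):
            # VDA licences map 1:1 to the users accessing virtual desktops.
            return vdi_users * VDA_PRICE_PER_USER_PER_YEAR

        for users in (50, 250, 1000):       # hypothetical fleet sizes
            print(f"{users:>5} VDI users -> about ${annual_vda_cost(users):,} per year")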

    Reply
  21. Tomi Engdahl says:

    Swift vs. Objective-C: 10 reasons the future favors Swift
    http://www.infoworld.com/article/2920333/mobile-development/swift-vs-objective-c-10-reasons-the-future-favors-swift.html

    It’s high time to make the switch to the more approachable, full-featured Swift for iOS and OS X app dev

    Reply
  22. Tomi Engdahl says:

    Widespread Windows XP Use Remains Among Businesses Despite End-of-Life: Survey
    http://www.securityweek.com/widespread-windows-xp-use-remains-among-businesses-despite-end-life-survey

    Windows 10 may be on the way, but many organizations are still stuck deep in the past when it comes to using versions of Microsoft’s flagship operating system.

    According to a survey from Bit9 + Carbon Black, many enterprises in the U.K. and the U.S. are still running Windows XP, which reached its end-of-life last year. The survey, which fielded responses from 500 medium and large businesses in those countries, found that 34 percent are still using a combination of Windows XP and Windows Server 2003. Another 10 percent continue to use Windows XP exclusively, bringing the total percentage of organizations in the survey using XP to 44 percent.

    “More than a year after the end-of-support deadline for XP, the fact that 44 percent of companies surveyed are still using it is startling,” said Chris Strand, PCIP, senior director of compliance and governance for Bit9 + Carbon Black, in a statement. “Companies that have been running Windows XP without compensating controls—such as application control combined with continuous monitoring solutions—have been exposed to a host of possible exploits that may have allowed hackers to take advantage of the vulnerabilities associated with the unsupported machines. These vulnerabilities could lead to the compromise of companies’ critical infrastructure and loss of essential information—including customers’ personal data.”

    “Although patching for XP and 2003 end-of-life will be rare, companies should still do the normal blocking and tackling, like proper network segmentation and patching applications that run on top of the OS,”

    Reply
  23. Tomi Engdahl says:

    Silicon/software package to build USB Type-C to DisplayPort cables
    http://www.edn-europe.com/en/silicon/software-package-to-build-usb-type-c-to-displayport-cables.html?cmp_id=7&news_id=10006303&vID=209#.VVMAnJNLZ4A

    Cypress Semiconductor has assembled a complete silicon and software offering for USB Type-C to DisplayPort adapters (dongles).

    The EZ-PD CCG1-based Type-C to DisplayPort Cable solution enables connectivity between a USB Type-C receptacle and a DisplayPort (DP) or Mini DisplayPort (mDP) receptacle, allowing emerging Type-C notebooks and monitors to be interoperable with older products.

    Reply
  24. Tomi Engdahl says:

    Oracle proposes to deliver Java 9 SDK on September 22nd, 2016
    It’s a Thursday afternoon, in case you were wondering
    http://www.theregister.co.uk/2015/05/13/oracle_proposes_to_deliver_of_java_9_sdk_on_september_22nd_2016/

    Oracle’s chief architect of the Java Platform Group, Mark Reinhold, has outlined a “proposed schedule for JDK 9” that will see it delivered on Thursday, September 22nd, 2016.

    Reinhold’s post on the topic offers the following development milestones:

    10 December 2015: Feature Complete
    04 February 2016: All Tests Run
    25 February 2016: Rampdown Start
    21 April 2016: Zero Bug Bounce
    16 June 2016: Rampdown Phase 2
    21 July 2016: Final Release Candidate
    22 September 2016: General Availability

    What’s coming in version 9? Oracle says we can expect a new modular structure, modular source code and modular run-time images.

    Reply
  25. Tomi Engdahl says:

    Firefox 38 Arrives With DRM Required To Watch Netflix
    http://yro.slashdot.org/story/15/05/12/172238/firefox-38-arrives-with-drm-required-to-watch-netflix

    Mozilla today launched Firefox 38 for Windows, Mac, Linux, and Android. Notable additions to the browser include Digital Rights Management (DRM) tech for playing protected content in the HTML5 video tag on Windows, Ruby annotation support, and improved user interfaces on Android.

    Firefox 38 arrives with DRM tech required to watch Netflix video, Ruby annotation, revamped look on Android
    http://venturebeat.com/2015/05/12/firefox-38-arrives-with-drm-tech-required-to-watch-netflix-video-ruby-annotation-revamped-look-on-android/

    Both desktop and mobile releases are getting Ruby annotation support, a long-time request from East Asian users. Ruby is essentially extra text attached to the main text for indicating the pronunciation or meaning of the corresponding characters — adding the feature to the browser means users no longer need to install add-ons like HTML Ruby.

    Ruby is widely used in Japanese publications, and it is also common in Chinese books for children, educational publications, and dictionaries.
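
    For readers who have never seen ruby text, the markup itself is tiny; a minimal sketch that prints an HTML5 <ruby> fragment (the word and its reading are just an illustrative example):

        # Sketch: build an HTML5 <ruby> fragment -- base text plus a small
        # pronunciation gloss (<rt>) of the kind Firefox 38 can now render
        # without add-ons. The word/reading pair is only an example.

        def ruby(base, reading):
            return f"<ruby>{base}<rt>{reading}</rt></ruby>"

        print(ruby("漢字", "kanji"))
        # -> <ruby>漢字<rt>kanji</rt></ruby>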

    Desktop

    The most important addition to Firefox 38 is undoubtedly integration with the Adobe Content Decryption Module (CDM) to play back DRM-wrapped content on Windows Vista and later. Mozilla announced the controversial (given the closed nature of DRM) move just under a year ago.

    The company’s reasoning for the decision is the same today:

    We are enabling DRM in order to provide our users with the features they require in a browser and allow them to continue accessing premium video content. We don’t believe DRM is a desirable market solution, but it’s currently the only way to watch a sought-after segment of content.

    The CDM in question is downloaded from Adobe shortly after you install Firefox 38 or higher, and it activates when you first interact with a site that uses Adobe CDM. Mozilla says some premium video services, including Netflix, have already started testing the solution in Firefox.

    Mozilla has designed a security sandbox that sits around the CDM, adding another layer of security for code that the company does not control itself. Firefox users can also remove the CDM from their copy of the browser, and the company even offers a separate Firefox release without the CDM enabled by default

    Reply
  26. Tomi Engdahl says:

    Mark Walton / Ars Technica:
    Nvidia debuts its Grid cloud gaming service with 1080p 60 FPS streaming for Shield Hub beta members who own a Shield device and have 30Mbps Internet or better

    Nvidia turns on 1080p 60 FPS streaming for its Grid cloud gaming service
    Shield device, 30Mbps connection, and beta membership needed to get in on the action.
    http://arstechnica.com/gaming/2015/05/nvidia-turns-on-1080p-60-fps-streaming-for-its-grid-cloud-gaming-service/

    Reply
  27. Tomi Engdahl says:

    Criticizing the Rust Language, and Why C/C++ Will Never Die
    http://developers.slashdot.org/story/15/05/12/1920246/criticizing-the-rust-language-and-why-cc-will-never-die

    An anonymous reader sends an article taking a harsh look at Rust, the language created by Mozilla Research, and arguing that despite all the flaws of C and C++, the two older languages are likely to remain in heavy use for a long time to come. Here are a few of the arguments: “[W]hat actually makes Rust safe, by the way? To put it simple, this is a language with a built-in code analyzer and it’s a pretty tough one: it can catch all the bugs typical of C++ and dealing not only with memory management, but multithreading as well.”

    Criticizing the Rust Language, and Why C/C++ Will Never Die
    http://www.viva64.com/en/b/0324/

    I couldn’t but notice how much interest the readers of this blog had shown in the topic “should we let kittens play with new balls of wool?” So I felt like sharing a few more of my reflections on a related subject in regard to the C and C++ languages and the odds that Rust will kill them. No need to tell you that it will inevitably cause a big holy war, so before you proceed, think twice if you really want to go on reading this post and especially participate in a “constructive debate” via comments.

    So, to sum it up, personally I will be investing my time into studying C/C++ rather than Rust in the next 5 or so years. C++ is an industrial standard. Programmers have been using it to solve a huge variety of tasks for over 30 years now.

    A C++ programmer will hardly ever have any difficulties finding a job with a more than worthy salary and, if necessary, can quickly learn Rust. But the opposite scenario is very, very unlikely. By the way, the language choice is far from the only or the most important factor when picking a new job. Besides, a skilled C/C++ programmer can easily find their way in PostgreSQL’s or Linux kernel’s source code, has access to modern powerful development tools, and has a pile of books and articles at hand (for example on OpenGL).

    So, take care of your health and don’t waste your time – you have less of those than you think!

    Reply
  28. Tomi Engdahl says:

    The Effects of Industrial Temperature on Embedded Storage
    http://rtcmagazine.com/articles/view/108055

    For many applications, it is imperative that embedded systems developers understand what effects extended high temperatures have on SSDs. To assist the decision process and provide reference points, OEMs can use several helpful calculations to determine the SSD endurance and data retention characteristics of different types of NAND flash media.
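
    The article does not spell those calculations out, but the usual back-of-the-envelope endurance estimate looks roughly like the sketch below (all drive parameters are hypothetical examples, and temperature-related retention derating is left out):

        # Sketch: common SSD endurance approximations. Every parameter value
        # here is a hypothetical example, not a figure from the article.

        def total_bytes_written_gb(capacity_gb, pe_cycles, write_amplification):
            """Approximate TBW: capacity * rated P/E cycles / write amplification."""
            return capacity_gb * pe_cycles / write_amplification

        def lifetime_years(tbw_gb, host_writes_gb_per_day):
            """How long the drive lasts at a given daily write load."""
            return tbw_gb / (host_writes_gb_per_day * 365)

        tbw = total_bytes_written_gb(capacity_gb=256, pe_cycles=3000, write_amplification=3.0)
        print(f"Approximate endurance: {tbw / 1000:.0f} TB written")
        print(f"At 50 GB/day:          {lifetime_years(tbw, 50):.1f} years")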

    Reply
  29. Tomi Engdahl says:

    Quarterly losses double at Hadoop hawker Hortonworks
    Although revenues and customer base figures both grew
    http://www.theregister.co.uk/2015/05/13/hortonworks/

    Open source Hadoop purveyor Hortonworks doubled its quarter-on-quarter net losses in the first quarter of 2015 to a loss of $40m (£25m).

    Meanwhile, revenue rose 167 per cent to $22.8m (£14.5m).

    Reply
  30. Tomi Engdahl says:

    Speaking in Tech: The post-SSD world – we’re talking about memories
    Netflix is the hammer and every data centre is a nail
    http://www.theregister.co.uk/2015/05/13/speaking_in_tech_episode_159/

    Hosted by Greg Knieriemen, Ed Saipetch and Sarah Vela. This week we talk Cloud Foundry, the problems with Solid State storage (losing power for a while — and then data for the rest of your life),

    Reply
  31. Tomi Engdahl says:

    The Programming Talent Myth
    http://developers.slashdot.org/story/15/05/05/0134242/the-programming-talent-myth

    Jake Edge writes at LWN.net that there is a myth that programming skill is somehow distributed on a U-shaped curve and that people either “suck at programming” or that they “rock at programming”, without leaving any room for those in between. Everyone is either an amazing programmer or “a worthless use of a seat” which doesn’t make much sense. If you could measure programming ability somehow, its curve would look like the normal distribution.

    The truth is that programming isn’t a passion or a talent, says Edge, it is just a bunch of skills that can be learned. Programming isn’t even one thing, though people talk about it as if it were; it requires all sorts of skills and coding is just a small part of that. Things like design, communication, writing, and debugging are needed. If we embrace this idea that “it’s cool to be okay at these skills”—that being average is fine—it will make programming less intimidating for newcomers.

    The programming talent myth
    http://lwn.net/Articles/641779/

    Mediocrity

    When he said that he was a mediocre programmer, some in the audience probably didn’t believe him, he said. Why is that? The vast majority of those in the audience have never actually worked with Kaplan-Moss, so why would they assume his coding ability is exceptional? In the absence of any other data, people should assume that he is solidly in the middle of the curve. Part of the problem there is the lack of a way to even measure coding ability. “We are infants in figuring out how to measure our ability to produce software”, he said. What are our metrics? Lines of code—what does that measure? Story points? “What even is a story point?”, he wondered.

    Programmers like to think they work in a field that is logical and analytical, but the truth is that there is no way to even talk about programming ability in a systematic way. When humans don’t have any data, they make up stories, but those stories are simplistic and stereotyped. So, we say that people “suck at programming” or that they “rock at programming”, without leaving any room for those in between. Everyone is either an amazing programmer or “a worthless use of a seat”.

    But that would mean that programming skill is somehow distributed on a U-shaped curve. Most people are at one end or the other, which doesn’t make much sense. Presumably, people learn throughout their careers, so how would they go from absolutely terrible to wonderful without traversing the middle ground? Since there are only two narratives possible, that is why most people would place him in the “amazing programmer” bucket. He is associated with Django, which makes the crappy programmer label unlikely, so people naturally choose the other.

    But, if you could measure programming ability somehow, its curve would look like the normal distribution. Most people are average at most things. This is not Lake Wobegon, most people are not above average, he said.

    A dangerous myth

    This belief that programming ability fits into a bi-modal distribution (i.e. U-shaped) is both “dangerous and a myth”. This myth sets up a world where you can only program if you are a rock star or a ninja. It is actively harmful in that is keeping people from learning programming, driving people out of programming, and it is preventing “most of the growth and the improvement we’d like to see”, he said to a big round of applause.

    Just skills to be learned

    The truth is that programming isn’t a passion or a talent, it is just a bunch of skills that can be learned. Programming isn’t even one thing, though he had been talking about it as if it were; it requires all sorts of skills and coding is just a small part of that. Things like design, communication, writing, and debugging are needed. Also, “we need to have at least one person who understands Unicode”, he said to laughter.

    There are multiple independent skills, but we tend to assume that someone is the minimum of their skill set. Sure, you might be a good designer, speak and write well, and be a great project manager, but you don’t know how a linked list works, so “get out of the building”. Like any other skill, you can program professionally, occasionally, or as a hobby, as a part-time job or a full-time job. You can program badly, program well, or, most likely, be an average programmer.

    Reply
  32. Tomi Engdahl says:

    Two out of three websites run on Unix

    Windows currently serves 32.4 per cent of websites, while Unix-family servers account for the remaining 67.6 per cent.

    W3Techs statistics show that 52.5 per cent of the Unix servers run Linux. BSD, HP-UX and Solaris each have shares of one per cent or less, so Linux most likely also runs the majority of those sites whose exact operating system W3Techs cannot identify.

    On the desktop, Windows’ position remains steadfast. According to Net Applications, the various versions of Windows are used on more than 90 per cent of personal computers. Mac OS has accounted for 6-7 per cent for a long time, and Linux still runs on only about one and a half per cent of computers.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=2829:kaksi-kolmesta-nettisivusta-ajaa-unixia&catid=13&Itemid=101
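
    Combining the percentages quoted above gives a rough idea of Linux’s overall share (a quick arithmetic sketch, nothing more):

        # Sketch: combine the W3Techs percentages quoted above.
        unix_share = 0.676          # share of all websites served from Unix-family systems
        linux_within_unix = 0.525   # share of those Unix sites positively identified as Linux

        confirmed_linux = unix_share * linux_within_unix
        print(f"Linux, positively identified: {confirmed_linux:.1%} of all websites")
        print(f"Other or unidentified Unix:   {unix_share - confirmed_linux:.1%} "
              "(most of it, per the article, probably Linux as well)")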

    Reply
  33. Tomi Engdahl says:

    The decline in tablet sales continues

    In January-March, 51.9 million tablet computers were sold, eight per cent fewer than a year earlier. The biggest manufacturers, Apple and Samsung, lost the most ground.

    According to Strategy Analytics, Apple sold 12.6 million iPads in the first quarter, a quarter fewer than a year earlier, and its market share fell to 24.3 per cent.

    Samsung fared even worse: it sold 8.8 million tablets, with sales down by nearly one third.

    Smaller manufacturers, by contrast, managed to grow their sales at the start of the year. Huawei, for example, more than doubled its volume to 1.3 million tablets.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=2831:tablettimyynnin-lasku-jatkuu&catid=13&Itemid=101
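
    A quick check of the arithmetic behind those market-share figures (using only the unit numbers quoted above):

        # Sketch: market shares implied by the quoted unit figures.
        total_units = 51.9      # million tablets shipped in the quarter
        apple_units = 12.6      # million iPads
        samsung_units = 8.8     # million Samsung tablets

        print(f"Apple:   {apple_units / total_units:.1%}")    # ~24.3%, matching the article
        print(f"Samsung: {samsung_units / total_units:.1%}")  # ~17.0%, implied by the units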

    Reply
  34. Tomi Engdahl says:

    Jeff Grubb / VentureBeat:
    Ubisoft made $430M from digital channels in fiscal 2015, up 96.8% YoY, confirming digital distribution as main revenue driver

    Ubisoft: ‘Digital distribution was the main driver’ of our fiscal 2015 growth
    http://venturebeat.com/2015/05/12/ubisoft-digital-distribution-was-the-main-driver-of-our-fiscal-2015-growth/

    Assassin’s Creed publisher Ubisoft reported the results of its fiscal 2015 today, and the company explained that digital sales contributed significantly to its improved bottom line. Digital made $430 million for Ubisoft last year. That represents a 96.8 percent year-over-year surge, and the company said this segment of its business was the “main driver” of its growth. This helped the French publisher set a new record of 77 percent gross margins.

    Reply
  35. Tomi Engdahl says:

    Technology Lab / Information Technology
    Cortana for all: Microsoft’s plan to put voice recognition behind anything
    Microsoft and co. make computer vision, voice, and text processing a Web request away.
    http://arstechnica.com/information-technology/2015/05/cortana-for-all-microsofts-plan-to-put-voice-recognition-behind-anything/

    The Internet of things without keyboards

    The early targets for the Project Oxford services are clearly mobile devices. The software developer kits released by Microsoft—in addition to those supporting .NET and Windows—include speech tools for iOS and Android, and face and vision tools for Android. The REST APIs can be adapted for any platform.

    But it’s also clear that Microsoft is thinking about other devices that aren’t traditional personal computers—devices that generally fall under the banner of Internet of Things. “Especially if you have a device that you’re not going to hook up a mouse or keyboard to,” Gaglon said, “to have a language model behind it that can process intent and interactions is… very powerful.”

    The longterm result could be that developers of all sorts of devices could build speech and computer vision into their products, delivering the equivalent of Cortana on everything from televisions to assembly line equipment to household automation systems. All such implementations would be customized to specific tasks and backed by cloud-based artificial intelligence. Some of the components of projects in fields such as cloud robotics could easily find their way into the Azure and Bing clouds.

    By making the Project Oxford services as accessible as possible, Microsoft is positioning Azure and Bing to become the cloud platform for this new world of smart products. And ironically in the process, Windows could become even more relevant… as a development platform in a “post-Windows” world.
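
    To make “a Web request away” concrete, here is a deliberately generic sketch of the kind of REST call the article describes. The endpoint URL, header name and payload shape are hypothetical placeholders, not the real Project Oxford API; consult Microsoft’s documentation for the actual interface.

        # Generic REST sketch only -- the endpoint, header and JSON shape are
        # hypothetical placeholders, not the actual Project Oxford API.
        import json
        import urllib.request

        ENDPOINT = "https://api.example.com/vision/v1/analyze"   # placeholder URL
        API_KEY = "YOUR-KEY-HERE"                                # placeholder key

        payload = json.dumps({"url": "https://example.com/photo.jpg"}).encode("utf-8")
        request = urllib.request.Request(
            ENDPOINT,
            data=payload,
            headers={"Content-Type": "application/json",
                     "Api-Key": API_KEY},                        # illustrative header name
        )

        # Would obviously fail against the placeholder endpoint; shown only to
        # illustrate that the services are plain HTTP calls from any platform.
        with urllib.request.urlopen(request) as response:
            print(json.load(response))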

    Reply
  36. Tomi Engdahl says:

    Big Data, OpenStack and object storage: Size matters, people
    Consider your needs before rushing out and investing in new storage tech
    http://www.theregister.co.uk/2015/05/18/size_matters_storage_open_stack/

    Do you have the problem?

    These technologies are all designed to solve problems on a big scale:

    Object storage = storing huge amounts of unstructured data
    Big Data (analytics) = analysing huge amounts of data
    OpenStack (and cloud management platforms in general) = managing huge pools of compute, networking and storage resources

    The common word here is HUGE. Otherwise, it’s like shooting sparrows with bazookas.

    Reply
  37. Tomi Engdahl says:

    Stripped to the core and full of Xfce: Xubuntu Linux loses it
    Like Ubuntu but a darn sight slimmer
    http://www.theregister.co.uk/2015/05/18/xubuntu_review/

    The Xubuntu project recently unveiled a stripped-down build of its Xfce-based Ubuntu: Xubuntu core. Core offers a very basic version of the Xfce desktop, along with the basic look and feel of Xubuntu, but any extras like an office suite, media player, Xfce add-ons or even a web browser will have to be installed separately.

    The “core” name is a little confusing since Ubuntu proper recently began shipping Ubuntu Core, a lightweight version of Ubuntu optimized for container-based environments like Docker. Xubuntu core is unrelated and derived from Xubuntu, not Ubuntu Core.

    Reply
  38. Tomi Engdahl says:

    Cisco tipped to buy ‘dominant’ STORAGE BADBOY Nutanix
    It’s not clear if even the Borg could pull off this assimilation
    http://www.theregister.co.uk/2015/05/18/cisco_to_buy_nutanix_simplivity_looks_a_likelier_target/

    Jared Rinderer, a senior research analyst at Equity Capital Research Group, has claimed Cisco is about to buy converged infrastructure enfant terrible Nutanix.

    “Cisco would gain the most strategic value and long-term accretive revenue contribution with the acquisition of privately-held Nutanix, which is the clear market leader thus far,” the analyst wrote.

    He said with Cisco sitting on $3.2 billion of cash as of January, it will issue debt to complete the acquisition, “which may be announced during Nutanix’s user/partner conference in Miami from 8-10 June”.

    Reply
  39. Tomi Engdahl says:

    OpenStack private clouds are SCIENCE PROJECTS says Gartner
    Don’t Try This At Home and watch out for the lock-in from hired help
    http://www.theregister.co.uk/2015/05/18/openstack_private_clouds_are_science_projects_says_gartner/

    OpenStack can run a fine private cloud, if you have lots of people to throw at the project and are willing to do lots of coding, according to Alan Waite, a research director at Gartner.

    If you are thinking of OpenStack, Waite reckons you could therefore do worse than to hand over an implementation project to its backers, because working with the stack is not straightforward.

    “OpenStack is great as an open source standard for infrastructure access,” Waite told the Gartner IT Infrastructure, Operations & Data Center Summit in Sydney today. “It has great APIs. But it is not a cloud management tool. It is a framework on which you build and this is why people get into trouble: it is a science project and you need to be aware what you are getting into.”

    That complexity, and the need to do a fair bit of integration between modules, is the reason Gartner sees mainly large organisations trying OpenStack. Indeed, Waite said the firm has counted just 740 implementations anywhere, “because the use cases are pretty small.”

    Hyperscale operations are the sweet spot, for now, which is why the likes of eBay, PayPal, WalMart and BMW are prominent among implementers.

    Waite also said OpenStack’s structure deserves scrutiny. Organisations fond of medium-term roadmaps from key suppliers will need to come to terms with OpenStack’s six month-horizons. Those who accept Linus Torvalds’ control of Linux as a useful stabiliser need to understand that OpenStack’s many project co-ordinators could conceivably choose almost any path for the projects they lead.

    Reply
  40. Tomi Engdahl says:

    It’s the end of life as we know it for Windows Server 2003
    Can you survive without support?
    http://www.theregister.co.uk/2015/05/18/its_the_end_of_life_as_we_know_it_for_windows_server_2003/

    Windows Server 2003 will pass out of Microsoft support on July 14, 2015. Different organisations report different numbers, but all agree that there are millions of Server 2003 servers still running in the wild.

    Microsoft says there are 11 million Server 2003 servers still running. Gartner says eight million. Several internet searches bring up various other numbers, but I think it is safe to say somewhere between five and 15 million Server 2003 servers are still out there.

    My hunch is that Gartner is under-estimating here. The analyst focuses on enterprises and on the whole wouldn’t care if small businesses were all to get flushed into the sun. A Spiceworks poll of workplaces reports that 57 per cent of respondents have at least one Server 2003 instance still running.

    There are a number of reasons why people don’t want to migrate: familiarity with the older operating system; money; and in many cases the complexity of the workloads running on those Server 2003 instances.

    Reply
  41. Tomi Engdahl says:

    SimpliVity opens its doors to KVM, OpenStack
    Hyper-V also on the roadmap for hyper-convergence evangelists
    http://www.theregister.co.uk/2015/05/20/simplivity_gives_the_nod_to_kvm_openstack/

    Hyper-converged infrastructure company SimpliVity has announced it now supports the KVM hypervisor and OpenStack.

    SimpliVity’s schtick is that everything below the hypervisor – especially data compression and de-duplication for back-up, networking and cloud-tiering storage – should be handled by a hyper-converged appliance. It therefore offers to pop a dedicated PCIe card into its 2U “OmniCubes” to handle those chores without bothering the CPU and degrading application performance, because it assumes you buy servers to run apps and not to do boring background chores like dedupe.

    Until now, SimpliVity has assumed apps get to live in the virtualised paradise that is ESXi. But the company’s now decided to support KVM too.

    One reason is that supporting KVM means SimpliVity plays nice with OpenStack. As the two entities share an ambition to change the way large-scale IT gets done, supporting KVM makes a lot of sense.
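
    To make that “boring background chore” concrete, here is a toy sketch of block-level deduplication by content hashing. It is purely conceptual and has nothing to do with SimpliVity’s actual implementation, which the article says is handled in hardware on a PCIe card:

        # Toy sketch of block-level deduplication via content hashing -- the
        # kind of background work the article says gets offloaded. Conceptual
        # illustration only, not SimpliVity's implementation.
        import hashlib

        BLOCK_SIZE = 4096   # bytes; a common deduplication granularity

        def dedupe(data: bytes):
            """Return (unique block store, list of per-block fingerprints)."""
            store, fingerprints = {}, []
            for offset in range(0, len(data), BLOCK_SIZE):
                block = data[offset:offset + BLOCK_SIZE]
                digest = hashlib.sha256(block).hexdigest()
                store.setdefault(digest, block)   # keep one copy per unique block
                fingerprints.append(digest)
            return store, fingerprints

        data = b"A" * 16384 + b"B" * 4096         # 5 logical blocks, 2 unique
        store, refs = dedupe(data)
        print(f"{len(refs)} logical blocks stored as {len(store)} physical blocks")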

    Reply
  42. Tomi Engdahl says:

    Do any REAL CIOs believe we’re in a post PC world? No.
    Reg roundtable delivers non-rose-tinted view of 2020
    http://www.theregister.co.uk/2015/05/20/2020_tech_future_roundtable/

    Reply
  43. Tomi Engdahl says:

    If IT isn’t careful, marketing will soon be telling us what to do
    Tell us how CIOs can hold the line in a multi-channel world
    http://www.theregister.co.uk/2015/05/20/roundtable_june_multichannel_world/

    CIO Manifesto Every business today is a multichannel business with more communication channels generating more data than ever before.

    Marketing is a key driver here as it struggles to keep the business relevant and on the agenda of a new breed of buyer – and supplier. It needs to be social, integrated, responsive, open, honest. But IT has to hold it all together, and we want to know how you’re managing to do this.

    How does IT need to change to ensure its survival in a multi-channel world

    For tech, this is a new world of challenges. Social is alien. It’s a world largely beyond your control, one that can generate piles of hard-to-interrogate data that needs to be integrated with existing business systems.

    And someone also has to tie this all together, ensure customer data is secure, and that employees’ social communications are not dragging the company into the compliance mire.

    Worse still, the world of marketing technology is moving at such a phenomenal pace it throws the spotlight on IT like never before. It needs to be agile, fast, responsive and massively delivery focussed – but are your traditional systems, practices and teams constructed and managed to meet these challenges?

    Either way, we tend to get blamed for things we only heard about when they go wrong.

    Reply
  44. Tomi Engdahl says:

    How does IT need to change to ensure its survival in a multi-channel world
    17th June 2015. 6pm
    Facing up to marketing and social challenges
    http://whitepapers.theregister.co.uk/paper/view/3904/

    Reply
  45. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Microsoft rolls out touch-first test versions of Word, Excel, PowerPoint for Android phones; devices must run KitKat 4.4.x or higher, have at least 1GB RAM

    Microsoft rolls out touch-first Office apps preview for Android phones
    http://www.zdnet.com/article/microsoft-rolls-out-touch-first-office-apps-preview-for-android-phones/

    Summary: Microsoft is rolling out test versions of its standalone, touch-first Word, Excel and PowerPoint apps for Android phones.

    Reply
  46. Tomi Engdahl says:

    AMD Preps GPU, DRAM Stack
    http://www.eetimes.com/document.asp?doc_id=1326630&

    SAN FRANCISCO – AMD plans to combine on a single device one of its graphics processors and the SK Hynix high bandwidth memory (HBM) DRAM stack. The company described the technology but not the specifics of the product it claims will beat to market similar devices described by its rival Nvidia.

    The approach will deliver more than 100 Gbytes/second of memory bandwidth, up from 28 GB/s using external GDDR5 DRAMs in today’s boards. The GPU die and DRAM stack will sit side-by-side on a silicon interposer in a so-called 2.5-D stack, a technique first pioneered in FPGAs by Xilinx.

    Although the HBM stack runs at a slower clock rate than GDDR5 chips (500 compared to 1,750 MHz), the HBM chips sit on a 1,024-bit link compared to a 32-bit interface for GDDR5. The HBM stack also runs at a lower voltage than GDDR5 (1.3 versus 1.5V).

    As a result, HBM can deliver 35 GBytes/s of bandwidth per watt, more than three times the 10.66 GB/s/W of GDDR5. In addition, the HBM stack fits into a 35 mm² area, 94 percent smaller than the GDDR5 chips required to deliver as much capacity.
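
    The headline bandwidth figures follow directly from the bus widths and clock rates quoted above; a quick check, assuming double-data-rate signalling for HBM and quad-pumped signalling for GDDR5:

        # Sketch: the peak-bandwidth arithmetic behind the quoted figures.
        # Assumes HBM transfers data twice per clock and GDDR5 four times
        # per clock, which is how the 28 GB/s and >100 GB/s numbers work out.

        def peak_bandwidth_gb_s(bus_width_bits, clock_mhz, transfers_per_clock):
            """Peak bandwidth in GB/s."""
            return bus_width_bits * clock_mhz * 1e6 * transfers_per_clock / 8 / 1e9

        hbm = peak_bandwidth_gb_s(1024, 500, 2)     # ~128 GB/s per stack
        gddr5 = peak_bandwidth_gb_s(32, 1750, 4)    # ~28 GB/s per chip
        print(f"HBM stack:  {hbm:.0f} GB/s")
        print(f"GDDR5 chip: {gddr5:.0f} GB/s")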

    Reply
  47. Tomi Engdahl says:

    Qt celebrated its 20th anniversary

    The Qt development environment is now 20 years old. Version 0.90 of the platform-independent development framework was published on May 20, 1995. Today Qt tools are used by over 800,000 developers worldwide.

    From the beginning, Qt has been available under both free and commercial licenses.

    Nokia acquired Trolltech in 2008 and sold the Qt business to Digia in 2012.

    Under Digia, Qt’s application area has quickly extended to other embedded uses, for example automotive systems.

    Qt version 5.5 is coming soon.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=2857:qt-taytti-20-vuotta&catid=13&Itemid=101

    Reply
  48. Tomi Engdahl says:

    SAP chief sees no takers for software rival Salesforce
    http://www.reuters.com/article/2015/05/20/us-salesforce-com-sap-se-idUSKBN0O50L720150520

    German software company SAP’s chief executive once again ruled out any move to acquire Salesforce.com, then went further by saying that its richly valued rival is unlikely to be acquired by any other player in the industry.

    Microsoft, Oracle, IBM and SAP have all been touted as potential buyers of Salesforce, which last month said it had been contacted by an unnamed suitor about a potential takeover and that it had hired financial advisers.

    Established players in the industry are all looking to boost Internet-based delivery of their business software products to fend off competition from pure cloud-based rivals, a market Salesforce pioneered and where it remains category leader.

    But SAP Chief Executive Bill McDermott remains adamant that his company is not interested and told reporters that Salesforce’s core customer relationship management (CRM) products have become commoditised and are now widely available from SAP and other software providers.

    Reply
  49. Tomi Engdahl says:

    It pays to fake it: Test your flash SAN with a good simulation
    How to measure your storage performance
    http://www.theregister.co.uk/2015/05/21/it_pays_to_fake_it_test_your_flash_san_with_a_good_simulation/

    It is pretty obvious that storage systems vary. You could reply, with some justification: “No shit, Sherlock!”

    What is less obvious and more useful to know, however, is how and why they vary and how the variation – not just between all-disk, hybrid and all-flash arrays but even between different arrays of the same class – can affect your applications.

    Increasingly the answers to those questions are being found in simulations, whether that’s simulations of entire SANs or simply of existing workloads designed to stress test both the SAN and the attached storage systems.
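
    Dedicated SAN and workload simulators are a product category of their own, but the basic idea of a synthetic workload is simple enough to sketch. The toy below just times random 4 KiB reads against a local test file; the file size, block size and I/O count are arbitrary, and real tools work at far larger scale with far more realistic access patterns:

        # Toy synthetic random-read workload -- a crude, local-file stand-in
        # for what dedicated SAN/workload simulators do at scale. Parameters
        # are arbitrary; results will mostly reflect the OS page cache.
        import os, random, time

        PATH, FILE_SIZE, BLOCK, IOS = "testfile.bin", 64 * 1024 * 1024, 4096, 2000

        with open(PATH, "wb") as f:                  # create a 64 MB test file
            f.write(os.urandom(FILE_SIZE))

        latencies = []
        with open(PATH, "rb") as f:
            for _ in range(IOS):
                offset = random.randrange(0, FILE_SIZE - BLOCK)
                start = time.perf_counter()
                f.seek(offset)
                f.read(BLOCK)                        # one random 4 KiB read
                latencies.append(time.perf_counter() - start)

        latencies.sort()
        print(f"avg latency: {sum(latencies) / len(latencies) * 1e6:.1f} us, "
              f"p99: {latencies[int(0.99 * len(latencies))] * 1e6:.1f} us")
        os.remove(PATH)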

    Reply
