Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation will change every business sector and our daily work even more than before. It also changes the IT sector itself: traditional software packages are moving rapidly into the cloud. The need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring become truly possible. The workload of software implementation projects will shrink significantly because the software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key areas of change, as security threats increasingly come from the digital world. For the IT sector, digitalisation simply means: “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If it is, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The promised increase in productivity is stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America adopt between one and ten mobile applications, indicating a significant acceptance of these solutions. Embracing mobility promises to increase visibility and responsiveness in the supply chain when properly leveraged. Increased employee productivity and business process efficiencies are seen as key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. These two trends are connected, or at least they should connect. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken more into use. Cloud is the next-generation supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up about 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011”).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation you need to combine traditional IT with agile and web-scale innovation. There is value in both the back-end operational systems and the fast-changing world of user engagement. You are then effectively operating two-speed IT (also called bimodal IT or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
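
As a rough illustration of what such an API-centric layer can look like, here is a minimal sketch in Python using Flask. All names in it (the endpoint, the legacy lookup function, the port) are hypothetical placeholders, not taken from any particular product.

```python
# Minimal sketch of an API layer that lets fast-changing "agile IT" front ends
# consume a slow-changing back-end system of record. All names are hypothetical.
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_order_from_legacy_erp(order_id):
    # Stand-in for a call into the traditional back end
    # (database query, SOAP call, batch extract, etc.).
    return {"id": order_id, "status": "shipped"}

@app.route("/api/v1/orders/<int:order_id>")
def get_order(order_id):
    # Mobile and web clients only ever see this stable JSON contract,
    # so the back end can evolve at its own slower pace behind it.
    return jsonify(fetch_order_from_legacy_erp(order_id))

if __name__ == "__main__":
    app.run(port=8080)
```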

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware – after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 12-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe some months back. This will keep system administrators busy, because there is only around half a year left, and an upgrade to Windows Server 2008 or Windows Server 2012 may run into difficulties. Microsoft and support companies do not seem interested in continuing Windows Server 2003 support, so for those who need it, custom-priced support can be “incredibly expensive”. At this point it seems that many organizations want a new architecture, and one option they are considering is moving the servers to the cloud.

Windows 10 is coming to PCs and mobile devices. Just a few months back Microsoft unveiled a new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). A technical preview is already available. Microsoft says to expect AWESOME things of Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show the Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft Windows has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. Windows Phone Store and Windows Store would be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for the Apple iPad and iPhone. It also has an email client compatible with both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The article “Microsoft May Soon Replace Internet Explorer With a New Web Browser” says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drives with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Later (in the next two years) Intel promises 10TB+ SSDs thanks to 3D Vertical NAND flash memory. SSD interfaces are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large amounts in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present, solid-state drives with extreme capacities are very expensive. I expect that in 2015 SSD prices will still be so much higher than hard disk prices that everybody who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
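
To make the $/GB argument concrete, here is a tiny sketch; the drive prices below are purely illustrative assumptions, not figures from the text.

```python
# Illustrative-only cost-per-gigabyte comparison; the prices are made-up
# example figures, not data from the article.
drives = {
    "1TB consumer SSD": {"capacity_gb": 1000, "price_usd": 400.0},
    "4TB desktop HDD":  {"capacity_gb": 4000, "price_usd": 140.0},
}

for name, d in drives.items():
    print(f"{name}: ${d['price_usd'] / d['capacity_gb']:.3f}/GB")

# With numbers like these the SSD costs roughly ten times more per gigabyte,
# which is why hybrid setups keep hot data on flash and bulk data on disks.
```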

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about the device. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There are no great reasons for growth or decline to be seen in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems to be more and more taken over by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. The compute stick is likened to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and ARM processors (for example Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects the stick-sized PC market to grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable revolution. Tablets and smart phones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes little more than a thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least it is worth testing.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.” When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower, it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported on very many Linux systems. And it is not just Linux anymore, as Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
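
As a rough sketch of the package-and-run workflow Docker provides, here is a minimal example that drives the standard docker command-line tool from Python. The image name, tag and port are made-up placeholders, and a Dockerfile is assumed to exist in the current directory.

```python
# Minimal sketch of building and running a containerized app via the
# standard docker CLI; image/tag/port values are hypothetical placeholders.
import subprocess

IMAGE = "example/webapp:1.0"  # hypothetical image name

# Build an image from the Dockerfile in the current directory.
subprocess.check_call(["docker", "build", "-t", IMAGE, "."])

# Run it detached, mapping container port 8080 to host port 8080. The
# container shares the host kernel but gets its own sandboxed filesystem,
# process table and network namespace.
container_id = subprocess.check_output(
    ["docker", "run", "-d", "-p", "8080:8080", IMAGE]
).decode().strip()
print("started container", container_id)

# Later, stop and remove the container.
subprocess.check_call(["docker", "stop", container_id])
subprocess.check_call(["docker", "rm", container_id])
```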

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers. China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (a Linux-based desktop OS). Dell commercial PCs are to preinstall NeoKylin in China. The plan for change is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers, which abstract the control plane from the underlying physical equipment. These controllers virtualize the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.

New software-defined networking apps will be delivered in 2015. And so will software-defined storage. And software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard PCs from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing, and it expects that over half the chips it sells to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like A.W.S. “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips. Customers are willing to pay a little more for the special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, come from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price of data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Special servers and supercomputer systems have long been accelerated by moving calculations to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits can deliver a lot more computing power at much lower power consumption, but programming them has traditionally been time-consuming. This can change with the introduction of new tools (the next step from techniques learned from GPU acceleration). Xilinx has developed its SDAccel tools to develop algorithms in the C, C++ and OpenCL languages and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
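
To give a feel for the kind of kernel code such tool flows start from, here is a minimal, generic OpenCL example in Python using pyopencl. It runs on whatever OpenCL device is available (GPU, CPU, or an FPGA board with a vendor runtime) and is only a sketch of the programming model, not an SDAccel-specific workflow.

```python
# Generic OpenCL vector-add sketch with pyopencl; not tied to any vendor tool.
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()          # pick any available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is written once in OpenCL C; the same source can be compiled
# for GPUs or, with the appropriate tools, synthesized for an FPGA.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```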


If there is one enduring trend for memory design in 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (as the lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal Memory for Instant-On Computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working with memristor memories that are promised to be akin to RAM but can hold data without power.  The memristor is also denser than DRAM, the current RAM technology used for main memory. According to HP, it is 64 and 128 times denser, in fact. You could very well have a 512 GB memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting on emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    Employees love their IT departments (almost very nearly true)
    …While trying to do their own IT support
    http://www.theregister.co.uk/2015/06/17/employees_it_department_appreciation_survey/

    Haters gonna hate, but not corporate employees who actually quite like the support their IT departments provide.

    That’s according to a survey of 2,500 staffers in Australia, France, Germany, the UK and the USA that landed on our helpdesk this week. When asked to give a letter grade, more than 80 per cent of respondents gave their IT departments an A or a B.

    Even better – maybe – employees in increasing numbers are learning how to do their own level 1 support, with 81 per cent of respondents trying to sort out their own IT issues before asking IT for help.

    This “follows the trend of increased self-sufficiency and autonomy in end users and indicates that end users are more resourceful than ever”, according to LANDESK, an IT systems management vendor, which commissioned the survey.

    IT issues are not much of a drag on user productivity – 46 per cent losing less than one hour of work per month and 80 per cent lost less than three hours work per month.

    Reply
  2. Tomi Engdahl says:

    Expert: This is how an IT mega-project succeeds

    More than half of US healthcare IT mega-projects are successful.

    “In the United States, even billion-dollar giant projects that cross state borders have been completed successfully,” Obey said.

    He did not see much difference between projects run by a private company and those run by a public organization, such as in Finland. However, Obey pointed out that in healthcare IT the United States is some 5-7 years ahead of Finland.

    The trend is the use of analytics, digital technologies and mobile devices. Data makes it possible to move from treating illnesses to preventing them.

    How to succeed?

    For Obey, the planning phase of a project is the most important. That is when you determine what you are aiming for and what the needs are.

    The key to the success of reform projects can be found in the leadership. Project leaders must have the objectives and visions clearly in mind. They must also ensure that the objectives reach the entire project team and the design process.

    “I recommend holding workshops with the project leaders to identify which goals are not aligned with each other,” said Obey.

    Project managers should also take into account the clinical point of view of doctors.

    “There may not be anyone saying where an organization should go and how to get there. The goal must be articulated clearly,” Obey said.

    The system vendor also needs to be aware of the same issues.

    One important factor for success is also money. According to Obey, healthcare IT projects often fail because too much was saved in the budget.

    Obey says that the best-managed US healthcare IT projects save money, improve the quality of care and improve patient safety.

    “It means continuous development. The journey is 10-15 years long,”

    Source: http://www.tivi.fi/rss/2015-06-17/Asiantuntija-N%C3%A4in-it-megahanke-onnistuu-3324065.html

    Reply
  3. Tomi Engdahl says:

    Cromwell Schubarth / Silicon Valley Business Journal:
    Andreessen Horowitz makes case against tech bubble talk — There has been a lot of debate about whether a new tech bubble has formed in the past couple of years, but the investors at Andreessen Horowitz aren’t buying it. — In a presentation to its big investors, the Menlo Park firm co-founded …

    Andreessen Horowitz makes case against tech bubble talk
    http://www.bizjournals.com/sanjose/blog/techflash/2015/06/andreessen-horowitz-makes-case-against-tech-bubble.html?page=all

    There has been a lot of debate about whether a new tech bubble has formed in the past couple of years, but the investors at Andreessen Horowitz aren’t buying it.

    In a presentation to its big investors, the Menlo Park firm co-founded by Marc Andreessen and Ben Horowitz argued that this time it really is different. We aren’t watching a replay of previous booms and busts in the sector, they said, particularly not another dotcom meltdown.

    — More money is flowing into privately owned tech companies than at any time since the bubble of the late 1990s, but this is mostly because of the megabucks being poured into a handful of late-stage companies that have become known as unicorns.

    Reply
  4. Tomi Engdahl says:

    Kyle Orland / Ars Technica:
    Oculus Rift and Touch hands-on: light, easy to wear, unobtrusive cable, audio similar to prototype, finger tracking unrefined, but shared play natural, engaging

    Hands-on: Reaching out and touching someone with Oculus’ Touch controllers
    Plus, we give our first impressions of the finalized consumer Rift headset.
    http://arstechnica.com/gaming/2015/06/hands-on-reaching-out-and-touching-someone-with-oculus-touch-controllers/

    Reply
  5. Tomi Engdahl says:

    Satya Nadella email to employees on aligning engineering to strategy
    http://news.microsoft.com/2015/06/17/satya-nadella-email-to-employees-on-aligning-engineering-to-strategy/

    Over the past year, I have said that Microsoft aspires to empower every person and every organization on the planet to achieve more. To do this, building the best-in-class productivity services and platforms for the mobile-first, cloud-first world is at the heart of our strategy, with three interconnected and bold ambitions:

    Reinvent productivity and business processes
    Build the intelligent cloud platform
    Create more personal computing

    To better align our capabilities and, ultimately, deliver better products and services our customers love at a more rapid pace, I have decided to organize our engineering effort into three groups that work together to deliver on our strategy and ambitions. The changes take effect today.

    Terry Myerson will lead a new team, Windows and Devices Group (WDG), enabling our vision of a more personal computing experience powered by the Windows ecosystem.
    WDG will drive Windows as a service across devices of all types and build all of our Microsoft devices including Surface, HoloLens, Lumia, Surface Hub, Band and Xbox.

    Scott Guthrie will continue to lead the Cloud and Enterprise (C+E) team focused on building the intelligent cloud platform that powers any application on any device. The C+E team will also focus on building high-value infrastructure and business services that are unique to enterprise customers, such as data and analytics products, security and management offerings, and business processes.

    Qi Lu will continue to lead the Applications and Services Group (ASG) that is focused on reinventing productivity.

    Reply
  6. Tomi Engdahl says:

    Oculus CEO: Consumer Rift and a suitable PC will cost about $1,500
    http://arstechnica.com/gaming/2015/05/oculus-ceo-consumer-rift-will-cost-about-1500-with-a-suitable-pc/

    As Oculus nears its recently announced “Q1 2016″ launch of the consumer version of its Oculus Rift headset, the company has remained tight-lipped about specifically how much it will charge for the hardware. Today, Oculus CEO Brendan Iribe gave the biggest hint yet of that price: a complete Rift system, including a computer that can power the experience, should cost about $1,500.

    “We are looking at an all-in price, if you have to go out and actually need to buy a new computer and you’re going to buy the Rift … at most you should be in that $1,500 range,” Iribe said during an interview at the Re/code conference in Rancho Palos Verdes, California.

    Reply
  7. Tomi Engdahl says:

    Elop and Others Leaving Microsoft, Myerson Taking Bigger Role
    http://tech.slashdot.org/story/15/06/17/160223/elop-and-others-leaving-microsoft-myerson-taking-bigger-role

    Former Nokia CEO Stephen Elop and “Scroogled” mastermind Mark Penn are leaving Microsoft as part of a fresh company reorganization. “We are aligning our engineering efforts and capabilities to deliver on our strategy and, in particular, our three core ambitions,” says CEO Satya Nadella in an e-mail to employees today.

    Stephen Elop and Mark Penn leave Microsoft in company shake-up
    http://www.theverge.com/2015/6/17/8796055/stephen-elop-mark-penn-leave-microsoft

    Reply
  8. Tomi Engdahl says:

    LUCI4HPC
    http://www.linuxjournal.com/content/luci4hpc?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    The software described in this article is designed for a Beowulf-style cluster. Such a cluster commonly consists of consumer-grade machines and allows for parallel high-performance computing. The system is managed by a head node and accessed via a login node. The actual work is performed by multiple compute nodes. The individual nodes are connected through an internal network. The head and login node need an additional external network connection, while the compute nodes often use an additional high-throughput, low-latency connection between them, such as InfiniBand.

    This rather complex setup requires special software, which offers tools to install and manage such a system easily. The software presented in this article—LUCI4HPC, an acronym for lightweight user-friendly cluster installer for high performance computing—is such a tool.

    The aim is to facilitate the maintenance of small in-house clusters, mainly used by research institutions, in order to lower the dependency on shared external systems. The main focus of LUCI4HPC is to be lightweight in terms of resource usage

    LUCI4HPC focuses only on essential features

    The current beta version of LUCI4HPC comes in a self-extracting binary package and supports Ubuntu Linux. Execute the binary on the head node, with an already installed operating system, to trigger the installation process. During the installation process, you have to answer a series of questions concerning the setup and configuration of the cluster. These questions include the external and internal IP addresses of the head node, including the IP range for the internal network, the name of the cluster as well as the desired time zone and keyboard layout for the installation of the other nodes.

    The installation script offers predefined default values extracted from the operating system for most of these configuration options. The install script performs all necessary steps in order to have a fully functional head node. After the installation, you need to acquire a free-of-charge license on the LUCI4HPC Web site and place it in the license folder. After that, the cluster is ready, and you can add login and compute nodes.

    It is very easy to add a node. Connect the node to the internal network of the cluster and set it to boot over this network connection. All subsequent steps can be performed via the Web-based control panel.

    An important part of a cluster software is the scheduler, which manages the assignment of the resources and the execution of the job on the various nodes. LUCI4HPC comes with a fully integrated job scheduler, which also is configurable via the Web-based control panel.

    The control panel uses HTTPS, and you can log in with the user name and password of the user that has the user ID 1000. It is, therefore, very easy and convenient to change the login credentials—just change the credentials of that user on the head node.

    The installation process of LUCI4HPC is handled with a preseed file for the Ubuntu installer as well as pre- and post-installation shell scripts. These shell scripts, as well as the preseed file, are customizable.

    Resources
    LUCI4HPC: http://luci.boku.ac.at

    Reply
  9. Tomi Engdahl says:

    AMD Beats Nvidia to 2.5-D Graphics
    Fiji GPU debuts with SK Hynix memory stack
    http://www.eetimes.com/document.asp?doc_id=1326890&

    AMD beat archrival Nvidia to the goal of rolling out high-end graphics cards that use DRAM chip stacks to provide more memory bandwidth — and thus performance — on relatively small, low-power boards. AMD rolled out the four new graphics cards at E3, a conference for serious gamers and those who develop for them.

    The new Radeon R9 300 series is based on AMD’s new Fiji GPUs and High Bandwidth Memory (HBM) chip stacks from SK Hynix. “Fiji is the most complex and highest performance GPU we’ve ever built — it is the first with High Bandwidth Memory,” AMD CEO Lisa Su told attendees.

    AMD described the 2.5-D HBM stack earlier this month but did not say which GPU would use the next-generation memory developed with SK Hynix. Its flagship Radeon Fury X uses 4GB of HBM memory, delivering up to 512 GB/second of memory bandwidth — an increase of around 63% over the previous generation Radeon R9 290X, principal analyst Patrick Moorhead of Moor Insights & Strategy wrote — to reach a 1.5x improvement in performance per watt.

    The 28nm Fury X is a liquid-cooled card with 4,096 stream processors and 64 compute units at clock speeds up to 1.05 GHz. Fury X can perform at up to 8.6 teraflops, a 65% increase over the previous generation, to display games at 45 frames per second (fps) in 4K and 65 fps on future 5K displays.

    Reply
  10. Tomi Engdahl says:

    After Uproar, Disney Cancels Tech Worker Layoffs
    http://tech.slashdot.org/story/15/06/17/1611211/after-uproar-disney-cancels-tech-worker-layoffs

    The NY Times previously reported that Disney made laid-off workers train their foreign replacements. The Times now reports that Disney has reversed its decision to lay off the workers and canceled training of the replacements. This follows public uproar, two investigations by the Department of Labor into outsourcing firms,

    In Turnabout, Disney Cancels Tech Worker Layoffs
    http://www.nytimes.com/2015/06/17/us/in-turnabout-disney-cancels-tech-worker-layoffs.html?_r=0

    In late May, about 35 technology employees at Disney/ABC Television in New York and Burbank, Calif., received jarring news. Managers told them that they would all be laid off, and that during their final weeks they would have to train immigrants brought in by an outsourcing company to do their jobs.

    The training began, but after a few days it was suspended with no explanation. In New York, the immigrants suddenly stopped coming to the offices. Then on June 11, managers summoned the Disney employees with different news: Their layoffs had been canceled.

    Although the number of layoffs planned was small, the cancellation, which was first reported by Computerworld, a website covering the technology business, set off a hopeful buzz among tech employees in Disney’s empire. It came in the midst of a furor over layoffs in January of 250 tech workers at Walt Disney World in Orlando, Fla. People who lost jobs there said they had to sit with immigrants from India, some on temporary work visas known as H-1B, and teach them to perform their jobs as a condition for receiving severance.

    Reply
  11. Tomi Engdahl says:

    Hazelcast adds amped-up caching to its in-memory tech
    Company goes NUTS with enterprise add-on to open-source core
    http://www.theregister.co.uk/2015/06/18/hazelcast_adds_ampedup_caching_to_its_inmemory_tech/

    In-memory open source NoSQLer Hazelcast has announced a caching update to its product range, claiming v3.5 of its High Density Memory Store offers “100’s of GB of near cached data to clients for massive application scalability”.

    The near cache provides hundreds of gigabytes of in-memory data on a single application server, meaning “an instant, massive increase in data access speed on fewer application server instances to power the same total throughput”.

    What’s a near cache? Hazelcast tells us thus: “A cache of data that is kept locally in the memory of the client application where it takes microseconds to access. That’s in contrast to the data in the Hazelcast cluster which is also kept in-memory, but because of network latency, it takes milliseconds to access.”

    http://www.theregister.co.uk/2014/09/19/hazelcast_wants_to_eviscerate_exadata/
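
    A minimal illustration of the near-cache idea described above: a local in-process cache in front of a slower remote store. This is a generic sketch, not Hazelcast's actual API; the RemoteStore class, eviction policy and timings are hypothetical.

    ```python
    # Generic near-cache sketch: keep a local copy of remote entries so repeat
    # reads cost an in-process lookup instead of a network round trip.
    # Not Hazelcast's API; RemoteStore is a hypothetical stand-in.
    class RemoteStore:
        def __init__(self):
            self._data = {}
        def get(self, key):          # imagine milliseconds of network latency here
            return self._data.get(key)
        def put(self, key, value):
            self._data[key] = value

    class NearCache:
        def __init__(self, remote, max_entries=100_000):
            self.remote = remote
            self.local = {}                  # microsecond-fast local lookups
            self.max_entries = max_entries
        def get(self, key):
            if key in self.local:            # near-cache hit: no network trip
                return self.local[key]
            value = self.remote.get(key)     # miss: fetch from the cluster
            if value is not None:
                if len(self.local) >= self.max_entries:
                    self.local.pop(next(iter(self.local)))  # crude eviction
                self.local[key] = value
            return value

    store = RemoteStore()
    store.put("user:42", {"name": "Alice"})
    cache = NearCache(store)
    print(cache.get("user:42"))   # first read goes to the remote store
    print(cache.get("user:42"))   # second read is served locally
    ```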

    Reply
  12. Tomi Engdahl says:

    JavaScript creator Eich’s latest project: KILL JAVASCRIPT
    Someday you’ll code for the web in any language, and it’ll run at near-native speed
    http://www.theregister.co.uk/2015/06/18/brendan_eich_announces_webassembly/

    Brendan Eich, the former CEO of Mozilla, has announced a new project that could not only speed up web applications but could eventually see the end of JavaScript as the lingua franca of web development.

    Dubbed WebAssembly, the new effort is a kind of successor to Asm.js, the stripped-down JavaScript dialect that backers describe as an “assembly language for the web.” Like Asm.js, it executes via a JavaScript engine. The difference is that WebAssembly is a new, low-level binary format, like a bytecode, which allows it to load and run even faster than Asm.js.

    The long-term goal, Eich said, is for WebAssembly to become a kind of binary object format for the web, one that can be used as a compiler target for all kinds of languages – including but not limited to JavaScript.

    “Bottom line: with co-evolution of [JavaScript and WebAssembly], in a few years I believe all the top browsers will sport JS engines that have become truly polyglot virtual machines,” Eich said in a Wednesday blog post.

    WebAssembly has apparently been underway as a skunkworks project for some time and has already gained the support of developers at Google, Microsoft, and Mozilla, all three of which have previously built Asm.js optimizations into their respective browsers.

    browser vendors have “come to a general agreement on a shared set of goals.”

    One of those goals is “interoperability with JavaScript.” In keeping with that, one deliverable for the project is a “polyfill” that implements WebAssembly in JavaScript, so that today’s browsers will be able to execute WebAssembly code once the spec is finalized – albeit with slightly degraded performance.

    What’s more, although WebAssembly is itself a binary format, the specification will include a text format that can be rendered when anyone does a View Source, preserving (at least in part) the openness of the web.

    As far as language support, the initial focus will be on compiling C/C++ to WebAssembly. Once a viable backend for the LLVM compiler has been developed, work on other languages will commence – although you can expect that to be some way down the road.

    a modified version of Google’s V8 JavaScript engine that can execute WebAssembly natively have already been posted to the project’s GitHub page.

    https://www.w3.org/community/webassembly/

    https://github.com/WebAssembly

    Reply
  13. Tomi Engdahl says:

    Three things you need to break down those company silos
    Why soft skills matter in the drive to shared services
    http://www.theregister.co.uk/2015/06/18/breaking_down_silo_mentality/

    If you’re the guy tasked with breaking down silos, should you be breaking down the people who police those silos first? We explore how to de-mine your team ahead of your brownfield project.

    Something common between all of the frustrating employers and clients has been when the place has operated as a collection of separate silos.

    What does matter is the third of the three definitions in that same entry: “a system, process, department, etc. that operates in isolation from others”.

    Companies have structure: all but the smallest couldn’t work without one. They’re split into divisions, the divisions may be further split into departments, and so on. It’s a necessary thing to do in order to preserve order.

    You sometimes have to …

    The title of this feature makes it pretty clear that we think a company operating in a silo mentality is a Bad Thing and that the structure needs to be sorted out. Before we get to how you do that, though, let’s look at instances where a silo approach is absolutely essential to some things we do.

    Take the information security function of your business. Can you let individual departments look after their security? Of course not, because they don’t know how to and they won’t do it anyway – particularly the sales function because given the choice of employing a security specialist (who costs money) or a salesperson (who does quite the opposite) the decision is a no-brainer.

    … but usually you don’t

    So we’ve established that sometimes businesses need to have components that are deliberately and forcefully separate.

    There are only three departments that really have the benefit of seeing across the entire company: IT, finance and HR; of the three, IT’s the one best placed to help departments out either by innovation or by simply directing the right technology and people at the problem.

    Three things you need

    People with MBAs have written plenty of books about how to organise companies so they work best
    1. Helping them understand the big picture
    2. Identifying tasks that belong elsewhere
    3. Doing it well. In my experience this is the most important of the three actions that help you break down silos.

    In short

    Although this feature is ostensibly about breaking down silos, in fact this isn’t always either possible or desirable. What you do need to do, though, is ensure that:

    Where silos exist they’re doing the stuff that necessarily belongs in them.
    There’s sufficient communication between the silos (either directly or via departments like finance and IT which frequently interact with all of them).
    Where silos are stepping outside their sensible scope you make that stop by ensuring that the proper department does the stuff they shouldn’t be doing – but justifies this desire by making a bloody good job of it.

    Use your influence and personal relationships to get over to people the reasons why you think changing the approach will work better – after all you have no authority over them so influence is the best tool you have to hand.

    Reply
  14. Tomi Engdahl says:

    Google, Microsoft, Mozilla And Others Team Up To Launch WebAssembly, A New Binary Format For The Web
    http://techcrunch.com/2015/06/17/google-microsoft-mozilla-and-others-team-up-to-launch-webassembly-a-new-binary-format-for-the-web/

    Google, Microsoft, Mozilla and the engineers on the WebKit project today announced that they have teamed up to launch WebAssembly, a new binary format for compiling applications for the web.

    The web thrives on standards and, for better or worse, JavaScript is its programming language. Over the years, however, we’ve seen more and more efforts that allow developers to work around some of the limitations of JavaScript by building compilers that transpile code in other languages to JavaScript. Some of these projects focus on adding new features to the language (like Microsoft’s TypeScript) or speeding up JavaScript (like Mozilla’s asm.js project). Now, many of these projects are starting to come together in the form of WebAssembly.

    The new format is meant to allow programmers to compile their code for the browser (currently the focus is on C/C++, with other languages to follow), where it is then executed inside the JavaScript engine. Instead of having to parse the full code, though, which can often take quite a while (especially on mobile), WebAssembly can be decoded significantly faster

    Mozilla’s asm.js has long aimed to bring near-native speeds to the web. Google’s Native Client project for running native code in the browser had similar aims, but got relatively little traction. It looks like WebAssembly may be able to bring the best of these projects to the browser now.

    Reply
  15. Tomi Engdahl says:

    Open at the source
    http://www.apple.com/opensource/

    As the first major computer company to make Open Source development a key part of its ongoing software strategy, Apple remains committed to the Open Source development model. Major components of Mac OS X, including the UNIX core, are made available under Apple’s Open Source license, allowing developers and students to view source code, learn from it and submit suggestions and modifications. In addition, Apple uses software created by the Open Source community, such as the HTML rendering engine for Safari, and returns its enhancements to the community.

    https://developer.apple.com/opensource/

    Reply
  16. Tomi Engdahl says:

    Apple quietly pulls original iPad mini from web site and Apple Store
    http://9to5mac.com/2015/06/19/apple-quietly-pulls-ipad-mini-from-web-site-and-apple-store/

    Apple notably continued to sell the 16GB iPad mini as an entry-level model alongside two of its sequels, dropping its price to $299 in October 2013, then $249 in October 2014. In recent months, falling street prices for other models made the classic mini a tougher sell.

    Reply
  17. Tomi Engdahl says:

    News & Analysis
    SSD Standards Poised for Update
    http://www.eetimes.com/document.asp?doc_id=1326915&

    The JEDEC Solid State Technology Association is looking to update its solid state drive (SSD) standard for the first time in more than four years, and given recent SSD-related technology development, there will be several factors affecting the next iteration of the standard.

    Originally published in September 2010, the last update to the JESD218 standard for Solid State Drive Requirements and Endurance Test Method was released in February 2011. As flash becomes more pervasive in data centers, vendors are diversifying their SSD offerings, in part to address different workloads, which JEDEC describes in a separate document.

    Reply
  18. Tomi Engdahl says:

    Dana Wollman / Engadget:
    Toshiba’s Windows 10 laptops all have a built-in Cortana key
    http://www.engadget.com/2015/06/18/toshiba-windows-10-laptops-cortana-button/

    Reply
  19. Tomi Engdahl says:

    Jacob Long / Android Police:
    Pushbullet Introduces Portal, A New App For Easy File Sharing Between Your Android Device And PC
    http://www.androidpolice.com/2015/06/16/pushbullet-introduces-portal-a-new-app-for-easily-sharing-files-between-your-android-device-and-pc/

    The developers that brought us Pushbullet have announced a brand new app. Portal is designed to do one thing and one thing only: move files between your computer and your Android device. While this is possible with Pushbullet, it isn’t a strong point and requires sending those files to their servers and back. Portal sends them within your local wireless network, avoiding potentially costly data fees and making possible far faster transfer times.

    Reply
  20. Tomi Engdahl says:

    Talking to IT: the struggle is real.
    By Maxime Doucet-Benoit – April 23, 2015
    https://www.igloosoftware.com/blogs/inside-igloo/talking_to_it_the_struggle_is_real?utm_source=techmeme&utm_medium=referral&utm_campaign=blog

    You know exactly what would help you, but you can’t sell it to IT? We hear that all the time. We got you.

    Most of the interactions we have are with people in corporate communications (hi!). They notice the technology problems first, because it stops their workflow dead in its tracks.

    We talk to these people, and we ask them about their woes. The main one goes something like this: I absolutely love Igloo (yeah you do), but we’ve standardized on – insert legacy platform here – and I’m not sure how to bring this to IT. The struggle is real, we totally understand.

    Keep your cool.

    Of course, some people are just mean, and not because they are in IT. If at first you fail, reassess who the stakeholders are, maybe you can approach someone else about this. If this is truly a problem for the work your company is trying to do, someone is bound to see it, eventually. This can be a long process, and it doesn’t reflect in any way on your leadership. It’s just complicated to get across, it’s a big investment at the same time as you try to change the way people work. That is no easy task.

    Reply
  21. Tomi Engdahl says:

    “Optimize Everything” Startup SigOpt Raises $2M From A16Z And Data Collective
    http://techcrunch.com/2015/06/16/sigopt-seed-funding/?ncid=rss&cps=gravity_1462_-5674937221657316555

    Y Combinator-incubated SigOpt is announcing that it has raised $2 million in seed funding.

    The company’s aim is to help customers optimize anything, whether it’s an ad campaign or a formula for shaving cream.

    “The technology’s really for anyone testing things on a trial-and-error basis,” Clark said. “That could be someone in a lab running experiments that take dozens to hundreds of hours, or it could be in a machine learning setup that requires dozens or hundreds of computer hours.”

    In the case of a physical experiment, you’d manually enter both the experiment and the results, then SigOpt could tell you what variation to test next. Clark said it can automatically help you identify the “points of highest improvement” while balancing between the need for “exploration” (i.e., trying out totally new ideas) and “exploitation” (drilling down on the areas that are already working).
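
    As a toy illustration of the exploration/exploitation balance mentioned above, here is a tiny epsilon-greedy loop in Python. It is not SigOpt's algorithm; the objective function and parameter values are made up.

    ```python
    # Toy epsilon-greedy loop showing the exploration/exploitation trade-off.
    # Not SigOpt's method; run_experiment() is a made-up stand-in for a slow trial.
    import random

    def run_experiment(x):
        # Hypothetical noisy objective (e.g. a lab run or an ML training job).
        return -(x - 0.7) ** 2 + random.gauss(0, 0.01)

    candidates = [i / 10 for i in range(11)]   # parameter settings to try
    scores = {}
    epsilon = 0.3                              # fraction of trials spent exploring

    for trial in range(30):
        if not scores or random.random() < epsilon:
            x = random.choice(candidates)      # exploration: try something new
        else:
            x = max(scores, key=scores.get)    # exploitation: re-test the best so far
        scores[x] = max(scores.get(x, float("-inf")), run_experiment(x))

    best = max(scores, key=scores.get)
    print("best setting found:", best, "score:", round(scores[best], 4))
    ```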

    Reply
  22. Tomi Engdahl says:

    Intel, Altera: Math in Question
    Co-packaged x86, FPGAs ship late 2016
    http://www.eetimes.com/document.asp?doc_id=1326741&

    The math on Intel’s $16.7 billion bid to buy Altera doesn’t add up although the merged companies could see gains, analysts said. The deal raises more questions than others in a string of recent mega-mergers for a semiconductor industry that is consolidating as it matures.

    In a conference call, Intel chief executive Brian Krzanich said the combined companies will ship integrated products starting in late 2016 for servers and some still-undetermined embedded systems. Initial products will pack x86 and FPGA die in a single package, followed “shortly” by products that merge both on SoCs.

    Microsoft is starting to use Altera FPGAs to accelerate its search algorithm, a researcher revealed last summer. Separately, Microsoft and China’s Baidu said they also are exploring other uses for FPGAs.

    Reply
  23. Tomi Engdahl says:

    Sebastian Anthony / Ars Technica:
    How lithium-ion batteries, industrial design and Moore’s law created modern laptops

    The creation of the modern laptop
    An in-depth look at lithium-ion batteries, industrial design, Moore’s law, and more.
    http://arstechnica.com/gadgets/2015/06/from-laptops-that-needed-leg-braces-to-laplets-engineering-mastery/

    Reply
  24. Tomi Engdahl says:

    Oculus Rift Inventor Palmer Luckey: Virtual Reality Will Make Distance Irrelevant (Q&A)
    http://recode.net/2015/06/19/oculus-rift-inventor-palmer-luckey-virtual-reality-will-make-distance-irrelevant-qa/

    Will virtual reality be as significant a new technology as smartphones were a decade ago? Can VR do more than just play games? And how much is all of this going to cost?

    Oculus co-founder Palmer Luckey has answers to all of those questions, and more. At the gaming trade show E3 this week, he sat down with Re/code to discuss the Oculus Rift, the company’s PC-connected headset that is set to launch in early 2016.

    Reply
  25. Tomi Engdahl says:

    SSD Prices In A Free Fall
    http://www.networkcomputing.com/storage/ssd-prices-in-a-free-fall/a/d-id/1320958

    With the prices of solid-state drives expected to reach parity with hard-disk drives next year, are HDDs doomed?

    Hard-disk drive vendors point to the higher price of solid-state drives as a reason to keep on buying hard drives, but as Bob Dylan sang, “The Times They Are a-Changin’.” The advent of 3D NAND has become a game-changer for the storage industry by increasing SSD capacity and dropping SSD prices.

    By packing 32 or 64 times the capacity per die, 3D NAND will allow SSDs to increase capacity well beyond hard drive sizes. SanDisk, for example, plans 8 TB drives this year, and 16 TB drives in 2016. At the same time, vendors across the flash industry are able to back off two process node levels and obtain excellent die yields.

    The result of the density increase is clear: This year, SSDs will nearly catch up to HDD in capacity. Meanwhile, hard drives appear to be stuck at 10 TB capacity, and the technology to move beyond that size is going to be expensive once it’s perfected. HDD capacity curves already were flattening, and the next steps are likely to take some time.

    This all means that SSDs will surpass HDDs in capacity in 2016. There’s even serious talk of 30 TB solid-state drives in 2018.

    So what about SSD price points? In 2014, prices for high-end consumer SSDs dropped below enterprise-class HDD, and continued to drop in 2015. A terabyte SSD can be had for around $300. Moreover, this is before 3D NAND begins to further cut prices. By the end of 2016, it’s a safe bet that price parity will be close, if not already achieved, between consumer SSDs and the bulk SATA drives.

    This will put pressure on hard-disk drive makers to lower prices, but, frankly, they’ve used up most of the tricks to reduce cost and are already at single-digit margins for bulk SATA drives, so they don’t have much wriggle room.

    As in any transition, there will be points of resistance. After-market HDD spares will continue to be sold, though upgrades and replacements will increasingly use SSDs, especially in servers. The volume reductions in HDDs will probably lead to some major fire sales, though.

    As for tape, the SSD archive appliance will likely cause the demise of that hallowed medium. Today’s interest is more in rapid access to data, as demonstrated by Google’s Nearline cloud storage and Amazon Glacier. An SSD-based archive provides the desired low power with instant-on performance. Tape-based Glacier takes two hours to recover the first blocks of data.
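
    As a rough illustration of the $/GB gap implied by the $300-per-terabyte SSD figure quoted above, here is a trivial sketch; the bulk SATA HDD figure is an assumption for illustration only, not a number from the article.

    #include <iostream>

    // Back-of-the-envelope $/GB comparison. The $300-per-TB SSD price is the
    // figure quoted in the text above; the HDD price is an illustrative
    // assumption, not a quoted fact.
    int main() {
        const double ssd_price_usd   = 300.0;   // ~1 TB consumer SSD (quoted above)
        const double ssd_capacity_gb = 1000.0;
        const double hdd_price_per_gb = 0.04;   // assumed bulk SATA ballpark, for illustration

        double ssd_price_per_gb = ssd_price_usd / ssd_capacity_gb;
        std::cout << "SSD: $" << ssd_price_per_gb << " per GB\n";
        std::cout << "HDD: $" << hdd_price_per_gb << " per GB (assumed)\n";
        std::cout << "SSD premium: " << ssd_price_per_gb / hdd_price_per_gb << "x\n";
        return 0;
    }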

    Reply
  26. Tomi Engdahl says:

    Public, Private, Hybrid? Choosing the Right Cloud Mix
    http://www.cio.com/article/2936939/hybrid-cloud/public-private-hybrid-choosing-the-right-cloud-mix.html

    Each model offers its own advantages – and tradeoffs. Here’s what you need to consider.

    IDG’s annual cloud survey, which polled more than 1,600 IT managers, found that 39 percent of organizations are using a mix of cloud models. About 60 percent have at least some enterprise applications hosted in a public cloud environment, while nearly the same proportion (57 percent) said they were using a private cloud. About one in five are using a hybrid cloud.

    Despite differing safety, control and cost considerations between public and private cloud models, the growth in adoption of the two models is almost identical, according to the IDG survey.

    A public cloud is a great option if you are looking to offload some of the costs and management involved in running standardized applications and workloads such as email, collaboration and communications, CRM and web applications. In some cases, it is also a good option for application development and testing purposes. Many companies have also begun moving big data workloads to the public cloud because of the enormous scalability benefits.

    But there are some major caveats when using a public cloud. Your applications are hosted on an infrastructure that is shared by many other organizations.

    A private cloud model addresses many of these concerns. Because your applications and workloads are hosted on a dedicated infrastructure you have much more control over it. In many cases, a private cloud is enabled on existing enterprise hardware and software using virtualization technologies.

    Many companies use a private cloud model for proprietary workloads such as ERP, business analytics and HR applications.

    Best of Both Worlds

    A hybrid approach combines the best of both cloud worlds by allowing organizations to tap the scalability and cost efficiencies of a public cloud while keeping core applications or data center components under enterprise control.

    Reply
  27. Tomi Engdahl says:

    Ask Slashdot: Best Setups For Navigating a Programming-Focused MOOC?
    http://ask.slashdot.org/story/15/06/21/1218252/ask-slashdot-best-setups-for-navigating-a-programming-focused-mooc

    Comments:

    I use a dual monitor workstation + a laptop to play the course content. This gives me my regular programming workspace on my computer with any reference material I need on the second monitor. Using the laptop allows me to go fullscreen without worrying about window focus and makes the material easy to pause by mashing the spacebar. My laptop is also set up to provide no notifications or interruptions, so it is a distraction-free workspace. I also download course material that I can listen to on drives.

    It’s a lot easier when you have the whole course material on the HDD and play video clips in an mplayer window.
    https://github.com/coursera-dl/coursera

    Reply
  28. Tomi Engdahl says:

    Facebook SSD failure study pinpoints mid-life burnout rate trough
    Burnouts peak early, then fall, before increasing with age. Like journalists, then
    http://www.theregister.co.uk/2015/06/22/facebook_reveals_ssd_failure_rate_trough/

    Facebook engineers and Carnegie Mellon researchers have looked into SSD failure patterns and found surprising temperature and data contiguity results in the first large-scale SSD failure study.

    In a paper (PDF) entitled A Large-Scale Study of Flash Memory Failures in the Field they looked at SSDs used by Facebook over a four year period, with many millions of days of usage. The SSD suppliers mentioned were Fusion-io, Hitachi GST, Intel, OCZ, Seagate and Virident.

    One finding was that SSDs do not fail at a steady rate over their life, instead having periods of higher and lower failures.

    Non-contiguously-allocated data leads to higher SSD failure rates, as can dense contiguous data under certain conditions.

    They point out that it is necessary to measure data actually written to flash cells in an SSD rather than the data sent to the SSD by the host OS, because of wear reduction techniques and system level buffering.

    http://users.ece.cmu.edu/~omutlu/pub/flash-memory-failures-in-the-field-at-facebook_sigmetrics15.pdf
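
    To make the distinction between host writes and flash writes concrete, here is a minimal sketch of the write-amplification calculation that distinction rests on; the counter values are invented placeholders, not figures from the study.

    #include <iostream>

    // Write amplification factor (WAF): bytes physically written to NAND
    // divided by bytes the host asked the SSD to write. Real numbers would
    // come from the drive's internal counters, not these made-up values.
    int main() {
        const double host_bytes_written = 8.0e12;   // what the OS sent to the SSD
        const double nand_bytes_written = 13.5e12;  // what the controller actually wrote

        double waf = nand_bytes_written / host_bytes_written;
        std::cout << "Write amplification factor: " << waf << "\n";
        // Wear estimates based only on host writes would understate actual
        // flash wear by roughly this factor.
        return 0;
    }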

    Reply
  29. Tomi Engdahl says:

    Not in front of the CIO: grassroots drive Linux container adoption
    It’s indifferent at the top
    http://www.theregister.co.uk/2015/06/22/linux_containers_enterprise_adoption/

    Here’s a thing. CIOs don’t care about vast swathes of technology in their organisations. They have people to do that.

    While they make speeches at fancy conferences about being agile / compliant / regulated / on top of the suppliers / skilling up the worker bees, those worker bees are handling the next Windows refresh.

    Sometimes, technology can be adopted at a grassroots level without ever troubling the upper echelons. Linux containerisation may be a good example.

    A survey of 381 IT decision makers and professionals commissioned by Red Hat, published on June 22, 2015, shows that nearly all are planning container development on the Linux operating system.

    However, upper management and CIO directives play limited roles in containerised application adoption in the enterprise, respondents say. Internal champions are the grassroots IT implementers (39 per cent) and middle managers (36 per cent).

    Reply
  30. Tomi Engdahl says:

    So what are you doing about your legacy MS 16-bit applications?
    Buy time with Server 2008 or bite the 64-bit upgrade bullet?
    http://www.theregister.co.uk/2015/06/22/windows_server_2003_eos_16_bit_applications/

    This is the last gasp migration for Microsoft ecosystem 16-bit applications. Windows Server 2008 x86 is the last Microsoft server operating system to support them. You can upgrade from Server 2003 to Server 2008 and buy yourself a few more years, but extended support for Server 2008 runs out in 2020.

    The migration won’t be smooth. There are a whole bunch of “little things” that don’t quite work well with 16-bit applications under Server 2008. Printing is one. Especially under Remote Desktop Services (formerly known as Terminal Services). There are workarounds for pretty much everything, but be prepared to spend a lot of time on Google trying to find them.

    Remember that just because your application appears to be a 32-bit application with a Win32 GUI and all the window dressing of a 32-bit Windows application doesn’t mean it is. Lots of 32-bit applications call 16-bit components.

    If you have any reason to suspect that your seemingly 32-bit application might not be entirely 32-bit, there is a simple way to check. Install the application on a clean, unpatched Server 2008 x64. It will either work or it won’t.

    The chances of a true 32-bit application which works on a fully patched Server 2003 R2 SP2 system not working on a clean, unpatched Server 2008 x64 are slim to none.

    If your “32-bit” application still refuses to work on an x64 copy of Server 2008 then the chances are that it’s calling some piece of 16-bit code somewhere. Start asking the developers pointed questions.
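
    The article’s own test is the clean-install check above. As a complementary, purely illustrative approach, you can also peek at an executable’s header: 16-bit Windows binaries carry an “NE” new-executable image behind the DOS “MZ” stub, while 32/64-bit binaries point to a “PE” image. A minimal C++ sketch follows; note it only inspects the file you point it at and cannot see 16-bit components loaded at runtime.

    #include <cstdint>
    #include <fstream>
    #include <iostream>

    // Classifies an .exe by its header: PE = 32/64-bit Windows, NE = 16-bit
    // Windows, anything else = plain DOS or other 16-bit executable.
    int main(int argc, char* argv[]) {
        if (argc != 2) { std::cerr << "usage: exetype <file.exe>\n"; return 1; }

        std::ifstream f(argv[1], std::ios::binary);
        if (!f) { std::cerr << "cannot open file\n"; return 1; }

        char mz[2];
        f.read(mz, 2);
        if (!f || mz[0] != 'M' || mz[1] != 'Z') { std::cout << "not an MZ executable\n"; return 0; }

        // Offset 0x3C of the DOS header holds the offset of the "new" header.
        uint32_t e_lfanew = 0;
        f.seekg(0x3C);
        f.read(reinterpret_cast<char*>(&e_lfanew), 4);   // assumes a little-endian host (x86)

        char sig[2] = {0, 0};
        f.seekg(e_lfanew);
        f.read(sig, 2);

        if (f && sig[0] == 'P' && sig[1] == 'E')
            std::cout << "PE image (32/64-bit Windows)\n";
        else if (f && sig[0] == 'N' && sig[1] == 'E')
            std::cout << "NE image (16-bit Windows)\n";
        else
            std::cout << "plain DOS or other 16-bit executable\n";
        return 0;
    }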

    Reply
  31. Tomi Engdahl says:

    Gazing at two-tier storage systems: What’s the paradigm, Doc?
    Cloud’s fundamental role in primary storage analytics assessed
    http://www.theregister.co.uk/2015/06/23/flash_trash_and_datadriven_infrastructures/

    I’ve been talking about two-tier storage infrastructures for a while now. End users are targeting this kind of approach to cope with capacity growth and performance needs.

    The basic idea is to leverage flash memory characteristics (all-flash, hybrid, hyperconvergence) on one side and implement huge storage repositories, where they can safely store all the rest (including pure trash) at the lowest possible cost, on the other. The latter is lately also referred to as a data lake.

    We are finally getting there but there is something more to consider — essentially, the characteristics of these storage systems.

    Smarter primary storage

    When it comes to primary storage, analytics is primarily used to improve TCO and to make life simpler for sysadmins. The array continuously collects tons of data from sensors; that data is then sent to the cloud, aggregated and organised with the goal of giving you information and insights about what is happening to your storage.

    The role of cloud

    Cloud has a fundamental role in primary storage analytics. It has three major advantages. The first is that the storage system doesn’t need to waste its own resources on this application, concentrating all its power on IOPS, latency and predictability.

    Secondly, cloud allows the aggregation of data coming from all over the world, enabling comparisons which would otherwise be impossible to make.

    And, last but not least, cloud helps to simplify the infrastructure because there is no need for a local console or analytics server.

    There is however one considerable exception. DataGravity, which is developing enterprise storage for the mid market, has a peculiar architecture capable of running analytics directly in the system.

    Scale-out storage systems are becoming much more common now, and the trend is clear: they are embedding a series of functionalities to manage, analyse and do automated operations on large amounts of data without the need for external compute resources. Most of these systems have recently started to expose HDFS to be easily integrated with Hadoop for in-place data analytics.

    Storage is changing very quickly; traditional unified storage systems are no longer the solution to every question (and this is why companies like NetApp are no longer growing).

    We are seeing an increasing demand for performance, very predictable behaviour, and specific analytics features to help attain the maximum efficiency and simplify the job of IT operations.

    Reply
  32. Tomi Engdahl says:

    Steam Hit A New Record During The Summer Sale
    http://www.gamingonlinux.com/articles/steam-hit-a-new-record-during-the-summer-sale.5530

    Don’t let anyone tell you PC gaming is dying; well, Steam isn’t, at least. Steam hit a new all-time high for concurrent users online.

    It’s not really surprising that they reached this new high during a sale though, as I can imagine plenty of people logging in who haven’t for a long time to snap up a cheap game they wanted.

    Reply
  33. Tomi Engdahl says:

    Linus Torvalds: Linux Kernel Would Be OK in a Couple of Months If I Die
    http://news.softpedia.com/news/Linus-Torvalds-Linux-Kernel-Would-Be-OK-in-a-Couple-of-Months-If-I-Die-484554.shtml

    Linus Torvalds built the Linux kernel almost 25 years ago, and he’s still the main developer that determines the direction of the project. So the natural question that seems to arise all the time is what the future of the Linux kernel will be if something happens to him. Linus seems to know the answer to this as well.

    He’s also well known for strong language and honesty, which got him in hot water a few times already, but no matter what happens, the Linux kernel goes forward. On the other hand, the kernel is no longer the work of one man.

    Thousands of developers from across the world contribute to its development and evolution each year, and this is the biggest collaborative project on the planet. So it’s safe to say at this point that losing the creator of Linux won’t stop the project.

    Linus already has a few trusted people who wield almost the same power as him and who can replace him at any given time. The Linux kernel is a meritocracy, for the most part, so the people at the top also happen to be very good at their job.

    Reply
  34. Tomi Engdahl says:

    A brief introduction to converged infrastructure
    All together now: Simplicity drives efficiency
    http://www.theregister.co.uk/2015/06/23/converged_infrastructure_guide/

    Sometimes, it’s better to think inside the box. Bundling different IT components together into a single unit may just solve some of your computing problems, if you plan it right. Welcome to the world of IT convergence.

    Depending on which vendor or analyst you talk to, it’s known as an integrated system, a unified computing system, or converged infrastructure. Whatever name you use, they are all trying to do something broadly similar: simplify the procurement and management of different IT components by bundling them together.

    Converged infrastructure is the latest development in a long journey towards making IT infrastructure more efficient. Virtualization got us some of the way there, because it enabled us to consolidate our physical servers. We ended up with smaller numbers of larger boxes, running large numbers of virtual machines.

    That was all very well, but it came at a price. Management was difficult, and IT departments ended up with new challenges. ‘VM sprawl’ meant more machines to manage, while storage and servers still had to be manually networked together and configured for different workloads.

    Just as with cars, the converged IT solution can be preconfigured according to your particular workload. When buying a vehicle, you might choose a hatchback with good safety features if you’re a family type.

    Similarly, you’ll look for a different converged IT solution depending on your endgame.

    The concept must have traction, because the market is growing. Last September, IDC said that the integrated infrastructure and platforms market grew 33.8 per cent year on year during the second quarter of that year. First half revenue ballooned 35.9 per cent. People are buying. Why?

    There are several benefits. One of the biggest is easy deployment and reduced deployment times.

    Because these devices arrive pre-built and pre-configured, there are also fewer things for the IT department to maintain. If the system fails to perform as planned, or suffers an outage, there’s one number to call, and one person to yell at.

    When everything is running smoothly, these systems also often come with management interfaces that make it easy to manage the whole thing from a single pane of glass. That reduces the human resources overhead, helping to drive down the total cost of ownership (TCO).

    Converged infrastructure systems are also predictable and reliable. They are built on a reference architecture that has been pre-tested against the kinds of workload required by the IT department. This saves the IT department from having to develop and carry out those tests beforehand to find potential flaws in the proposed system.

    Then, there’s the more efficient use of resources. One of the biggest benefits of a modern, hyperconverged system is the ability to break down silos.

    What’s next?

    Converged systems are evolving, with modern variants focusing management on the virtual machine, with commodity computing resources (typically x86) and disks managed in the background. Increasingly, you’ll see them offered as bolt-together nodes, enabling them to scale out.

    Typically, you’ll see this referred to as hyperconvergence. It unites storage, compute, and networking in a single box around a hypervisor that does all of the infrastructure management for you. The computing is virtualised. The storage is virtualised. There’s a strong software layer that both abstracts and manages everything.

    There are potential drawbacks. The closer the knit between compute, storage, networking and virtualization, the harder it is to use these components independently, and indeed, hyperconverged systems aren’t really designed to be used that way.

    Reply
  35. Tomi Engdahl says:

    Vlad Dudau / Neowin:
    Lenovo unveils the Ideacenter Stick 300: a $130 PC on a stick running Windows
    http://www.neowin.net/news/lenovo-unveils-the-ideacenter-stick-300-a-130-pc-on-a-stick-running-windows

    Lenovo is introducing a brand new PC-on-a-stick device, called the Ideacenter Stick 300. The new device is designed to be taken anywhere and can transform almost any display into a Windows computer.

    Lenovo’s new HDMI dongle is basically a PC on a stick, and though we’ve seen this type of device before, the Ideacenter Stick’s price might make it quite attractive.

    As noted above, the device will originally launch with Windows 8, but Lenovo confirmed that users will benefit from the free upgrade to Windows 10 following the operating system’s launch.

    Reply
  36. Tomi Engdahl says:

    Palantir Valued At $20 Billion In New Funding Round
    http://www.buzzfeed.com/williamalden/palantir-valued-at-20-billion-in-new-funding-round#.brOx041vp9

    The secretive data-processing company is raising up to $500 million in a previously undisclosed round of funding. The round makes it the third most valuable startup in the United States.

    By managing data for government agencies and Wall Street banks, Palantir Technologies has grown into one of the most valuable venture-backed companies in Silicon Valley. Now it is adding billions to its already rich valuation.

    The new round of funding, which has not been previously disclosed, reflects investors’ eagerness to gain access to a startup seen as one of the most successful in the world. Little is known about the details of Palantir’s business, beyond reports about its data-processing software being used to fight terror and catch financial criminals.

    But the secrecy apparently didn’t bother investors, who are said to have been impressed by Palantir’s growth in the first quarter of this year. One person close to the company said it had more than $1 billion of cash in the bank.

    Beyond its high-level connections in government, Palantir is also tied to some of the most powerful figures in Silicon Valley.

    Reply
  37. Tomi Engdahl says:

    Emil Protalinski / VentureBeat:
    Samsung is actively disabling Windows Update on at least some computers —

    Samsung is actively disabling Windows Update on at least some computers
    http://venturebeat.com/2015/06/23/samsung-is-actively-disabling-windows-update-on-at-least-some-computers/

    Samsung is actively disabling Windows Update on at least some of the computers it sells. Microsoft MVP Patrick Barker made the discovery and documented the details on his blog after trying to help a user troubleshoot issues with his Samsung machine.

    Windows Update appeared to be getting disabled “randomly” until Microsoft’s Auditpol utility pointed the finger at Samsung’s SW Update software. More specifically, Samsung’s update tool had managed to download and run a file inconspicuously named “Disable_Windowsupdate.exe.”

    Samsung describes its update tool as follows: “You can install relevant software for your computer easier and faster using SW Update. The SW Update program helps you install and update your software and driver easily.”

    In other words, this is the typical OEM tool that ships with your computer to keep all the manufacturer’s software and drivers updated, as well as any other included third-party software (read: bloatware). As Barker rightly points out, there is one major difference between Samsung’s update tool and that of other OEMs: SW Update disables Windows Update.
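
    As a quick way to see whether something has quietly disabled Windows Update on a given machine, you can query the wuauserv service’s configured start type through the Service Control Manager. A minimal, Windows-only C++ sketch (this is just a spot check of the current state, not the Auditpol-based tracing Barker used):

    #include <windows.h>
    #include <iostream>
    #include <vector>

    // Queries the configured start type of the Windows Update service
    // (wuauserv). SERVICE_DISABLED means something has switched it off.
    int main() {
        SC_HANDLE scm = OpenSCManagerA(nullptr, nullptr, SC_MANAGER_CONNECT);
        if (!scm) { std::cerr << "cannot open service control manager\n"; return 1; }

        SC_HANDLE svc = OpenServiceA(scm, "wuauserv", SERVICE_QUERY_CONFIG);
        if (!svc) { std::cerr << "cannot open wuauserv\n"; CloseServiceHandle(scm); return 1; }

        DWORD needed = 0;
        QueryServiceConfigA(svc, nullptr, 0, &needed);   // first call: get required buffer size
        std::vector<BYTE> buf(needed);
        auto cfg = reinterpret_cast<LPQUERY_SERVICE_CONFIGA>(buf.data());

        if (QueryServiceConfigA(svc, cfg, needed, &needed)) {
            if (cfg->dwStartType == SERVICE_DISABLED)
                std::cout << "Windows Update service is DISABLED\n";
            else
                std::cout << "Windows Update service start type: " << cfg->dwStartType << "\n";
        }

        CloseServiceHandle(svc);
        CloseServiceHandle(scm);
        return 0;
    }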

    Reply
  38. Tomi Engdahl says:

    Michael Bolin / Facebook Code:
    Facebook open-sources Nuclide, an IDE built on Atom, Github’s text editor —

    Building Nuclide, a unified developer experience
    https://code.facebook.com/posts/397706937084869/

    At this year’s F8, Facebook’s developer conference, our infrastructure team talked about Nuclide, a project designed to provide a unified developer experience for engineers throughout the company — whether they work on native iOS apps, on React and React Native code, or on Hack to run on our HHVM web servers.

    In addition to being an important part of our own production stack, Hack, HHVM, and React are, of course, amongst our most popular open source projects, and we realized that it would be helpful to those communities if we shared Nuclide as well. So over the last few months, we’ve been preparing to bring the project to the broader engineering community, and we’re proud to announce that we’re starting that journey today by making the source code available on GitHub.

    https://github.com/facebook/nuclide

    Reply
  39. Tomi Engdahl says:

    Software companies are leaving the UK because of government’s surveillance plans
    Growing concerns about Snooper’s Charter and crypto backdoors fuelling exodus.
    http://arstechnica.co.uk/tech-policy/2015/06/software-companies-are-leaving-the-uk-because-of-governments-surveillance-plans/

    The company behind the open-source blogging platform Ghost is moving its paid-for service out of the UK because of government plans to weaken protection for privacy and freedom of expression. Ghost’s founder, John O’Nolan, wrote in a blog post: “we’ve elected to move the default location for all customer data from the UK to DigitalOcean’s [Amsterdam] data centre. The Netherlands is ranked #2 in the world for Freedom of Press, and has a long history of liberal institutions, laws and funds designed to support and defend independent journalism.”

    O’Nolan was particularly worried by the UK government’s plans to scrap the Human Rights Act, which he said enshrines key rights such as “respect for your private and family life” and “freedom of expression.” The Netherlands, by contrast, has “some of the strongest privacy laws in the world, with real precedents of hosting companies successfully rejecting government requests for data without full and legal paperwork,” he writes.

    This is by no means the first software company to announce that it will be leaving the UK because of the government’s plans to attack privacy through permanent bulk surveillance of online activities and weakened crypto. At the beginning of May, Aral Balkan revealed that he would be moving his Ind.ie software project out of the country.

    A few weeks later, Eris Industries became the second company to react to the new UK government and its plans. Eris is “free software that allows anyone to build their own secure, low-cost, run-anywhere data infrastructure using blockchain and smart contract technology.” The company’s move was prompted by the threat that new laws could require backdoors in its encryption technology: “with immediate effect, we have temporarily moved our corporate headquarters to New York City, where open-source cryptography is firmly established as protected speech pursuant to the First Amendment to the Constitution of the United States.”

    Reply
  40. Tomi Engdahl says:

    Dell lobs more iron at surging HPC market
    More Xeon in less space
    http://www.theregister.co.uk/2015/06/24/dell_lobs_more_iron_at_surging_hpc_market/

    Dell’s fired its next HPC gun, announcing the PowerEdge C6320 which it says targets big data and heavy workload applications.

    Based on Intel Xeon E5-2600 v3 processors, the C6320 offers up to 18 cores per socket – or 144 cores per 2 rack unit chassis. There’s support for 512 GB of DDR4 memory and as much as 72 TB of local storage.

    Cramming four independent server nodes into the 2U chassis means Dell can claim more than double the muscle of its predecessor: 999 gigaflops on the LinPack benchmark (versus 498 Gflops for the C6620), a 45 per cent improvement on the SPECint_rate benchmark compared to the C6620, and 28 per cent better power performance (SPEC_Power).

    Reply
  41. Tomi Engdahl says:

    The wonderful madness of metrics: Different things to different folk
    Or, how I learned to stop worrying and verify
    http://www.theregister.co.uk/2015/06/23/madness_of_metrics/

    Managers and customers love statistics and metrics. Companies can live or die by how good their metrics are, and by the potential penalties for failing to meet the required service levels defined in agreements.

    It can also be: “Has my team met their SLA?” or “What is the uptime on the server farm?”

    The dictionary defines the noun metric as: “A standard for measuring or evaluating something, especially one that uses figures or statistics.”

    In an honest world, things would be that simple. But this isn’t an honest world. Metrics can be interpreted differently, stretched, used and abused if they are not laid out in black and white. It can be like comparing apples and oranges, to use an often quoted phrase.
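
    Uptime figures are a good example of how the same metric can be read differently: a “99.9% available” target sounds strict until it is translated into the downtime budget it actually permits. A small illustrative calculation (the targets and the 30-day month are assumptions for the sake of the example):

    #include <iostream>

    // Converts an availability target into the downtime it permits over a
    // 30-day month. Purely illustrative figures.
    int main() {
        const double minutes_per_month = 30.0 * 24.0 * 60.0;   // 43,200 minutes
        const double targets[] = {99.0, 99.9, 99.99};

        for (double t : targets) {
            double allowed_downtime = minutes_per_month * (1.0 - t / 100.0);
            std::cout << t << "% uptime allows " << allowed_downtime
                      << " minutes of downtime per month\n";
        }
        return 0;
    }

    At 99.9 per cent, that works out to about 43 minutes of downtime a month; whether that is measured per service, per server or per farm is exactly the kind of ambiguity the article warns about.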

    Reply
  42. Tomi Engdahl says:

    Ask Slashdot: Is C++ the Right Tool For This Project?
    http://ask.slashdot.org/story/15/06/24/0320207/ask-slashdot-is-c-the-right-tool-for-this-project

    Comments:

    you want everything — memory, disk, network, speed — and C++ will give you all of that just fine. And it’ll give you the giant learning curve, and force you to take every hard road from start to finish.

    Then C++ is almost certainly not the language for you, unless it is a pure learning experience.

    Really… C++ is a relatively high-commitment language, and performance is one of its mainstays; yet you don’t feel you will spend much time optimising it?

    If you value your project at all, then I would suggest C++ does not sound like your solution, especially if you need cross-platform support. Your reasons seem almost to be reasons NOT to use an unfamiliar language.

    As almost everything else has equal or better cross platform support

    I would recommend using Qt for a cross platform framework. I haven’t tried every C++ framework, but of the ones I have tried, Qt is by far the best.

    Decide whether your project is to be done in C or C++. Choose one and embrace it.

    There’s an illusion that because these two languages share a common origin, they’re somehow the same, bundled together as “C/C++”. Especially since C code can often be valid in a C++ compiler.

    In reality, the good programming styles in each of these two languages differ substantially. Start wedging bits of C code inside a C++ program and you’ll soon find yourself fighting the language and core libraries. Likewise, the conventions for core concepts like objects and linked lists in C are somewhat different from those in C++, with their own strengths. Both are powerful languages for large projects, but they are not the same language.

    C++ could be a good choice for all the things you’ve mentioned. Networking is not an issue, as there are many open source libraries (e.g. libcurl – http://curl.haxx.se/ [curl.haxx.se]), and using Boost is often a good thing anyway. Also, there are at least two good memory allocators: tcmalloc (http://goog-perftools.sourceforge.net/doc/tcmalloc.html) and jemalloc (http://www.canonware.com/jemalloc/) so you may not need to write your own.
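
    To back up the libcurl suggestion in the comment above, here is a minimal cross-platform HTTP GET sketch using libcurl’s easy interface; the URL is a placeholder, and you would link against libcurl (e.g. -lcurl).

    #include <curl/curl.h>
    #include <iostream>
    #include <string>

    // Appends each chunk libcurl receives to a std::string.
    static size_t write_cb(char* data, size_t size, size_t nmemb, void* userp) {
        auto* out = static_cast<std::string*>(userp);
        out->append(data, size * nmemb);
        return size * nmemb;
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;

        std::string body;
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");   // placeholder URL
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);

        CURLcode res = curl_easy_perform(curl);
        if (res == CURLE_OK)
            std::cout << "fetched " << body.size() << " bytes\n";
        else
            std::cerr << "curl error: " << curl_easy_strerror(res) << "\n";

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }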

    Reply
  43. Tomi Engdahl says:

    Delphix gets a pat on the back in Gartner’s latest tea leaves reading
    http://www.theregister.co.uk/2015/06/24/delphix_gets_gartner_mq_leader_back_pat/

    Data virtualiser Delphix has had a Gartner boost by appearing in the leaders’ box of its Magic Quadrant report for Structured Data Archiving and Application Retirement.

    Fellow data virtualiser Actifio was also placed in the MQ, as a niche player.

    The other four leaders are IBM, HP, Informatica and Solix Technologies.

    Gartner defines structured data archiving as “the ability to index, migrate and protect application data in secondary databases or flat files, typically located on lower-cost storage for policy-based retention”. It addresses these issues:

    Storage optimisation – to reduce the volume of data in production and maintain seamless data access
    Governance – preserve data for compliance
    Cost-optimisation – through reducing data volumes of stored structured data
    Data scalability – Scalability to petabytes of capacity is required to “manage large volumes of non-traditional data resulting from newer applications which can generate billions of small objects”

    Actifio and Delphix use copy data virtualisation, whereas the other suppliers do not.

    Reply
  44. Tomi Engdahl says:

    Gaming the system: exploring the benefits of intranet gamification
    By Melanie Baker – March 10, 2014
    https://www.igloosoftware.com/blogs/inside-igloo/gaming_the_system_exploring_the_benefits_of_intranet_gamification?utm_source=techmeme&utm_medium=referral&utm_campaign=blog

    Gamification isn’t just playing games, and is increasingly becoming a useful corporate tool to increase employee productivity and intranet engagement.

    Gamification is showing up in an increasing number of areas in business, from employee training, to social elements on the corporate website, to the intranet. According to Gartner, by 2015 up to 40% of Global 1000 organizations will be using gamification in business operations. Gamification doesn’t necessarily refer to actually playing games, however. There are many different gamification elements, but not all are relevant to the intranet, so we’ll focus on the ones that are.

    Reply
  45. Tomi Engdahl says:

    How to turn application spaghetti into tasty IT services
    Unscramble your systems
    http://www.theregister.co.uk/2015/06/25/how_to_turn_application_spaghetti_into_tasty_it_services/

    The promise of IT service management is to deliver services that make sense to their business users. To do that, though, IT departments must be able to untangle their own internal resources.

    IT services must be accessible in one place so that users can find them easily and administrators can manage them. And the back-end applications that support those services must themselves be easy to manage and clearly identifiable.

    Sounds simple enough, right? Unfortunately, that is not the situation facing many IT departments today.

    Employees can access services via a bewildering array of touchpoints, and admins are grappling with a spaghetti mix of applications at the back end. Both of these layers of complexity make it difficult to offer the streamlined services that users are looking for.

    How did we get here, and how can we change it?

    Consolidating front-end services can help an IT department to present a more uniform experience to business users.

    Reply
  46. Tomi Engdahl says:

    AI’s Next Frontier: Machines That Understand Language
    http://www.wired.com/2015/06/ais-next-frontier-machines-understand-language/

    With the help of neural networks—vast networks of machines that mimic the web of neurons in the human brain—Facebook can recognize your face. Google can recognize the words you bark into an Android phone. And Microsoft can translate your speech into another language. Now, the task is to teach online services to understand natural language, to grasp not just the meaning of words, but entire sentences and even paragraphs.

    At Facebook, artificial intelligence researchers recently demonstrated a system that can read a summary of The Lord of The Rings, then answer questions about the books.

    Reply
  47. Tomi Engdahl says:

    NVIDIA Begins Supplying Open-Source Register Header Files
    http://developers.slashdot.org/story/15/06/24/2159200/nvidia-begins-supplying-open-source-register-header-files

    The latest mark of NVIDIA’s newly discovered open-source kindness is that they are beginning to provide open-source hardware reference headers for their latest GK20A/GM20B Tegra GPUs, while also working to provide hardware header files for their older GPUs.

    http://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Hardware-Headers

    Reply
  48. Tomi Engdahl says:

    Nouveau: Accelerated Open Source driver for nVidia cards
    http://nouveau.freedesktop.org/wiki/

    The nouveau project aims to build high-quality, free/libre software drivers for nVidia cards. “Nouveau” [nuvo] is the French word for “new”. Nouveau is composed of a Linux kernel KMS driver (nouveau), Gallium3D drivers in Mesa, and the Xorg DDX (xf86-video-nouveau).

    Reply
  49. Tomi Engdahl says:

    Intel Architecture versus the FPGA: The Battle of Time, Complexity and Cost
    http://rtcmagazine.com/articles/view/108876

    The continued evolution of Intel architecture (IA) enables electronic OEMs to consider it for applications previously requiring an FPGA. The benefits brought by IA are decreased development time, lower project cost, and high performance with feature integration.

    Reply
