Computer technology trends for 2016

It seems that the PC market is stabilizing in 2016. I expect the PC market to shrink slightly. While mobile devices have been named as culprits for the fall in PC shipments, IDC said that other factors may be in play. It is still pretty hard to make any decent profit building PC hardware unless you are one of the biggest players – so again Lenovo, HP, and Dell are increasing their collective dominance of the PC market like they did in 2015. I expect changes like spin-offs and maybe some mergers with smaller players like Fujitsu, Toshiba and Sony. The EMEA server market looks to be a two-horse race between Hewlett Packard Enterprise and Dell, according to Gartner. HPE, Dell and Cisco “all benefited” from Lenovo’s acquisition of IBM’s EMEA x86 server organisation.

The tablet market is no longer a high-growth market – tablet shipments have started to decline, and the decline continues in 2016 as owners hold onto their existing devices for more than three years. iPad sales are set to continue declining, and the iPad Air 3 to be released in the first half of 2016 does not change that. IDC predicts that the detachable tablet market is set for growth in 2016 as more people turn to hybrid devices. Two-in-one tablets have been popularized by offerings like the Microsoft Surface, with options ranging dramatically in price and specs. I am not myself convinced that the growth will be as strong as IDC forecasts, even though companies have started to purchase tablets for workers in jobs such as retail sales or field work (Apple iPads, Windows and Android tablets managed by the company). Combined volume shipments of PCs, tablets and smartphones are expected to increase only in the single digits.

All your consumer tech gear should be cheaper come July, as there will be fewer import tariffs on IT products: a World Trade Organization (WTO) deal agrees that tariffs on imports of consumer electronics will be phased out over 7 years starting in July 2016. The agreement affects around 10 percent of world trade in information and communications technology products and will eliminate around $50 billion in tariffs annually.

In 2015 storage was rocked to its foundations, and those new innovations will be taken into wider use in 2016. The storage market in 2015 went through strategic, foundation-shaking turmoil as the external shared disk array storage playbook was torn to shreds: the all-flash data centre idea has definitely taken off as a vision that could be achieved, so that primary data is stored in flash with the rest being held in cheap and deep storage. Flash drives generally solve the disk drive latency access problem, so there is not so much need for hybrid drives. There is conviction that storage should be located as close to servers as possible (virtual SANs, hyper-converged industry appliances and NVMe fabrics). The existing hybrid cloud concept was adopted/supported by everybody. Flash started out in 2-bits/cell MLC form and this rapidly became standard, and TLC (3-bits/cell, or triple-level cell) had started appearing. Industry-standard NVMe drivers for PCIe flash cards appeared. Intel and Micron blew non-volatile memory preconceptions out of the water in the second half of the year with their joint 3D XPoint memory announcement. Boring old disk tech got shingled magnetic recording (SMR) and helium-filled drive technology; the drive industry is focused on capacity-optimizing its drives. We got key:value store disk drives with an Ethernet NIC on board, and basic GET and PUT object storage facilities came into being. The tape industry developed a 15TB (compressed) LTO-7 format.

The use of SSDs will increase and their price will drop. SSDs were in more than 25% of new laptops sold in 2015. SSDs are expected to be in 31% of new consumer laptops in 2016 and more than 40% by 2017. The prices of mainstream consumer SSDs have fallen dramatically every year over the past three years while HDD prices have not changed much. SSD prices will decline to 24 cents per gigabyte in 2016. In 2017 they’re expected to drop to 11–17 cents per gigabyte (meaning a 1TB SSD on average would retail for $170 or less).
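To put those per-gigabyte forecasts in perspective, here is a quick back-of-the-envelope calculation in Python (a simple sketch that only uses the forecast figures quoted above, not actual retail prices):

```
# Rough retail price estimates for a 1 TB drive at the forecast $/GB levels.
# The figures are the 2016/2017 forecast numbers quoted in the text above.
CAPACITY_GB = 1000

forecast_usd_per_gb = {
    "SSD 2016": 0.24,
    "SSD 2017 (low)": 0.11,
    "SSD 2017 (high)": 0.17,
}

for label, usd_per_gb in forecast_usd_per_gb.items():
    print(f"{label}: ~${usd_per_gb * CAPACITY_GB:.0f} for a 1 TB drive")

# SSD 2016: ~$240 for a 1 TB drive
# SSD 2017 (low): ~$110 for a 1 TB drive
# SSD 2017 (high): ~$170 for a 1 TB drive
```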

Hard disk sales will decrease, but the technology is not dead. Sales of hard disk drives have been decreasing for several years now (118 million units in the third quarter of 2015), but according to Seagate hard disk drives (HDDs) are set to stay relevant for at least 15 to 20 years. HDDs remain the most popular data storage technology as they are the cheapest in terms of per-gigabyte cost. While SSDs are generally getting more affordable, high-capacity solid-state drives are not going to become as inexpensive as hard drives any time soon.

Because all-flash storage systems with homogenous flash media are still too expensive to serve as a solution for every enterprise application workload, enterprises will increasingly turn to performance-optimized storage solutions that use a combination of multiple media types to deliver cost-effective performance. The speed advantage of Fibre Channel over Ethernet has evaporated. Enterprises are also starting to seek alternatives to snapshots that are simpler and easier to manage, and that allow data and application recovery to a second before the data error or logical corruption occurred.

Local storage and the cloud finally make peace in 2016, as decision-makers across the industry have now acknowledged the potential for enterprise storage and the cloud to work in tandem. Over 40 percent of data worldwide is expected to live on or move through the cloud by 2020, according to IDC.

Open standards for data center development are now a reality thanks to advances in cloud technology. Facebook’s Open Compute Project has served as the industry’s leader in this regard. This allows more consolidation for those that want it. Consolidation used to refer to companies moving all of their infrastructure to the same facility. However, some experts have begun to question this strategy, as the rapid increase in data quantities and apps in the data center has made centralized facilities more difficult to operate than ever before. Server virtualization, more powerful servers and an increasing number of enterprise applications will continue to drive higher IO requirements in the datacenter.

Cloud consolidation starts in earnest in 2016: the number of options for general infrastructure-as-a-service (IaaS) cloud services and cloud management software will be much smaller at the end of 2016 than at the beginning. The major public cloud providers will gain strength, with Amazon, IBM SoftLayer, and Microsoft capturing a greater share of the business cloud services market. Lock-in is a real concern for cloud users, because PaaS players have the age-old imperative to find ways to tie customers to their platforms and aren’t afraid to use them, so advanced users want to establish reliable portability across PaaS products in a multi-vendor, multi-cloud environment.

Year 2016 will be harder for legacy IT providers than 2015. In its report, IDC states that “By 2020, More than 30 percent of the IT Vendors Will Not Exist as We Know Them Today.” Many enterprises are turning away from traditional vendors and toward cloud providers. They’re increasingly leveraging open source. In short, they’re becoming software companies. The best companies will build cultures of performance and doing the right thing – and will make data and the processes around it self-service for all their employees. Design thinking will guide companies that want to change the lives of their customers and employees. 2016 will see a lot more work in trying to manage services that simply aren’t designed to work together or even be managed – for example, getting Whatever-as-a-Service cloud systems to play nicely with existing legacy systems. So competent developers are the scarce commodity. Some companies are starting to see cloud as a form of outsourcing that is fast burning up in-house IT ops jobs, with varying success.

There are still too many old-fashioned companies that just can’t understand what digitalization will mean to their business. In 2016, some companies’ boards still think the web is just for brochures and porn and don’t believe their business models can be disrupted. It gets worse for many traditional companies. For example, Amazon is a retailer both on the web and increasingly for things like food deliveries. Amazon and others are playing to win. Digital disruption has happened and will continue.

More of Windows 10 is coming in 2016. If 2015 was a year of revolution, 2016 promises to be a year of consolidation for Microsoft’s operating system. I expect Windows 10 adoption in companies to start in 2016. Windows 10 is likely to be a success for the enterprise, but I expect that word from heavyweights like Gartner, Forrester and Spiceworks, suggesting that half of enterprise users plan to switch to Windows 10 in 2016, is more than a bit optimistic. Windows 10 will also be used in China, as Microsoft played the game with it better than with Windows 8, which was banned in China.

Windows is now delivered “as a service”, meaning incremental updates with new features as well as security patches, but Microsoft still seems to work internally to a schedule of milestone releases. Next up is Redstone, rumoured to arrive around the anniversary of Windows 10, midway through 2016. Windows servers will also get an update in 2016, which should include the release of Windows Server 2016. Server 2016 includes updates to the Hyper-V virtualisation platform, support for Docker-style containers, and a new cut-down edition called Nano Server.

Windows 10 will get some of the already-promised features that were not delivered in 2015. Windows 10 was promised for PCs and mobile devices in 2015 to deliver a unified user experience. Continuum is a new, adaptive user experience offered in Windows 10 that optimizes the look and behavior of apps and the Windows shell for the physical form factor and the customer’s usage preferences. The promise was the same unified interface for PCs, tablets and smartphones – but in 2015 it was only delivered for PCs and some tablets. Mobile Windows 10 for smartphones is expected to finally arrive in 2016 – the release of Microsoft’s new Windows 10 operating system may be the last roll of the dice for its struggling mobile platform. Microsoft’s Plan A is to get as many apps and as much activity as it can on Windows on all form factors with the Universal Windows Platform (UWP), which enables the same Windows 10 code to run on phone and desktop. Despite a steady inflow of new well-known apps, it remains unclear whether the Universal Windows Platform can maintain momentum with developers. Can Microsoft keep the developer momentum going? I am not sure. In addition there are also plans for tools for porting iOS apps and an Android runtime, so expect delivery of some or all of the Windows Bridges (iOS, web app, desktop app, Android) announced at the April 2015 Build conference, in the hope of getting more apps into the unified Windows 10 app store. Windows 10 does hold out some promise for Windows Phone, but it’s not going to make an enormous difference. Losing the battle for the web and mobile computing is a brutal loss for Microsoft. When you consider the size of those two markets combined, the desktop market seems like a stagnant backwater.

Older Windows versions will not die in 2016 as fast as Microsoft and security people would like. Expect Windows 7 diehards to continue holding out in 2016 and beyond. And there are still many companies that run their critical systems on Windows XP, as “there are some people who don’t have an option to change.” Many times the OS is running in automation and process control systems that run business- and mission-critical systems, both in private-sector and government enterprises. For example, the US Navy is using the obsolete Microsoft Windows XP operating system to run critical tasks. It all comes down to money and resources, but if someone is obliged to keep something running on an obsolete system, it’s completely the wrong approach to information security.

Virtual reality has grown immensely over the past few years, but 2016 looks like the most important year yet: it will be the first time that consumers can get their hands on a number of powerful headsets for viewing alternate realities in immersive 3-D. Virtual reality will move toward the mainstream when Sony, Samsung and Oculus bring consumer products to the market in 2016. The whole virtual reality hype could be rebooted as early builds of the final Oculus Rift hardware start shipping to devs. Maybe HTC’s and Valve’s Vive VR headset will suffer in the coming months. Expect a banner year for virtual reality.

GPU and FPGA acceleration will be widely used in high-performance computing. Both Intel and AMD have products with a CPU and GPU on the same chip, and there is software support for using the GPU (learn CUDA and/or OpenCL). Many mobile processors also have the CPU and GPU on the same chip. FPGAs are circuits that can be baked into a specific application, but can also be reprogrammed later. There was a lot of interest in 2015 in using FPGAs to accelerate computations as the next step after GPUs, and I expect that interest to grow even more in 2016. FPGAs are not quite as efficient as a dedicated ASIC, but they are about as close as you can get without translating the actual source code directly into a circuit. Intel bought Altera (a big FPGA company) in 2015 and plans to begin selling products with a Xeon chip and an Altera FPGA in a single package, possibly available in early 2016.
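To give a concrete taste of what GPU offloading looks like in code, here is a minimal vector-addition sketch using OpenCL through the PyOpenCL bindings (a hedged example assuming PyOpenCL and a working OpenCL driver are installed; the kernel and variable names are purely illustrative):

```
import numpy as np
import pyopencl as cl

a = np.random.rand(1000000).astype(np.float32)
b = np.random.rand(1000000).astype(np.float32)

ctx = cl.create_some_context()          # pick an available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

# Copy input arrays to device memory and allocate an output buffer.
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);   // one work item per array element
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)  # copy result back to host memory
assert np.allclose(result, a + b)
```

The same pattern – copy data to the device, run a kernel across many parallel work items, copy results back – is what CUDA code looks like as well, just with NVIDIA-specific APIs.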

Artificial intelligence, machine learning and deep learning will be talked about a lot in 2016. Neural networks, which have been academic exercises (but little more) for decades, are increasingly becoming mainstream success stories: heavy (and growing) investment in the technology, which enables the identification of objects in still and video images, words in audio streams, and the like after an initial training phase, comes from the formidable likes of Amazon, Baidu, Facebook, Google, Microsoft, and others. So-called “deep learning” has been enabled by the combination of the evolution of traditional neural network techniques, the steadily increasing processing “muscle” of CPUs (aided by algorithm acceleration via FPGAs, GPUs, and, more recently, dedicated co-processors), and the steadily decreasing cost of system memory and storage. There were many interesting releases on this at the end of 2015: Facebook Inc. released portions of its Torch software in February, while Alphabet Inc.’s Google division open-sourced parts of its TensorFlow system in November. IBM is also turning up the heat under competition in artificial intelligence by making SystemML freely available to share and modify through the Apache Software Foundation. So I expect that 2016 will be the year these are tried in practice, and I expect deep learning to be hot at CES 2016. Several respected scientists issued a letter warning about the dangers of artificial intelligence (AI) in 2015, but I don’t worry about a rogue AI exterminating mankind. I worry about an inadequate AI being given control over things that it’s not ready for. How will machine learning affect your business? MIT has a good free intro to AI and ML.
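To make the neural network idea a bit more concrete, here is a tiny two-layer forward pass written in plain NumPy (a toy sketch with made-up layer sizes; it is not tied to Torch, TensorFlow or SystemML mentioned above):

```
import numpy as np

np.random.seed(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

# Toy sizes: 4 input features, 8 hidden units, 3 output classes.
W1 = np.random.randn(4, 8) * 0.1
b1 = np.zeros(8)
W2 = np.random.randn(8, 3) * 0.1
b2 = np.zeros(3)

x = np.random.randn(5, 4)              # a mini-batch of 5 examples
hidden = relu(x @ W1 + b1)             # first layer plus non-linearity
probs = softmax(hidden @ W2 + b2)      # class probabilities per example

print(probs.shape)        # (5, 3)
print(probs.sum(axis=1))  # each row sums to 1
```

Training (the “initial training phase” mentioned above) would adjust W1, b1, W2 and b2 by backpropagation; the deep learning frameworks automate exactly that, at much larger scale and on GPUs.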

Computers, which excel at big data analysis, can help doctors deliver more personalized care. Can machines outperform doctors? Not yet. But in some areas of medicine, they can make the care doctors deliver better. Humans repeatedly fail where computers — or humans behaving a little bit more like computers — can help. Computers excel at searching and combining vastly more data than a human so algorithms can be put to good use in certain areas of medicine. There are also things that can slow down development in 2016: To many patients, the very idea of receiving a medical diagnosis or treatment from a machine is probably off-putting.

The Internet of Things (IoT) was talked about a lot in 2015, and it will be a hot topic for IT departments in 2016 as well. Many companies will notice that security issues are important in it. The newest wearable technology, smart watches and other smart devices respond to voice commands and interpret the data we produce – they learn from their users and generate appropriate responses in real time. Interest in the Internet of Things (IoT) will also bring interest in real-time business systems: not only real-time analytics, but real-time everything. This will start in earnest in 2016, but the trend will take years to play out.

Connectivity and networking will be hot. And it is not just about IoT. CES will focus on how connectivity is proliferating in everything from cars to homes, realigning diverse markets. The interest will affect job markets: network jobs are hot, with salaries expected to rise in 2016 as wireless network engineers, network admins, and network security pros can expect above-average pay gains.

Linux will stay big in the network server market in 2016. The web server marketplace is one arena where Linux has had the greatest impact. Today, the majority of web servers are Linux boxes. This includes most of the world’s busiest sites. Linux also runs many parts of the Internet infrastructure that moves the bits from server to user. Linux will also continue to rule the smartphone market as the core of Android. New IoT solutions will most likely be built mainly using Linux in many parts of the systems.

Microsoft and Linux are not such enemies as they were a few years ago. Common sense says that Microsoft and the FOSS movement should be perpetual enemies. It looks like Microsoft is waking up to the fact that Linux is here to stay. Microsoft cannot feasibly wipe it out, so it has to embrace it. Microsoft is already partnering with Linux companies to bring popular distros to its Azure platform. In fact, Microsoft has even gone so far as to create its own Linux distro for its Azure data center.

Web browsers are becoming more and more 64-bit, as Firefox started the 64-bit era on Windows and Google is killing Chrome for 32-bit Linux. At the same time web browsers are losing old legacy features like NPAPI and Silverlight. Who will miss them? The venerable NPAPI plugin standard, which dates back to the days of Netscape, is now showing its age and causing more problems than it solves, and will see native support removed from Firefox by the end of 2016. It was already removed from Google Chrome browsers with very little impact. The biggest issue was the lack of support for Microsoft’s Silverlight, which brought down several top streaming media sites – but they are actively switching to HTML5 in 2016. I don’t miss Silverlight. Flash will continue to be available owing to its popularity for web video.

SHA-1 will be at least partially retired in 2016. Due to recent research showing that SHA-1 is weaker than previously believed, Mozilla, Microsoft and now Google are all considering bringing the deadline forward by six months to July 1, 2016.
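For illustration, here is how the two hash functions compare in practice using Python’s standard hashlib module (the input bytes are arbitrary):

```
import hashlib

data = b"example certificate data"

sha1 = hashlib.sha1(data).hexdigest()
sha256 = hashlib.sha256(data).hexdigest()

print("SHA-1   (160-bit):", sha1)
print("SHA-256 (256-bit):", sha256)

# SHA-1 produces a 40-hex-character (160-bit) digest; SHA-256 produces a
# 64-hex-character (256-bit) digest. The push to retire SHA-1 in certificate
# signatures is about its weakening collision resistance, not digest length alone.
```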

Adobe’s Flash has been under attack from many quarters over security as well as for slowing down web pages. If you wish that Flash would finally die in 2016, you might be disappointed. Adobe seems to be trying to kill the name with a rebranding trick: Adobe Flash Professional CC is now Adobe Animate CC. In practice it probably does not mean much, but Adobe seems to acknowledge the inevitability of an HTML5 world. Adobe wants to remain a leader in interactive tools, and the pivot to HTML5 requires new messaging.

The trend of trying to use the same language and tools on both the user end and the server back-end continues. Microsoft is pushing its .NET and Azure cloud platform tools. Amazon, Google and IBM have their own sets of tools. Java is in decline. JavaScript is going strong on both the web browser and the server end with node.js, React and many other JavaScript libraries. Apple is also trying to bend its Swift programming language, now used mainly to make iOS applications, to run on servers with the Perfect project.

Java will still stick around, but Java’s decline as a language will accelerate as new stuff isn’t being written in Java, even if it runs on the JVM. We will not see Java 9 in 2016, as Oracle has delayed the release of Java 9 by six months. The Register reports that Java 9 is delayed until Thursday March 23rd, 2017, just after tea-time.

Containers will rule the world as Docker continues to develop, gain security features, and add various forms of governance. Until now Docker has been tire-kicking, used in production by the early-adopter crowd only, but that can change as vendors start to claim that they can do proper management of big data and container farms.

NoSQL databases will take hold as they are marketed as “highly scalable” or “cloud-ready.” Expect 2016 to be the year when a lot of big brick-and-mortar companies publicly adopt NoSQL for critical operations. At its most basic, NoSQL can be seen as a key:value store, and this idea has also expanded to storage systems: we got key:value store disk drives with an Ethernet NIC on board, and basic GET and PUT object storage facilities came into being.
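As a minimal sketch of the key:value model that underlies many NoSQL systems and those Ethernet-connected drives, here is a toy in-memory store with GET/PUT semantics (purely illustrative, not any particular product’s API):

```
class KeyValueStore:
    """A toy in-memory key:value store with PUT/GET/DELETE semantics."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # Blind write: last writer wins, no schema, no joins.
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)


store = KeyValueStore()
store.put("user:42", {"name": "Alice", "plan": "pro"})
print(store.get("user:42"))   # {'name': 'Alice', 'plan': 'pro'}
print(store.get("user:99"))   # None – a missing key is not an error
```

Real NoSQL products add distribution, replication and persistence on top, but the programming model stays this simple, which is a big part of their appeal.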

In the database world, Big Data will still be big, but it needs to be analyzed in real time. A typical big data project usually involves some semi-structured data, a bit of unstructured data (such as email), and a whole lot of structured data (stuff stored in an RDBMS). While the cost of Hadoop on a per-node basis is pretty inconsequential, the cost of understanding all of the schemas, getting them into Hadoop, and structuring them well enough to perform the analytics is still considerable. Remember that you’re not “moving” to Hadoop, you’re adding a downstream repository, so you need to worry about systems integration and latency issues. Apache Spark will also attract interest, as Spark’s multi-stage in-memory primitives provide more performance for certain applications. Big data brings with it responsibility – digital consumer confidence must be earned.
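For a flavour of Spark’s in-memory primitives, here is a classic word-count sketch using the PySpark RDD API (a hedged example assuming a local Spark installation; the input path and app name are placeholders):

```
from pyspark import SparkConf, SparkContext

# Run against a local master; on a real cluster the master URL would differ.
conf = SparkConf().setAppName("wordcount-sketch").setMaster("local[*]")
sc = SparkContext(conf=conf)

lines = sc.textFile("logs/sample.txt")          # placeholder input path

counts = (lines
          .flatMap(lambda line: line.split())   # split lines into words
          .map(lambda word: (word, 1))          # emit (word, 1) pairs
          .reduceByKey(lambda a, b: a + b))     # sum the counts per word

for word, count in counts.take(10):
    print(word, count)

sc.stop()
```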

IT security continues to be a huge issue in 2016. You might be able to achieve adequate security against hackers and internal threats, but every attempt to make systems idiot-proof just means the idiots get upgraded. Firms are ever more connected to each other and the general outside world. So in 2016 we will see even more service firms accidentally leaking critical information and a lot more firms having their reputations scorched by incompetence-fuelled security screw-ups. Good security people are needed more and more – a joke doing the rounds among IT execs doing interviews is “if you’re a decent security bod, why do you need to look for a job?”

There will still be unexpected single points of failure in big distributed networked systems. The cloud behind the silver lining is that Amazon or any other cloud vendor can be as fault-tolerant, distributed and well supported as you like, but if a service like Akamai or Cloudflare were to die, you still stop. That’s not a single point of failure in the classical sense, but it’s really hard to manage unless you go for full cloud agnosticism – which is costly. This is hard to justify when their failure rate is so low, so the irony is that the reliability of the content delivery networks means fewer businesses work out what to do if they fail. Oh, and no one seems to test their mission-critical data centre properly, because it’s mission critical. So they just over-specify where they can and cross their fingers (= pay twice and get half the coverage for other vulnerabilities).
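One pragmatic, if partial, hedge against a CDN becoming a de facto single point of failure is a simple client-side fallback across providers, sketched here with Python’s standard urllib (the mirror hostnames are purely hypothetical placeholders):

```
import urllib.request
import urllib.error

# Hypothetical mirrors of the same asset served through different CDNs.
MIRRORS = [
    "https://cdn-primary.example.com/static/app.js",
    "https://cdn-backup.example.net/static/app.js",
]

def fetch_with_fallback(urls, timeout=3):
    """Try each mirror in order; return the first successful response body."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc          # remember the failure, try the next mirror
    raise RuntimeError(f"all mirrors failed, last error: {last_error}")

# data = fetch_with_fallback(MIRRORS)
```

This does not make the dependency go away – both mirrors could still share a hidden upstream – but it turns a hard outage into a retry, which is often the realistic middle ground between doing nothing and full cloud agnosticism.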

For IT start-ups it seems that Silicon Valley’s cash party is coming to an end. Silicon Valley is cooling, not crashing. Valuations are falling. The era of cheap money could be over and valuation expectations are re-calibrating down. The cheap capital party is over. It could mean trouble for weaker startups.

 

933 Comments

  1. Tomi Engdahl says:

    Larger and more detailed displays and heavier graphics constantly demand more efficient hardware, which needs a faster memory bus.

    JEDEC now has a new technology that doubles graphics card performance: the new standard is called GDDR5X. JEDEC wanted to make as few changes to the GDDR5 standard as possible – the biggest change is raising the data rate to 10–14 gigabits per second, roughly twice the speed of current GDDR5.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3918:uusi-vayla-tuplaa-grafiikkakorttien-tehon&catid=13&Itemid=101

  2. Tomi Engdahl says:

    Intel’s security extensions are SGX: secure until you look at the detail
    MIT research suggests Intel’s taking risks with its locked-down container tech
    http://www.theregister.co.uk/2016/02/01/sgx_secure_until_you_look_at_the_detail/

    A pair of cryptography researchers have published a graduate thesis that accuses Intel of breaking its “Software Guard Extensions” (SGX) security model by bad implementation decisions.

    Victor Costan and Srinivas Devadas of MIT write (PDF) the SGX architecture operates by sending symmetric keys over the Internet.

    Launched in 2013, SGX added a set of CPU commands that let programmers create locked containers, with hardware enforcing access to both the code and data inside the container.

    The long and very detailed analysis of SGX was published at the respected International Association for Cryptologic Research, and gets out the chainsaw when it comes to describing the system’s “attestation model”.

    What’s at issue here is that there seems to be a serious gap between how the model works, and how Chipzilla explained how it works to developers.

    Green’s concerns are directed to a detailed and technical analysis in Section 5.8 of the paper, perhaps best crystallised in this (from Section 6.6.1):

    “Once initialised, an enclave is expected to participate in a software attestation process, where it authenticates itself to a remote server. Upon successful authentication, the remote server is expected to disclose some secrets to an enclave over a secure communication channel”.

    The problem is that, as the image in Green’s Tweet (from the paper, reproduced in full left) shows, Intel intends the symmetrical provisioning key to reside both in the SGX-enabled chip and in Intel servers.

    That puts Intel in a position of huge power, they write: “Intel has a near-monopoly on desktop and server-class processors, and being able to decide which software vendors are allowed to use SGX can effectively put Intel in a position to decide winners and losers in many software markets.”

    Intel SGX Explained
    http://eprint.iacr.org/2016/086.pdf

  3. Tomi Engdahl says:

    Controversy at the Linux Foundation
    http://www.linuxjournal.com/content/controversy-linux-foundation

    Linux has seen more than its fair share of controversy through the years. And, that’s not so surprising. For one thing, the operating system flies in the teeth of deeply entrenched multinational companies. The fact that it stands for users instead of vested interests has drawn more than a little ire as well.

    And, let’s be honest. Sometimes the controversy comes from within our own camp. Although the Open Source community is generally very welcoming and accepting, there always will be conflicts when a large group of people works together on a big project. It happens in offices. It happens in universities. And it has certainly happened on the Linux Kernel Mailing list.

    In general, it’s a good idea not to get drawn into flame wars and conflicts. Bruised egos aren’t terribly important in the grand scheme–not while there’s a worthwhile project underway. But, that’s not to say that we should be complacent and ignore genuine controversies when they arise.

    Let’s take the case of the recent changes to the Linux Foundation bylaws. The Linux Foundation is a non-profit organization that exists to protect Linux and fund its growth. It pays Linus Torvalds’ wages so he can work on the kernel full time. It fights legal battles to keep the code free, and it provides training and certification.

    Until recently, the foundation was the very model of a peaceful community.

    In the past, any member of the foundation could stand for election to the board. This included big companies paying hundreds of thousands of dollars per year and individuals paying only $99.

    The new bylaws mean that only high-end supporters can become board directors.

    Of course, organizations like the Linux Foundation are important. It’s also important for the organization to maintain good relations with the community–after all, without the community, there is no Linux.

  4. Tomi Engdahl says:

    Windows 10 dethrones XP to become number three operating system
    But depending on your view of Windows 8.x it’s a big fat number two
    http://www.theinquirer.net/inquirer/news/2444345/windows-10-dethrones-xp-to-become-number-three-operating-system

    DAMIEN-ESQUE OMEN CHILD Windows 10 has finally assailed diehard Windows XP in the desktop operating system wars.

    It’s worth reiterating this point, as we occasionally do, that percentages are relative and an operating system rebounding slightly may be because another older one has dropped.

    We’re not privy to the exact number of computers, aside from Microsoft’s repeated assertion that there are more than 200 million ‘machines’ possessed by Windows 10, and we can really talk only about shares in the marketplace. Additionally, remember this is computers running the desktop version of the given OS that have logged onto the interwebs. There. Public Service Announcement over.

  5. Tomi Engdahl says:

    What is DevOps?
    http://newrelic.com/devops/what-is-devops

    First, let’s just say there is no definitive answer. Yet. There are lots of opinions about what is covered under DevOps and what’s not. Is it a culture? Is it a job title? Is it a way of organizing? Or just a way of thinking? We think it’s a still-evolving movement so let’s not get stuck on limiting it too much right now. Instead, we can talk about some of the common themes, tools and ideas.

    Born of the need to improve IT service delivery agility, the DevOps movement emphasizes communication, collaboration and integration between software developers and IT operations. Rather than seeing these two groups as silos who pass things along but don’t really work together, DevOps recognizes the interdependence of software development and IT operations and helps an organization produce software and IT services more rapidly, with frequent iterations.

    A perfect storm of converging adjacent methodology including Agile, Operations Management (Systems Thinking & Dynamics), Theory of Constraints, LEAN and IT Service management came together in 2009 through a smattering of conferences, talks and Twitter (#devops) debates worldwide that eventually became the philosophy behind DevOps.

    DevOps found initial traction within many large public cloud service providers

    Companies of all sizes are beginning to implement DevOps practices, with a 2012 survey by Puppet Labs and IT Revolution Press showing that 63% of over 4,000 respondents are implementing DevOps practices. And many shops, particularly lean startups, have been “doing DevOps” without calling it DevOps for quite a while.

    Why do DevOps?

    The benefits of a DevOps approach are many, including:

    Improved deploy frequency which can lead to faster time to market
    Lower failure rate
    Shortened lead time
    Faster mean time to recovery

  6. Tomi Engdahl says:

    Emil Protalinski / VentureBeat:
    IDC: Tablet shipments decline 10.1% in 2015, leaders Apple and Samsung both lose market share
    http://venturebeat.com/2016/02/01/idc-tablet-shipments-decline-10-1-in-2015-leaders-apple-and-samsung-both-lose-market-share/

    The tablet market is still in decline.

    Q4 2015 is the fifth straight quarter in a row to see a decrease year over year: 65.9 million units shipped, down 13.7 percent from the 76.4 million units that shipped the same quarter last year, according to market research firm IDC. For the whole year of 2015, shipments were 206.8 million, down 10.1 percent from the 230.1 million shipped in 2014.

    In Q4 2015, the top five vendors accounted for 54.2 percent of the market, up from 51.0 percent a year ago. But only Amazon and Huawei managed to grow their pie slices year over year

  7. Tomi Engdahl says:

    Why Haven’t Video Game Consoles Died Yet?
    http://www.forbes.com/sites/insertcoin/2016/01/31/why-havent-video-game-consoles-died-yet/#364355e84962

    This week, EA made headlines because they released an estimate which said that 55 million new-gen consoles had been sold so far. That’s a headline because when you take Sony ’s freely offered-up figures of 36 million PS4s sold, that leaves 19 million Xbox Ones, meaning it’s being outsold almost 2:1. The Wii U isn’t included because EA doesn’t make Wii U games, and the math would be nonsense if it was.

    But rather than the whole “PS4 is killing Xbox One” narrative, I think it’s more significant that 55 million of these consoles have been sold period. Both the PS4 and Xbox One are tracking above last gen. I believe The Xbox 360 was at about 17 million at this point, while the PS3 was at 21 million.

    Why is this kind of crazy? Because all we’ve been hearing about the past five years or so was how this console generation was destined to be the last, and game consoles in general were a trend that was bound to be phased out soon enough.

    That hasn’t happened, of course. Sales of the Xbox One and PS4 are booming, and Nintendo already has a new console that’s allegedly coming out this year. Microsoft and Sony haven’t been shy about hinting at future consoles either, even after the multi-year lifespan of their current systems is through.

    PC Gaming: Like consoles, the PC gaming market is as strong as it’s ever been, and while there is certainly crossover between PC gamers and console players, one market is not poised to kill the other. PC gaming is usually divided into two main markets, casual and hardcore. “Casual” would be those who play a game or two on the computer, ones that don’t require hyper-expensive gaming machines, titles like League of Legends or Hearthstone. The “hardcore” market will play most of their games on PC, and have high-end machines that cost far, far more than consoles, and are able to play many games at maxed-out settings.

    But neither of these types of players appear to be eating into the console base.

    The Mobile Games Market: It is very clear the mobile games market has exploded in the last few years, but rather than replace consoles, it’s merely become its own sort of behemoth, and expanded the network of who traditionally plays games to include toddlers, housewives, the elderly, and everyone in between.

    mobile does not compete with actual video game consoles

    Game Streaming: This is part of the whole “why do we even need boxes anymore?” philosophy of why game consoles are supposedly dying. With new streaming capabilities, why do we need big clunky boxes and discs? Short answer: Because game streaming still is not up to snuff to be a reliable way to play new releases, unlike what we’ve seen with TV and movies.

    This movement was driven by two main forces, OnLive, the game streaming platform which was quickly shut down and sold for parts after it debuted, and PlayStation Now, which is a cool way to play older PlayStation games

    Set-Top Boxes: This is a similar category to the above, but this is when people have predicted that boxes like Apple TV and Amazon Fire would be able to kill consoles by offering their own kind of gaming experiences with many “major developers” on board

    Though it’s great that these boxes are supporting gaming, most of it is converted mobile titles which, as we’ve already established, are no great threat to consoles.

    Steam Machines: This is a different category of “alternative” boxes, but I read so many damn “Will Steam Machines Kill Consoles?” headlines the past few years, it probably should be on the list. I don’t think Steam Machines are a fundamentally bad idea

  8. Tomi Engdahl says:

    Today’s Hero Made an AI That Annoys Telemarketers For As Long As Possible
    http://gizmodo.com/todays-hero-made-an-ai-that-annoys-telemarketers-for-as-1756344562

    Hanging up on annoying telemarketers is the easiest way to deal with them, but that just sends their autodialers onto the next unfortunate victim. Roger Anderson decided that telemarketers deserved a crueler fate, so he programmed an artificially intelligent bot that keeps them on the line for as long as possible.

    Anderson, who works in the telecom industry and has a better understanding of how telemarketing call-in techniques work than most, first created a call-answering robot that tricked autodialers into thinking there was an actual person answering the phone. So instead of the machine automatically hanging up after ten seconds, a simple pre-recorded “hello?, hello?” message would have the call sent to a telemarketer who would waste a few precious moments until they realized there really wasn’t anyone there.

    But Anderson then wondered just how long his robot could keep a telemarketer on the line for. It turns out, for surprisingly long.

    After the initial “hello?, hello?,” Anderson’s sophisticated algorithm makes telemarketers think there’s an actual person on the line with random affirmations like “yes, uh huh, right.” It can even detect when a telemarketer is getting suspicious, triggering a completely inane response that usually convinces them otherwise. It’s absolutely brilliant when it works flawlessly.

  9. Tomi Engdahl says:

    NOTHING trumps extra pizza on IT projects. Not even more people
    Why Jeff Bezos laughs at your ‘big’ team
    http://www.theregister.co.uk/2016/02/03/mythical_man_month/

    Dilbert might mock the mythical man month, Fred Brooks’ argument that “adding manpower to a late software project makes it later,” but most enterprises still think they can hit their deadlines by hiring more people, feeding ever larger teams, rather than by embracing DevOps-friendly practices that favor small teams and high communication between developers and operations.

    How much is “most”? Well, according to 451 Research survey data, 60.4 per cent of enterprises address rising release demands by “adding staff to our team.”

    Why won’t we learn?

    It turns out to be really hard to change organisations and many, like insurer Hiscox, are set up in uber-large teams that serve as functional silos. Hence, companies end up with development, operations, support, and more, each with its own agenda.

    These atomised teams have all the functions necessary to making decisions and putting out product. According to Fletcher, moving to this DevOps approach resulted in a reduction in its cost per release on one application by 97 per cent, driven by a reduction in time per release by 89 per cent and a reduction in staff needed to release by 75 per cent.

    DevOps is not for me

    According to Nigel Kersten, chief information officer for Puppet Labs, a number of factors hold enterprises back from the DevOps dream.

    In an interview, Kersten acknowledged “a host of myths surrounding DevOps applicability in enterprise environments that block adoption in many organizations,” but as he pointed out, these myths tend to be held most firmly by those least qualified to comment.

    According to Kersten: “These are often views held by technical managers and executives rather than the grassroots practitioners,” which “makes sense for what has been largely a grassroots-driven collection of practices.”

    Speaking specifically of those that use regulatory compliance requirements as a reason to eschew DevOps, Kersten called out the following:

    DevOps is more than just development and operations, and should be inclusive of all entities required to deliver business value, including the audit and compliance teams
    IT automation removes much of the human intervention and manual manipulation that slows and pains the audit and compliance processes
    Automated, repeatable processes are easier to audit, easier to understand, and easier to secure, which enables the shift from merely passing the test, to securing the business

    In short, the reasons many enterprises cite for avoiding DevOps are often the very reasons they should embrace it.

    According to Gartner survey data, 25 per cent of Global 2000 enterprises will embrace DevOps this year. That’s a clear indication that DevOps has moved beyond a niche cultural phenomenon into a real force for change within the enterprise.

  10. Tomi Engdahl says:

    The tablet withers under the leadership of the iPad

    Consumers now seem to feel that the tablet they already own is sufficient for normal use. Basic tablet sales were down 21.1 per cent from last year’s fourth quarter. Instead, sales of new hybrid laptops – in practice, tablets with detachable keyboards – grew markedly.

    According to IDC Research, the transition is progressing at a rapid pace. Hybrid tablet sales as much as doubled in October–December.

    All in all, 206.8 million tablets were shipped. The figure is 10.1 percent lower than in the previous year.

    Competition in tablets is extremely fierce.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3936:tabletti-kuihtuu-ipadin-johdolla&catid=13&Itemid=101

  11. Tomi Engdahl says:

    Android is by far the world’s most popular operating system, with a number of users that, say, Windows can only dream of. It is now possible to test Android on computers equipped with x86 processors. The product is Remix OS from Beijing-based Jide Technology, the handiwork of three former Google developers.

    In practice, it is x86 Android plus window scaling and a multi-window mode, a file system and a taskbar for applications. It is still an “alpha level” operating system, but on this basis we can say that after a few iterations there might be something really good.

    Remix OS: the live version ran successfully from a USB stick. Installation was easy, because Jide offers, along with the ISO, a program to write it to a stick – and very easy-to-understand instructions.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3926:nyt-voit-ajaa-androidia-lapparilla-2&catid=13&Itemid=101

    More: http://www.jide.com/en

  12. Tomi Engdahl says:

    The Cobol programming language was developed in the late 1950s. It is not yet history: it is still used in applications coded for mainframe environments, especially in the banking and insurance sectors.

    “Cobol has been a very useful language in the administrative processing of data, and it is still in use in the financial sector and in public administration. Technical or mathematical data processing typically uses other programming languages. And Cobol still has life in it, even though the most commonly used programming languages of today are much more advanced,” confirms Samlink’s President and CEO Pentti Unkuri.

    Samlink supplies IT systems for the financial services industry, and 50 of its 450 employees will continue to work hard with Cobol. For example, the company’s basic banking system is coded mainly in it.

    Samlink is considering renewing its core banking system within 5–7 years. A preliminary study comparing the various options has been made for the reform. For example, replacing Cobol with Java or Microsoft technologies is possible.

    “We do not want a new system just because we want to move from one programming language to another. The point is that we want a core banking system that meets the requirements of future banking and allows more efficient service development. Moreover, the current system is based on the IBM mainframe environment, which has a higher cost level than open environments.”

    Source: http://www.tivi.fi/Kaikki_uutiset/ikivanha-ohjelmointikieli-ei-jaa-historiaan-osaamisen-tarve-ei-katoa-6250522

  13. Tomi Engdahl says:

    By 2019 world will spend $2.8 TREELLION on the rubbish we write about
    About one dollar in five will go on smartmobes, but servers and storage will also grow
    http://www.theregister.co.uk/2016/02/05/by_2019_world_will_spend_28_treellion_on_the_rubbish_we_write_about/

    Information technology analyst house IDC says that world spending on information technology will hit US$2.8 trillion in 2019, up from this year’s $2.46 trillion.

    Healthcare outfits will do the heaviest lifting, lifting their spending by a compound annual growth rate of 5.5 per cent. Financial services types, the media sector and resources industries will all grow at 4.6 per cent, IDC says.

    Software and services will grow even faster at 6.7 per cent. Hardware won’t grow as fast, but will account for 40 per cent of spend. Half of the hardware category’s haul will consist of smartphones, but there’s also some joy for the enterprise with the firm putting its name to predictions of 2.6 per cent growth for servers and 3.2 per cent for storage.

  14. Tomi Engdahl says:

    Making the Most of GPUs Without Draining Batteries
    http://www.eetimes.com/document.asp?doc_id=1328869&

    Launched in January and awarded a grant of 2.97 million Euros from the European Union’s Horizon 2020 research and innovation program, the “Low-power GPU2 (LPGPU2)” research project is a European initiative uniting researchers and graphics specialists from Samsung Electronics UK, Codeplay, Think Silicon and TU Berlin to research and develop a novel tool chain for analyzing, visualizing, and improving the power efficiency of applications on mobile GPUs.

    Running for the next two and a half years, the research project aims to define new industry standards for resource and performance monitoring to be widely adopted by embedded hardware GPU vendors. The consortium will define a methodology for accurate power estimations for embedded GPU and will try to enhance existing Dynamic Voltage and Frequency Scaling (DVFS) mechanisms for optimum power management with sustained performance.

    Ideally, the result will be a unique power and performance visualization tool which informs application and GPU device driver developers of potential power and performance improvements.

  15. Tomi Engdahl says:

    Universal Debloater
    A quick and dirty batch script to remove common bloatware preinstalled by OEMs
    https://hackaday.io/project/4168-universal-debloater

    Description
    I hate bloatware. It annoys the hell out of me that the manufacturer has it all preinstalled on an image, and flashes it out to their machines in (probably) minutes.
    Then, when it comes to us to remove it, it can take hours.

    Universal Debloater is a batch script designed to remove common bloatware automagically.

    The script now includes lines to remove common HP bloatware

    Yep, don’t forget to remove mcafee. I’ve seen it being installed on new laptops lately.

  16. Tomi Engdahl says:

    Go and whistle, IDC. The storage world’s going to hell in a handbasket
    A storage bear growls
    http://www.theregister.co.uk/2016/02/05/idc_storage_predictions_are_rubbish/

    Comment Data volumes are growing like crazy and yet there’s a case to be made that 2016 is going to be a tough old year for storage, and could be an annus horribilis* and not an IDC prediction-fuelled glory year.

    Really?

    Yes, really. Try this argument on for size.

    South America is in a dreadful economic state. Brazil is over-spent and in the throes of a massive government-oil industry corruption scandal.

    So Latin America’s IT kit buying power is generally screwed.

    The Middle East is riven by the Syrian civil war, Sunni and Shi’ite unrest, ISIS terrorism, millions of displaced refugees, and Saudi Arabia manufacturing an oil glut to screw fracking and Iranian oil revenues, and, coincidentally, the rest of the world’s oil revenues.

    Africa is, well, basically Africa, with endemic corruption and mis-rule.

    Europe is struggling with refugees, an on-going Euro problem as southern states get squeezed by northern ones fed up with bailing them out, and oil-rich economies, sort-of, like the UK and Norway, find their revenues squeezed by the Saudis playing geo-politics.

    So the EMEA’s buying power is getting screwed.

    China’s boom is winding down as the country’s investing and house-buying middle classes find they are getting screwed.

    What does that mean for storage?

    First, revenues are generally going to fall and firms will contract to save costs.

    The El Reg grizzly bear growls that IT-buying customers will spend less and will spend it on IT kit or services that substantially reduce its costs. So any vendor punting kit or services that do this, that enable costly, on-premises kit to be thrown out, will have a better chance of winning business.

    That means that incumbents with legacy storage arrays coming up to lease-end, or where their running costs are unsustainable for customers, face getting booted out by cheaper all-flash arrays, hybrid arrays, converged systems, hyper-converged infrastructure, and the public cloud.

  17. Tomi Engdahl says:

    IBM Is Finally Embracing the Cloud—It Has No Other Choice
    http://www.wired.com/2016/02/ibm-learns-to-stop-worrying-and-love-the-cloud-for-real/

    Startup founder Alex Polvi has a name for the biggest idea in the world of information technology. And, yes, it doubles as a hashtag, a six-character encapsulation of this sweeping movement: #GIFEE.

    The acronym has nothing to do with time-wasting animations in your Slack feed. It stands for “Google Infrastructure For Everyone Else!” (exclamation point optional). Nowadays, in the world of IT, the big idea is to give everyone else their own incarnation of the state-of-the-art infrastructure Google built to run its Internet empire. And that’s good news for everyone else. Or, rather, almost everyone else.

    This big idea presents a conundrum for venerable tech giants like HP and Microsoft and IBM. For so long, these giants sold a very different type of IT infrastructure, and it kept their profit margins high. The #GIFEE movement undercuts the old way of doing things. But in recent years, Microsoft has regained some of its mojo by embracing the #GIFEE ideal—and embracing it wholeheartedly (though I’m sure they call it something else). And now, it looks like IBM has made the same leap of faith. Google—and, just as importantly, Amazon—left the company no choice.

  18. Tomi Engdahl says:

    Paul Sawers / VentureBeat:
    Amazon launches free cross-platform 3D game engine Lumberyard, with Twitch integration

    Amazon takes on Unity et al with Lumberyard, a free cross-platform game engine
    http://venturebeat.com/2016/02/09/amazon-takes-on-unity-et-al-with-lumberyard-a-free-cross-platform-game-engine/

    Amazon has unveiled two new products aimed squarely at the professional game-developer fraternity: Lumberyard, a free 3D game engine; and GameLift, a service for quickly building backends for deploying session-based multiplayer games. Products of Amazon’s Web Services (AWS) division, Lumberyard and GameLift are aimed at developers building cloud-connected games that can work across multiple platforms.

    Available to download in beta today, Lumberyard is a particularly notable move from the Internet giant, as it sees Amazon go up against a number of long-standing incumbents in the game-engine realm, including Unity and Epic Games’ Unreal Engine, not to mention more recent entrants such as Autodesk’s Stingray.

    But, of course, Amazon has clout, mindshare, and the computing might of AWS on its side. It’s also worth remembering that Amazon has been aligning itself with the gaming world for a while

  19. Tomi Engdahl says:

    Amazon Lumberyard is a free AAA game engine deeply integrated
    with AWS and Twitch – with full source.
    https://aws.amazon.com/lumberyard/

    Amazon Lumberyard is a free, cross-platform, 3D game engine for you to create the highest-quality games, connect your games to the vast compute and storage of the AWS Cloud, and engage fans on Twitch.

    With a full-featured editor, native code performance, stunning visuals, and hundreds of other features, Amazon Lumberyard gives professional developers the tools and technology they need to build world-class games.

    With Amazon Lumberyard’s visual scripting tool, your designers and engineers with little to no backend experience can add cloud-connected features to a game in as little as minutes (such as a community news feed, daily gifts, or server-side combat resolution) through drag-and-drop visual scripting.

  20. Tomi Engdahl says:

    Fat lady sings for Opera Software as Chinese investors agree $1.2bn buyout
    Chrome clone browser company set to go under the hammer
    http://www.theinquirer.net/inquirer/news/2446535/fat-lady-sings-for-opera-software-as-chinese-investors-agree-usd12bn-buyout

    NORWEGIAN BROWSER MAKER Opera Software looks set to be sold to a consortium of Chinese investors after the company’s board agreed a $1.2bn takeover.

    Opera effectively put itself up for sale in August when it hired investment bank Morgan Stanley to sniff out anyone interested in snapping it up. That decision followed a decline in earnings due to its continuing loss of market share, compounded by lower advertising sales.

    The consortium buying the company includes security company Qihoo 360 and internet firm Beijing Kunlun Tech. The deal is backed by the investment funds Golden Brick and Yonglian Investment.

    In addition to the Opera web browser, the company also offers the SurfEasy virtual private networking service. It also has intellectual property in mobile and slimline web browsers, technology that is embedded in a range of ‘smart’ and other devices.

  21. Tomi Engdahl says:

    Women devs – want your pull requests accepted? Just don’t tell anyone you’re a girl
    Did you pull last night? … Code. We’re talking about code
    http://www.theregister.co.uk/2016/02/11/female_devs_experience_discrimination/

    A new study has found that women are more likely than men to have their open-source software contributions accepted – but only when their gender is hidden from project leaders.

    The study from North Carolina State University and Cal Poly examined code committed by more than 1.4 million GitHub users and their contributions to various open-source projects on the source-code repository service.

    The researchers found that women have their pull requests (or suggested changes to code) accepted by project owners more often than men overall across all programming languages, with one important caveat: acceptance rates for women drop lower than those of men when their gender is made known.

    The researchers noted that familiarity plays a major role in showing the bias. When contributions from “insiders” who were known and trusted within a project were analysed, the gender differences disappeared.

    In short, as a whole women contribute more successful submissions to GitHub than men do, but when faced with the choice between the submissions of a man and a woman, a project leader is more apt to use code from a man.

    As a result, the researchers suggest that overall, the women contributing code to GitHub are more competent than their male counterparts, with the theory being that higher attrition rates for women in the lower levels of STEM careers lead to higher levels of average training and experience.

  22. Tomi Engdahl says:

    Putin’s internet guru says ‘nyet’ to Windows, ‘da’ to desktop Linux
    In Soviet Russia, computer uninstalls you!
    http://www.theregister.co.uk/2016/02/11/putins_internet_guru_says_nyet_to_windows/

    The Russian government says it is looking to dump Microsoft and adopt Linux as the operating system for agency PCs.

    In an interview with Bloomberg, Russian internet advisor German Klimenko said the state will consider moving all of its networks off the Microsoft platform and onto an unspecified Linux build instead.

    Citing Microsoft’s capitulation to the US government in honoring sanctions against Russia, Klimenko said that the Redmond software giant had reached the “point of no return” with Moscow and that 22,000 government agencies and municipal offices were prepared to drop Windows right now.

    “It’s like a wife seeing her husband with another woman – he can swear an oath afterward, but the trust is lost,” Klimenko was quoted as saying.

    Reply
  23. Tomi Engdahl says:

    Don’t freak out, but your primary storage has become ‘aware’
    Picking apart the new ‘data-aware’ storage trend
    http://www.theregister.co.uk/2016/02/11/why_data_aware_primary_storage/

    The term data-aware storage is fairly new to our industry and its definition, as often happens, is not very clear. Of course vendors have their own view of this term.

    In my personal opinion, data-aware storage means being able to analyse infrastructure and workloads as well as storing the data involved, giving a complete picture of what is really happening to your data while empowering several business, organisational and security processes.

    And I’m convinced that the concept of data-awareness can also be applied to other infrastructure components.

    Summing up the challenges

    IT organisations are facing many new challenges at the data and storage infrastructure level, but when it comes to primary storage, two trends are common to everyone:

    Capacity growth – Not only are we dealing with larger data sets, but data retention policies extend much further out than in the past and many organisations are now adopting never-delete policies for most of their data.
    Data and workload diversity – The number of applications and access methods has radically changed in the last few years. Now we have many more data types stored in a single storage system, accessed by a larger number of people and devices.

    These problems are relatively easy to solve when they are treated individually, but the sum of the two introduces a new level of complexity and it becomes much harder to fully understand and control what is actually stored, as well as to exploit the value of data and hidden insights. Furthermore, primary storage has to continue to deliver consistent performance while crawling through stored data for these insights.

    The growing number of applications, different data sources, lifetime-long retention periods, and users creating and accessing data from anywhere and any device are heavily impacting the effectiveness of traditional data management and auditing mechanisms, while increasing infrastructure TCO and all sorts of security risks.

    More than just saving data

    Next-generation data-aware storage systems can do more than just save data safely. In fact, they can be the answer to analysing infrastructure and workloads as well as the data involved, giving a complete picture of what is really happening to your data while empowering several business, organisational and security processes.

    In fact, data security is a big concern for any organisation now, and it has already been proved that traditional security mechanisms are no longer effective against modern attacks and data-leak prevention.

    In order to be effective, data-aware storage should have some basic characteristics:

    1. The analytics engine should be seamlessly integrated with the infrastructure, easy to use and shouldn’t impact overall performance of the production environment,
    2. Data insights and visualisations should be accessible to anyone in the organisation who needs to analyse and leverage information coming from stored data,
    3. It should be based on a no-compromise modern design with all the software features and integrations we have come to expect in traditional storage systems (like snapshots, remote replication, VMware integration, etc.)
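
    To make the idea concrete, here is a small, hypothetical sketch of the kind of metadata crawl a data-aware system automates: it walks a directory tree and summarises what is stored by file type and age. It is a stand-alone toy in Python, not any vendor’s API; real data-aware arrays run this sort of analytics inside the storage system itself, continuously and without hurting production performance.

# Toy illustration of "data-awareness": summarise what is actually stored.
# Stand-alone sketch, not a vendor API; the root path is just an example.
import os
import time
from collections import Counter

def summarise(root):
    by_ext = Counter()   # bytes per file extension
    cold_bytes = 0       # bytes not accessed for over a year
    now = time.time()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish mid-scan
            ext = os.path.splitext(name)[1].lower() or "<none>"
            by_ext[ext] += st.st_size
            if now - st.st_atime > 365 * 24 * 3600:
                cold_bytes += st.st_size
    return by_ext, cold_bytes

if __name__ == "__main__":
    by_ext, cold_bytes = summarise("/data")
    for ext, size in by_ext.most_common(10):
        print(f"{ext:10s} {size / 2**30:8.2f} GiB")
    print(f"cold data (untouched > 1 year): {cold_bytes / 2**30:.2f} GiB")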

    Reply
  24. Tomi Engdahl says:

    Turning Open Source into a Multicore Standard
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328899&

    Hoping to forestall potential compatibility issues, the Multicore Association is looking to build an API standard on the shoulders of open-source OpenAMP.

    Ideally, multi-OS deployments on multicore systems promise the best-of-both-worlds combination of (embedded) Linux functionality and RTOS performance. In practice, orchestrating an ideal collaboration between different OSes running on separate cores in an SoC is a tough job and can leave systems hobbled with worst-of-either-world execution. Available for some time in open source, the OpenAMP (Open Asymmetric Multi Processing) framework offers a solution. Yet, crowdsourcing something as delicate as a real-time heterogeneous platform is a noble cause but can engender conflicts when participating developers find themselves forced to balance the common good with the specialized needs of their own applications.

    Open source OpenAMP is a framework that defines consistent features for life cycle management, interprocess communication and resource sharing among processors on a single SoC — augmenting mainline Linux’s existing LCM and IPC capabilities for working with other Linux environments. Thus, OpenAMP enables a Linux “master” to bring up a “remote” processor running its own bare-metal or RTOS environment, which in turn establishes communications channels with the master.
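
    As a rough illustration of the life-cycle-management half of that story, the sketch below drives the Linux remoteproc interface from Python to load firmware onto a remote core and start it; on top of such a booted core, OpenAMP’s rpmsg channels carry the master/remote communication. This assumes a kernel that exposes the remoteproc sysfs interface, and the instance name and firmware file are placeholders, not values from the article.

# Sketch of Linux-side remote-processor life cycle management (the kind of
# thing OpenAMP standardises): load firmware onto a remote core and start it.
# Assumes a kernel exposing the remoteproc sysfs interface; "remoteproc0"
# and the firmware name are placeholders for a real board.
from pathlib import Path

RPROC = Path("/sys/class/remoteproc/remoteproc0")

def load_and_start(firmware_name: str) -> None:
    # The kernel looks the firmware up under /lib/firmware.
    (RPROC / "firmware").write_text(firmware_name + "\n")
    if (RPROC / "state").read_text().strip() != "running":
        (RPROC / "state").write_text("start\n")

def stop() -> None:
    (RPROC / "state").write_text("stop\n")

if __name__ == "__main__":
    # Hypothetical bare-metal/RTOS image for the remote core; once running,
    # it would talk to the Linux "master" over rpmsg channels.
    load_and_start("rtos_demo.elf")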

    Reply
  25. Tomi Engdahl says:

    Adobe delivers Animate CC (formerly Flash Professional) with many new features, also updates Muse & Bridge
    http://9to5mac.com/2016/02/09/iphone-game-playing-robot/

    Adobe announced back in December that it would be renaming Flash Professional as Animate CC in recognition of the fact that HTML 5 has now taken over from Flash as the main form of web animation. It has now done so, adding in a “seriously long list” of new features at the same time.

    The new features range from new vector art brushes to a rotating stage whose contents scale proportionally to the size – and the company is providing live demos on its Twitch.tv channel …

    Reply
  26. Tomi Engdahl says:

    Data Centers Tap ARM, 100GE
    http://www.eetimes.com/document.asp?doc_id=1328893

    It’s still early days in the growth of cloud service providers who are driving trends in servers and Ethernet networks. Their rise is opening small, but significant opportunities for non-x86-based servers, according to analysts at The Linley Group.

    Applied Micro, AMD and Cavium launched ARM server SoCs last year that together will take less than 5% of the server market this year. By 2020, Intel predicts a third of cloud providers will use FPGAs, analysts noted in a keynote at their annual data center event here.

    On the networking front, 40 Gbit controllers and switches are gaining traction this year. But they will be quickly overtaken by 25/50 and 100G chips starting next year.

    Intel’s Xeon chips will continue to dominate in big data centers for the foreseeable future but “the current products, from Applied Micro, AMD and Cavium will find niches,” said Jag Bolaria, a principal analyst at Linley Group.

    Reply
  27. Tomi Engdahl says:

    Turning Open Source into a Multicore Standard
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328899&

    Hoping to forestall potential compatibility issues, the Multicore Association is looking to build an API standard on the shoulders of open-source OpenAMP.

    Reply
  28. Tomi Engdahl says:

    Julia Carrie Wong / Guardian:
    In study of 3M GitHub pull requests, women’s code accepted at higher rate, unless “gender identifiable”

    Women considered better coders – but only if they hide their gender
    http://www.theguardian.com/technology/2016/feb/12/women-considered-better-coders-hide-gender-github

    Researchers find software repository GitHub approved code written by women at a higher rate than code written by men, but only if the gender was not disclosed

    Reply
  29. Tomi Engdahl says:

    M. Mitchell Waldrop / Nature:
    Semiconductor industry roadmap to abandon pursuit of Moore’s law for the first time as computing becomes increasingly mobile

    The chips are down for Moore’s law
    http://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338

    The semiconductor industry will soon abandon its pursuit of Moore’s law. Now things could get a lot more interesting.

    Next month, the worldwide semiconductor industry will formally acknowledge what has become increasingly obvious to everyone involved: Moore’s law, the principle that has powered the information-technology revolution since the 1960s, is nearing its end.

    A rule of thumb that has come to dominate computing, Moore’s law states that the number of transistors on a microprocessor chip will double every two years or so — which has generally meant that the chip’s performance will, too. The exponential improvement that the law describes transformed the first crude home computers of the 1970s into the sophisticated machines of the 1980s and 1990s, and from there gave rise to high-speed Internet, smartphones and the wired-up cars, refrigerators and thermostats that are becoming prevalent today.

    None of this was inevitable: chipmakers deliberately chose to stay on the Moore’s law track. At every stage, software developers came up with applications that strained the capabilities of existing chips; consumers asked more of their devices; and manufacturers rushed to meet that demand with next-generation chips. Since the 1990s, in fact, the semiconductor industry has released a research road map every two years to coordinate what its hundreds of manufacturers and suppliers are doing to stay in step with the law — a strategy sometimes called More Moore. It has been largely thanks to this road map that computers have followed the law’s exponential demands.

    Not for much longer. The doubling has already started to falter, thanks to the heat that is unavoidably generated when more and more silicon circuitry is jammed into the same small area. And some even more fundamental limits loom less than a decade away. Top-of-the-line microprocessors currently have circuit features that are around 14 nanometres across, smaller than most viruses. But by the early 2020s, says Paolo Gargini, chair of the road-mapping organization, “even with super-aggressive efforts, we’ll get to the 2–3-nanometre limit, where features are just 10 atoms across. Is that a device at all?” Probably not — if only because at that scale, electron behaviour will be governed by quantum uncertainties that will make transistors hopelessly unreliable. And despite vigorous research efforts, there is no obvious successor to today’s silicon technology.

    The industry road map released next month will for the first time lay out a research and development plan that is not centred on Moore’s law. Instead, it will follow what might be called the More than Moore strategy: rather than making the chips better and letting the applications follow, it will start with applications — from smartphones and supercomputers to data centres in the cloud — and work downwards to see what chips are needed to support them. Among those chips will be new generations of sensors, power-management circuits and other silicon devices required by a world in which computing is increasingly mobile.

    Reply
  30. Tomi Engdahl says:

    Red Hat Drives FPGAs, ARM Servers
    FPGA summit set for March
    http://www.eetimes.com/document.asp?doc_id=1328930&

    FPGA vendors and users will meet next month in an effort to define a standard software interface for accelerators. The meeting is being convened by Red Hat’s chief ARM architect, who gave an update (Wednesday) on efforts to establish ARM servers.

    “There’s a trend towards high-level synthesis so an FPGA programmer can write in OpenCL up front but the little piece that’s been ignored is how OpenCL talks to Linux,” said Jon Masters, speaking at the Linley Data Center event here.

    OS companies don’t ship drivers for OpenCL, so software developers need to understand the intimate details of the FPGA as well as the Linux kernel to make the link. Often it also involves developing a custom direct-memory access engine and fine tuning Java libraries.

    Masters did just that as part of a test board called Trilby that ran a simple search algorithm on an FPGA mounted on a PCI Express card. “Ninety percent of the effort is interface to the FPGA,” he said.

    To fix the problem, Masters has called a meeting of interested parties in March. It will be hosted by a neutral organization. He hopes to have “all the right players” involved, including major FPGA vendors.

    If the meeting is successful, the group will hammer out “in the open” one or more interfaces for standard OS drivers so users can load and configure an FPGA bit stream. It’s a significant hole, and not the only one on the road to taking FPGA accelerators into mainstream markets, according to Masters.

    FPGAs also need to become full citizens in the software world of virtualized functions, where telcos in particular are rallying around new standards for network functions virtualization. Separately, programmers are using high-level synthesis, especially with OpenCL, to write code for FPGAs; however, experts are still needed to map and optimize the results of synthesis to the underlying hardware, he said.
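
    The plumbing gap Masters describes sits on the host side: the application loads a vendor-compiled bitstream and talks to it through OpenCL, while everything underneath (driver, DMA engine, kernel interface) is currently vendor-specific. Below is a minimal host-side sketch using pyopencl; it assumes the FPGA vendor ships an OpenCL platform and a precompiled bitstream, and the file name and kernel name are placeholders rather than any real vendor’s API.

# Host-side OpenCL flow for an FPGA accelerator, sketched with pyopencl.
# Assumes the vendor provides an OpenCL platform/driver and a precompiled
# bitstream; "search.aocx" and the "simple_search" kernel are placeholders.
import numpy as np
import pyopencl as cl

platform = cl.get_platforms()[0]          # the vendor's OpenCL platform
device = platform.get_devices()[0]
ctx = cl.Context([device])
queue = cl.CommandQueue(ctx)

with open("search.aocx", "rb") as f:      # vendor-compiled FPGA bitstream
    binary = f.read()
prg = cl.Program(ctx, [device], [binary]).build()

data = np.arange(1024, dtype=np.int32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                hostbuf=data)

# Run the hypothetical search kernel shipped inside the bitstream.
prg.simple_search(queue, (data.size,), None, buf, np.int32(42))

result = np.empty_like(data)
cl.enqueue_copy(queue, result, buf)
queue.finish()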

    Reply
  31. Tomi Engdahl says:

    Shopping for PCs? This is what you’ll be offered in 2016
    The world’s big three PC vendors explain what they think you want to buy
    http://www.theregister.co.uk/2016/02/15/2016_business_pc_guide/

    The personal computer market has been in the doldrums for years, with global sales falling under 300 million a year, slipping nine per cent in 2015 alone. But there are also some rays of light in the market, as Intel’s predictions of a sales rebound were confirmed by a nice little bump in sales over Christmas, due in part to Windows 10.

    Windows 10 is expected to help the market again this year, as businesses look at the state of their fleets and consider the fact that Microsoft’s already ended mainstream support for Windows 7. It’s expected that plenty of organisations will therefore decide 2016 is as good a time as any to take the plunge on a new PC fleet, powered by Windows 10.

    What else will they buy? We asked folks from the top three PC-makers – Dell, Lenovo and HP – what they see as must-haves in a 2016-vintage PC to give you a feel for what you’ll be offered.

    One thing all three companies think you’ll want this year is size. Or more specifically, a lack thereof. Towers and mini-towers are now for workstation-wranglers only. The corporate desktop is now a margarine-tub-sized affair.

    That shrinkage has been made possible by three things, the first of which is the demise of optical drives. Nobody needs to load software from disc any more and USB sticks are now the dominant portable data medium. So out go optical drives and the space they occupy. Disk density helps, too.

    Intel’s Skylake processors are the third and biggest space-saver, as they run cooler and also boast built-in graphics. By requiring less cooling and removing the need for a graphics card, Skylake means PCs can shrink.

    Bolting the client to the back of a monitor is now a common trick.

    VGA is just about dead, since HDMI and DisplayPort have become the norm. 4K is just-about mainstream and will be one reason Thunderbolt appears in more laptops as that interface has the bandwidth required to drive multiple monitors.

    The big three PC-makers are still finding new ways to tweak their kit to make it more manageable.

    Laptop-land

    2016’s laptops will do what laptops have done since day dot: get smaller, lighter and thriftier in the demands placed on batteries.

    Under the hood, the M.2 interface will make plenty more appearances, as a connector for all manner of devices but especially SSDs. The three manufacturers we spoke to see 256GB SSDs as 2016’s sweet spot, with demand for 512GB rising but cost keeping demand muted.

    USB-C won’t appear in all business laptops: consensus is it’s too soon for business users to want it, but it’s tipped for bigger things next year once its potential to replace desktop docks is realised.

    If WiGig doesn’t beat it to the punch.

    Laptops capable of doing duty as tablets are very much in demand, so touch screens are increasingly common across all three vendors’ ranges.

    Reply
  32. Tomi Engdahl says:

    Microsoft Patents A Modular PC With Stackable Components
    http://yro.slashdot.org/story/16/02/14/1851219/microsoft-patents-a-modular-pc-with-stackable-components

    Microsoft has patented a “modular computing device” that would enable people to put together the exact PC components they want, allowing for replacement of certain parts rather than forcing people to buy entire new computers when they want upgrades.

    Microsoft patents a modular PC with stackable components
    http://venturebeat.com/2016/02/13/microsoft-patents-a-modular-pc-with-stackable-components/

    Microsoft has patented a “modular computing device” that would enable people to put together the exact PC components they want, allowing for replacement of certain parts rather than forcing people to buy entire new computers when they want upgrades.

    Microsoft applied for the patent in July 2015, and it was published earlier this week, on February 11. One of the patent’s authors, Tim Escolin, is a senior industrial designer on Microsoft’s Surface devices and accessories team.

    As the Surface tablet has picked up traction and led to the launch of similar devices from Google, Apple, and Samsung, the Surface brand has become more valuable within Microsoft. It helps that Microsoft has an innovative and exciting executive for the Surface team — corporate vice president Panos Panay. In October, he very enthusiastically demonstrated the Surface Book, the Surface Pro 4, and the Display Dock, and it wouldn’t be hard to imagine that Microsoft might have his group come out with additional types of new hardware.

    Modular hardware, specifically, has been an area of some interest for Microsoft.

    At CES in 2014, Microsoft helped promote gaming PC maker Razer’s concept for a modular PC called Project Christine. But two years later, the system is still not available for consumers to buy. This past September, Acer introduced a modular PC called the Revo Build Mini PC for the low entry cost of $225.

    Perhaps the most prominent modular consumer electronics project is Google’s Project Ara series of smartphones. Google has been collaborating with crowdfunded Phonebloks on Project Ara.

    Of course, if you build a PC yourself, it is modular in the sense that you can add or remove components, but it’s not very sleek. The device depicted in this patent does look pretty dang cool, and even accessible.

    Interestingly, a display is included in the hardware design (unlike the Acer product). The stackable hardware connected to the display using a hinge can contain a removable battery, a processor, a graphics card, memory, storage, speakers, and a wireless communication element.

    Reply
  33. Tomi Engdahl says:

    The Linux Foundation Forms Open Source Effort to Advance IO Services
    http://www.linuxfoundation.org/news-media/announcements/2016/02/linux-foundation-forms-open-source-effort-advance-io-services

    Industry leaders unite for Fast Data (FD.io) Project; aims to establish a high-performance IO services framework for dynamic computing environments

    SAN FRANCISCO – February 11, 2016 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today is announcing FD.io (“Fido”), a Linux Foundation project. FD.io is an open source project to provide an IO services framework for the next wave of network and storage software. The project is also announcing the availability of its initial software and formation of a validation testing lab.

    Early support for FD.io comes from founding members 6WIND, Brocade, Cavium, Cisco, Comcast, Ericsson, Huawei, Inocybe Technologies, Intel Corporation, Mesosphere, Metaswitch Networks (Project Calico), PLUMgrid and Red Hat.

    Architected as a collection of sub-projects, FD.io provides a modular, extensible user space IO services framework that supports rapid development of high-throughput, low-latency and resource-efficient IO services. The design of FD.io is hardware, kernel, and deployment (bare metal, VM, container) agnostic.

    “The adoption of open source software has transformed the networking industry by reducing technology fragmentation and increasing user adoption,” said Jim Zemlin, executive director, The Linux Foundation. “The FD.io project addresses a critical area needed for flexible and scalable IO services to meet the growing demands of today’s cloud computing environments.”

    Software Features
    Initial code contributions for FD.io include Vector Packet Processing (VPP), technology being donated by one of the project’s founding members, Cisco. The initial release of FD.io is fully functional and available for download, providing an out-of-the-box vSwitch/vRouter utilizing the Data Plane Development Kit (DPDK) for high-performance, hardware-independent I/O. The initial release will also include a full build, tooling, debug, and development environment and an OpenDaylight management agent. FD.io will also include a Honeycomb agent to expose netconf/yang models of data plane functionality to simplify integration with OpenDaylight and other SDN technologies.

    VPP is production code currently running in products available on the market today. VPP runs in user space on multiple architectures, including x86, ARM, and Power, and is deployed on various platforms including servers and embedded devices.

    https://fd.io/

    1 Vector Packet Processing (VPP)

    At the heart of fd.io is Vector Packet Processing (VPP) technology.

    In development since 2002, VPP is production code currently running in shipping products. It runs in user space on multiple architectures, including x86, ARM and Power, on both servers and embedded devices. The design of VPP is hardware, kernel, and deployment (bare metal, VM, container) agnostic. It runs completely in userspace.

    VPP helps FD.io push extreme limits of performance and scale. Independent testing shows that, at scale, VPP-powered FD.io is two orders of magnitude faster than currently available technologies.

    The fixed costs of processing the vector of packets are amortized across the entire vector. This leads not only to very high performance, but also to statistically reliable performance.

    The graph node architecture of VPP also makes for easy extensibility. You can build an independent binary plugin for VPP from a separate source code base (you need only the headers). Plugins are loaded from the plugin directory. A plugin for VPP can rearrange the packet graph and introduce new graph nodes. This allows new features to be introduced via the plugin, without needing to change the core infrastructure code.

    2 Hardware Acceleration

    This same graph node architecture also allows FD.io to dynamically take advantage of hardware acceleration when available, allowing vendors to continue to innovate in hardware without breaking the “run anywhere” promise of FD.io’s software.

    3 Programmability

    The VPP technology also provides a very high-performance, low-level API. The API works via a shared memory message bus. The messages passed along the bus are specified in a simple IDL (Interface Definition Language), which is used to create C client libraries and Java client libraries.

    4 Integration With Other Systems

    If the controller supports OpenStack Neutron (as OpenDaylight does), this provides a simple story for OpenStack-to-VPP integration.

    Reply
  34. Tomi Engdahl says:

    Goldman Sachs: VR and AR “Will Be The Next Generation Computing Platform” Worth $80 Billion By 2025
    http://news.slashdot.org/story/16/02/14/2330218/goldman-sachs-vr-and-ar-will-be-the-next-generation-computing-platform-worth-80-billion-by-2025

    As consumer VR headsets from major players like Facebook, Sony, HTC and Valve head to the market this year, the mainstream consumer market is beginning to catch sight of the technology’s potential. Prestigious investment bank Goldman Sachs calls augmented reality and virtual reality “the next generation computing platform” and forecasts an $80 billion market by 2025.

    Goldman Sachs: VR and AR “Will Be the Next Generation Computing Platform”
    Prestigious investment bank predicts an $80 billion market by 2025
    http://www.roadtovr.com/goldman-sachs-vr-and-ar-will-be-the-next-generation-computing-platform/

    Many following the resurgence of VR starting with Oculus’ 2012 Kickstarter have bet their careers that VR and AR will be a disruptive technology, but sometimes it takes buy-in from one of the world’s largest and most influential investment banks to prove to the rest of the world that you aren’t crazy.

    It’s actually a problem many with the VR bug are probably familiar with when talking to people who have never used virtual reality: because it’s nearly impossible to understand without trying it for yourself, people in the VR sphere who tell outsiders that “this is going to change the world” just sound like every other person who has uttered those words (99.99% of which are dead wrong). Lots of forced smiling and nodding takes place during these conversations.

    “While today virtual reality is primarily thought of as a place for hardcore gamers to spend their spare time, it’s increasingly impacting sectors that people touch every day,” says Bellini. “For example, in real estate: instead of having to go see 50 homes with an agent over the weekend, you might be able to put on a pair of virtual reality glasses or a head-mounted display at your realtor’s office and be able to do a virtual walk-through of what those properties look like, and therefore maybe you could eliminate 30 out of 50 on your list and be much more efficient with your time.”

    Reply
  35. Tomi Engdahl says:

    Hired:
    How do salaries compare across 11 major tech hubs? Where are software engineers paid the most accounting for cost of living? — Find out from Hired’s inaugural State of Salaries Report, based on actual offer data.

    State of US Salaries Report
    https://hired.com/whitepapers/software-engineer-salary-data

    Get unprecedented visibility into the market for software engineers based on actual job offers made to real people.

    Reply
  36. Tomi Engdahl says:

    Connie Loizos / TechCrunch:
    Study: investors in unicorns received senior liquidation preferences in 42% of Q4 2015 rounds, up from 15% in previous two quarters

    Doomed-i-corns: Unicorns Seemingly Reach a Tipping Point
    http://techcrunch.com/2016/02/10/doomed-i-corns-unicorns-seemingly-reach-a-tipping-point/

    This morning, the law firm Fenwick & West published new findings about all the U.S.-based unicorn financings that took place during the last nine months of 2015. It’s rife with interesting nuggets, but perhaps most fascinating is that in the fourth quarter of last year, half of the 12 rounds it tracked featured valuations in the $1 billion to $1.1 billion range — and with terms that were far more onerous than earlier in the year.

    Fenwick & West politely suggests these companies may have been “willing to be more flexible” regarding “investor friendly terms” in order to attain their billion-dollar-plus valuations. We’d call it bone-headed.

    Reply
  37. Tomi Engdahl says:

    When it comes to ARM-based servers (Masters’ day job), “We have all the software pieces, it’s a matter of quality engineering and the out-of-box experience,” he said.

    Specifically, current servers have some hardware glitches such as non-standard mobile PCI Express blocks in their SoCs or a few lines of tweaked code in their UEFI (Unified Extensible Firmware Interface) firmware. The anomalies prevent systems from running Linux without some fussy fiddling around — the kind of thing data center operators shouldn’t have to do. In that regard, “one of my goals is to make ARM servers boring,” he said.

    “Take a look at what Qualcomm is doing with its developer system — they have a phenomenally good out-of-box experience,” Masters said.

    A variety of boards are available using Applied Micro, AMD and Cavium SoCs. However, many are not the standard dual-socket systems popular in the x86 world, and none carry a tier-one brand.

    So Red Hat Linux 7.2 is currently only available in a developer preview for ARM. But a standard Linux OS tested on multiple, shipping servers “is very close now,” Masters said.

    After a couple years of hard work, most of the standard open-source server software is running on ARM including a LAMP stack, two Java virtual machines, Xen and KVM hypervisors and Linux variants from Suse and Canonical. There’s some work-in-progress polishing off Docker containers and the Ceph storage stack as well as Armband, a version of open-source Network Functions Virtualization.

    Bloomberg is trying out a version of OpenStack on ARM. Last year, Masters ran a version of Apache Spark, a big data analytics engine, on ARM.

    Two years ago, Intel coined the term “microserver…to keep ARM in a box,” he said. “Every single design I’ve seen is highly performant, they may not always win every speed race but they are good enough,”

    He claims integrated FPGAs and a new server memory tier tied to the x86 won’t diminish the “disruptive opportunity” for ARM servers, but the question is a good one.

    Source: http://www.eetimes.com/document.asp?doc_id=1328930&page_number=2

    Reply
  38. Tomi Engdahl says:

    Soft Machines: Promising, Not Proven
    Latest simulations look impressive
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328939&

    Veteran microprocessor analyst Kevin Krewell plumbs startup Soft Machines’ VISC technology following a recent release of updated simulation data of its promising multicore architecture.

    Soft Machines is working on a new architecture that, if successful, will represent a major breakthrough in single- and multicore CPU performance. The company claims it can build a multicore processor where hardware orchestration logic allows multiple CPU cores to act as one, significantly improving instruction per cycle (IPC) performance over a single CPU core and allowing multicore processors to perform significantly better on single-threaded code.

    Reply
  39. Tomi Engdahl says:

    Kotlin 1.0 Released
    http://developers.slashdot.org/story/16/02/15/2138218/kotlin-10-released

    Kotlin, one of the challenger languages to Java on the JVM, has been released in version 1.0. Kotlin is object-oriented, statically typed and comes with professional IDE support from JetBrains — which is no big surprise, given that JetBrains develops the language.

    There is also planned support for JavaScript — which sounds interesting considering JS has gained quite some traction recently. Kotlin is FOSS and is released under the Apache license.

    Statically typed programming language
    for the JVM, Android and the browser
    100% interoperable with Java™
    https://kotlinlang.org/

    Reply
  40. Tomi Engdahl says:

    Unikernels, Docker, and Why You Should Care
    http://www.linuxjournal.com/content/unikernels-docker-and-why-you-should-care?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    Docker’s recent acquisition of Unikernel Systems has sent pulses racing in the microservice world. At the same time, many people have no clue what to make of it, so here’s a quick explanation of why this move is a good thing.

    Although you may not be involved in building or maintaining microservice-based software, you certainly use it. Many popular Web sites and services are powered by microservices, such as Netflix, eBay and PayPal. Microservice architectures lend themselves to cloud computing and “scale on demand”, so you’re sure to see more of it in the future.

    Better tools for microservices are good news for developers, but they have a benefit for users too. When developers are better supported, they make better software.

    Docker is a tool that allows developers to wrap their software in a container that provides a completely predictable runtime environment.

    VMs have become essential in the high-volume world of enterprise computing. Before VMs became popular, physical servers often would run a single application or service, which was a really inefficient way of using physical resources. Most of the time, only a small percentage of the box’s memory, CPU and bandwidth were used. Scaling up meant buying a new box–and that’s expensive.

    VMs meant that multiple servers could run on the same box at the same time. This ensured that the expensive physical resources were put to use.

    VMs are also a solution to a problem that has plagued developers for years: the so-called “it works on my machine” problem that occurs when the development environment is different from the production environment. This happens very often. It shouldn’t, but it does.

    Although VMs solve a lot of problems, they aren’t without some shortcomings of their own. For one thing, there’s a lot of duplication.

    Containers, such as Docker, offer a more lightweight alternative to full-blown VMs. In many ways, they are very similar to virtual machines. They provide a mostly self-contained environment for running code. The big difference is that they reduce duplication by sharing. To start with, they share the host environment’s Linux kernel. They also can share the rest of the operating system.

    In fact, they can share everything except for the application code and data. For instance, I could run two WordPress blogs on the same physical machine using containers. Both containers could be set up to share everything except for the template files, media uploads and database.

    With some sophisticated filesystem tricks, it’s possible for each container to “think” that it has a dedicated filesystem.

    Containers are much lighter and have lower overhead compared to complete VMs. Docker makes it relatively easy to work with these containers, so developers and operations can work with identical code. And, containers lend themselves to cloud computing too.
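
    As a concrete (and deliberately simplified) illustration of that sharing, the sketch below starts the two hypothetical WordPress blogs with the Docker SDK for Python: both containers share the host kernel and the image layers, and differ only in their published port and data volume. The image, names, ports and paths are examples, and a real deployment would also need a database for each blog.

# Two WordPress containers on one host: same image (shared layers), same
# kernel, separate data volumes and ports. Sketch using the Docker SDK for
# Python ("pip install docker"); names, ports and paths are illustrative.
import docker

client = docker.from_env()

def run_blog(name: str, port: int, data_dir: str):
    return client.containers.run(
        "wordpress",                     # same image for both blogs
        name=name,
        detach=True,
        ports={"80/tcp": port},          # host port for this blog
        volumes={data_dir: {"bind": "/var/www/html/wp-content", "mode": "rw"}},
    )

blog_a = run_blog("blog-a", 8081, "/srv/blog-a")
blog_b = run_blog("blog-b", 8082, "/srv/blog-b")
print(blog_a.short_id, blog_b.short_id)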

    So what about microservices and unikernels?

    Microservices are a new idea–or an old idea, depending on your perspective.

    The concept is that instead of building a big “monolithic” application, you decompose your app into multiple services that talk to each other through a messaging system–a well-defined interface. Each microservice is designed with a single responsibility. It’s focused on doing a single simple task well.

    If that sounds familiar to you as an experienced Linux user, it should. It’s an extension of some of the main tenets of the UNIX Philosophy. Programs should focus on doing one thing and doing it well, and software should be composed of simple parts that are connected by well-defined interfaces.

    Microservices typically run in their own container. They usually communicate through TCP and the host environment (or possibly across a network).

    The advantage of building software using microservices is that the code is very loosely coupled. If you need to fix a bug or add a feature, you only need to make changes in a few places. With monolithic apps, you probably would need to change several pieces of code.

    What’s more, with a microservice architecture, you can scale up specific microservices that are feeling strain. You don’t have to replicate the entire application.
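
    Here is a minimal sketch of what such a single-responsibility service can look like, using only the Python standard library; the endpoint and port are arbitrary examples, and in practice this would be packaged into its own container and sit behind a well-defined interface.

# Minimal single-responsibility microservice: one simple task, exposed over
# HTTP/TCP through a small, well-defined interface. Standard library only;
# the path and port are arbitrary examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/greet/"):
            name = self.path.split("/greet/", 1)[1] or "world"
            body = json.dumps({"greeting": f"Hello, {name}!"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Other services (or an API gateway) call this over the network, e.g.
    #   GET http://localhost:8000/greet/alice
    HTTPServer(("0.0.0.0", 8000), GreetingService).serve_forever()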

    Linux is a “kitchen sink” system–it includes everything needed for most multi-user environments. It has drivers for the most esoteric hardware combinations known to man.

    Unikernels are a lighter alternative that is well suited to microservices. A unikernel is a self-contained environment that contains only the low-level features that a microservice needs to function. And, that includes kernel features.

    Reply
  41. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Google launches TensorFlow Serving, an open source project for taking machine learning models into production

    Google Makes It Easier To Take Machine Learning Models Into Production
    http://techcrunch.com/2016/02/16/google-makes-it-easier-to-take-machine-learning-models-into-production/

    Google launched TensorFlow Serving today, a new open source project that aims to help developers take their machine learning models into production. Unsurprisingly, TensorFlow Serving is optimized for Google’s own TensorFlow machine learning library, but the company says it can also be extended to support other models and data.

    While projects like TensorFlow make it easier to build machine learning algorithms and train them for certain types of data inputs, TensorFlow Serving specializes in making these models usable in production environments. Developers train their models using TensorFlow and then use TensorFlow Serving’s APIs to react to input from a client. Google also notes that TensorFlow Serving can make use of available GPU resources on a machine to speed up processing.
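
    The training half of that workflow looks roughly like the sketch below, written against the TensorFlow 1.x-style Python API (tf.compat.v1 in newer releases): a tiny linear model is trained and checkpointed, and TensorFlow Serving would then load an exported model like this and answer client requests through its APIs. The Serving-specific export step (its exporter/SavedModel tooling) is omitted here, and the model itself is purely illustrative.

# Train and save a tiny TensorFlow model; TensorFlow Serving loads an
# exported model like this and serves it to clients. Sketch against the
# TF 1.x-style API; the Serving export step itself is omitted.
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 1], name="x")
y = tf.placeholder(tf.float32, [None, 1], name="y")
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
pred = tf.matmul(x, w) + b

loss = tf.reduce_mean(tf.square(pred - y))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Synthetic data: y = 3x + 1 plus a little noise.
xs = np.random.rand(256, 1).astype(np.float32)
ys = 3 * xs + 1 + 0.01 * np.random.randn(256, 1).astype(np.float32)

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op, feed_dict={x: xs, y: ys})
    saver.save(sess, "./linear_model")  # checkpoint handed off for serving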

    As Google notes, having a system like this in place doesn’t just mean developers can take their models into production faster, but they can also experiment with different algorithms and models and still have a stable architecture and API in place.

    As Google notes, TensorFlow Serving is written in C++ (and not Google’s own Go). The software is optimized for performance, and the company says it can handle over 100,000 queries per second per core on a 16-core Xeon machine.

    Reply
  42. Tomi Engdahl says:

    IBM Claims Tamper-Resistant Server
    Patented HD/SW keeps data safe from breaches
    http://www.eetimes.com/document.asp?doc_id=1328942&

    IBM claims its newest z13s server family —announced today at the IBM PartnerWorld Leadership Conference 2016 (February 16 and 17, Orlando, Fla.)—dovetails with hybrid cloud transactions with Internet of Things (IoT) devices by keeping user data safe even if the system is tampered with or breached.

    The key, says IBM, is an end-to-end solution using a hardware/software security infrastructure that guards user-data before, during and after potential breaches. Instead of mere signature spotting, IBM uses analytics to identify malicious behavior even before its signature is known, based on learned behaviors using ever-improving machine-learning. IBM calls the z13s the “world’s most secure server” because all data is encrypted and the decryption keys are erased if a hacker tries to gain entrance.

    “Nothing else comes close to IBM’s z-Systems, including the new z13s,”

    IBM is following its own “big-brother/little-brother” strategy for systems somewhat similar to Intel’s tick-tock strategy for processors, but different in that a smaller “s” version is released after every major mainframe release,

    “To handle today’s analytics-heavy workloads, the z13s comes with a maximum of 4TBs of RAIM (Redundant Array of Independent Memory), while the z13 has a maximum of 10TBs.

    According to DiDio and other analysts, IBM’s z Systems are already predominant at banks and finance, health and welfare organizations as well as in government and defense, and that the former z13, introduced last year, gave mid-sized businesses an expandable mainframe base with a low cost of entry. Now the latest z13s follow-up this year, is giving mid-sized businesses an even lower priced entry point (starting at $75,000) albeit without the expandability of z13—to take advantage of faster more reliable encryption/decryption hardware/software as well as the superior up-time of IBM’s platform.

    “Important to this announcement are many new security offerings, some available through IBM’s newly-announced partners. These focus on the hybrid cloud that can be created within the mainframe (say, with z/OS or with Linux) and the need to secure the ecosystem and to identify threats (both internal and external) as they are happening by using cognitive analysis,” Kahn told us.

    IBM’s security offerings include Guardium—a data activity monitor that keeps track of who is accessing what data complete with an audit trail—identifying inappropriate access attempts by hackers before they decrypt it. The Cyber Security Analytics option (free to try out) uses cognitive analytics running on IBM supercomputers off-site to learn each z13s’s typical usage, becoming more effective as it learns over time, and alerting security personnel when unusual activities are taking place. Working with Cyber Security Analytics is QRadar which does additional analytics correlating data from more than 500 sources to aid in deciding whether anomalous behaviors are potential threats.

    IBM zSecure provides integration that harnesses security-relevant information from across the entire organization using real-time analytics to provide a context that helps detect threats faster, identify vulnerabilities, prioritize risk, and automate compliance activities.

    IBM Security Identity Governance and Intelligence software likewise augments identity and authentication management by coordinating policies and preventing critical data from being accessed by inappropriate parties.

    Reply
  43. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Microsoft now selling licenses to deploy Red Hat Enterprise Linux on Azure, also announces Azure support for Walmart’s OneOps app lifecycle management platform

    Microsoft Brings Red Hat Enterprise Linux To Azure
    http://techcrunch.com/2016/02/17/microsoft-brings-red-hat-enterprise-linux-to-azure/

    Microsoft is now selling Red Hat Enterprise Linux licenses. Starting today, you will be able to deploy Red Hat Enterprise Linux (RHEL) from the Azure Marketplace and get support for your deployments from both Microsoft and Red Hat.

    In addition, Microsoft today announced that it is now offering certified Bitnami images in the Azure Marketplace and it now supports Walmart‘s (yes — that Walmart‘s) open source OneOps application lifecycle management platform. Until today, OneOps only offered a machine image for Amazon’s AWS platform.

    Seeing the words ‘Microsoft’ and ‘Linux’ together in a single sentence may still come as a shock to a few people, but Microsoft says more than 60 percent of images in the Azure Marketplace are now Linux-based.

    As Red Hat’s Mike Ferris, its senior director for business architecture, and Microsoft’s director of program management for Azure Corey Sanders told me earlier this week, the two companies are also working closely together on supporting customers who choose to go the RHEL route on Azure. Red Hat and Microsoft’s support specialists are actually sitting together to answer their customers’ questions, which is a first for both companies.

    Reply
  44. Tomi Engdahl says:

    2016: the year IT sales will go sdrawkcaB
    Chinese spending to shrink for the first time ever, dragging growth down to just two percent
    http://www.theregister.co.uk/2016/02/18/2016_the_year_it_sales_will_go_backwards/

    The world will spend US$2.3 trillion on information technology hardware, software and services in 2016, but that represents a “major slowdown” according to analyst firm IDC.

    Smartphones and China are mostly to blame for the decline. The former is at fault because new buyers are drying up, which makes it harder for smartphones to contribute their current half of 2015’s six per cent growth rate. The latter is a problem because it’s experiencing uncertain economic conditions and local organisations just aren’t spending as a result.

    The result is a prediction of global growth at around two per cent, rather lower than the five or six percent the world’s clocked up in years since the worst of the financial crisis ebbed.

    But even that growth produced a two per cent fall in technology spend when counted in US dollars, thanks to that currency’s appreciation, which meant buyers beyond the land of the free sent less money to US companies.

    There were some bright spots in 2015. IDC says “Spending on cloud infrastructure was also strong throughout the year, resulting in growth of 16% for the server market and 10% for storage systems.” Spending on enterprise software rose seven per cent, as organisations snapped up “analytics, security, and collaborative applications.”

    But for this year IDC thinks that growth will “soften” and it’s also backing away from previous optimism about PC sales.

    Reply
  45. Tomi Engdahl says:

    Backblaze Dishes On Drive Reliability In their 50k+ Disk Data Center
    http://hardware.slashdot.org/story/16/02/17/1750220/backblaze-dishes-on-drive-reliability-in-their-50k-disk-data-center

    Hard Drive Reliability Review for 2015
    https://www.backblaze.com/blog/hard-drive-reliability-q4-2015/

    By the end of 2015, the Backblaze datacenter had 56,224 spinning hard drives containing customer data. These hard drives reside in 1,249 Backblaze Storage Pods. By comparison 2015 began with 39,690 drives running in 882 Storage Pods. We added 65 Petabytes of storage in 2015 give or take a Petabyte or two.

    Hard Drive Statistics for 2015

    The table below contains the statistics for the 18 different models of hard drives in our datacenter as of 31 December 2015. These are the hard drives used in our Storage Pods to store customer data. The Failure Rates and Confidence Intervals are cumulative from Q2 2013 through Q4 2015.
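
    For context, annualised failure rates of this kind are computed from accumulated drive-days rather than raw drive counts. A quick sketch of the arithmetic, with made-up numbers (the real per-model 2015 figures are in the linked table):

# Annualised failure rate as drive-reliability reports compute it:
# failures divided by accumulated drive-days, scaled to a year.
# The numbers below are invented for illustration only.
def annualized_failure_rate(failures: int, drive_days: int) -> float:
    return failures / (drive_days / 365.0)

fleet = {
    # model: (failures in period, drive-days accumulated in period)
    "Model-A 4TB": (45, 1_200_000),
    "Model-B 6TB": (3, 90_000),
}

for model, (failures, days) in fleet.items():
    print(f"{model}: {annualized_failure_rate(failures, days):.2%} AFR")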

    Reply
  46. Tomi Engdahl says:

    Laptop drives are getting faster and their capacities keep growing. Toshiba’s new SG5 series is a good example: for the first time it brings the M.2 format up to a terabyte of portable storage.

    SG5 series drives use a 6 Gbit/s SATA interface, with capacities ranging from 128 GB up to 1 TB.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4004:teratavun-nopea-muisti-lappariin&catid=13&Itemid=101

    Reply
  47. Tomi Engdahl says:

    PVS-Studio Analyzer Spots 40 Bugs In the FreeBSD Kernel
    http://tech.slashdot.org/story/16/02/19/001202/pvs-studio-analyzer-spots-40-bugs-in-the-freebsd-kernel

    Svyatoslav Razmyslov from PVS-Studio Team published an article on the check of the FreeBSD kernel. PVS-Studio developers are known for analyzing various projects to show the abilities of their product, and do some advertisement, of course. Perhaps, this is one of the most acceptable and useful ways of promoting a proprietary application. They have already checked more than 200 projects and detected 9355 bugs. At least that’s the number of bugs in the error base of their company.

    So now it was FreeBSD kernel’s turn.

    PVS-Studio is a tool for bug detection in the source code of programs, written in C, C++ and C#. It performs static code analysis and generates a report that helps a programmer find and fix the errors in the code.

    PVS-Studio delved into the FreeBSD kernel
    http://www.viva64.com/en/b/0377/

    Reply
  48. Tomi Engdahl says:

    3 common cloud pitfalls IT should avoid
    http://www.cio.com/article/3033820/cloud-computing/3-common-cloud-pitfalls-it-should-avoid.html

    As modern businesses flock to the cloud, many fail to perform the necessary due diligence and accordingly fall victim to these common mistakes.

    Here are three mistakes IT professionals making a switch to the cloud should avoid:
    1. Cloud means lots of support for staff
    2. Limitations of cloud’s one-size-fits-all approach
    3. Proactive planning for growth objectives

    IT leaders need to rethink how their businesses run and determine how assets — software, hardware and data — can be optimized to drive business value

    Cloud-based tools are incredibly powerful, but businesses need to do their homework, know the challenges they’ll likely encounter and develop plans that incorporate new organizational visions, as well as realistic business goals.

    Reply
  49. Tomi Engdahl says:

    Hey British coders: DevOps – you’re doing it wrong
    Plus you need new metrics. Chin up, let’s get this sorted
    http://www.theregister.co.uk/2016/02/19/devops_needs_new_success_metrics/

    If you want a brief summary of DevOps, it goes like this: a lot of those who claim to be implementing DevOps aren’t getting it right. And British companies are doing worse than their peers abroad.

    Those are the potted findings of a CA Technologies study earlier this year, which claimed there exists a gap in perception, pointing out that 84 per cent of UK organisations agreed it is important to have IT and business alignment in relation to DevOps, but just 36 per cent already had this goal in place.

    But this distinction is jumping the gun. One of the main elements of DevOps is the way it breaks down barriers and dispenses with some of the existing metrics used to measure IT delivery.

    Given this, then, how is it possible to measure elements such as “business and IT alignment” – areas where business units may have a different idea of success from the IT teams.

    It’s a bit like asking people how much they’re drinking: they fill in questionnaires saying they’re moderate imbibers when in reality they’re knocking it back like George Best on a Saturday night. People aren’t always good at self-assessment and can’t be relied upon to deliver accurate figures.

    Georg von Sperling, senior director at CA Technologies, reckons we need to develop a whole new set of metrics when it comes to DevOps deployment.

    “Traditional metrics are useless,” Von Sperling told The Reg.

    The usual indicators of what marks success are outdated while, in other cases, there are elements that companies shouldn’t be measuring at all, according to Von Sperling. Examples include “vanity” metrics such as lines of code produced or function points created as these reward the wrong type of behaviour.

    Reply
