Computer trends for 2014

Here is my collection of trends and predictions for the year 2014:

It seems that the PC market is not recovering in 2014. IDC is forecasting that the technology channel will take in around 34 million fewer PCs this year than last, and things aren’t going to improve any time soon (down, down, down until 2017?). There will be no let-up on any front, with desktops and portables predicted to decline in both the mature and emerging markets. Perhaps the chief concern for future PC demand is a lack of reasons to replace an older system: PC usage has not moved significantly beyond consumption and productivity tasks to differentiate PCs from other devices. As a result, PC lifespans continue to increase. The Death of the Desktop article says that, sadly for the traditional desktop, it is only a matter of time before its purpose expires, and that this is inevitable within this decade. (I expect that it will not completely disappear.)

While the PC business slowly shrinks, the smartphone and tablet business will grow quickly. Some time in the next six months, the number of smartphones on earth will pass the number of PCs. This shouldn’t really surprise anyone: the mobile business is much bigger than the computer industry. There are now perhaps 3.5-4 billion mobile phones, replaced every two years, versus 1.7-1.8 billion PCs replaced every 5 years. Smartphones broke down the wall between those industries a few years ago – suddenly tech companies could sell into an industry with $1.2 trillion in annual revenue. Now you can sell more phones in a quarter than the PC industry sells in a year.
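A rough back-of-the-envelope calculation, using the installed-base and replacement-cycle estimates above (they are estimates, not exact figures), shows why the volumes are so lopsided:

    # Illustrative arithmetic based on the estimates quoted above
    phones_installed = 3.75e9      # midpoint of 3.5-4 billion phones in use
    phone_replacement_years = 2
    pcs_installed = 1.75e9         # midpoint of 1.7-1.8 billion PCs in use
    pc_replacement_years = 5

    phones_per_year = phones_installed / phone_replacement_years   # ~1.9 billion units/year
    pcs_per_year = pcs_installed / pc_replacement_years            # ~350 million units/year

    print(f"Phones per year:    {phones_per_year / 1e9:.2f} billion")
    print(f"PCs per year:       {pcs_per_year / 1e6:.0f} million")
    print(f"Phones per quarter: {phones_per_year / 4 / 1e6:.0f} million")

On these numbers, even a single quarter of phone volume (roughly 470 million units) exceeds a full year of PC shipments.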

After some years we will end up with somewhere over 3 billion smartphones in use on earth, almost double the number of PCs. There are perhaps 900 million consumer PCs on earth, and maybe 800 million corporate PCs. The consumer PCs are mostly shared and the corporate PCs locked down, and neither are really mobile. Those 3 billion smartphones will all be personal, and all mobile. Mobile browsing is set to overtake traditional desktop browsing in 2015. The smartphone revolution is changing how consumers use the Internet. This will influence web design.

The only PC sector that seems to have some growth is the server side. The Microservers & Cloud Computing to Drive Server Growth article says that increased demand for cloud computing and high-density microserver systems has brought the server market back from a state of decline. We’re seeing fairly significant change in the server market. According to the 2014 IC Market Drivers report, server unit shipments will grow over the next several years, thanks to purchases of new, cheaper microservers. The total server IC market is projected to rise by 3% in 2014 to $14.4 billion: the multicore MPU segment for microservers and NAND flash memories for solid state drives are expected to see better numbers.

According to the Spinning rust and tape are DEAD. The future’s flash, cache and cloud article, flash is the tier for primary data; the stuff christened tier 0. Data that needs to be written out to a slower response store goes across a local network link to a cloud storage gateway, which holds the tier 1 nearline data in its cache. The Never mind software-defined HYPE, 2014 will be the year of storage FRANKENPLIANCES article says that more hype around Software-Defined-Everything will keep the marketeers and the marchitecture specialists well employed for the next twelve months, but don’t expect anything radical. The only innovation is going to be around pricing and consumption models as vendors try to maintain margins. FCoE will continue to be a side-show, and FC, like tape, will soldier on happily. NAS will continue to eat away at the block storage market, and perhaps 2014 will be the year that object storage finally takes off.
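As a purely illustrative sketch of that tiering idea (the class and thresholds below are mine, not from the article), a write path that keeps recently written data on flash and demotes colder data to a cloud storage gateway might look something like this:

    import time

    class TieredStore:
        # Toy model: flash as tier 0, a cloud gateway cache holding tier 1 nearline data.

        def __init__(self, hot_ttl_seconds=3600):
            self.flash_tier0 = {}       # fast primary storage (tier 0)
            self.gateway_cache = {}     # cloud storage gateway cache (tier 1 nearline)
            self.hot_ttl = hot_ttl_seconds

        def write(self, key, value):
            # New writes land on flash, the primary tier.
            self.flash_tier0[key] = (value, time.time())

        def demote_cold_data(self):
            # Data not rewritten within the TTL moves to the gateway cache, which in a
            # real system would push it on to cloud object storage in the background.
            now = time.time()
            for key, (value, written) in list(self.flash_tier0.items()):
                if now - written > self.hot_ttl:
                    self.gateway_cache[key] = value
                    del self.flash_tier0[key]

        def read(self, key):
            if key in self.flash_tier0:
                return self.flash_tier0[key][0]
            return self.gateway_cache.get(key)   # slower, nearline path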

The IT managers are increasingly replacing servers with SaaS article says that cloud providers are taking on a bigger share of servers as the overall market starts to decline. An in-house system is no longer the default for many companies. IT managers want to cut the number of servers they manage, or at least slow the growth, and they may be succeeding. IDC expects that anywhere from 25% to 30% of all the servers shipped next year will be delivered to cloud services providers. In three years, by 2017, nearly 45% of all the servers leaving manufacturers will be bought by cloud providers. The shift will slow server sales to enterprise IT. Big cloud providers are increasingly using their own designs instead of servers from big manufacturers. Data center consolidations are eliminating servers as well. For sure, IT managers are going to be managing physical servers for years to come, but the number will be declining.

I hope that the IT business will start to grow this year as predicted. Information technology spending is set to increase next financial year, according to N Chandrasekaran, chief executive and managing director of Tata Consultancy Services (TCS), India’s largest information technology (IT) services company. IDC predicts that IT consumption will increase 5 per cent worldwide next year, to $2.14 trillion. It is expected that the biggest opportunity will lie in the digital space: social, mobility, cloud and analytics. The gradual recovery of the economy in Europe will restore faith in business. Companies are re-imagining their businesses, keeping in mind changing digital trends.

The death of Windows XP will be in the news many times during the spring, and there will be companies trying to cash in on it: Microsoft’s plan to end Windows XP support next spring has prompted IT service providers, as well as competitors, to invest in marketing their own services. HP is peddling its Connected Backup 8.8 service to customers to prevent data loss during migration. VMware is selling a cloud desktop service. Google is wooing users to switch to ChromeOS by making Chrome’s user interface familiar to wider audiences. The most direct attempt to exploit XP’s end of life comes from Arkoon, a subsidiary of the European defense giant EADS, which promises support for XP users who do not want to or cannot upgrade their systems.

There will be talk about what is coming from Microsoft next year. Microsoft is reportedly planning to launch a series of updates in 2015 that could see major revisions for the Windows, Xbox, and Windows RT platforms. Microsoft’s wave of spring 2015 updates to its various Windows-based platforms has a codename: Threshold. If all goes according to early plans, Threshold will include updates to all three OS platforms (Xbox One, Windows and Windows Phone).

Amateur programmers are becoming increasingly prevalent in the IT landscape. A new IDC study has found that of the 18.5 million software developers in the world, about 7.5 million (roughly 40 percent) are “hobbyist developers,” which is what IDC calls people who write code even though it is not their primary occupation. The boom in hobbyist programmers should cheer computer literacy advocates. IDC estimates there are almost 29 million ICT-skilled workers in the world as we enter 2014, including 11 million professional developers.

The Challenge of Cross-language Interoperability will be talked about more and more. Interfacing between languages will be increasingly important. You can no longer expect a nontrivial application to be written in a single language. With software becoming ever more complex and hardware less homogeneous, the likelihood of a single language being the correct tool for an entire program is lower than ever. The trend toward increased complexity in software shows no sign of abating, and modern hardware creates new challenges. Mobile phones are now starting to appear with eight cores sharing the same ISA (instruction set architecture) but running at different speeds, alongside streaming processors optimized for different workloads (DSPs, GPUs) and other specialized cores.
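A common, everyday form of this interfacing is a foreign function interface. As a minimal sketch (assuming a Unix-like system where Python’s ctypes can locate the standard C library), calling a C function from Python looks like this:

    import ctypes
    import ctypes.util

    # Load the platform's standard C library.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # Declare the C signature of strlen so ctypes can marshal arguments correctly.
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    # Cross the language boundary: Python bytes in, C size_t out.
    print(libc.strlen(b"cross-language call"))   # prints 19

Every such boundary brings its own costs and failure modes (marshalling, differing memory models, error handling), which is exactly why the interoperability challenge keeps growing as programs mix more languages.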

Just another new USB connector type will be pushed to the market. The Lightning strikes USB bosses: Next-gen ‘type C’ jacks will be reversible article says that USB is to get a new, smaller connector that, like Apple’s proprietary Lightning jack, will be reversible. Designed to support both USB 3.1 and USB 2.0, the new connector, dubbed “Type C”, will be the same size as an existing micro USB 2.0 plug.

1,615 Comments

  1. Tomi Engdahl says:
    ‘Reactive’ Development Turns 2.0
    http://developers.slashdot.org/story/14/09/21/0547231/reactive-development-turns-20

    First there was “agile” development. Now there’s a new software movement—called ‘reactive’ development—that sets out principles for building resilient and failure-tolerant applications for cloud, mobile, multicore and Web-scale systems.

    As Systems Get More Complex, Programming Is Getting “Reactive”
    A new way to develop for the cloud.
    http://readwrite.com/2014/09/19/reactive-programming-jonas-boner-typesafe

    Hardware keeps getting smaller, more powerful and more distributed. To keep up with growing system complexity, there’s a growing software revolution—called “reactive” development—that defines how to architect applications that are going to participate in this new world of multicore, cloud, mobile and Web-scale systems.

    One of the leaders of the reactive-software movement is distributed computing expert and Typesafe co-founder and CTO Jonas Bonér, who published the original Reactive Manifesto in September 2013.

    Similar to the early days of the “agile” software development movement, reactive programming got early traction with a hardcore fan base (mostly functional programming, distributed computing and performance experts) but is starting to creep into more mainstream development conversations as high-profile organizations like Netflix adopt and evangelize the reactive model.

    A Reactive Solution To Broken Development

    ReadWrite: So what’s not reactive about software today, and what needs to change?

    Jonas Bonér: Basically what’s “broken” ties back to software having synchronous call request chains and poor isolation, yielding single points of failure and too much contention. The problem exists in different parts of the application infrastructure.

    At the database layer, most SQL/RDBMS databases still rely on a thread pool or connection pool accessing the database through blocking APIs.

    In the service layer, we usually see a tangled mix of highly contended, shared mutable state managed by strongly coupled deep request chains. This makes this layer immensely hard to scale and to make resilient.

    The problem is usually “addressed” by adding more tools and infrastructure; clustering products, data grids, etc. But unfortunately this won’t help much at all unless we address the fundamental underlying problem.

    This is where reactive can help; good solid principles and practices can make all the difference—in particular relying on share nothing designs and asynchronous message passing.
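    (Editorial aside, not part of the interview: as a minimal sketch of the share-nothing, asynchronous message-passing style Bonér describes, here is a toy Python asyncio worker; the names and messages are made up for illustration.)

        import asyncio

        async def worker(name, inbox):
            # The worker owns its state and communicates only via messages (share nothing).
            processed = 0
            while True:
                msg = await inbox.get()       # wait asynchronously for the next message
                if msg is None:               # a None message signals shutdown
                    break
                processed += 1
                print(f"{name} handled {msg!r} (total {processed})")

        async def main():
            inbox = asyncio.Queue()
            task = asyncio.create_task(worker("worker-1", inbox))
            for item in ("req-1", "req-2", "req-3"):
                await inbox.put(item)         # asynchronous message passing, no shared state
            await inbox.put(None)
            await task

        asyncio.run(main())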

    ReadWrite: What’s the goal of the reactive movement? What are you trying to accomplish?

    JB: A lot of companies have been doing reactive without calling it “reactive” for quite some time, in the same way companies did agile software development before it was called “agile.” But giving an idea a name and defining a vocabulary around it makes it easier to talk about and communicate with people.

    We found these core principles to work well together in a cohesive story. People have used these approaches years before

    JB: The reactive principles trace all the way back to the 1970s (e.g., Tandem Computers) and 1980s (e.g., Erlang), but scale challenges are for everybody today. You don’t have to be Facebook or Google anymore to have these types of problems. There’s more data being produced by individual users, who consume more data, and expect so much more, faster.

    There’s more data to shuffle around at the service layer; replication that needs to be done instantaneously, and the need to go to multiple nodes almost instantaneously.

    And the opportunities have changed, where virtualization and containerization make it easy to spin up nodes and cost almost nothing—but where it’s much harder for the software to keep up with those nodes in an efficient way.

    Reply
  2. Tomi Engdahl says:
    Why big data evangelists should be sent to re-education camps
    http://www.zdnet.com/why-big-data-evangelists-should-be-sent-to-re-education-camps-7000033862/

    Summary: Big data is a dangerous, faith-based ideology. It’s fuelled by hubris, it’s ignorant of history, and it’s trashing decades of progress in social justice.

    In 2008, Chris Anderson talked up a thing called The Petabyte Age in The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.

    “The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all,” he wrote.

    Declaring the scientific method dead after 2,700 years is quite a claim. Hubris, even. But, Anderson wrote, “There’s no reason to cling to our old ways.” Oh, OK then.

    Now, this isn’t the first set of claims that correlation would supersede causation, and that the next iteration of computing practices would “make everything different”.

    Privacy issues are obviously a concern. As I’ve said before, privacy fears could burst the second dot-com bubble.

    In their paper Critical questions for big data, danah boyd and Kate Crawford describe the core mythology of big data as “the widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy”.

    “Too often, big data enables the practice of apophenia: Seeing patterns where none actually exist, simply because enormous quantities of data can offer connections that radiate in all directions. In one notable example, Leinweber (2007) demonstrated that data mining techniques could show a strong but spurious correlation between the changes in the S&P 500 stock index and butter production in Bangladesh,” they wrote.
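    (Editorial aside, not from the paper: a tiny sketch of how easily such spurious correlations turn up. The series below are independent random walks with, by construction, nothing to do with each other; scanning enough unrelated pairs, as a data-mining exercise would, almost always turns up a pair that looks strongly correlated.)

        import random

        def random_walk(n, seed):
            rng = random.Random(seed)
            walk, value = [], 0.0
            for _ in range(n):
                value += rng.gauss(0, 1)
                walk.append(value)
            return walk

        def pearson(xs, ys):
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            vx = sum((x - mx) ** 2 for x in xs)
            vy = sum((y - my) ** 2 for y in ys)
            return cov / (vx * vy) ** 0.5

        # Scan many pairs of unrelated walks and report the strongest correlation found.
        # With enough series to compare, something always appears to "correlate".
        best = max(abs(pearson(random_walk(250, s), random_walk(250, s + 1000)))
                   for s in range(100))
        print(f"strongest correlation among 100 unrelated pairs: {best:.2f}")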

    Over the last four decades, more countries have adopted data protection laws, and more of those laws are including measures similar to the 1995 European Union Data Protection Directive rather than the 1980 OECD Privacy Guidelines

    Against the increasingly “Europeanised” data privacy laws, the US is the laggard. Greenleaf compares this with the situation a century ago, when the US was the pirates’ harbour of the copyright world

    CRITICAL QUESTIONS FOR BIG DATA
    Provocations for a cultural, technological, and scholarly phenomenon
    http://www.tandfonline.com/doi/abs/10.1080/.VB-t4xYbBmY#.VB_SBBYbNI0

    The era of Big Data has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and other scholars are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing genetic sequences, social media interactions, health records, phone logs, government records, and other digital traces left by people. Significant questions emerge. Will large-scale search data help us create better tools, services, and public goods? Or will it usher in a new wave of privacy incursions and invasive marketing?

    Reply
  3. Tomi Engdahl says:
    Apple pours a cup of JavaScript for its Automator robot
    A quiet revolution in Automation
    http://www.theregister.co.uk/2014/09/22/apple_pours_a_cup_of_javascript_for_its_automator_robot/

    Apple has quietly started toying with the idea of using JavaScript as a task automator in the Yosemite version of OS X.

    The JavaScript host environment adds properties for automation, application, Library, Path, Progress, ObjectSpecifier, delay, console.log, and others.

    Reply
  4. Tomi Engdahl says:
    Hack runs Android apps on Windows, Mac, and Linux computers
    Google’s “App Runtime for Chrome” gets hacked to run on any major desktop OS.
    http://arstechnica.com/gadgets/2014/09/hack-runs-android-apps-on-windows-mac-and-linux-computers/

    If you remember, about a week ago, Google gave Chrome OS the ability to run Android apps through the “App Runtime for Chrome.” The release came with a lot of limitations—it only worked with certain apps and only worked on Chrome OS. But a developer by the name of “Vladikoff” has slowly been stripping away these limits. First he figured out how to load any app on Chrome OS, instead of just the four that are officially supported. Now he’s made an even bigger breakthrough and gotten Android apps to work on any desktop OS that Chrome runs on. You can now run Android apps on Windows, Mac, and Linux.

    The hack depends on App Runtime for Chrome (ARC), which is built using Native Client, a Google project that allows Chrome to run native code safely within a web browser. While ARC was only officially released as an extension on Chrome OS, Native Client extensions are meant to be cross-platform. The main barrier to entry is obtaining ARC from the Chrome Web Store, which flags desktop versions of Chrome as “incompatible.”

    While this hack is buggy and crashy, at its core it works. Apps turn on and load up, and, other than some missing dependencies, they work well. It’s enough to make you imagine a future when all the problems get worked out, and Google opens the floodgates on the Play Store, putting 1.3 million Android apps onto nearly every platform.

    Reply
  5. Tomi Engdahl says:
    Toshiba to shed 900 jobs in rocky PC market
    Will restructure after PC sales fell off a cliff
    http://www.theinquirer.net/inquirer/news/2371214/toshiba-to-shed-900-jobs-in-rocky-pc-market

    TOSHIBA WILL CUT 900 jobs in a PC business restructuring that will see the firm exit the business to consumer (B2C) industry in some regions.

    The firm plans to make the job cuts during the current financial year.
    The restructuring could have something to do with the dwindling PC market. Last year, the industry saw its largest drop ever in PC sales, with a 10 percent decline (a total of 316 million PCs were shipped in 2013).

    While research firm Canalys included both tablet and PC shipments in its figures, the firm reported in May that PC sales were up five percent year on year, with shipments reaching 123.7 million in the first quarter of 2014.

    Reply
  6. Tomi Engdahl says:
    Vrvana’s Totem HMD Puts a Camera Over Each Eye
    http://hardware.slashdot.org/story/14/09/22/0052210/vrvanas-totem-hmd-puts-a-camera-over-each-eye

    The Verge reports that Montreal startup Vrvana has produced a prototype of its promised (and crowd-funded) VR Totem headset. One interesting aspect of the Totem is the inclusion of front-facing cameras, one over each eye, the output of which can be fed to the displays.

    The clarity was impressive, rivaling some of the best experiences I’ve had with a Rift or Morpheus.

    Reply
  7. Tomi Engdahl says:
    Oracle’s biggest threat: ‘No changes whatsoever’
    http://www.zdnet.com/oracles-biggest-threat-no-changes-whatsoever-7000033888/

    Summary: Oracle changed titles among its top three execs and tried to calm the troops by promising that nothing will change. Is that really a good thing?

    Larry Ellison’s move to step down as CEO of Oracle to become chief technology officer as well as making Safra Catz and Mark Hurd co-CEOs was smoothed over by promises that nothing will change about the company’s approach, day-to-day operations or strategy. Are those steady-as-Oracle goes promises a good thing?

    Let’s get real. Ellison’s move to step down from CEO may not amount to much given that Catz and Hurd were effectively running the company anyway.

    The issue is that Oracle has moved from a technology company to a cross selling machine. Oracle acquires software, cloud and hardware companies, bundles them and sells. Yet Oracle is facing multiple challenges ranging from a transition to the cloud, declining hardware sales and a customer base that is likely to use the company’s core relational database as well as alternatives for big data workloads. In other words, Oracle isn’t the only database in town.

    Oracle’s strategy — embrace cloud, win on applications and keep database customers with in-memory options — can work. But the transition will take time.

    Applications. Oracle is moving its applications customers to a subscription and cloud delivery model. However, cloud customers aren’t locked in as easily.

    Hardware. Oracle’s hardware business has stumbled for years.

    Weak performance. Oracle has missed five out of the last seven quarters. At some point, patience wears thin.

    Reply
  8. Tomi Engdahl says:
    Once Again, Oracle Must Reinvent Itself
    As Larry Ellison Leaves CEO Post, Company Faces Major Shifts Reshaping Its Market
    http://online.wsj.com/articles/once-again-oracle-must-reinvent-itself-1411167886
    Reply
  9. twitter shouldnt says:
    I am not sure where you’re getting your information, but great topic.
    I needs to spend some time learning more or
    understanding more. Thanks for fantastic info I
    was looking for this information for my mission.
    Reply
  10. Tomi Engdahl says:
    DisplayPort Alternate Mode for USB Type-C Announced – Video, Power, & Data All Over Type-C
    by Ryan Smith on September 22, 2014 9:01 AM EST
    http://www.anandtech.com/show/8558/displayport-alternate-mode-for-usb-typec-announced

    Earlier this month the USB Implementers Forum announced the new USB Power Delivery 2.0 specification. Long awaited, the Power Delivery 2.0 specification defined new standards for power delivery to allow Type-C USB ports to supply devices with much greater amounts of power than the previous standard allowed, now up to 5A at 5V, 12V, and 20V, for a maximum power delivery of 100W. However also buried in that specification was an interesting, if cryptic, announcement regarding USB Alternate Modes, which would allow for different (non-USB) signals to be carried over the USB Type-C connector. At the time the specification simply theorized just what protocols could be carried over Type-C as an alternate mode, but today we finally know what the first alternate mode will be: DisplayPort.

    Today the VESA is announcing that they are publishing the “DisplayPort Alternate Mode on USB Type-C Connector Standard.” Working in conjunction with the USB-IF, the DP Alt Mode standard will allow standard USB Type-C connectors and cables to carry native DisplayPort signals.

    From a technical level the DP Alt Mode specification is actually rather simple. USB Type-C – which immediately implies using/supporting USB 3.1 signaling – uses 4 lanes (pairs) of differential signaling for USB Superspeed data, which are split up in a 2-up/2-down configuration for full duplex communication. Through the Alt Mode specification, DP Alt Mode will then in turn be allowed to take over some of these lanes – one, two, or all four – and run DisplayPort signaling over them in place of USB Superspeed signaling. By doing so a Type-C cable is then able to carry native DisplayPort video alongside its other signals, and from a hardware standpoint this is little different than a native DisplayPort connector/cable pair.

    From a hardware perspective this will be a simple mux. USB alternate modes do not encapsulate other protocols (ala Thunderbolt) but instead allocate lanes to those other signals as necessary

    Along with utilizing USB lanes for DP lanes, the DP Alt Mode standard also includes provisions for reconfiguring the Type-C secondary bus (SBU) to carry the DisplayPort AUX channel. This half-duplex channel is normally used by DisplayPort devices to carry additional non-video data such as audio, EDID, HDCP, touchscreen data, MST topology data, and more.
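    (Editorial aside, not part of the AnandTech article: a toy model of the lane trade-off described above. It only restates what the text says: Type-C carries four high-speed lane pairs, DP Alt Mode may claim one, two, or all four of them, and whatever is left keeps carrying USB SuperSpeed signaling.)

        TOTAL_HIGH_SPEED_LANES = 4

        def allocate_lanes(dp_lanes):
            # DP Alt Mode can take over one, two, or all four lanes (zero means plain USB).
            if dp_lanes not in (0, 1, 2, 4):
                raise ValueError("DP Alt Mode uses 0, 1, 2, or 4 lanes")
            usb_lanes = TOTAL_HIGH_SPEED_LANES - dp_lanes
            return {
                "displayport_lanes": dp_lanes,
                "usb_superspeed_lanes": usb_lanes,
                # Full-duplex SuperSpeed needs at least one TX pair and one RX pair.
                "usb_superspeed_possible": usb_lanes >= 2,
            }

        for lanes in (0, 2, 4):
            print(lanes, "DP lanes ->", allocate_lanes(lanes))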

    Reply
  11. Tomi Engdahl says:
    Outlining Thin Linux
    http://linux.slashdot.org/story/14/09/22/2245217/outlining-thin-linux

    Deep End’s Paul Venezia follows up his call for splitting Linux distros in two by arguing that the new shape of the Linux server is thin, light, and fine-tuned to a single purpose. “Those of us who build and maintain large-scale Linux infrastructures would be happy to see a highly specific, highly stable mainstream distro that had no desktop package or dependency support whatsoever”

    “It’s only a matter of time before a Linux distribution that caters solely to these considerations becomes mainstream and is offered alongside more traditional distributions.”

    The skinny on thin Linux
    http://www.infoworld.com/article/2686094/linux/the-skinny-on-thin-linux.html

    In the leap from Web to cloud, the new shape of the Linux server is thin, light, and fine-tuned to a single purpose

    Let’s put that mostly to bed. Those of us who build and maintain large-scale Linux infrastructures would be happy to see a highly specific, highly stable mainstream distro that had no desktop package or dependency support whatsoever, so was not beholden to architectural changes made due to desktop package requirements. When you’re rolling out a few hundred Linux VMs locally, in the cloud, or both, you won’t manually log into them, much less need any type of graphical support. Frankly, you could lose the framebuffer too; it wouldn’t matter unless you were running certain tests. They’re all going to be managed by Puppet, Chef, Salt, or Ansible, and they’re completely expendable.

    Now with VMs, the lack of framebuffer support is somewhat immaterial because it’s not a hardware consideration anymore. But the overall concept still applies — in many cases, any interactive administrative access to Linux servers other than SSH is simply not useful.

    This, again, is at scale and for certain use cases. It is, however, the predominant way that cloud server instances are administered. In fact, at scale, most cloud instances are never interactively accessed at all. They are built on the fly from gold images and turned up and down as load requires.

    Further, these instances are usually one-trick ponies. They perform one task, with one service, and that’s it. This is one of the reasons that Docker and other container technologies are gaining traction: They are designed to do one thing quickly and easily, with portability, and to disappear once they are no longer needed.

    These systems can be pared down to the barest of bare bones because they’re running Memcached or Nginx. They’re doing nothing else, and they never will. This is a vastly different use case than most other types of Linux servers running today

    To create such a beast, most vendors have taken existing distributions, excised as much as possible, and tuned them for their infrastructure. They then offer these images to build base images for provisioning. It’s only a matter of time before a Linux distribution that caters solely to these considerations becomes mainstream and is offered alongside more traditional distributions.

    Reply
