Computer trends for 2014

Here is my collection of trends and predictions for the year 2014:

It seems that the PC market is not recovering in 2014. IDC is forecasting that the technology channel will buy in around 34 million fewer PCs this year than last. It seems that things aren’t going to improve any time soon (down, down, down until 2017?). There will be no let-up on any front, with desktops and portables predicted to decline in both the mature and emerging markets. Perhaps the chief concern for future PC demand is a lack of reasons to replace an older system: PC usage has not moved significantly beyond consumption and productivity tasks to differentiate PCs from other devices. As a result, PC lifespans continue to increase. The Death of the Desktop article says that, sadly for the traditional desktop, it is only a matter of time before its purpose expires, and that this will inevitably happen within this decade. (I expect that it will not completely disappear.)

While the PC business is slowly shrinking, the smartphone and tablet business will grow quickly. Some time in the next six months, the number of smartphones on earth will pass the number of PCs. This shouldn’t really surprise anyone: the mobile business is much bigger than the computer industry. There are now perhaps 3.5-4 billion mobile phones, replaced every two years, versus 1.7-1.8 billion PCs replaced every 5 years. Smartphones broke down the wall between those industries a few years ago – suddenly tech companies could sell into an industry with $1.2 trillion in annual revenue. Now you can sell more phones in a quarter than the PC industry sells in a year.
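A quick sanity check of that replacement-rate arithmetic shows why the quarterly/annual comparison holds. The installed-base midpoints below are my own rough assumptions, not analyst figures:

```python
# Back-of-the-envelope check of the replacement-rate claim above.
phones_in_use = 3.75e9          # midpoint of the 3.5-4 billion estimate (assumed)
phone_replacement_years = 2
pcs_in_use = 1.75e9             # midpoint of the 1.7-1.8 billion estimate (assumed)
pc_replacement_years = 5

phones_per_year = phones_in_use / phone_replacement_years   # ~1.9 billion/year
pcs_per_year = pcs_in_use / pc_replacement_years            # ~350 million/year
phones_per_quarter = phones_per_year / 4                    # ~470 million/quarter

print(f"Phones per quarter: {phones_per_quarter / 1e6:.0f} million")
print(f"PCs per year:       {pcs_per_year / 1e6:.0f} million")
```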

Within a few years we will end up with somewhere over 3bn smartphones in use on earth, almost double the number of PCs. There are perhaps 900m consumer PCs on earth, and maybe 800m corporate PCs. The consumer PCs are mostly shared and the corporate PCs locked down, and neither is really mobile. Those 3 billion smartphones will all be personal, and all mobile. Mobile browsing is set to overtake traditional desktop browsing in 2015. The smartphone revolution is changing how consumers use the Internet, and this will influence web design.


The only PC sector that seems to have some growth is the server side. The Microservers & Cloud Computing to Drive Server Growth article says that increased demand for cloud computing and high-density microserver systems has brought the server market back from a state of decline. We’re seeing fairly significant change in the server market. According to the 2014 IC Market Drivers report, server unit shipments will grow in the next several years, thanks to purchases of new, cheaper microservers. The total server IC market is projected to rise by 3% in 2014 to $14.4 billion: the multicore MPU segment for microservers and NAND flash memories for solid-state drives are expected to see better numbers.

Spinning rust and tape are DEAD. The future’s flash, cache and cloud article says that flash is the tier for primary data; the stuff christened tier 0. Data that needs to be written out to a slower response store goes across a local network link to a cloud storage gateway, which holds the tier 1 nearline data in its cache. The Never mind software-defined HYPE, 2014 will be the year of storage FRANKENPLIANCES article says that more hype around Software-Defined-Everything will keep the marketeers and the marchitecture specialists well employed for the next twelve months, but don’t expect anything radical. The only innovation is going to be around pricing and consumption models as vendors try to maintain margins. FCoE will continue to be a side-show and FC, like tape, will soldier on happily. NAS will continue to eat away at the block storage market and perhaps 2014 will be the year that object storage finally takes off.

The IT managers are increasingly replacing servers with SaaS article says that cloud providers are taking on a bigger share of servers as the overall market starts declining. An in-house system is no longer the default for many companies. IT managers want to cut the number of servers they manage, or at least slow the growth, and they may be succeeding. IDC expects that anywhere from 25% to 30% of all the servers shipped next year will be delivered to cloud services providers. In three years, by 2017, nearly 45% of all the servers leaving manufacturers will be bought by cloud providers. The shift will slow server sales to enterprise IT. Big cloud providers are increasingly using their own designs instead of servers from big manufacturers. Data center consolidations are eliminating servers as well. For sure, IT managers are going to be managing physical servers for years to come, but the number will be declining.

I hope that the IT business will start to grow this year as predicted. Information technology spending is set to increase next financial year, according to N Chandrasekaran, chief executive and managing director of Tata Consultancy Services (TCS), India’s largest information technology (IT) services company. IDC predicts that IT spending will increase next year by 5 per cent worldwide to $2.14 trillion. It is expected that the biggest opportunity will lie in the digital space: social, mobility, cloud and analytics. The gradual recovery of the economy in Europe will restore faith in business. Companies are re-imagining their business, keeping in mind changing digital trends.

The death of Windows XP will be in the news many times this spring, and there will be companies trying to cash in on it: Microsoft’s plan to end Windows XP support next spring has prompted IT service providers as well as competitors to invest in marketing their own services. HP is peddling its Connected Backup 8.8 service to customers to prevent data loss during migration. VMware is selling a cloud desktop service. Google is wooing users to switch to Chrome OS by making Chrome’s user interface familiar to wider audiences. The most aggressive in exploiting XP’s end is Arkoon, a subsidiary of the European defense giant EADS, which promises support for XP users who do not want to, or cannot, upgrade their systems.

There will be talk about what is coming from Microsoft next year. Microsoft is reportedly planning to launch a series of updates in 2015 that could see major revisions for the Windows, Xbox, and Windows RT platforms. Microsoft’s wave of spring 2015 updates to its various Windows-based platforms has a codename: Threshold. If all goes according to early plans, Threshold will include updates to all three OS platforms (Xbox One, Windows and Windows Phone).


Amateur programmers are becoming increasingly prevalent in the IT landscape. A new IDC study has found that of the 18.5 million software developers in the world, about 7.5 million (roughly 40 percent) are “hobbyist developers,” which is what IDC calls people who write code even though it is not their primary occupation. The boom in hobbyist programmers should cheer computer literacy advocates. IDC estimates there are almost 29 million ICT-skilled workers in the world as we enter 2014, including 11 million professional developers.

The Challenge of Cross-language Interoperability will be talked about more and more. Interfacing between languages will be increasingly important. You can no longer expect a nontrivial application to be written in a single language. With software becoming ever more complex and hardware less homogeneous, the likelihood of a single language being the correct tool for an entire program is lower than ever. The trend toward increased complexity in software shows no sign of abating, and modern hardware creates new challenges. Mobile phones are now appearing with eight cores sharing the same ISA (instruction set architecture) but running at different speeds, plus streaming processors optimized for different workloads (DSPs, GPUs) and other specialized cores.
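To make the interfacing point concrete, here is a minimal sketch of one common cross-language bridge: calling a C library from Python through a foreign function interface. It assumes a typical Linux or similar system where the standard C math library can be located by name; the same idea scales up to the mixed-language applications described above.

```python
# Minimal cross-language interfacing example: calling the C math library
# from Python via the ctypes foreign function interface.
import ctypes
import ctypes.util

# Locate and load the C math library (assumes it can be found by name,
# as on a typical Linux/glibc or macOS system).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double cos(double)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0, computed by the C implementation
```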

Just another new USB connector type will be pushed to market. The Lightning strikes USB bosses: Next-gen ‘type C’ jacks will be reversible article says that USB is to get a new, smaller connector that, like Apple’s proprietary Lightning jack, will be reversible. Designed to support both USB 3.1 and USB 2.0, the new connector, dubbed “Type C”, will be the same size as an existing micro USB 2.0 plug.

2,130 Comments

  1. Tomi Engdahl says:

    ‘Reactive’ Development Turns 2.0
    http://developers.slashdot.org/story/14/09/21/0547231/reactive-development-turns-20

    First there was “agile” development. Now there’s a new software movement—called ‘reactive’ development—that sets out principles for building resilient and failure-tolerant applications for cloud, mobile, multicore and Web-scale systems.

    As Systems Get More Complex, Programming Is Getting “Reactive”
    A new way to develop for the cloud.
    http://readwrite.com/2014/09/19/reactive-programming-jonas-boner-typesafe

    Hardware keeps getting smaller, more powerful and more distributed. To keep up with growing system complexity, there’s a growing software revolution—called “reactive” development—that defines how to architect applications that are going to participate in this new world of multicore, cloud, mobile and Web-scale systems.

    One of the leaders of the reactive-software movement is distributed computing expert and Typesafe co-founder and CTO Jonas Bonér, who published the original Reactive Manifesto in September 2013.

    Similar to the early days of the “agile” software development movement, reactive programming got early traction with a hardcore fan base (mostly functional programming, distributed computing and performance experts) but is starting to creep into more mainstream development conversations as high-profile organizations like Netflix adopt and evangelize the reactive model.

    A Reactive Solution To Broken Development

    ReadWrite: So what’s not reactive about software today, and what needs to change?

    Jonas Bonér: Basically what’s “broken” ties back to software having synchronous call request chains and poor isolation, yielding single points of failure and too much contention. The problem exists in different parts of the application infrastructure.

    At the database layer, most SQL/RDBMS databases still rely on a thread pool or connection pool accessing the database through blocking APIs.

    In the service layer, we usually see a tangled mix of highly contended, shared mutable state managed by strongly coupled deep request chains. This makes this layer immensely hard to scale and to make resilient.

    The problem is usually “addressed” by adding more tools and infrastructure; clustering products, data grids, etc. But unfortunately this won’t help much at all unless we address the fundamental underlying problem.

    This is where reactive can help; good solid principles and practices can make all the difference—in particular relying on share nothing designs and asynchronous message passing.
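    (As a toy illustration of that principle, in plain Python rather than any particular reactive toolkit: each worker owns its own state and communicates only through asynchronous message queues, so there is no shared mutable state and no blocking request chain. This is only a sketch of the idea, not how Typesafe’s stack implements it.)

    ```python
    # Share-nothing, asynchronous message passing in miniature:
    # the worker's state is private and all communication goes through queues.
    import asyncio

    async def worker(name, inbox, outbox):
        count = 0                        # state owned exclusively by this worker
        while True:
            msg = await inbox.get()      # asynchronous receive, no blocked threads
            if msg is None:              # shutdown sentinel
                break
            count += 1
            await outbox.put(f"{name} processed {msg!r} (total {count})")

    async def main():
        inbox, outbox = asyncio.Queue(), asyncio.Queue()
        task = asyncio.create_task(worker("w1", inbox, outbox))
        for job in ["req-1", "req-2", "req-3"]:
            await inbox.put(job)         # fire-and-forget message send
        await inbox.put(None)            # ask the worker to shut down
        await task
        while not outbox.empty():
            print(outbox.get_nowait())

    asyncio.run(main())
    ```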

    ReadWrite: What’s the goal of the reactive movement? What are you trying to accomplish?

    JB: A lot of companies have been doing reactive without calling it “reactive” for quite some time, in the same way companies did agile software development before it was called “agile.” But giving an idea a name and defining a vocabulary around it makes it easier to talk about and communicate with people.

    We found these core principles to work well together in a cohesive story. People have used these approaches years before

    JB: The reactive principles trace all the way back to the 1970s (e.g., Tandem Computers) and 1980s (e.g., Erlang), but scale challenges are for everybody today. You don’t have to be Facebook or Google anymore to have these types of problems. There’s more data being produced by individual users, who consume more data, and expect so much more, faster.

    There’s more data to shuffle around at the service layer; replication that needs to be done instantaneously, and the need to go to multiple nodes almost instantaneously.

    And the opportunities have changed, where virtualization and containerization make it easy to spin up nodes and cost almost nothing—but where it’s much harder for the software to keep up with those nodes in an efficient way.

    Reply
  2. Tomi Engdahl says:

    Why big data evangelists should be sent to re-education camps
    http://www.zdnet.com/why-big-data-evangelists-should-be-sent-to-re-education-camps-7000033862/

    Summary: Big data is a dangerous, faith-based ideology. It’s fuelled by hubris, it’s ignorant of history, and it’s trashing decades of progress in social justice.

    In 2008, Chris Anderson talked up a thing called The Petabyte Age in The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.

    “The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all,” he wrote.

    Declaring the scientific method dead after 2,700 years is quite a claim. Hubris, even. But, Anderson wrote, “There’s no reason to cling to our old ways.” Oh, OK then.

    Now, this isn’t the first set of claims that correlation would supersede causation, and that the next iteration of computing practices would “make everything different”.

    Privacy issues are obviously a concern. As I’ve said before, privacy fears could burst the second dot-com bubble.

    In their paper Critical questions for big data, danah boyd and Kate Crawford describe the core mythology of big data as “the widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy”.

    “Too often, big data enables the practice of apophenia: Seeing patterns where none actually exist, simply because enormous quantities of data can offer connections that radiate in all directions. In one notable example, Leinweber (2007) demonstrated that data mining techniques could show a strong but spurious correlation between the changes in the S&P 500 stock index and butter production in Bangladesh,” they wrote.
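    (A small simulation of that apophenia effect, my own illustration rather than Leinweber’s data: scan enough unrelated random series against a target and one of them will always look “strongly” correlated.)

    ```python
    # Data-mining apophenia in miniature: with enough random candidate series,
    # one of them will correlate "strongly" with the target purely by chance.
    import numpy as np

    rng = np.random.default_rng(0)
    target = rng.normal(size=40)                  # e.g. 40 quarterly index returns
    candidates = rng.normal(size=(10_000, 40))    # 10,000 unrelated random series

    corrs = np.array([np.corrcoef(target, c)[0, 1] for c in candidates])
    print(f"Strongest absolute correlation found: {np.abs(corrs).max():.2f}")
    # Typically well above 0.5, a "strong" but entirely spurious relationship.
    ```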

    Over the last four decades, more countries have adopted data protection laws, and more of those laws are including measures similar to the 1995 European Union Data Protection Directive rather than the 1980 OECD Privacy Guidelines

    Against the increasingly “Europeanised” data privacy laws, the US is the laggard. Greenleaf compares this with the situation a century ago, when the US was the pirates’ harbour of the copyright world

    CRITICAL QUESTIONS FOR BIG DATA
    Provocations for a cultural, technological, and scholarly phenomenon
    http://www.tandfonline.com/doi/abs/10.1080/.VB-t4xYbBmY#.VB_SBBYbNI0

    The era of Big Data has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and other scholars are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing genetic sequences, social media interactions, health records, phone logs, government records, and other digital traces left by people. Significant questions emerge. Will large-scale search data help us create better tools, services, and public goods? Or will it usher in a new wave of privacy incursions and invasive marketing?

    Reply
  3. Tomi Engdahl says:

    Apple pours a cup of JavaScript for its Automator robot
    A quiet revolution in Automation
    http://www.theregister.co.uk/2014/09/22/apple_pours_a_cup_of_javascript_for_its_automator_robot/

    Apple has quietly started toying with the idea of using JavaScript as a task automator in the Yosemite version of OS X.

    The JavaScript host environment adds properties for automation, application, Library, Path, Progress, ObjectSpecifier, delay, console.log, and others.

    Reply
  4. Tomi Engdahl says:

    Hack runs Android apps on Windows, Mac, and Linux computers
    Google’s “App Runtime for Chrome” gets hacked to run on any major desktop OS.
    http://arstechnica.com/gadgets/2014/09/hack-runs-android-apps-on-windows-mac-and-linux-computers/

    If you remember, about a week ago, Google gave Chrome OS the ability to run Android apps through the “App Runtime for Chrome.” The release came with a lot of limitations—it only worked with certain apps and only worked on Chrome OS. But a developer by the name of “Vladikoff” has slowly been stripping away these limits. First he figured out how to load any app on Chrome OS, instead of just the four that are officially supported. Now he’s made an even bigger breakthrough and gotten Android apps to work on any desktop OS that Chrome runs on. You can now run Android apps on Windows, Mac, and Linux.

    The hack depends on App Runtime for Chrome (ARC), which is built using Native Client, a Google project that allows Chrome to run native code safely within a web browser. While ARC was only officially released as an extension on Chrome OS, Native Client extensions are meant to be cross-platform. The main barrier to entry is obtaining ARC from the Chrome Web Store, which flags desktop versions of Chrome as “incompatible.”

    While this hack is buggy and crashy, at its core it works. Apps turn on and load up, and, other than some missing dependencies, they work well. It’s enough to make you imagine a future when all the problems get worked out, and Google opens the floodgates on the Play Store, putting 1.3 million Android apps onto nearly every platform.

    Reply
  5. Tomi Engdahl says:

    Toshiba to shed 900 jobs in rocky PC market
    Will restructure after PC sales fell off a cliff
    http://www.theinquirer.net/inquirer/news/2371214/toshiba-to-shed-900-jobs-in-rocky-pc-market

    TOSHIBA WILL CUT 900 jobs in a PC business restructuring that will see the firm exit the business to consumer (B2C) industry in some regions.

    The firm plans to make the job cuts during the current financial year.
    The restructuring could have something to do with the dwindling PC market. Last year, the industry saw its largest drop ever in PC sales, with a 10 percent decline (a total of 316 million PCs were shipped in 2013).

    While research firm Canalys included both tablet and PC shipments in its figures, the firm reported in May that PC sales were up five percent year on year, with shipments reaching 123.7 million in the first quarter of 2014.

    Reply
  6. Tomi Engdahl says:

    Vrvana’s Totem HMD Puts a Camera Over Each Eye
    http://hardware.slashdot.org/story/14/09/22/0052210/vrvanas-totem-hmd-puts-a-camera-over-each-eye

    The Verge reports that Montreal startup Vrvana has produced a prototype of its promised (and crowd-funded) VR Totem headset. One interesting aspect of the Totem is the inclusion of front-facing cameras, one over each eye, the output of which can be fed to the displays.

    The clarity was impressive, rivaling some of the best experiences I’ve had with a Rift or Morpheus.

    Reply
  7. Tomi Engdahl says:

    Oracle’s biggest threat: ‘No changes whatsoever’
    http://www.zdnet.com/oracles-biggest-threat-no-changes-whatsoever-7000033888/

    Summary: Oracle changed titles among its top three execs and tried to calm the troops by promising that nothing will change. Is that really a good thing?

    Larry Ellison’s move to step down as CEO of Oracle to become chief technology officer as well as making Safra Catz and Mark Hurd co-CEOs was smoothed over by promises that nothing will change about the company’s approach, day-to-day operations or strategy. Are those steady-as-Oracle goes promises a good thing?

    Let’s get real. Ellison’s move to step down from CEO may not amount to much given that Catz and Hurd were effectively running the company anyway.

    The issue is that Oracle has moved from a technology company to a cross selling machine. Oracle acquires software, cloud and hardware companies, bundles them and sells. Yet Oracle is facing multiple challenges ranging from a transition to the cloud, declining hardware sales and a customer base that is likely to use the company’s core relational database as well as alternatives for big data workloads. In other words, Oracle isn’t the only database in town.

    Oracle’s strategy — embrace cloud, win on applications and keep database customers with in-memory options — can work. But the transition will take time.

    Applications. Oracle is moving its applications customers to a subscription and cloud delivery model. However, cloud customers aren’t locked in as easily.

    Hardware. Oracle’s hardware business has stumbled for years.

    Weak performance. Oracle has missed five out of the last seven quarters. At some point, patience wears thin.

    Reply
  8. Tomi Engdahl says:

    Once Again, Oracle Must Reinvent Itself
    As Larry Ellison Leaves CEO Post, Company Faces Major Shifts Reshaping Its Market
    http://online.wsj.com/articles/once-again-oracle-must-reinvent-itself-1411167886

    Reply
  9. twitter shouldnt says:

    I am not sure where you’re getting your information, but great topic.
    I needs to spend some time learning more or
    understanding more. Thanks for fantastic info I
    was looking for this information for my mission.

    Reply
  10. Tomi Engdahl says:

    DisplayPort Alternate Mode for USB Type-C Announced – Video, Power, & Data All Over Type-C
    by Ryan Smith on September 22, 2014 9:01 AM EST
    http://www.anandtech.com/show/8558/displayport-alternate-mode-for-usb-typec-announced

    Earlier this month the USB Implementers Forum announced the new USB Power Delivery 2.0 specification. Long awaited, the Power Delivery 2.0 specification defined new standards for power delivery to allow Type-C USB ports to supply devices with much greater amounts of power than the previous standard allowed, now up to 5A at 5V, 12V, and 20V, for a maximum power delivery of 100W. However, also buried in that specification was an interesting, if cryptic, announcement regarding USB Alternate Modes, which would allow for different (non-USB) signals to be carried over the USB Type-C connector. At the time the specification simply theorized just what protocols could be carried over Type-C as an alternate mode, but today we finally know what the first alternate mode will be: DisplayPort.

    Today the VESA is announcing that they are publishing the “DisplayPort Alternate Mode on USB Type-C Connector Standard.” Working in conjunction with the USB-IF, the DP Alt Mode standard will allow standard USB Type-C connectors and cables to carry native DisplayPort signals.

    From a technical level the DP Alt Mode specification is actually rather simple. USB Type-C – which immediately implies using/supporting USB 3.1 signaling – uses 4 lanes (pairs) of differential signaling for USB Superspeed data, which are split up in a 2-up/2-down configuration for full duplex communication. Through the Alt Mode specification, DP Alt Mode will then in turn be allowed to take over some of these lanes – one, two, or all four – and run DisplayPort signaling over them in place of USB Superspeed signaling. By doing so a Type-C cable is then able to carry native DisplayPort video alongside its other signals, and from a hardware standpoint this is little different than a native DisplayPort connector/cable pair.

    From a hardware perspective this will be a simple mux. USB alternate modes do not encapsulate other protocols (ala Thunderbolt) but instead allocate lanes to those other signals as necessary

    Along with utilizing USB lanes for DP lanes, the DP Alt Mode standard also includes provisions for reconfiguring the Type-C secondary bus (SBU) to carry the DisplayPort AUX channel. This half-duplex channel is normally used by DisplayPort devices to carry additional non-video data such as audio, EDID, HDCP, touchscreen data, MST topology data, and more.

    Reply
  11. Tomi Engdahl says:

    Outlining Thin Linux
    http://linux.slashdot.org/story/14/09/22/2245217/outlining-thin-linux

    Deep End’s Paul Venezia follows up his call for splitting Linux distros in two by arguing that the new shape of the Linux server is thin, light, and fine-tuned to a single purpose. “Those of us who build and maintain large-scale Linux infrastructures would be happy to see a highly specific, highly stable mainstream distro that had no desktop package or dependency support whatsoever”

    “It’s only a matter of time before a Linux distribution that caters solely to these considerations becomes mainstream and is offered alongside more traditional distributions.”

    The skinny on thin Linux
    http://www.infoworld.com/article/2686094/linux/the-skinny-on-thin-linux.html

    In the leap from Web to cloud, the new shape of the Linux server is thin, light, and fine-tuned to a single purpose

    Let’s put that mostly to bed. Those of us who build and maintain large-scale Linux infrastructures would be happy to see a highly specific, highly stable mainstream distro that had no desktop package or dependency support whatsoever, so was not beholden to architectural changes made due to desktop package requirements. When you’re rolling out a few hundred Linux VMs locally, in the cloud, or both, you won’t manually log into them, much less need any type of graphical support. Frankly, you could lose the framebuffer too; it wouldn’t matter unless you were running certain tests. They’re all going to be managed by Puppet, Chef, Salt, or Ansible, and they’re completely expendable.

    Now with VMs, the lack of framebuffer support is somewhat immaterial because it’s not a hardware consideration anymore. But the overall concept still applies — in many cases, any interactive administrative access to Linux servers other than SSH is simply not useful.

    This, again, is at scale and for certain use cases. It is, however, the predominant way that cloud server instances are administered. In fact, at scale, most cloud instances are never interactively accessed at all. They are built on the fly from gold images and turned up and down as load requires.

    Further, these instances are usually one-trick ponies. They perform one task, with one service, and that’s it. This is one of the reasons that Docker and other container technologies are gaining traction: They are designed to do one thing quickly and easily, with portability, and to disappear once they are no longer needed.

    These systems can be pared down to the barest of bare bones because they’re running Memcached or Nginx. They’re doing nothing else, and they never will. This is a vastly different use case than most other types of Linux servers running today

    To create such a beast, most vendors have taken existing distributions, excised as much as possible, and tuned them for their infrastructure. They then offer these images to build base images for provisioning. It’s only a matter of time before a Linux distribution that caters solely to these considerations becomes mainstream and is offered alongside more traditional distributions.

    Reply
  12. Tomi Engdahl says:

    The object of the game: NetApp ‘Amazon-izes’ StorageGRID
    Web-scale object storage with geo-distributed erasure coding
    http://www.theregister.co.uk/2014/09/23/netapp_amazonises_storagegrid/

    NetApp has announced a new version of its object storage software, StorageGRID Webscale, and extended its hybrid public:private facilities by “Amazon-izing” it with the addition of an interface with AWS’s online file storage web service S3.

    Geo-distributed erasure coding technology is coming.

    NetApp views object storage as a good means of storing massive amounts of unstructured data that does not need FAS ONTAP-level data management services, requiring secure data management at a reduced cost.

    It was more than four years ago, in April 2010, that NetApp bought Canadian firm Bycast along with its StorageGrid technology – which provided object-based storage across heterogeneous arrays and geographic boundaries. There were then more than 250 StorageGRID customers, with NetApp saying the product was good for petabyte-scale, globally distributed repositories of images, video and records for enterprises and service providers.

    The software runs inside a virtual machine running on a server, obviously, and handles the metadata processing and policy-driven work, writing and reading objects to/from attached storage resources.

    The target use-cases are for on-premise and public cloud storage of:

    Data archives storing larger objects with long retention periods, low transaction loads and latency-tolerant access
    Media repositories with streaming data access to globally distributed large object stores and large throughput rates
    Web data-stores with billions of small objects and high transaction rates

    NetApp says data placement is decided upon “according to cost, compliance, availability, and performance requirements,” and this is policy-driven.

    Reply
  13. Tomi Engdahl says:

    That 8TB Seagate MONSTER? It’s HERE… (You’ll have to squint, ‘cos there are no specs)
    Data gulping disk drive
    http://www.theregister.co.uk/2014/08/26/seagates_eight_terabyte_spinner/

    Seagate is shipping an 8TB disk drive to selected OEM customers including object data-storing CleverSafe, with general availability next quarter. Tech details are sparse, however.

    We know the data devouring beast fits in a standard 3.5-inch drive slot and has a 6Gbit/s SATA interface.

    Reply
  14. Tomi Engdahl says:

    Siemon to educate on data center storage evolution
    http://www.cablinginstall.com/articles/2014/09/siemon-storage-evolution.html

    “Storage solutions are plentiful, and there is no one size fits all for today’s data centers,” says Higbie. “While Fibre Channel remains the predominate SAN technology, Ethernet has some advantages such as speed, support for switched fabric topologies, interoperability and management. As a result, newer storage technologies like Fiber Channel over Ethernet and SCSI over IP are worth examining.”

    As 10 Gigabit Ethernet becomes increasingly popular for providing an open, standards-based data center infrastructure to support multiple technologies, leveraging IP and Ethernet for storage is a potential progression that is driving evolving storage technologies. It’s important for data center managers to understand the variety of storage architectures available, allowing them to make an informed choice depending on their specific needs.

    Reply
  15. Tomi Engdahl says:

    Exclusive: Samsung exits laptop market including Chromebooks
    Following in Sony’s footsteps, Samsung has stopped sales of its laptops in Europe
    http://www.pcadvisor.co.uk/news/laptop/3573470/samsung-exits-laptop-market-including-chromebooks/

    Following in the path of Sony and its Vaio PCs, Samsung has decided to exit the laptop market stopping sales of Ativ Windows and Chromebook devices in Europe, PC Advisor can confirm.

    It’s common knowledge that the PC market is in decline with Sony pulling out and selling its Vaio business back in February of this year. Despite being a giant of the tech world, Samsung has now followed suit.

    “We quickly adapt to market needs and demands. In Europe, we will be discontinuing sales of laptops including Chromebooks for now. This is specific to the region – and is not necessarily reflective of conditions in other markets,” said a Samsung spokesperson.

    Reply
  16. Tomi Engdahl says:

    Supercapacitors have the power to save you from data loss
    Learn all about them
    http://www.theregister.co.uk/2014/09/24/storage_supercapacitors/

    As solid state drives (SSDs) become a critical part of today’s storage, it becomes increasingly important to learn about the supercapacitors that help prevent data loss.

    The presence – and type – of supercapacitors in SSDs should be as important a consideration as choosing between MLC, eMLC and SLC-based drives.

    Supercapacitors in SSDs ensure that any writes sent to the DRAM cache on the drive are successfully written in the event of a power loss.

    All modern hard drives, be they of the traditional spinning magnetic variety or the modern solid state persuasion, have a DRAM buffer to improve performance. (The DRAM buffer is generally called a disk cache, though this is an incorrect usage in all but a few configurations.)

    The DRAM on most of today’s drives is a paltry 64MB, but we usually read and write far more data than 64MB at a time.

    Solving this problem is the point of the DRAM buffer on a traditional magnetic drive: commands are executed out of the buffer in the order that most reduces latency.

    When the power is cut DRAM loses all the data stored in it. Files are rendered corrupt and sysadmins are called into the pointy-haired boss’s office with the door shut.

    Load the UPS software onto your servers and configure them to shut down when the UPS detects a power outage.
    The second option is the battery backup in a RAID card.
    The RAID card can be equipped with a battery backup unit so that if the power fails and the computer goes off the writes are held in DRAM until the computer is back on. The RAID card will then flush the writes to the disks.

    In an SSD there is no head that moves across a spinning platter to read and write information. Electrical impulses are sent to chips consisting of multiple layers of integrated circuits which respond in various ways, resulting in either a “read”, “write” or “erase” operation.

    Also unlike magnetic drives, SSDs read and write in pages but must erase in blocks, and every write must be preceded by an erase. The size of both pages and blocks varies according to manufacturer and product.

    Let’s say that your page size is 4KiB and your block size is 512KiB. To read a single bit the SSD would need to read an entire 4KiB page. It simply wouldn’t be capable of operating at smaller increments. But to write a single bit, an entire 512KiB block would have to be erased and all 128 4KiB pages rewritten.
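    (The read-modify-write arithmetic from that example, spelled out; the 4 KiB / 512 KiB figures are the ones assumed in the article.)

    ```python
    # Worst-case write amplification for the page/block sizes quoted above.
    page_size_kib = 4
    block_size_kib = 512
    pages_per_block = block_size_kib // page_size_kib   # 128 pages

    # Changing a single bit forces an erase of the whole block and a rewrite
    # of every page in it.
    print(f"Pages per block: {pages_per_block}")
    print(f"KiB physically rewritten to change one bit: {block_size_kib}")
    ```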

    The best of the best flash drives, SLC ones, have a typical endurance of 100,000 writes. This means you can erase a block and then write something to its pages about 100,000 times before you can never write to that block again.

    MLC is an order of magnitude less capable, with the consumer-grade stuff typically being capable of 10,000 writes. eMLC (short for enterprise MLC) might get 30,000 writes at the outside, though 20,000 is more common.

    In one sense, the use of DRAM buffers on SSDs is not all that different from their use on magnetic disks.

    Of course, DRAM buffers on SSDs suffer the same problem as those on magnetic disks: cut the power and the data in the buffer is gone. And pending writes are not written, so it’s back to the pointy-haired boss’s office for you.

    You can forget the RAID card trick of disabling the DRAM buffer and relying on the battery-backed RAID card’s DRAM. Try this and you will annihilate your SSD’s write lifetime in short order.

    Supercapacitors are like batteries, but more awesome.
    Supercapacitors can’t store nearly as much energy as a battery
    That is perfectly okay for SSDs, however, as they don’t need to be on for very long to dump the contents of their DRAM cache into flash. Typically, they need to remain up for less than a second.
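    (A rough sanity check of that hold-up time; the sustained write speed here is my own assumed figure, not one from the article.)

    ```python
    # How long must the supercapacitor keep the drive alive to flush its buffer?
    buffer_mb = 64          # typical DRAM buffer size mentioned above
    write_speed_mb_s = 150  # assumed conservative sustained flash write speed

    flush_time_s = buffer_mb / write_speed_mb_s
    print(f"Flush time: {flush_time_s:.2f} s")   # ~0.43 s, well under a second
    ```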

    Supercapacitor-equipped SSDs are available from almost every SSD manufacturer out there, so there is absolutely no excuse not to be using them. If you have non-supercapacitor SSDs in service today, give some very serious thought to replacing them.

    Reply
  17. Tomi Engdahl says:

    Canonical, Oracle go two on one against Red Hat in OpenStack bout
    Each to support its own Linux on other’s cloud stack
    http://www.theregister.co.uk/2014/09/23/canonical_oracle_openstack_teamup/

    While Red Hat is trumpeting that it wants to be the “undisputed” OpenStack market leader, its rivals Canonical and Oracle have teamed up to ensure that each one’s Linux distro plays well with the other’s OpenStack implementation, even though they also compete.

    “As we have said in the past, while Oracle provides solutions for OpenStack, Linux, and virtualization, Oracle also wants to help ensure that customers can receive the same world class support when running Oracle Linux on virtually any platform,” the database giant said in a blog post on Tuesday.

    Under the new partnership, customers who install Oracle Linux as a guest OS on Canonical’s Ubuntu OpenStack distro will be able to receive OS support from Oracle. Likewise, Canonical will support Ubuntu as a guest OS on Oracle OpenStack.

    The move could be seen as a potshot at Red Hat, which is eager to graduate from being an enterprise Linux vendor to a full-scale infrastructure supplier.

    “Red Hat Enterprise Linux and our OpenStack offerings are developed, built, integrated, and supported together to create Red Hat Enterprise Linux OpenStack Platform. This requires tight feature and fix alignment between the Kernel, the hypervisor, and OpenStack services,” Red Hat executive veep Paul Cormier wrote in May, implying that customers would really be better off with an all-Shadowman solution.

    Reply
  18. Tomi Engdahl says:

    Out in the Open: The Site That Teaches You to Code Well Enough to Get a Job
    http://www.wired.com/2014/09/exercism/

    Wanna be a programmer? That shouldn’t be too hard. You can sign up for an iterative online tutorial at a site like Codecademy or Treehouse. You can check yourself into a “coding bootcamp” for a face-to-face crash course in the ways of programming. Or you could do the old-fashioned thing: buy a book or take a class at your local community college.

    But if you want to be a serious programmer, that’s another matter. You’ll need hundreds of hours of practice—and countless mistakes—to learn the trade. It’s often more of an art than a skill—where the best way of doing something isn’t the most obvious way. You can’t really learn to craft code that’s both clear and efficient without some serious trial and error, not to mention an awful lot of feedback on what you’re doing right and what you’re doing wrong.

    That’s where a site called Exercism.io is trying to help. Exercism is updated every day with programming exercises in a variety of different languages. First, you download these exercises using a special software client, and once you’ve completed one, you upload it back to the site, where other coders from around the world will give you feedback. Then you can take what you’ve learned and try the exercise again.

    Exercism
    http://exercism.io/

    Exercism is your place to engage in thoughtful conversations about code. Explore simplicity, idiomatic language features, and expressive readable code.

    Exercises are currently available in Clojure, CoffeeScript, C#, C++, Elixir, Erlang, F#, Go, Haskell, JavaScript, Lua, Objective-C, OCaml, Perl5, Python, Ruby, Scala, and Swift. Coming Up: Java, Rust, Erlang, PHP, and Common Lisp.

    Reply
  19. Tomi Engdahl says:

    CRM is dead

    German software giant SAP says it is retiring the term CRM when speaking of customer relationship management solutions. The company has launched a new term: Customer Engagement & Commerce (CEC).

    According to SAP, this is because the customer relationship can no longer be managed through a single channel. Customers do their shopping through web browsers and smartphone applications, as well as in traditional brick-and-mortar shops.

    “The difference from traditional CRM solutions is that the new CEC solutions combine big data and the different channels with each other.”

    “The CRM experiment failed. Today’s customer requires companies to take a new approach to managing customer engagement.”

    Source: http://www.tivi.fi/kaikki_uutiset/sap+crm+on+kuollut/a1014025

    Reply
  20. Tomi Engdahl says:

    Cloud? We prefer, er, reselling tech, say tech resellers
    But managed services gets the industry’s thumbs-up
    http://www.channelregister.co.uk/2014/09/24/canalys_reselling_survey/

    Good old fashioned kit and licence reselling remains the primary way local tech suppliers pay the bills, with IT services still accounting for less than a quarter of revenue generation.

    This is according to a Canalys survey, which probed 352 channel businesses across the globe to ascertain the impact that classic product sales, alongside off-premises hosting and managed/public cloud services, are having on their top and bottom lines.

    “Product resell is still the most important business model for over 60 per cent of channel partners,” said research director Rachel Brindley.

    Margin erosion, the need to reduce reliance on post-sale vendor handouts (rebates) and customers wanting to change the way they buy and consume technology, are heaping more pressure on tech suppliers to change the way they operate.

    Some 96 per cent of the companies surveyed said they have sold some form of tech as a service. For the “majority” of those questioned, the delivery model comprises less than 25 per cent of overall annual turnover – but two thirds predicted that by 2017 services will go the other way.

    Public cloud is proving to be less popular among channel types than perhaps Microsoft, Google or AWS would like, the survey showed.

    “Channel partners see the top two cloud opportunities as productivity applications, such as email, and infrastructure-as-a-service, both of which are under growing margin pressure. Higher-value applications are not yet seen as the primary opportunities”.

    “Vendors developing go-to-market strategies for cloud must ensure they are not increasing competition with their established partners, but recognise that this is typically delivered as part of a hybrid IT offering,” said Alex Smith, senior analyst.

    Reply
  21. Tomi Engdahl says:

    Now That It’s Private, Dell Targets High-End PCs, Tablets
    http://hardware.slashdot.org/story/14/09/23/2244225/now-that-its-private-dell-targets-high-end-pcs-tablets

    If Dell has a reputation in the PC market, it’s as the company that got low-end PCs to customers cheaply.

    “Because they are no longer reporting to Wall Street, they can be more competitive.”

    Dell’s PC, tablet innovations draw attention
    Dell’s 6-millimeter Venue 8 7000 tablet is highlighting the company’s progress in design
    http://www.itworld.com/print/437632

    In a copycat PC industry, Dell is trying to attract attention with the innovative features and technology firsts that it is bringing to PCs and tablets.

    Dell is adding new hardware and software features that could make an otherwise mundane PC or tablet more attractive to customers. Buyers may have to pay more for the features, but like Apple, Dell hopes to establish a reputation as an innovator and establish a fan base.

    The 8-inch Venue 8 7000 tablet, for example, has drawn attention for its creative design. Unveiled earlier this month at the Intel Developer Forum, it’s the world’s thinnest tablet at 6 millimeters thick and includes Intel’s RealSense 3D depth-sensing camera.

    Historically, the company was not known as a great innovator. It started out in CEO Michael Dell’s dorm room 30 years ago and made strides as a maker of low-cost IBM PC clones selling direct to end users. After going public, and various ups and downs over the years, it became a private company once again last year.

    The company can now boast some industry firsts, several of which are tied to the Venue 8. For example, it was also the first to bring wireless charging capabilities to tablets with a dock for Venue 8.

    In the external display market, Dell was among the first to introduce a 5K screen with the UltraSharp 27 Ultra HD, which can display images at a 5120 x 2880 pixel resolution and will become available later this year.

    Dell is also the only top PC maker with a gaming console, the Alienware Alpha Steam Machine, which will compete against Microsoft’s Xbox One and Sony’s PlayStation 4. The Steam Machine taps into the growing excitement around PC gaming, and will ship in November with Windows 8.1 as the default OS. Users in the future will have the option to install the Linux-based SteamOS, which is being developed by Valve, the world’s largest independent game distributor.

    Reply
  22. Tomi Engdahl says:

    Debian Switching Back To GNOME As the Default Desktop
    http://linux.slashdot.org/story/14/09/23/2312251/debian-switching-back-to-gnome-as-the-default-desktop

    Debian will switch back to using GNOME as the default desktop environment for the upcoming Debian 8.0 Jessie release, due out in 2015.

    Reply
  23. Tomi Engdahl says:

    Ukrainian separatists threaten surge in *gasp*… dealers, PCs
    Don’t mention the war, resellers
    http://www.channelregister.co.uk/2014/09/24/pc_resellers_bounce_back/

    The IT industry has pulled back from the brink of disaster but conflicts in Russia and the Middle East have the potential to push it back to the edge, resellers and distributors were warned today.

    While vendors were seeing flat growth overall, Europe had outperformed Asia and the US over the last nine months.

    “The channel has won. The channel is thriving. The cloud will not kill the channel,” he thundered.

    More counter-intuitively, he declared that while the industry’s pulse was no longer tied to the release of new Intel processors and Windows OSes, the PC form factor had returned to health.

    This was in part down to vendor consolidation, with the exit of vendors such as Samsung and Sony from the market recently boosting the consumer market.

    At the same time, he said, tablet sales had stalled. While the iPad had initially been competitively priced against laptops, this was no longer the case. Meanwhile, at the lower end, phone screens had gotten larger, eroding smaller tablets’ USP.

    Reply
  24. Tomi Engdahl says:

    Texas T Rex Dell: Terabytes shipped. Count ‘em and weep…
    We’re king of the storage world
    http://www.theregister.co.uk/2014/09/24/dell_is_terabytes_shipped_tyranosausrus_rex/

    Dell sold more storage capacity than any other supplier in the first half of 2014, says analyst IDC.

    In IDC’s worldwide Quarterly Disk Storage Systems Tracker reports Dell is singled out as having sold 4,311,728 terabytes of disk storage, 4.3 exabytes, by the mid-point of 2014 – pretty damned impressive.

    The terabytes shipped numbers IDC recorded were:

    Dell – 4,311,728
    EMC – 4,076,546
    HP – 3,428,016
    NetApp – 2,725,513
    IBM – 1,641,182
    Others – 3,003,154

    Dell storage man Alan Atkinson has blogged about this, saying the Dell total “equates to more than 57,000 years of continuously running high definition video”. Ugh! What a nightmarish prospect.

    Reply
  25. Tomi Engdahl says:

    NVIDIA’s Tegra K1: A Game-Changer for Rugged Embedded Computing
    http://rtcmagazine.com/articles/view/103708

    The migration of a powerful parallel GPU architecture along with a compatible software platform and computing model into a low-power ARM multicore SoC, promises to bring a range of capabilities into the mobile and embedded arena that have so far not been possible.

    Reply
  26. Tomi Engdahl says:

    On integrating flash arrays with server-side flash
    Cache storage gets faster and faster
    http://www.theregister.co.uk/2014/09/25/flash_storage_server_integration/

    If you’re buying flash storage today, you’re doing it for speed. After all, you’re not doing it to save money and you’re definitely not rich enough to be doing it because you want to be green and save a few kilowatt-hours on your power bill.

    With spinning disk, the disks themselves were probably the bottleneck in your SAN-based storage arrays. With flash, though, the drives are so fast that the storage infrastructure itself becomes the weakest link: that is, it’s slower than both the storage and the servers.

    When you buy servers, what do you buy? And what do you put on them?

    Where will flash storage go?

    Of course you still have the potential for failure in your single chassis, but that’s fine because the other benefit of the chassis is that it provides shared access to high-speed peripherals such as SAS-based storage adaptors and 10Gbit/s iSCSI links (which, if you’re feeling so inclined, you can trunk with EtherChannel for added oomph – and ten-gig links are already the connection of choice for many flash-based array vendors). And of course having this in a shared environment will be less expensive and easier to manage than if you had a load of separate server boxes.

    So where will flash storage go? Will it gravitate to the server because you need all that speed next to the applications and the hardware guys have given you fab new ways to hook the storage in directly with the processors and memory? Or will it stick in the shared storage because it’s the only place you can afford to put it?

    The answer is that it’ll do both, and that the caching algorithms and storage subsystems of the operating systems and virtualisation engines will continue to become cleverer and cleverer (just as they’ve done for years anyway).

    Let’s look at server-based storage first. If you’re running hypervisor-based hosts (i.e. you’re in a virtualised server world) the vendors are already banking on there being some solid-state storage sitting there in a directly accessible form.

    In vSphere 5.5, for example, VMware has a funky new concept called the Flash Read Cache which pools multiple flash-based storage items into single usable entities called vSphere Flash Resources.

    And on array-based storage, the multi-tier hierarchy is alive and kicking and will happily continue to exist for as long as spinning disk and SSD continue both to exist.

    Reply
  27. Tomi Engdahl says:

    Database virtualisation outfit Delphix: Lotsa dosh, few competitors
    Nice niche if you can get it. El Reg chats to founder
    http://www.theregister.co.uk/2014/09/25/delphix_delicate_data_store_dance/

    Production data copier and virtualiser Delphix says it makes the provision of largely structured data to non-production applications a heck of a lot easier, claiming it speeds it up from days or weeks to minutes.

    The company’s software receives a copy of production data and stores it in server engines: servers with direct-attached storage that run its software. On request it delivers a virtual copy of a data set to applications that need it, such as test and dev, or compliance and analysis.

    Crunchbase describes Delphix as a “firm developing software for simplifying building, testing and upgrading of apps on relational databases.”

    Delphix’s CEO and founder, Jedidiah Yueh, says the company has no direct supplier competitors: instead, according to him, it competes with the “status quo.”

    Data management and delivery to teams using production data copies are the problems that Delphix’s software is intended to deal with.

    The data can come from 20-50 separate production environments and is synchronised by Delphix, with up to 400 application environments using a single Delphix engine.

    The Delphix store becomes the master store for test and dev, etc.

    There is an enhanced security angle to Delphix use as well, according to Yueh. “We eliminate the chain of risk from having multiple sysadms touching the data. We can reduce the at-risk surface of data. We can take the volume of data at risk and collapse it by 80 per cent.”

    Reply
  28. Tomi Engdahl says:

    Bringing Tablets To Restaurant Tables Nationwide Nets E la Carte $35 Million
    http://techcrunch.com/2014/09/24/bringing-tablets-to-restaurant-tables-nationwide-nets-e-la-carte-35-million/

    On the heels of a nationwide rollout with Applebee’s, E la Carte has raised $35 million to bring its tablet technology to more restaurant tables around the country and around the world.

    As technology becomes increasingly embedded in every aspect of a consumer’s life, from a smart home to the smart phone in people’s pockets, there are certain public spaces which have been slow to adapt to the change that’s happening around them, according to E la Carte chief executive Rajat Suri. Restaurants have been late adopters, when it comes to bringing tech into the dining experience, he says, but that’s about to change.

    “Your home is being transformed by Nest and other smart home companies. We see restaurants as a key area of daily life that has not transformed yet — especially the dining room,” says Suri. “One can imagine the restaurant of the future being very different. Restaurants should know what you want and when you want it.”

    The company’s “Presto” tablets allow diners to order and pay for food at their table, which Suri says enables waiters to concentrate more on the customer experience. Restaurants which have installed the company’s tablets have seen a roughly 5% increase in sales and an increase in table turnover of up to 7 to 10 minutes. So diners are spending more, and leaving faster

    “Restaurants are a tough business,” says Suri. “for them to be able to increase sales is key.”

    The tablets have also managed to win over waitstaff, who have seen tips increase thanks to the suggested gratuity feature that’s part of the checkout process through the tablets.

    Reply
  29. Tomi Engdahl says:

    The temptation of startups is fading

    According to a recent survey, only 16 percent of IT professionals dream of a job at a startup. 60 percent of those surveyed would prefer a medium-sized IT company. A quarter would go to work for a large company.

    The Robert Half Technology survey was answered by 2,300 American IT professionals.

    Younger people are more interested in startups, but with age and financial obligations, respondents’ enthusiasm for the drudgery of a small growth company fades.

    Robert Half Technology’s John Reed said startups now represent a more acceptable career path. In recent years, the poor economic situation has prompted people to weigh the risks of small businesses and has made more stable medium-sized enterprises more attractive.

    Source: http://www.tivi.fi/kaikki_uutiset/startupien+houkutus+hiipuu+nyt+halutaan+toihin+tallaisiin+itfirmoihin/a1014771

    Reply
  30. Tomi Engdahl says:

    Surface Pro 3 cleared for take-off, with FAA/EASA Electronic Flight Bag approval
    http://www.neowin.net/news/surface-pro-3-cleared-for-take-off-with-faaeasa-electronic-flight-bag-certification

    Microsoft’s popular and highly-regarded Surface Pro 3 could well be soaring to new heights soon, as the company has announced that the tablet has qualified for authorization to be used as an Electronic Flight Bag (EFB), under conditions defined by the US Federal Aviation Administration (FAA) and the European Aviation Safety Agency (EASA).

    EFBs replace the heavy and bulky paper documentation that airline pilots must carry with them on board, which includes flight navigational charts, aircraft technical reference materials, and other important information that may be needed in flight.

    Reply
  31. Tomi Engdahl says:

    Transforming IT: building a business-driven infrastructure for the software-defined enterprise
    http://research.gigaom.com/report/transforming-it-building-a-business-driven-infrastructure-for-the-software-defined-enterprise/?utm_source=media&utm_medium=editorial&utm_campaign=auto3&utm_term=875997+binge-alert-subscribers-now-watch-more-than-90-minutes-of-netflix-every-single-day&utm_content=jroettgers

    Getting the customer’s digital experience right is paramount to a company’s survival. Between the customers and their apps stand the IT leaders whose teams design, build, and run the apps and infrastructure that enables this digital experience. As the business demand for new capabilities grows so too do expectations for rapid delivery. This is creating new practices within the software-defined enterprise in which changes to complex, distributed apps are deployed in virtual and cloud environments in a continuous fashion. New releases are not big events; they are non-events.

    The application performance management (APM) market has grown from first-generation solutions used for monitoring static backend systems into next-generation solutions for monitoring dynamic customer apps.

    Key findings in this report include:
    Customer experience is driving business performance.
    Proactively managing this experience requires new methods and tools.
    Solutions require a balance of modernization and the “human element.”
    Analytics is rapidly changing, fueled by the growth of big data.

    Reply
  32. Tomi Engdahl says:

    PostgreSQL Outperforms MongoDB In New Round of Tests
    http://developers.slashdot.org/story/14/09/26/1330228/postgresql-outperforms-mongodb-in-new-round-of-tests

    PostgreSQL outperformed MongoDB, the leading document database and NoSQL-only solution provider, on larger workloads than those used in the initial performance benchmarks.

    PostgreSQL outperformed MongoDB in selecting, loading and inserting complex document data in key workloads involving 50 million records.

    Postgres Outperforms MongoDB and Ushers in New Developer Reality
    http://blogs.enterprisedb.com/2014/09/24/postgres-outperforms-mongodb-and-ushers-in-new-developer-reality/

    The newest round of performance comparisons of PostgreSQL and MongoDB produced a near repeat of the results from the first tests that proved PostgreSQL can outperform MongoDB. The advances Postgres has made with JSON and JSONB have transformed Postgres’ ability to support a document database.

    Creating document database capabilities in a relational database that can outperform the leading NoSQL-only solution is an impressive achievement. But perhaps more important is what this means to the end user – new levels of speed, efficiency and flexibility for developers with the protections of ACID compliance enterprises require for mission critical applications.

    Postgres vs. Mongo

    EnterpriseDB (EDB) began running comparative evaluations to help users correctly assess Postgres’ NoSQL capabilities. The initial set of tests compared MongoDB v2.6 to Postgres v9.4 beta, on single machine instances.

    EDB found that Postgres outperforms MongoDB in selecting, loading and inserting complex document data in key workloads involving 50 million records:

    Ingestion of high volumes of data was approximately 2.1 times faster in Postgres
    MongoDB consumed 33% more disk space
    Data inserts took almost 3 times longer in MongoDB
    Data selection took more than 2.5 times longer in MongoDB than in Postgres

    With the newest version, PostgreSQL has ushered in a new era of developer flexibility exceeding the freedom they discovered with NoSQL-only solutions. The use of niche solutions, like MongoDB, increased because developers needed freedom from the structured data model required by relational databases. They needed to move quickly and work with new data types. They chose powerful but limited solutions that addressed immediate needs and let them make changes without having to wait for a DBA.

    However, many organizations have discovered that successful applications often require structure down the road, as data becomes more valuable across the organization.
    Postgres gives developers broad new powers to start out unstructured, and then when the need arises, combine unstructured and structured data using the same database engine and within an ACID-compliant environment.

    The code shows Postgres has the capability, and now our performance comparisons demonstrate Postgres can handle the loads.
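
    To make the JSONB point concrete, here is a minimal sketch (not code from the EDB benchmark) of storing and querying document data in Postgres 9.4 from Python. The events table, the connection string and the use of the psycopg2 driver are assumptions for the example; the jsonb column type, GIN indexing and the @> containment operator are standard Postgres 9.4 features.

    # Hypothetical example, not part of the EDB benchmark suite.
    import psycopg2
    from psycopg2.extras import Json

    # Placeholder connection string for this sketch.
    conn = psycopg2.connect("dbname=test user=postgres")
    cur = conn.cursor()

    # A relational table that also holds schemaless documents in a JSONB column.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            id      serial PRIMARY KEY,
            created timestamptz DEFAULT now(),
            doc     jsonb
        )
    """)
    # A GIN index speeds up containment queries against the documents
    # (run once; CREATE INDEX IF NOT EXISTS only arrived in later Postgres versions).
    cur.execute("CREATE INDEX events_doc_idx ON events USING gin (doc)")

    # Insert an unstructured document; Json() adapts a Python dict to JSON.
    cur.execute("INSERT INTO events (doc) VALUES (%s)",
                [Json({"user": "alice", "action": "login", "tags": ["web", "mobile"]})])

    # Query with the @> containment operator, inside an ordinary ACID transaction.
    cur.execute("SELECT id, doc FROM events WHERE doc @> %s", [Json({"action": "login"})])
    print(cur.fetchall())

    conn.commit()
    cur.close()
    conn.close()

    Because the documents live in the same engine as the relational tables, they can later be joined against structured data, which is the combination of unstructured and structured data described above.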

    Reply
  33. Tomi Engdahl says:

    The Yahoo Directory — Once The Internet’s Most Important Search Engine — Is To Close
    Once the Google of its time, the Yahoo Directory is finally coming to the end of its slow death.
    http://searchengineland.com/yahoo-directory-close-204370

    Reply
  34. Tomi Engdahl says:

    Microsoft resurrects its WinHEC conference for hardware companies
    The first new event will be in Shenzhen, China, with a Taipei, Taiwan, event following.
    http://arstechnica.com/information-technology/2014/09/microsoft-resurrects-its-winhec-conference-for-hardware-companies/

    Microsoft has announced the return of its WinHEC conferences. The first of the new conferences will be March 18 and 19 next year and is being held in Shenzhen, China.

    The old WinHEC events, standing for Windows Hardware Engineering Conference, were held annually in the US.
    The last old WinHEC was held in 2008.

    The new WinHEC, now standing for Windows Hardware Engineering Community, will cater to a similar audience and cover similar ground. But it’s broader in a key way: it’s not just for PC hardware any more, but also smartphone and tablet hardware.

    Reply
  35. Tomi Engdahl says:

    Seagate embiggens its spy drive, waves it at CCTV cameras
    A terabyte of storage weighs just 90g. Fancy that
    http://www.theregister.co.uk/2014/09/29/seagate_stuffs_more_into_surveillance_drive/

    CCTV operators can breathe sighs of relief. They won’t run out of space so fast as Seagate has upped the capacity of its 4TB Surveillance disk drive to 6TB.

    Seagate is using its 6TB drive technology, announced in April. Back then the company said the 6TB whopper could be used for centralised surveillance, and now here is the branded Surveillance version. It comes in 5TB and 6TB capacity points.

    These drives double the cache size to 128MB.

    Not a load of Tosh: 5TB ‘surveillance drive’ from Toshiba hits shelves
    It’s vitally important you have huge storage for videos, y’see
    http://www.theregister.co.uk/2014/08/06/toshiba_5tb_surveillance_drive_mc04acannne/

    Toshiba has wrung another market slot-filling product out of its 5TB disk tech – a surveillance drive.

    The company has two existing 5TB disk drive products. There is the MC04ACAnnnE bulk storage drive spinning at 7,200rpm for servers and arrays with cloud-scale needs. It has a 6Gbit/s SATA interface and an 800,000 hours MTBF rating.

    Secondly, there is the MG04 with 6Gbit/s SAS or SATA interface with a faster throughput.

    Reply
  36. Tomi Engdahl says:

    Lenovo set to complete acquisition of IBM x86 server business
    http://www.pcworld.com/article/2688772/lenovo-set-to-complete-acquisition-of-ibm-x86-server-business.html

    Nine months after it was first announced, Lenovo’s acquisition of IBM’s x86 server business is headed towards closing.

    Having received regulatory approval from the U.S., the European Commission and China, the companies will start closing the US$2.1 billion deal on Wednesday. The handover will take place in most major markets starting this week, and will be completed in the remaining countries into early next year, the companies said.

    Reply
  37. Tomi Engdahl says:

    Why Microsoft’s engineering changes will be the real Windows 9 (Threshold) story
    http://www.zdnet.com/why-microsofts-engineering-changes-will-be-the-real-windows-threshold-story-7000034147/

    Summary: Microsoft is building, testing and updating Windows in a very different way, starting with Windows Threshold. In many ways, those changes matter more than the new bits themselves.

    Reply
  38. Tomi Engdahl says:

    Adobe brings Creative Cloud to Chromebooks starting w/ ‘Project Photoshop Streaming’ beta
    http://9to5google.com/2014/09/29/google-adobe-announce-creative-cloud-for-chromebooks-starting-with-photoshop/

    Google announced a new partnership with Adobe today that will see the companies bring Adobe’s suite of popular Creative Cloud apps to Chromebooks. Initially, Adobe will launch just the Photoshop app as a beta (pictured above) and make it available to only its education customers.

    Reply
  39. Tomi Engdahl says:

    20 years of false business intelligence promises
    Where enterprises are going wrong in the age of big data
    http://www.information-age.com/technology/information-management/123458490/20-years-false-business-intelligence-promises

    Reply
  40. Tomi Engdahl says:

    89 percent of companies expect their systems to crash

    An international survey commissioned by SUSE shows that companies plan to invest in the reliability of their IT infrastructure over the next 12 months. Three out of four IT professionals surveyed considered it an important goal that their most important workloads run without planned or unplanned downtime.

    The survey also revealed that reality is not expected to match those wishes: as many as 89 percent of respondents believe their most critical workloads will still suffer both planned and unplanned outages.

    The fear is not unfounded, as 80 percent of respondents have experienced unplanned outages, on average more than twice a year. Technical faults are by far the main cause of unplanned downtime.

    “Outages and crashes of information systems and workloads have a negative impact on companies of all sizes and in all industries. CIOs and IT experts acknowledge the need to significantly reduce the downtime of major systems. They have to work with hardware and software suppliers who can provide sufficiently reliable solutions and technologies.”

    51 percent of respondents say they have prepared for outages with high-availability clustering to reduce unplanned downtime.

    Source: http://www.tivi.fi/kaikki_uutiset/89+prosenttia+yrityksista+odottaa+jarjestelmiensa+kaatuvan/a1015713

    Reply
  41. Tomi Engdahl says:

    How Is Big Data Like Corporate Real Estate?
    http://www.forbes.com/sites/netapp/2014/07/01/big-data-real-estate/?utm_source=taboola&utm_medium=referral

    Your business’s data is growing—exponentially. But your competitiveness depends on your ability to gather, store and get value from that “big” data.

    Sources like mobile devices, ubiquitous sensors and social media can tell you things—but only if you can understand what they’re saying.

    Here’s how…

    Successful companies find signals in this sea of data: They make informed decisions, which create massive competitive advantage.

    But others see no signals. They see only the cost and burden of managing explosive data growth. They’re wasting a business-critical asset.

    Buying infrastructure that supports and extracts value from this kind of growth is always challenging.

    Reply
  42. Tomi Engdahl says:

    On The Future of Apple and Google
    http://stevecheney.com/on-the-future-of-apple-and-google/

    When Tim Cook was interviewed by Charlie Rose after Apple’s mega launch event a few weeks ago, he scoffed at any mention of competitors, highlighting only Google as Apple’s arch-rival.

    Apple and Google are entrenched in a modern version of the PC war, and are the only two players with relevancy at the operating system level. Here are some thoughts on why this is important and what’s next as we enter the golden years for mobile and approach the early beginnings of the post-mobile era

    Reply
  43. Tomi Engdahl says:

    Microsoft unveils the future of Windows
    Sept. 30, 2014
    Company gives first look at Windows 10, highlighting enterprise advancements and open collaboration.
    http://www.microsoft.com/en-us/news/press/2014/sep14/09-30futureofwindowspr.aspx

    Reply
  44. Tomi Engdahl says:

    Electronic Brain by 2023
    E.U.’s Human Brain Project ramps up
    http://www.eetimes.com/document.asp?doc_id=1324121&

    Like a Manhattan Project, resources are coming together for the big push to simulate the human brain. Personnel on European Union (EU)’s Human Brain Project reported their progress toward the primary directive — an artificial brain by 2023 — at the annual HBP Summit at the University of Heidelberg in Germany, yesterday, September 29.

    The 10-year-long Human Brain Project, funded to the tune of €1 billion (US$1.3 billion) by the European Commission Future and Emerging Technologies as one of its “Flagship Programs,” aims to simulate the entire human brain on supercomputers first, then build a special hardware emulator that will reproduce its functions so accurately that diseases and their cures can be tried out on it. Ultimately, the long-term goal is to build artificial brains that outperform traditional von Neumann supercomputers at a fraction of the cost.

    Reply
  45. Tomi Engdahl says:

    Meet AMD’s pole-dancing 64-bit ARM chip: Hierofalcon wants to be in a mast near you
    Virtual servers, virtual storage, virtual networks … will it end?
    http://www.theregister.co.uk/2014/10/01/amd_64bit_arm_network_virtualized_functions/

    AMD is today pitching its 64-bit ARMv8 system-on-chip codenamed Hierofalcon at software-defined networks in telcos. Essentially, it thinks the processor can do the job of dedicated hardware better, in terms of size and performance per watt.

    We’re told the processor can have up to eight Cortex-A57 cores, is being sampled by “the usual suspects” in the industry, and will later today, at ARM TechCon in Santa Clara, California, be demonstrated running “Network Functions Virtualization” (NFV). AMD defines NFV as…

    the abstraction of numerous network devices such as routers and gateways which enable relocation of network functions from dedicated hardware appliances to generic servers. With NFV, much of the intelligence currently built into proprietary, specialized hardware is accomplished with software running on general purpose hardware.

    Reply
  46. Tomi Engdahl says:

    News & Analysis
    Microsoft Announces Windows 10
    http://www.eetimes.com/document.asp?doc_id=1324147&

    Microsoft on Tuesday announced Windows 10 at an event in San Francisco. No, you didn’t somehow miss Windows 9. According to Microsoft OS chief Terry Myerson, who presided over the event with corporate VP Joe Belfiore, the new operating system is so substantial, “it wouldn’t have been right” to simply tick up from version 8 to version 9.

    Microsoft’s detractors would no doubt counter that Microsoft chose the new name in order to further distance its latest release from Windows 8, which has been criticized as difficult to use on traditional, non-touch PCs. Myerson and Belfiore alluded to this criticism, emphasizing that Windows 10’s user interface will be familiar to legacy desktop users.

    “Whether you’re coming from Windows 7 or Windows 8, [Windows 10] will let you be immediately productive,” said Myerson. He said the new OS, which won’t be available until next year, will be compatible with the apps, tools, and systems that desktop customers use today.

    Reply
  47. Tomi Engdahl says:

    Network Function Virtualization goes open source
    http://www.zdnet.com/network-function-virtualization-goes-open-source-7000034207/

    Summary: Telecom and networking powers are uniting under The Linux Foundation to create an open source Network Function Virtualization reference platform.

    In 2014, companies and open source programmers alike are working as hard as they can to virtualize hardware into software. The latest example of this is Network Functions Virtualization (NFV).

    The name of the NFV game is to take such appliances or server-based network operations as Network Address Translation (NAT), firewalls, intrusion detection, and Domain Name Service (DNS) and move them to virtual machines. Of course, there are all kinds of ways to do this on a single server, but NFV takes it far beyond that to a level where an entire carrier’s network services can be deployed and managed virtually.
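
    As a purely illustrative aside (not part of the article or the OPNFV announcement), the core idea is that a network function becomes ordinary software running on a general-purpose host. The toy DNS forwarder below stands in for the kind of function that used to live in a dedicated appliance; the listen port and the upstream resolver address are arbitrary choices for the sketch.

    # Minimal sketch of a "network function as software": a tiny DNS forwarder.
    # Purely illustrative; real NFV deployments orchestrate such functions as
    # virtual machines or containers across a carrier's infrastructure.
    import socket

    LISTEN_ADDR = ("0.0.0.0", 5300)   # unprivileged port, arbitrary for the demo
    UPSTREAM = ("8.8.8.8", 53)        # any recursive resolver will do

    def serve():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(LISTEN_ADDR)
        while True:
            query, client = sock.recvfrom(512)        # classic DNS-over-UDP size limit
            upstream = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            upstream.settimeout(2.0)
            try:
                upstream.sendto(query, UPSTREAM)      # relay the query upstream
                reply, _ = upstream.recvfrom(512)
                sock.sendto(reply, client)            # hand the answer back to the client
            except socket.timeout:
                pass                                  # drop the query on upstream timeout
            finally:
                upstream.close()

    if __name__ == "__main__":
        serve()

    Scaling this idea up (packaging such functions as virtual machines and chaining them across a carrier network) is the management problem an NFV reference platform is meant to address.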

    To turn this idea into reality, almost 40 telecom and network companies, such as AT&T, Cisco, HP, NTT DOCOMO, Telecom Italia and Vodafone, joined forces with The Linux Foundation to create a new collaborative project: Open Platform for NFV (OPNFV). The ultimate goal is to build a carrier-grade, integrated, open source NFV reference platform.

    But, according to the Foundation, “While not developing standards, OPNFV will work closely with ETSI’s NFV ISG [NFV's standardization organization], among others, to drive consistent implementation of standards for an open NFV reference platform.” In short, the other major companies working on NFV may not be members of OPNFV, but OPNFV isn’t going to try to steer its own course away from the ETSI. In time, Jim Zemlin, the Linux Foundation’s Executive Director, expects other telecom and networking companies to join OPNFV.

    The initial focus of OPNFV will be on building NFV infrastructure (NFVI) and Virtualized Infrastructure Management (VIM), leveraging existing open source components where possible. These, Zemlin said, include OpenDaylight (software-defined networking), OpenStack, Open vSwitch and the Linux kernel.

    Reply
  48. Tomi Engdahl says:

    We cannot afford re-inventing the wheel

    “We didn’t come up with it”, that is, not invented here, is a familiar phenomenon to everyone working in IT. Everyone has come across the type of person for whom no ready-made solution is acceptable.

    The undersigned has seen this first hand so many times that enough is enough with reinventing the wheel. For almost any IT problem a workable ready-made solution can usually be found, or one is currently being developed.

    For example on Node.js, currently the most extensive and fastest-growing programming platform, hundreds of new modules appear on any given day alongside the hundred thousand existing ones. You have to be some kind of wizard to come up with something that does not already exist.

    It is not that everything possible has already been invented, but that there are a great many inventors and the rate of discovery keeps increasing. If you want to stay productive, you have to raise the level of abstraction of your own work and make use, in your own production, of what is currently being invented around us.

    If you work with technical development, the results of your work need to be innovative. Otherwise you can be sure that the professional will before long be replaced by automation.

    Fortunately, innovation does not require inventing something new. It can also be created by combining previous discoveries in a new way. The combining just has to be done in a way that produces real added value.

    Source: http://summa.talentum.fi/article/tv/9-2014/89512

    Reply
  49. Tomi Engdahl says:

    Proprietary OS source code LEAKED to web – from 40 years ago
    No Shellshock bug spotted in ancient CP/M code so far
    http://www.theregister.co.uk/2014/10/02/cpm_source_code_release/

    Forty years after Gary Kildall released the first version of CP/M, the Computer History Museum in Mountain View, California has made the source code to several versions of the landmark eight-bit OS available as a free download from its website.

    The code, which is written in a combination of assembly language and Kildall’s homegrown PL/M, is archived as a 147MB Zip file that includes both ASCII source files and scanned printouts from the 1970s.

    CP/M was a runaway success in the early days of the PC industry, owing largely to its portability, in an era when computer makers typically wrote operating systems to run on their own hardware only.

    CP/M’s widespread adoption in turn spurred the growth of the commercial software industry, giving rise to some of the bestselling applications of the 1970s and 1980s, including dBase and WordStar.

    The first version of CP/M shipped in 1974, and by 1982, Digital Research, the company Kildall founded to market the OS, employed more than 200 people and was said to bring in revenues of more than $20m per year.

    But as rapid as CP/M’s rise was, its decline was even faster. In 1981, seeing that IBM was having trouble negotiating a license from Digital Research to use CP/M in its new IBM PC line, Microsoft bought a CP/M lookalike called Q-DOS and licensed a reworked version of it to Big Blue as PC-DOS (and later to other PC clone makers as MS-DOS).

    The rest is history. The IBM PC platform exploded, and by the mid-1980s, Microsoft’s OS was outselling CP/M and Digital Research’s own sales figures were rapidly declining, eventually to fade into oblivion.

    Reply
  50. Tomi Engdahl says:

    Bangladesh Considers Building World’s 5th-largest Data Center In Earthquake Zone
    http://hardware.slashdot.org/story/14/10/01/2156236/bangladesh-considers-building-worlds-5th-largest-data-center-in-earthquake-zone

    The Bangladesh Ministry of Information is considering the establishment of a Tier 4 data centre in Kaliakair, in the Gazipur region, an ambitious build which would constitute the fifth largest data centre in the world, if completed. And if it survives – the site planned for the project is prone to earthquakes.

    Reply
