Computer trends for 2014

Here is my collection of trends and predictions for 2014:

It seems that the PC market is not recovering in 2014. IDC is forecasting that the technology channel will buy in around 34 million fewer PCs this year than last. It seems that things aren’t going to improve any time soon (down, down, down until 2017?). There will be no let-up on any front, with desktops and portables predicted to decline in both the mature and emerging markets. Perhaps the chief concern for future PC demand is a lack of reasons to replace an older system: PC usage has not moved significantly beyond consumption and productivity tasks to differentiate PCs from other devices. As a result, PC lifespans continue to increase. The Death of the Desktop article says that, sadly for the traditional desktop, it is only a matter of time before its purpose expires, and that this will inevitably happen within this decade. (I expect that it will not completely disappear.)

While the PC business is slowly shrinking, the smartphone and tablet business will grow quickly. Some time in the next six months, the number of smartphones on earth will pass the number of PCs. This shouldn’t really surprise anyone: the mobile business is much bigger than the computer industry. There are now perhaps 3.5-4 billion mobile phones, replaced every two years, versus 1.7-1.8 billion PCs replaced every 5 years. Smartphones broke down the wall between those industries a few years ago – suddenly tech companies could sell to an industry with $1.2 trillion annual revenue. Now you can sell more phones in a quarter than the PC industry sells in a year.
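
A quick back-of-envelope check of that last claim, using the rough figures above (my arithmetic, not numbers from any cited article):

# Back-of-envelope check of "more phones in a quarter than PCs in a year",
# using the installed-base and replacement-cycle figures quoted above.
phones_installed, phone_cycle_years = 3.75e9, 2   # midpoint of 3.5-4bn
pcs_installed, pc_cycle_years = 1.75e9, 5         # midpoint of 1.7-1.8bn

phones_per_quarter = phones_installed / phone_cycle_years / 4
pcs_per_year = pcs_installed / pc_cycle_years

print(round(phones_per_quarter / 1e6), "million phones per quarter")  # ~469
print(round(pcs_per_year / 1e6), "million PCs per year")              # ~350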

After some years we will end up with somewhere over 3bn smartphones in use on earth, almost double the number of PCs. There are perhaps 900m consumer PCs on earth, and maybe 800m corporate PCs. The consumer PCs are mostly shared and the corporate PCs locked down, and neither is really mobile. Those 3 billion smartphones will all be personal, and all mobile. Mobile browsing is set to overtake traditional desktop browsing in 2015. The smartphone revolution is changing how consumers use the Internet, and this will influence web design.

[Image: crystal ball]

The only PC sector that seems to have some growth is the server side. The Microservers & Cloud Computing to Drive Server Growth article says that increased demand for cloud computing and high-density microserver systems has brought the server market back from a state of decline. We’re seeing fairly significant change in the server market. According to the 2014 IC Market Drivers report, server unit shipment growth will increase in the next several years, thanks to purchases of new, cheaper microservers. The total server IC market is projected to rise by 3% in 2014 to $14.4 billion: the multicore MPU segment for microservers and NAND flash memories for solid-state drives are expected to see better numbers.

The Spinning rust and tape are DEAD. The future’s flash, cache and cloud article says that flash is the tier for primary data; the stuff christened tier 0. Data that needs to be written out to a slower response store goes across a local network link to a cloud storage gateway, which holds the tier 1 nearline data in its cache. The Never mind software-defined HYPE, 2014 will be the year of storage FRANKENPLIANCES article says that more hype around Software-Defined-Everything will keep the marketeers and the marchitecture specialists well employed for the next twelve months, but don’t expect anything radical. The only innovation is going to be around pricing and consumption models as vendors try to maintain margins. FCoE will continue to be a side-show and FC, like tape, will soldier on happily. NAS will continue to eat away at the block storage market, and perhaps 2014 will be the year that object storage finally takes off.

The IT managers are increasingly replacing servers with SaaS article says that cloud providers are taking on a bigger share of the servers as the overall market starts declining. An in-house system is no longer the default for many companies. IT managers want to cut the number of servers they manage, or at least slow the growth, and they may be succeeding. IDC expects that anywhere from 25% to 30% of all the servers shipped next year will be delivered to cloud services providers. In three years, by 2017, nearly 45% of all the servers leaving manufacturers will be bought by cloud providers. The shift will slow server sales to enterprise IT. Big cloud providers are increasingly using their own designs instead of servers from big manufacturers. Data center consolidations are eliminating servers as well. To be sure, IT managers are going to be managing physical servers for years to come, but the number will be declining.

I hope that the IT business will start to grow this year as predicted. Information technology spending is set to increase next financial year, according to N Chandrasekaran, chief executive and managing director of Tata Consultancy Services (TCS), India’s largest information technology (IT) services company. IDC predicts that worldwide IT spending will increase 5 per cent next year, to $2.14 trillion. It is expected that the biggest opportunity will lie in the digital space: social, mobility, cloud and analytics. The gradual recovery of the economy in Europe will restore faith in business. Companies are re-imagining their business, keeping in mind changing digital trends.

The death of Windows XP will be in the news many times in the spring, and there will be companies trying to cash in on it: Microsoft’s plan to end Windows XP support next spring has prompted IT service providers, as well as competitors, to invest in marketing their own services. HP is peddling its Connected Backup 8.8 service to customers to prevent data loss during migration. VMware is selling a cloud desktop service. Google is wooing users to switch to Chrome OS by making Chrome’s user interface familiar to wider audiences. Exploiting XP most effectively is Arkoon, a subsidiary of the European defence giant EADS, which promises support for XP users who do not want to, or cannot, upgrade their systems.

There will be talk about what is coming from Microsoft next year. Microsoft is reportedly planning to launch a series of updates in 2015 that could see major revisions for the Windows, Xbox, and Windows RT platforms. Microsoft’s wave of spring 2015 updates to its various Windows-based platforms has a codename: Threshold. If all goes according to early plans, Threshold will include updates to all three OS platforms (Xbox One, Windows and Windows Phone).

[Image: crystal ball]

Amateur programmers are becoming increasingly prevalent in the IT landscape. A new IDC study has found that of the 18.5 million software developers in the world, about 7.5 million (roughly 40 percent) are “hobbyist developers”, which is what IDC calls people who write code even though it is not their primary occupation. The boom in hobbyist programmers should cheer computer literacy advocates. IDC estimates there are almost 29 million ICT-skilled workers in the world as we enter 2014, including 11 million professional developers.

The challenge of cross-language interoperability will be talked about more and more. Interfacing between languages will be increasingly important: you can no longer expect a nontrivial application to be written in a single language. With software becoming ever more complex and hardware less homogeneous, the likelihood of a single language being the correct tool for an entire program is lower than ever. The trend toward increased complexity in software shows no sign of abating, and modern hardware creates new challenges. Mobile phones are now starting to appear with eight cores with the same ISA (instruction set architecture) but different speeds, plus streaming processors optimized for different workloads (DSPs, GPUs) and other specialized cores.
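
As a small, concrete illustration of what interfacing between languages involves (a sketch of mine, not from any cited article), here Python’s standard ctypes foreign-function interface calls into the platform’s C library; declaring the foreign function’s signature is exactly the kind of boundary bookkeeping that cross-language work demands:

# Minimal cross-language call: Python invoking a C library function
# through the standard ctypes FFI.
import ctypes
import ctypes.util

# Locate and load the platform's C standard library.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the foreign function's signature so values cross the
# language boundary with the correct representation.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"interop"))  # -> 7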

Yet another new USB connector type will be pushed to market. The Lightning strikes USB bosses: Next-gen ‘type C’ jacks will be reversible article says that USB is to get a new, smaller connector that, like Apple’s proprietary Lightning jack, will be reversible. Designed to support both USB 3.1 and USB 2.0, the new connector, dubbed “Type C”, will be the same size as an existing micro USB 2.0 plug.

2,130 Comments

  1. Tomi Engdahl says:

    Microsoft is expanding the business around its Azure cloud. For example, Linux support is becoming far more comprehensive.

    According to Microsoft CEO Satya Nadella, Azure now supports five different Linux distributions. Today, about 20 percent of the virtual machines in the Azure cloud are Linux-based systems.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=1942:microsoft-laajentaa-pilveaan&catid=13&Itemid=101

  2. Tomi Engdahl says:

    Samsung Acknowledges and Fixes Bug On 840 EVO SSDs
    http://hardware.slashdot.org/story/14/10/21/1914208/samsung-acknowledges-and-fixes-bug-on-840-evo-ssds

    Samsung has issued a firmware fix for a bug on its popular 840 EVO triple-level cell SSD. The bug apparently slows read performance tremendously for any data more than a month old that has not been moved around on the NAND.

    Samsung delivers fix for SSD slowdowns
    http://www.computerworld.com/article/2836082/samsung-delivers-fix-for-ssd-slowdowns.html

  3. Tomi Engdahl says:

    NPR: ’80s Ads Are Responsible For the Lack of Women Coders
    http://tech.slashdot.org/story/14/10/21/1852246/npr-80s-ads-are-responsible-for-the-lack-of-women-coders

    Back in the day, computer science was as legitimate a career path for women as medicine, law, or science. But in 1984, the number of women majoring in computing-related subjects began to fall, and the percentage of women is now significantly lower in CS than in those other fields.

  4. Tomi Engdahl says:

    IBM storage revenues sink: ‘We are disappointed,’ says CEO
    Time to put the storage biz up for sale?
    http://www.theregister.co.uk/2014/10/20/big_blues_storage_still_slumping/

    IBM’s storage revenues are continuing to slump, with the latest overall IBM results causing the abandonment of a long-term earnings/share goal.

  5. Tomi Engdahl says:

    Moving Towards Requirements-Driven Verification & Test
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1324352&

    Due to the rising complexity, time-to-market demands, and variability involved in building requirements of critical hardware and software systems, it is absolutely essential to have a robust requirements sign-off capability. It’s particularly applicable for systems where the financial cost of failure is significant, when systems are safety-critical, or where there is a high security factor.

    Current industry practice: “Mind the Gap”
    Even though a wide array of tools is available for analysing source code and testing executable code, there are no tools that automatically track the results of tests as they apply to requirements.

    Current common practice in requirements tracing stops at test definition, leaving unattended the need to ensure that requirements have tests defined against them and that these tests have successfully completed. This creates a gap between the requirements capture tools available and the features available in the wide array of test-only products.

    To ensure companies produce a correct product in a timely manner there is a clearly identified need for an approach and a tool that can address the key issues in requirements engineering and ensure requirements traceability through the complete data flow.

    Typical factors that can cause issues: requirement interpretation may change through the data flow; the link to proofs of implementation/results is currently manual; visibility of requirements through the entire tree is complex; and communication across domains (pre-silicon, post-silicon, FPGA, boards, firmware, software, and system) is complex.

    The requirements-driven approach
    The Requirements-Driven Verification and Test (RDVT) methodology enables project progress to be analyzed and managed by accumulating data on the status of verification and test metrics over the duration of the project and automatically relating these back to the specified requirements. In this way every functional requirement can be mapped to a proof of implementation. Additionally, any verification and test activity not relating to a requirement can be identified and questioned.
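
    To make that bookkeeping concrete, here is a toy sketch in Python (my illustration, not from the article) of what RDVT implies: requirements link to tests, test results roll up into a per-requirement status, and any test that traces to no requirement is flagged for questioning:

    # Toy model of requirements-driven verification bookkeeping:
    # roll test results up to requirements and flag orphan tests.
    requirements = {"REQ-1": ["test_boot"], "REQ-2": ["test_crypto", "test_link"]}
    results = {"test_boot": "pass", "test_crypto": "fail",
               "test_link": "pass", "test_debug": "pass"}

    for req, tests in requirements.items():
        covered = tests and all(results.get(t) == "pass" for t in tests)
        print(req, "covered" if covered else "open")  # REQ-1 covered, REQ-2 open

    linked = {t for tests in requirements.values() for t in tests}
    print("tests with no requirement:", set(results) - linked)  # {'test_debug'}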

  6. Tomi Engdahl says:

    Mission Critical Operating System Linux With PRIMEQUEST
    Posted Oct 22, 2014 at 7:30 am
    http://www.eeweb.com/company-blog/fujitsu_semiconductor/mission-critical-operating-system-linux-with-primequest/

    Fujitsu, together with Red Hat, offers high-security, high-quality, and highly scalable platforms. Combining PRIMEQUEST and Red Hat Enterprise Linux makes a best-in-class mission-critical operating system. This article discusses the features, benefits, and advantages of Linux Mission Critical.

    The most rigorous OS

    Fewer vulnerabilities and lower average vulnerability severity than UNIX OS
    Practical hack prevention well blended into the OS – no application code change or recompilation is required
    No Trusted Extensions required as high security features are already embedded in Red Hat Enterprise Linux

    High performance and scalability

    Performance growth has skyrocketed with Red Hat Enterprise Linux 6, with 3.3 times the I/O throughput of Red Hat Enterprise Linux 5.5

  7. Tomi Engdahl says:

    Satya Nadella: I Want Microsoft to Be Loved by Users
    http://news.softpedia.com/news/Satya-Nadella-I-Want-Microsoft-to-Be-Loved-by-Users-462642.shtml

    Microsoft is going through a major reorganization process and the new CEO Satya Nadella is the pioneer of a new approach that puts the focus on customers and the feedback they submit.

    In an interview with USA Today, Nadella explains that he wants to change the world’s perception of Microsoft, trying not only to make the software giant loved by users worldwide, but also to offer them the solutions they need to get things done.

    Nadella is only the third Microsoft CEO, after Bill Gates and Steve Ballmer, but compared to the first two he is the first to try to put consumers at the core of everything. In the Steve Ballmer era, Microsoft was often criticized by users for not listening to their opinions, but that is going to change with the new CEO at the helm of the company, Nadella promises.

  8. Tomi Engdahl says:

    Upstart brags about cheaper-than-Amazon private cold data cloud
    Storiant man asks you to check out their racks
    http://www.theregister.co.uk/2014/10/23/storiants_private_cold_data_cloud_cheaper_than_amazon/

    Storiant is an object storage startup which claims its customers can use its technology to store petabyte-scale data in a private cloud at a price below public cloud storage. How does it pull this trick off?

    A dozen pennies per gig for a year certainly sounds cheap as chips.

    The product is a hardware array and ZFS-based software. The hardware features rack enclosures of desktop drives and shingled media drives. The firm claims its “Storiant Power Governor reduces the active drive time to 10 per cent or less, effectively doubling the life of the drive.”

  9. Tomi Engdahl says:

    Who’s that at the door, storage box flingers? It’s the hard drive makers. No, they are not smiling
    Nice treasure chest ye have there. Shame if it fell overboard
    http://www.theregister.co.uk/2014/09/19/nas_invaders/

    Hard drive makers are, metaphorically speaking, shifting from being gunsmiths to arms dealers. In other words, their customers, who take the drives and put them in boxes, better watch out.

    Let’s take a look at where the industry is heading:

    Seagate acquires Xyratex and sells its ClusterStor HPC and enterprise big data arrays.
    HGST develops an object storage array with Amplidata and Avere.
    Seagate acquires LaCie and sells its range of desktop and rack-mount storage systems.
    Seagate brings out a line of NAS boxes.
    WD brings out a NAS product line.

    Now, of course, the drive manufacturers say they are not competing with their mainstream storage array makers, such as Dell, EMC, Fujitsu, HDS, HP, IBM, and NetApp, nor their other storage array and enclosure-building customers like DotHill, Imation (Nexsan), Huawei, Synology and the other Taiwanese firms.

    To which we say, don’t bank on that promise. The drivesmiths are in intense competition with each other and are building commodity items in the millions with every point of margin fought over like hyenas scrapping over a carcass.

  10. Tomi Engdahl says:

    Microsoft and Dell’s cloud in a box: Instant Azure for the data centre
    A less painful way to run Microsoft’s private cloud
    http://www.theregister.co.uk/2014/10/21/microsoft_and_dells_cloud_in_a_box_instant_azure_for_the_datacentre/

    Two words were missing from Microsoft’s cloudy event in San Francisco yesterday, where CEO Satya Nadella and Cloud and Enterprise VP Scott Guthrie presented an update to the company’s Azure and hybrid cloud strategy. Those two words were System Center.

    Instead, Nadella and Guthrie talked about another way to build a private cloud, using a cloud-in-a-box offering from Microsoft and Dell called the Cloud Platform System (CPS). This is a private deployment of technology from Microsoft’s public cloud which you install in your own data centre.

    The software is based on an existing product, the Azure Pack, while Dell provides the hardware: a rack stuffed with 32 Dell PowerEdge C6220 II servers for Hyper-V hosts, additional PowerEdge servers for managing storage, and 4 PowerVault MD3060e enclosures for HDD and SSD arrays.

    You will be able to buy from one to four 42U racks. A full four-rack setup will offer up to 8,000 VMs (each with two virtual CPU cores, 1.75GB RAM and 50GB disk) and 0.7PB of workload storage (there is additional storage used for backup).
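
    A quick sanity check of those numbers (my arithmetic, not from the article) shows the per-VM allocations fit inside the quoted totals:

    # Capacity arithmetic for the full four-rack CPS configuration above.
    vms = 8000
    ram_tb = vms * 1.75 / 1024   # total VM RAM: ~13.7 TB
    disk_pb = vms * 50 / 1e6     # total VM disk: 0.4 PB (decimal units)
    print(f"{ram_tb:.1f} TB VM RAM, {disk_pb:.1f} PB VM disk")
    # 0.4 PB of VM disk sits inside the 0.7 PB of workload storage quoted.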

    Fault-tolerance is built in at every level, including networking. Patch management, monitoring, backup and data recovery are all integrated.

    Azure Pack is a subset of the software used to run Azure, configured to run on private infrastructure.

  11. Tomi Engdahl says:

    Can Marten Mickos make ‘Linux for the cloud’ work for HP?
    The ‘not-another-Unix play’ play
    http://www.theregister.co.uk/2014/09/23/marten_mickos_hp_convert/

    Hewlett-Packard didn’t just buy cloudy startup Eucalyptus Systems to build its fledgling OpenStack cloud biz, it also bought Marten Mickos, the firm’s Finnish CEO.

    HP isn’t the first to pay for Mickos’ expertise – that was Sun Microsystems, when it acquired his previous venture, MySQL AB, for $1bn in 2008.

    Just who is this Mickos bloke and why do big systems companies like him and what he has to offer?

    Eucalyptus lets you build clouds using APIs compatible with those of Amazon Web Services – both for EC2 on compute and S3 on storage. OpenStack was spun up in 2010 to provide a set of open-source APIs for those who didn’t wish to use AWS.

    HP has also given Mickos a seat at the top table: he’s been made general manager of HP’s Cloud organisation, tasked with building HP’s OpenStack-based Helion cloud. Helion is HP’s supported spin of OpenStack code. Mickos is reporting straight to the queen herself, HP CEO Meg Whitman.

  12. Tomi Engdahl says:

    Entity Framework goes ‘code first’ as Microsoft pulls visual design tool
    Visual Studio database diagramming’s out the window
    http://www.theregister.co.uk/2014/10/23/entity_framework_goes_codefirst_only_as_microsoft_shutters_yet_another_visual_modelling_tool/

    Microsoft will retire the visual design tool for its Entity Framework (EF) database tool in the upcoming version 7, in favour of a text-based “code first” approach.

    Entity Framework is an object-relational mapping (ORM) tool. It lets developers work at a higher level of abstraction, coding with application objects rather than having to think about the SQL (Structured Query Language) that is sent to the database engine. In principle, an ORM can save developers from writing a lot of tedious code, speeding production of business applications.

    EF in versions up to 6 (the current iteration) supports an XML-based model (stored in .edmx files) together with a diagramming tool for Visual Studio, Microsoft’s all-purpose development tool. Using the visual designer, you can design a database, complete with relationships and constraints, and then apply it to a database to generate the tables and other elements. You can also generate a diagram from an existing database.
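
    Entity Framework is a .NET tool, but the “code first” idea is easy to illustrate with an analogous Python ORM (my sketch using SQLAlchemy, not Microsoft’s API): the class definitions, rather than a visual designer, are the model, and the tables are generated from them.

    # "Code first" in an analogous Python ORM (SQLAlchemy 1.4+): classes
    # define the schema, and the tables are generated from the code.
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, Session

    Base = declarative_base()

    class Customer(Base):
        __tablename__ = "customers"
        id = Column(Integer, primary_key=True)
        name = Column(String, nullable=False)

    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)  # generate tables from the classes

    with Session(engine) as session:
        session.add(Customer(name="Contoso"))
        session.commit()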

  13. Tomi Engdahl says:

    Ello Raises $5.5 Million to Grow Its Ad-Free Social Network
    http://recode.net/2014/10/23/ello-raises-5-5-million-to-grow-its-ad-free-social-network/

    Ello, the ad-free social network that promises users it will never sell their data, has raised $5.5 million in new venture funding, according to CEO Paul Budnitz.

  14. Tomi Engdahl says:

    Satya Nadella wanted cloud, mobile first – Microsoft gives him Windows, Office first
    New areas show growth, but still only 13% of the profit pie
    http://www.theregister.co.uk/2014/10/23/microsoft_q1_2015_earnings/

    Microsoft reported revenues and earnings that topped analysts’ estimates for the first quarter of its fiscal 2015, buoyed by strong growth in cloud and mobility. But its solid-looking numbers belied ongoing softness in its traditional core markets.

    Total revenues for the three months ending on September 30 were $23.20bn, a 25 per cent year-on-year increase but flat from the previous sequential quarter. Earnings were $0.54 per diluted share.

    About those revenues, though. Microsoft crowed that revenues from its Consumer divisions were up 47 per cent, year-over-year, to $10.96 billion. But $2.61bn of those were from the Phone Hardware subdivision, which didn’t exist in the year-ago quarter. Ignoring those ex-Nokia sales, Consumer revenues were only up 12 per cent.

    Windows sales to businesses weren’t so hot, either. The all-important Commercial Licensing subdivision – which includes volume sales of client-side Windows, Windows Server, and Windows Embedded, plus various tools and server software – saw revenues of $9.87bn, which was up just 3 per cent from last year’s quarter and down 12 per cent sequentially.

  15. Tomi Engdahl says:

    How Sony, Intel, and Unix Made Apple’s Mac a PC Competitor
    http://apple.slashdot.org/story/14/10/23/2221234/how-sony-intel-and-unix-made-apples-mac-a-pc-competitor

    In 2007, Sony’s supply chain lessons, the network effect from the shift to Intel architecture, and a better OS X for developers combined to renew the Mac’s growth. The network effects of the Microsoft Wintel ecosystem that Rappaport explained 20 years ago in the Harvard Business Review are no longer a big advantage.

    Opinion
    How Sony, Intel, and Unix made Apple’s Mac a PC competitor
    http://www.networkworld.com/article/2837793/opensource-subnet/how-sony-intel-and-unix-made-apples-mac-a-pc-competitor.html

    Recent numbers show Apple’s Mac as a rare bright spot in an otherwise bleak PC industry. It’s come a long way.

  16. Tomi Engdahl says:

    Microsoft Cloud Strength and Hardware Progress Drive Record First-Quarter Revenue
    Strong performance across commercial and consumer segments delivers revenue of $23.20 billion.
    http://www.microsoft.com/investor/EarningsAndFinancials/Earnings/PressReleaseAndWebcast/FY15/Q1/default.aspx

  17. Tomi Engdahl says:

    Venture Capitalists’ Confidence Is Waning — or So It Seems
    http://blogs.wsj.com/venturecapital/2014/10/23/venture-capitalists-confidence-is-waning-or-so-it-seems/

    A quarterly survey that gauges the confidence level in Silicon Valley shows that venture capitalists downgraded their enthusiasm in the third quarter. But that doesn’t necessarily mean the Bay Area’s big-spending climate is about to change.

    Still, Cannice threw sunshine on the few dark clouds that were gathering over the industry.

    “[A] still strong if moderating exit market for venture-backed businesses, healthy levels of investment and fundraising, rampant disruptive innovation, and the ever present belief in the determination of Silicon Valley entrepreneurs kept sentiment at a relatively high level.”

    That barely wavering enthusiasm is fueled in part by persistently low interest rates. Pension funds, university endowments and other big investors continue to pump money into equities and venture capital – in fact, venture firms are on pace to raise more money than in any year since 2007.

  18. Tomi Engdahl says:

    Microsoft Eyes Expanding FPGA Role
    Network chips not keeping pace
    http://www.eetimes.com/document.asp?doc_id=1324372

    Microsoft is exploring the possibility of putting an FPGA on every server in its datacenters. It’s only a rough concept right now, but it could ease a very real pain point on the horizon.

    The company runs more than a million servers, and it sees a network bottleneck coming sometime in the next three years, Kushagra Vaid, vice president of server engineering at Microsoft, said in a keynote at the Linley Tech Processor Conference here.

    “We are in position now where none of the silicon providers can keep up with the rate of change in Azure,” one of the largest of 200 workloads Microsoft’s datacenters run, he said. The networks need “new features for programmability, for flow control, [and virtual] switches. It’s changing so fast the network silicon can’t keep up with it, so that’s raising the question of going with an FPGA.”

    Earlier this year Microsoft announced plans to use FPGA cards in a significant, but limited, way to accelerate the ranking of its Bing searches. The performance gained was worth more than the cost of the custom Altera Stratix V cards the company designed.

    Whether such a strategy will work to deliver new networking speeds and features remains to be seen.

    What’s clear is the looming pain point. In the past four years servers in Microsoft’s datacenters have shifted from using 1- to 10- to (most recently) 40-Gbit/s interfaces. All new servers the company buys now use four 10G chips to send data at 40G rates to a top-of-rack switch, a rate most silicon vendors had anticipated to be used only for top-of-rack switches.

    Not only must the network chips be fast, they are being asked to handle an increasingly wide array of functions.

    For example, Vaid described the need to perform real-time encryption at 40 Gbit/s rates on all data leaving any of its 15 global datacenters.

    “That’s a huge amount of processing power. We have done studies showing it takes 16 of 24 cores in an Intel Ivy Bridge server processor… That’s not very economical, so we have a need for offloading crypto. This is a whole new level of hardware design that needs to be done.”

  19. Tomi Engdahl says:

    HuddleLamp turns Multiple Tablets into Single Desktop
    http://hackaday.com/2014/10/24/huddlelamp-turns-multiple-tablets-into-single-desktop/

    Imagine you’ve got a bunch of people sitting around a table with their various mobile display devices, and you want these devices to act together. Maybe you’d like them to be peepholes into a single larger display, revealing different sections of the display as you move them around the table. Or maybe you want to be able to drag and drop across these devices with finger gestures. HuddleLamp lets you do all this.

    How does it work? Basically, a 3D camera sits above the tabletop, and watches for your mobile displays and your hands. Through the magic of machine vision, a server sends the right images to each screen in the group. (The “lamp” in HuddleLamp is a table lamp arranged above the space with a 3D camera built into it.)

    A really nice touch is that the authors also provide JavaScript objects that you can embed into web apps to enable devices to join the group without downloading special software.

    HuddleLamp
    Spatially-Aware Mobile Displays for Ad-hoc Around-the-Table Collaboration and Cross-Device Interaction
    http://huddlelamp.org/

  20. Tomi Engdahl says:

    Pure Storage developing converged systems
    Commodity server hardware the next big target market
    http://www.theregister.co.uk/2014/10/27/pure_storage_developing_converged_systems/

    Pure Storage is set to build converged server and storage systems “to compete with commodity server hardware”, said CEO Scott Dietzen, as the highly funded startup enters the next phase of its development.

    “The opportunity for Pure will be more about shaping the next generation of cloud and web-scale data storage than just replacing legacy disk arrays. Our products will ultimately compete with commodity server hardware and help change the way systems software — such as databases and file systems — is designed and implemented.”

  21. Tomi Engdahl says:

    UNIX greybeards threaten Debian fork over systemd plan
    ‘Veteran Unix Admins’ fear desktop emphasis is betraying open source
    http://www.theregister.co.uk/2014/10/21/unix_greybeards_threaten_debian_fork_over_systemd_plan/

    A group of “Veteran Unix Admins” reckons too much input from GNOME devs is dumbing down Debian, and in response, is floating the idea of a fork.

    As the rebel greybeards put it, “… current leadership of the project is heavily influenced by GNOME developers and too much inclined to consider desktop needs as crucial to the project, despite the fact that the majority of Debian users are tech-savvy system administrators.”

    The anonymous rebels say: “Some of us are upstream developers, some professional sysadmins: we are all concerned peers interacting with Debian and derivatives on a daily basis.” Their beef is that “We don’t want to be forced to use systemd in substitution to the traditional UNIX sysvinit init, because systemd betrays the UNIX philosophy.”

    “Debian today is haunted by the tendency to betray its own mandate, a base principle of the Free Software movement: put the user’s rights first,” they write at debianfork.org. “What is happening now instead is that through a so called ‘do-ocracy’ developers and package maintainers are imposing their choices on users.”

  22. Tomi Engdahl says:

    PC Hardware Requirements Are Escalating
    http://news.softpedia.com/news/PC-Hardware-Requirements-Are-Escalating-463093.shtml

    The last few weeks saw the reveal of some pretty steep hardware requirements for upcoming games, like Assassin’s Creed Unity or Call of Duty: Advanced Warfare, while already-released games surprised with their quite demanding features, like Middle-earth: Shadow of Mordor.

    More specifically, both Assassin’s Creed Unity and Advanced Warfare require some relatively new processors, such as the Intel Core i5 2500K, not to mention a lot of hard drive space, of over 50GB.

    However, the most interesting aspects relate to memory and graphics cards. Both games need at least 6GB of RAM and it’s unclear if PCs with 4GB will be able to run the two titles, provided they close all the applications in the background.

    Besides having lots of RAM, which isn’t that big of an issue considering the price of new memory modules, PC gamers are also required by these upcoming titles to have pretty new graphics cards with a lot of VRAM.

    Play the waiting game
    For now, the best strategy, at least for me, is to stick to my current build, endure playing a few games on not the best quality, and wait it out in terms of a full computer upgrade. It’s going to be interesting to see what requirements next year’s games will have on the PC.

  23. Tomi Engdahl says:

    Chrome 38’s new HTML tag support makes fatties FIT and SKINNIER
    First browser to protect networks’ bandwidth using official spec
    http://www.theregister.co.uk/2014/10/15/chrome_38_first_picture_element/

    Google has recently pushed out Chrome 38, for desktop and mobile devices.

    Among the changes Chrome 38 has support for new features in JavaScript, as part of the support for the ECMAScript 6 draft specification.

    But the big news is Chrome 38 is the first browser to support the brand new HTML Picture element.

    The Picture element is one of several new tools for web developers that lets websites serve different images based on the screen size of the device you’re using. Though Picture gets all of the attention, much of the time developers won’t even need the new element, just the new attributes for the img element.

    What’s the big deal? You’ve probably noticed it’s increasingly common for websites to adapt their layout to fit your device. For example, on small screens a site might collapse menus and vertically stack content blocks that would be arranged differently on a larger screen. These flexible layouts are part of what’s known as responsive web design. When done properly it means a single website, with all the same content, works well on every device.

    Yet while developers have tools to handle changing the layout, there isn’t much they can do about the size of images these layouts contain. While an image might be scaled down to fit your phone, behind the scenes your browser still downloads a large file. That’s a waste of bandwidth – sending a huge image to a tiny screen. So, when building responsive websites, developers have resorted to various hacks when handling images. Until now.

    For now, Chrome 38 is the only browser with support for responsive images, though Opera 25 will have support when it emerges from Opera’s beta channel. Firefox will also support responsive images in a release later this year, and Microsoft’s Internet Explorer team has indicated that responsive images support is on their roadmap as well.

  24. Tomi Engdahl says:

    Recovery-ware upstart Zerto: From Hyper-V to VMware and back again
    Intros new cross-hypervisor replication product
    http://www.theregister.co.uk/2014/10/27/zertos_vm_translating_replication/

    In a blow against hypervisor lock-in, Recovery-ware startup Zerto has extended its VMware hypervisor-based technology to embrace Hyper-V, providing cross-hypervisor replication from VMware to Hyper-V or vice versa.

    Virtual machines can now be continuously protected by Zerto’s changed block-level replication to a second server or migrated to one running a second hypervisor, with restoration from the replica taking seconds and being to any point in time.

    It makes the point that other replication products from, for example, NetApp, are based on snapshots and don’t offer continuous data protection while having more of a performance impact on the source server than Zerto’s low-touch replication.

    The second server can be in the same data centre or a remote one.

    Zerto says it is embracing the idea of a cloud fabric and Hyper-V support is a first step. The idea is to have its replication support “any type of workload on any hypervisor or any cloud, regardless of underlying infrastructure”.

  25. Tomi Engdahl says:

    SDI Wars: EMC must FORGET ARRAYS, adapt or disappear
    Can the storage giant overcome a lack of necessary leadership?
    http://www.theregister.co.uk/2014/10/20/sdi_wars_emc_must_forget_arrays_adapt_disappear/

    Register storage supremo Chris Mellor has recently been reporting on EMC’s slow descent into corporate depression due to a combination of activist investors and recalcitrant internecine political strife.

    There’s nothing surprising here, I’ve been hearing the same things all across the EMC federation, though I’ve no inclination to be anywhere near as polite about the affair as Mellor.

    Taking the kid gloves off, EMC, VMware – and to a lesser extent Pivotal – appear to be in the middle of a self-induced political battle, the super special variety that can only be fuelled by the rare combination of the Silicon Valley echo chamber, Wall Street idiocy and billions upon billions of dollars on the line.

    Pretty much the only way it gets worse is if Carl Icahn gets involved.

    In explaining why EMC appears to be freaking out about picking a direction and then going forth and making incalculable gobs of cash, Mellor writes “there is no potential storage array acquisition which would not damage VMAX/VNX/Isilon interests”.

    I find this a horrible rationale for hobbling the growth of your own company. As the much-venerated Steve Jobs said: “If you don’t cannibalize yourself, someone else will.” That’s a truism no matter who you are in the technology industry.

    Don’t drag along these arrays like ageing bar stars increasingly desperate for someone — anyone — to notice them. Don’t keep around staff who can’t cope with change. Put arrays in maintenance mode, keep them moving forward incrementally, but handle growth/innovation in the array space through M&A + rationalisation.

    EMC is a goliath. It is huge, well-resourced and possessed of some of the smartest people on the planet. It has bought some great companies filled with other smart people, many of whom have demonstrated an ability to innovate. EMC has all the tools it needs to pivot and embrace growth opportunities adjacent to its traditional businesses.

    EMC’s problems aren’t that the mean old startups have come along and kicked it in the shins. EMC’s problems are being caused by internal politics, if recent media reports are anything to go by.

    Where’d EMC go?

    What we’re talking about here is no less than the final commoditisation of the x86 market. It’s servers into smartphones and there’s no coming back from that. Margins will shrink. The weak will perish. Those with an inability to overcome a lack of necessary leadership (or a failure of vision) will be swept aside.

    If Cisco doesn’t do anything too stupid, it is going to take the high end by virtue of inertia. Dell is probably going to take the mass market. HP will self-destruct in spectacular fashion.

    VMware could be the next Microsoft: capturing enterprise and mass market alike, but it would require such radical changes in corporate culture I’m not sure it’ll pull through. That leaves it being little more than a neutral arms dealer. If that happens, its star will fade. Of course, Huawei, Lenovo and others may have a few things to say about this being an all-West list of companies. Lenovo in particular seems able to execute where others have proven incapable.

    So where’s EMC in all of this? It has the beginnings of an SDI play in the form of ScaleIO, a massive existing logistics and support operation and quality backup software. It has experience in the immediate precursor technology to SDI in the form of VCE.

    EMC has the goods to be a credible major IT player in the SDI wars.

  26. Tomi Engdahl says:

    Surface finally turned a profit

    After two years and $2 billion in losses, Microsoft’s Surface business has turned a profit, the company reported.

    Between the end of June and the end of September, Microsoft sold $908 million worth of its tablets.

    Microsoft has not disclosed the actual profit figures.
    Computerworld estimates that the hardware produced a profit of around 13.4 percent, or $122 million. Other analysts have estimated the figure to be much smaller.

    Source: http://www.tivi.fi/uutisia/surface+tuotti+vihdoin+voittoa/a1023424

  27. Tomi Engdahl says:

    ARM unveils a trio of graphics chip designs for mobile media processing
    http://venturebeat.com/2014/10/27/arm-unveils-a-trio-of-graphics-chip-designs-for-mobile-media-processing/

    When it comes to mobile graphics, ARM wants to cover all the bases. The Cambridge, England-based chip design company is announcing three new graphics processing unit (GPU) chip designs based on its Mali mobile graphics architecture.

    ARM is coming up with multiple Mali GPUs because mobile is becoming a much more segmented market. The company’s Cortex microprocessors and Mali GPUs cover the gamut from feature phones to premium smartphones. The categories in the mainstream market include entry-level smartphones, mid-range tablets, mid-range smartphones, and premium tablets.

    “We have products from the very low end to the high end,” Steve Steele, senior product manager of media processing, told VentureBeat.

  28. Tomi Engdahl says:

    Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts
    Big-data boondoggles and brain-inspired chips are just two of the things we’re really getting wrong
    http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts

  29. Tomi Engdahl says:

    Microsoft Said to Work on Software for ARM-Based Servers
    http://www.bloomberg.com/news/2014-10-27/microsoft-said-to-work-on-software-for-arm-based-servers.html

    Microsoft Corp. (MSFT) is working on a version of its software for server computers that run on chips based on ARM Holdings Plc (ARM)’s technology, people familiar with its plans said, a move that could help loosen Intel Corp. (INTC)’s grip on the market.

    The world’s largest software maker has a test version of Windows Server that’s already running on ARM-based servers, according to the people, who asked not to be identified because the plans aren’t public yet. Microsoft, based in Redmond, Washington, hasn’t yet decided whether to make the software commercially available, one of the people said. Microsoft now only offers a server operating system for use on Intel’s X86 technology-based processors.

  30. Tomi Engdahl says:

    Implementing FPGA Design with the OpenCL Standard
    http://www.altera.com/literature/wp/wp-01173-opencl.pdf

    Utilizing the Khronos Group’s OpenCL™ standard on an FPGA may offer significantly higher performance at much lower power than is available today from hardware architectures such as CPUs, graphics processing units (GPUs), and digital signal processing (DSP) units. In addition, an FPGA-based heterogeneous system (CPU + FPGA) using the OpenCL standard has a significant time-to-market advantage compared to traditional FPGA development using lower-level hardware description languages (HDLs) such as Verilog or VHDL.

    OpenCL applications consist of two parts. The OpenCL host program is a pure software routine written in standard C/C++ that runs on any sort of microprocessor. That processor may be, for example, an embedded soft processor in an FPGA, a hard ARM processor, or an external x86 processor.

    At a certain point during the execution of this host software routine, there is likely to be a function that is computationally expensive and can benefit from the highly parallel acceleration on a more parallel device: a CPU, GPU, FPGA, etc. This function to be accelerated is referred to as an OpenCL kernel. These kernels are written in standard C; however, they are annotated with constructs to specify parallelism and memory hierarchy.

    Unlike CPUs and GPUs, where parallel threads can be executed on different cores, FPGAs offer a different strategy. Kernel functions can be transformed into dedicated and deeply pipelined hardware circuits that are inherently multithreaded using the concept of pipeline parallelism.

    The most important concept behind the OpenCL-to-FPGA compiler is the notion of pipeline parallelism.

    The creation of designs for FPGAs using an OpenCL description offers several advantages in comparison to traditional methodologies based on HDL design.

    Development for software-programmable devices typically follows the flow of conceiving an idea, coding the algorithm in a high-level language such as C, and then using an automatic compiler to create the instruction stream.

    This approach can be contrasted with traditional FPGA-based design methodologies.
    Here, much of the burden is placed on the designer to create cycle-by-cycle descriptions of hardware that are used to implement their algorithm. The traditional flow involves the creation of datapaths, state machines to control those datapaths, connecting to low-level IP cores using system level tools (e.g., SOPC Builder, Platform Studio), and handling the timing closure problems since external interfaces impose fixed constraints that must be met. The goal of an OpenCL compiler is to perform all of these steps automatically for the designers, allowing them to focus on defining their algorithm rather than focusing on the tedious details of hardware design.

    One of the most important benchmarks in financial markets is the computation of option prices via the Monte Carlo Black-Scholes method. Utilizing an OpenCL framework developed for Altera® FPGAs produces excellent benchmark results.

    Utilizing the OpenCL standard on an FPGA may offer significantly higher performance at much lower power than is available today from hardware architectures (CPUs, GPUs, etc.). In addition, an FPGA-based heterogeneous system (CPU + FPGA) using the OpenCL standard has a significant time-to-market advantage compared to traditional FPGA development using lower-level hardware description languages (HDLs) such as Verilog or VHDL.
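
    For flavour, here is a minimal version of the host-plus-kernel split described above, using the Python pyopencl bindings instead of a C host program (my sketch; the white paper itself targets Altera’s OpenCL SDK). The kernel is ordinary OpenCL C; in an FPGA flow it would be compiled into a deeply pipelined circuit rather than scheduled across GPU cores.

    # Minimal OpenCL host + kernel sketch (pyopencl; assumes an OpenCL
    # runtime is installed). The kernel string is standard OpenCL C.
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    kernel_src = """
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *c) {
        int i = get_global_id(0);  /* one work-item per element */
        c[i] = a[i] + b[i];
    }
    """

    a = np.arange(16, dtype=np.float32)
    b = np.ones(16, dtype=np.float32)
    mf = cl.mem_flags
    a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prog = cl.Program(ctx, kernel_src).build()
    prog.vadd(queue, a.shape, None, a_buf, b_buf, c_buf)

    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, c_buf)
    print(out)  # element-wise a + b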

  31. Tomi Engdahl says:

    MATLAB takes on big data
    http://www.edn.com/design/design-tools/development-kits/4436296/MATLAB-takes-on-big-data?_mc=NL_EDN_EDT_EDN_productsandtools_20141027&cid=NL_EDN_EDT_EDN_productsandtools_20141027&elq=e6c636712dfb45b291ce09bc7dc50cd4&elqCampaignId=19870

    MathWorks has introduced Release 2014b with a range of new capabilities in MATLAB, including graphics and big data, and options in Simulink for accelerating model building and running consecutive simulations.

    In addition, there are numerous updates in the areas of Signal Processing and Communications, Code Generation, and Verification and Validation.

  32. Tomi Engdahl says:

    Debate Over Systemd Exposes the Two Factions Tugging At Modern-day Linux
    http://linux.slashdot.org/story/14/10/28/0026200/debate-over-systemd-exposes-the-two-factions-tugging-at-modern-day-linux

    In discussions around the Web in the past few months, I’ve seen an overwhelming level of support of systemd from Linux users who run Linux on their laptops and maybe a VPS or home server. I’ve also seen a large backlash against systemd from Linux system administrators who are responsible for dozens, hundreds, or thousands of Linux servers, physical and virtual. … The release of RHEL 7 has brought the reality of systemd to a significant number of admins whose mantra is stability over all else

    What we talk about when we talk about Linux and systemd
    http://www.infoworld.com/article/2837182/linux/back-into-linux-systemd-wars.html

  33. Tomi Engdahl says:

    Microsoft Is Bringing WebRTC To Explorer, Eyes Plugin-Free Skype Calls
    http://slashdot.org/story/14/10/27/1913243/microsoft-is-bringing-webrtc-to-explorer-eyes-plugin-free-skype-calls

    Microsoft today announced it is backing the Web Real-Time Communication (WebRTC) technology and will be supporting the ORTC API in Internet Explorer. Put another way, the company is finally throwing its weight behind the broader industry trend of bringing voice and video calling to the browser without the need for plugins.

    Bringing Interoperable Real-Time Communications to the Web
    http://blogs.msdn.com/b/ie/archive/2014/10/27/bringing-interoperable-real-time-communications-to-the-web.aspx

  34. Tomi Engdahl says:

    Bridging the IT Skills Gap
    http://www.infoworld.com/article/2684445/it-careers/bridging-the-it-skills-gap.html

    Despite a nearly 7 percent national unemployment rate, the unemployment rate in IT hovers just below 3 percent, according to a Dice report. Among IT hiring managers, 62% report that filling open positions is taking longer than it did last year, with some technology professionals reporting up to a six-month hiring cycle.

    Results of the Shortage

    The effects of the IT skill shortage are far reaching. According to the CIO 2014 IT Workplace Trends and Salary Guide, 6 in 10 IT hiring managers experience the following in their organizations:

    Lower morale from heavier workloads
    Incomplete or late work
    Deterioration in customer service
    Low quality work due to staff being overworked
    Unmotivated employees
    Lost revenue – $14,000 for every unfilled job

    Why is it so Hard?
    High salaries, the ongoing need for training and lack of skill portability are among the main drivers of this stubborn problem.

    Here are some tangible ways to bridge the IT skills gap:

    Use managed hosting providers with hybrid capabilities to extend existing resources or spin up new projects quickly.
    Seek out SaaS or application hosting options that alleviate the back-end management of business critical services like email, collaboration, and content delivery.
    Seek out online educational platforms like TrueAbility or CloudU to help your staff easily access hands-on training with the latest skills and technologies.
    Engage partners to help plan or handle one-off projects or time-intensive deployment, migration or replatforming projects.

  35. Tomi Engdahl says:

    IT bosses are afraid of newcomers’ big data know-how, but are lagging behind in their own projects.

    Less than a third of companies use predictive analytics, a new report shows. Adopting big data is considered a high priority in industrial internet applications.

    Of the 250 IT managers surveyed, 29 percent use big data for predictive analytics and business optimization. 65 percent of the companies use analytics for monitoring their equipment and resources.

    Nearly the same proportion (62 per cent) use online technologies to collect large volumes of scattered data, for example from remote wind farms and oil pipelines. Two-thirds (66 per cent) of the executives surveyed believe their company could lose its market position within three years if it fails to adopt big data.

    As many as 93 per cent of directors reckon that new entrants are already using big data to differentiate themselves in their market. 88 per cent stated that analytics is of paramount importance to their business.

    “The industrial internet has trillions in revenue potential, in new services as well as in business growth in general.”

    Accenture and GE surveyed companies in China, France, Germany, India, South Africa, the United Kingdom and the United States, covering aviation, wind power, power generation and distribution, oil and gas, railways, manufacturing, and mining.

    Source: http://www.tivi.fi/kaikki_uutiset/itpomot+pelkaavat+keltanokkien+big+data+osaamista+mutta+hidastelevat+omissa+hankkeissaan/a1023579

  36. Tomi Engdahl says:

    Looking for a job in Europe? Experienced IT staff needed in UK, Italy and Germany
    New graduates need life skills, says Euro Commish report
    http://www.theregister.co.uk/2014/10/28/europe_needs_you_but_only_in_the_uk_italy_and_germany/

    Qualified IT worker? Fancy working in Europe? Come on over… but only to the UK, Italy and Germany.

    According to a new report funded by the European Commission, the number of ICT jobs in Europe is growing by about 4 per cent per year. But the number of suitably skilled workers is not keeping pace and by 2015, it predicts a structural shortage of over 500,000 jobs caused by a lack of available talent.

    But the biggest gaps are in the UK, Germany, and Italy, which together account for 60 per cent of all vacancies in Europe. Part of the problem, according to respondents, is that newly qualified workers lack the necessary life skills. Fifty-one per cent of CIOs said recent graduates needed additional training in business and interpersonal skills.

    However, it seems you can teach old dogs new tricks – 71 per cent said that experienced workers were able to keep their skills up to date through regular training.

  37. Tomi Engdahl says:

    The two biggest drivers of change in business computing today are multi-device computing and cloud. Multi-device and cloud are driving a rapid evolution in application architecture toward more powerful front ends and more flexible back ends.

    Mobile devices are becoming important gateways to business data and applications. Cloud back-ends – often implemented as rich API service points – are fast becoming the back-end complement to this new wave of applications.

    The last five years there has been an explosion of innovation in web and native technologies.

    Source: http://subscriber.emediausa.com/Bulletins/BulletinPreview.aspx?BF=1&BRID=75461
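
    As a minimal sketch of such an API service point (my example in Python with Flask; the bulletin names no particular stack): one JSON endpoint that phone, tablet, and desktop front ends can all consume alike.

    # Minimal cloud-style back-end: a JSON API endpoint any device
    # front end can consume (Flask; pip install flask).
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/v1/status")
    def status():
        # A real back-end would query a datastore or downstream service.
        return jsonify({"service": "demo", "healthy": True})

    if __name__ == "__main__":
        app.run(port=8080)  # GET /api/v1/status returns JSON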

  38. Tomi Engdahl says:

    Build or buy: A PC case study
    http://www.edn.com/electronics-blogs/brians-brain/4436502/Build-or-buy–A-PC-case-study?_mc=NL_EDN_EDT_EDN_today_20141028&cid=NL_EDN_EDT_EDN_today_20141028&elq=8ef13f67868a4c3a87929bcbd1601d17&elqCampaignId=19883

    It wasn’t that long ago, it seems, that building one’s own PC was a fairly common thing to do. I used to do it all the time, in fact. But I haven’t done so in years, and the fairly consistent conclusion of all the tech press coverage I’ve seen on the topic suggests that I’m not alone.

    In my case, the primary two factors are a near-complete move from Windows to Mac OS, coupled with a near-complete transition from the desktop to laptop computer form factor. And again, I suspect I’m not alone, particularly with the latter factor.

    But in brainstorming prior to beginning to write this particular post, I came up with a few other possibilities, as well:

    Large PC OEMs are increasingly targeting the traditional customers of niche “enthusiast” OEMs, as well as those folks who historically might have done a DIY system

    Overclocking the CPU and/or memory, something that’s conceptually easier to do with a motherboard manufacturer’s BIOS versus the comparatively more “locked down” BIOS of an OEM system, is increasingly unlikely to be achievable at all.

    The graphics cores embedded within CPUs are becoming increasingly robust, reducing PC users’ motivations to purchase external graphics cards at all

    There’s always the potential that you’ll ruin expensive hardware by, for example, bending and breaking off processor package pins during socket plug-in or incorrectly attaching the heat sink

    With that said, there are some situations when a DIY system may still make sense, particularly with ultra-high-end gaming, Bitcoin mining, and other “power” applications

  39. Tomi Engdahl says:

    It’s Official: HTML5 Is a W3C Standard
    Tuesday October 28, 2014
    http://developers.slashdot.org/story/14/10/28/1429224/its-official-html5-is-a-w3c-standard

    The World Wide Web Consortium today has elevated the HTML5 specification to ‘recommendation’ status, giving it the group’s highest level of endorsement, which is akin to becoming a standard.

    Comment:
    But it’s already a de facto standard. I think W3C’s clout in this area is diminished because the market already decided it was a standard long before they did.
    Turning de facto standards that have been implemented in actual browsers into a formal specification is how standards work best.
    Coming up with a specification first and hoping someone will be able to implement it is how we wound up with Perl 6.
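
    As a small aside (my own sketch): because browsers shipped HTML5 features years before the spec reached Recommendation status, runtime feature detection, rather than checking any spec version, became standard practice. This uses plain DOM APIs only; nothing here is hypothetical beyond running in a browser.

    // Detect a few HTML5 features the way production sites did long before
    // the W3C stamp arrived; each check probes the live browser environment.
    function detectHtml5Features(): Record<string, boolean> {
      return {
        canvas: typeof document.createElement("canvas").getContext === "function",
        localStorage: typeof window.localStorage !== "undefined",
        webWorkers: typeof Worker !== "undefined",
        historyApi: typeof window.history.pushState === "function",
      };
    }

    console.log(detectHtml5Features());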

  40. Tomi Engdahl says:

    By Paul Kunert, 28th October 2014 15:17
    HP crowns one veep to rule ALL server, storage, network sales
    Centralises powerbase to speed decision-making
    http://www.channelregister.co.uk/2014/10/28/hp_enterprise_group_rejig/

  41. Tomi Engdahl says:

    Mozilla: Spidermonkey ATE Apple’s JavaScriptCore, THRASHED Google V8
    Moz man claims the win on rivals’ own benchmarks
    http://www.theregister.co.uk/2014/10/28/mozilla_claims_fastest_javascript_engine_beats_googles_v8_and/

    Mozilla Distinguished Engineer Robert O’Callahan reports that the Spidermonkey JavaScript engine, used by the Firefox web browser, has surpassed the performance of Google’s V8 engine (used by Chrome) and Apple’s JavaScript Core (used by Safari) on three popular benchmarks: Mozilla’s own Kraken, Webkit’s SunSpider and Google’s Octane.

    “Beating your competitors on their own benchmarks is much more impressive than beating your competitors on benchmarks which you co-designed along with your engine,” writes O’Callahan. “We can say ‘these benchmarks are not very interesting; let’s talk about other benchmarks (e.g. asm.js-related) and language features’ without being accused of being sore losers.”

    The asm.js JavaScript subset, to which O’Callahan refers, is used as an intermediate language and is designed to be amenable to optimisation by just-in-time compilers.
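
    To make that concrete, here is a minimal hand-written module in the asm.js style (my own sketch; asm.js is a JavaScript subset, and since TypeScript is a superset of JavaScript the snippet is valid in both). The "| 0" annotations tell an optimising JIT that the values are 32-bit integers; if a given engine does not validate the module as asm.js, it simply runs as ordinary JavaScript.

    function AsmAdder(stdlib: object) {
      "use asm"; // signals engines that this module follows asm.js typing rules
      function add(a: number, b: number): number {
        a = a | 0; // coerce to int32
        b = b | 0;
        return (a + b) | 0; // result is also int32
      }
      return { add: add };
    }

    console.log(AsmAdder(globalThis).add(2, 3)); // 5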

    Both Mozilla and Google have an interest in making browser-hosted applications perform well enough that users need native applications less and can do more of their work (or play) on the web. However, Google also has a project called Native Client (NaCl), which runs portable native code in the browser, bypassing the JavaScript engine. Google has emulated the Android runtime in NaCl, for example, enabling Android apps to run.

  42. Tomi Engdahl says:

    Entity Framework goes ‘code first’ as Microsoft pulls visual design tool
    Visual Studio database diagramming’s out the window
    http://www.theregister.co.uk/2014/10/23/entity_framework_goes_codefirst_only_as_microsoft_shutters_yet_another_visual_modelling_tool/

  43. Tomi Engdahl says:

    Microsoft Said to Work on Software for ARM-Based Servers
    http://www.bloomberg.com/news/2014-10-27/microsoft-said-to-work-on-software-for-arm-based-servers.html

    Microsoft Corp. (MSFT) is working on a version of its software for server computers that run on chips based on ARM Holdings Plc (ARM)’s technology, people familiar with its plans said, a move that could help loosen Intel Corp. (INTC)’s grip on the market.

    The world’s largest software maker has a test version of Windows Server that’s already running on ARM-based servers, according to the people, who asked not to be identified because the plans aren’t public yet.

    Hewlett-Packard Co. (HPQ) and other companies have said that ARM-based chips have a place in servers, where they can compete with Intel’s products on power savings and price.

  44. Tomi Engdahl says:

    OpenBSD Drops Support For Loadable Kernel Modules
    http://bsd.slashdot.org/story/14/10/28/1852214/openbsd-drops-support-for-loadable-kernel-modules

    The OpenBSD developers have decided to remove support for loadable kernel modules from the BSD distribution’s next release.

    OpenBSD Drops Support For Loadable Kernel Modules
    http://www.phoronix.com/scan.php?page=news_item&px=MTgyNDI

    OpenBSD’s LKM support goes back many years, allowing kernel modules to be dynamically added to and removed from a running system, just as kernel modules are on Linux and other operating systems. However, the OpenBSD developers have decided to strip out this functionality.

  45. Tomi Engdahl says:

    Storage array giants will point their back ends at Azure
    Azure Site Recovery expands to save and serve SAN snapshots, scare backup vendors
    http://www.theregister.co.uk/2014/10/29/storage_array_giants_will_point_their_back_ends_at_azure/

    Azure has turned itself into a destination for storage of SAN snapshots captured on devices provided by EMC, NetApp, HP and Hitachi Data Systems, further enhancing the Microsoft Cloud’s disaster recovery prowess.

    Microsoft already offers share ‘n’ sync for virtual machines under the “Azure Site Recovery” (ASR) service.

    At TechEd in Barcelona this week, Microsoft revealed ASR will soon gain the ability to hook into arrays using the SMI-S spec, with the result that SANs capable of taking snapshots of themselves can do so and send them to Azure. Once the snapshots are in Microsoft’s conveniently world-spanning-and-ever-so-redundant (mostly) cloud, the snapshots are available for later retrieval when disaster strikes.

    You’ll need System Center Virtual Machine Manager (SCVMM) to make this new feature work, and it will also help if your SAN vendor points their SMI-S implementation at Azure. The good news is that EMC (VNX and VMAX Symmetrix) and NetApp (Clustered Data ONTAP 8.2) are on board, with HDS and HP (3Par) ready to join the party.

    Microsoft’s calling the rig above “end-to-end storage array-based replication and disaster recovery”, and is putting its money where its mouth is.
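
    To visualise the flow being described (a loose sketch only; every name below is a hypothetical stand-in, not a real Azure or SMI-S API): the array snapshots itself, the snapshot is shipped to cloud storage, and the cloud copy is what you retrieve when disaster strikes.

    // Hypothetical types and stubs illustrating the three steps the article
    // describes; real SMI-S is a CIM/WBEM management standard, not this API.
    interface ArraySnapshot {
      volumeId: string;
      takenAt: Date;
      cloudUri?: string; // filled in once the copy reaches the cloud
    }

    async function takeArraySnapshot(volumeId: string): Promise<ArraySnapshot> {
      // Stand-in for asking the SAN to snapshot itself via its SMI-S provider.
      return { volumeId, takenAt: new Date() };
    }

    async function shipToCloud(snap: ArraySnapshot, target: string): Promise<string> {
      // Stand-in for replicating the snapshot to the cloud recovery site.
      return `${target}/snapshots/${snap.volumeId}/${snap.takenAt.getTime()}`;
    }

    async function protectVolume(volumeId: string, cloudTarget: string) {
      const snap = await takeArraySnapshot(volumeId);        // the array does the work
      snap.cloudUri = await shipToCloud(snap, cloudTarget);  // cloud holds the copy
      return snap; // this is the copy retrieved when disaster strikes
    }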

  46. Tomi Engdahl says:

    AMD Hires Dell’s Server Chief
    ARM server skeptic may get religion
    http://www.eetimes.com/document.asp?doc_id=1324406&

    Just weeks after sharing skeptical views on the future of ARM-based servers, Forest Norrod is departing Dell to join Advanced Micro Devices where he will be the general manager of AMD’s group that sells, among other things, ARM-based server SoCs.

    AMD named Norrod senior vice president and general manager of its Enterprise, Embedded and Semi-Custom (EESC) group, reporting to chief executive Lisa Su. The EESC group is the faster growing half of AMD after a reorg earlier this year that split the company in two. AMD’s other division sells mobile and PC processors and graphics chips.

    The semi-custom products in the EESC group are seen as one of the biggest hopes for the company that has long struggled to get out from the shadow of Intel. It currently makes x86 and graphics chips for Microsoft, Nintendo and Sony videogame consoles and recently struck two more deals to make chips for unannounced systems expected to ship in 2016.

  47. Tomi Engdahl says:

    IT JOB OUTSOURCING: Will it ever END?
    Let’s look at the economics behind it…
    http://www.theregister.co.uk/2014/10/29/when_is_this_endless_outsourcing_of_tech_jobs_going_to_end/

    Well, there are two answers to this one and they are: around 2080 (2090 maybe) and never. And there are two flavours of that “never” answer too.

    The first is the one we usually think of, the bastard capitalists nipping off in the search of cheap labour and leaving their devoted domestic workforces starving in the gutters. That one has very definitely got to the third and fourth levels.

    Of this first type of outsourcing, the end is presumably going to be when there’s no more really poor places where the plutocrats can go oppress people. Exactly when that’s going to be, well, I would like the King of Sweden to give me a gold medal one day and he would if I knew when this was going to happen.

    ‘Convergence’ – or how does your poor country grow

    This concept is called “convergence” and is based upon the idea that – absent really bad public policy – a poor country should be able to grow faster than a rich one. This is simply because the poor one is, by definition, nowhere near the technological frontier and can thus copy those that are, while the rich ones are the people doing the difficult work (and making investments) in trying to expand that frontier. We expect, in essence, to reverse The Great Divergence that was the Industrial Revolution, before which average living standards might have diverged by two or five times by locality, but by nothing like the 10 to 50 times that is global inequality today.

    It is worth noting that if globalisation does not continue to seek out those lower labour costs, then the convergence might well not happen – meaning that there will still be places to go exploit poor people. So it’s a case of: exploit it now – so that it no longer exists to be exploited – or don’t exploit it now, leaving it to exist to be used in the future.

    And this would continue along, even after convergence. We would still be dividing and specialising labour in this manner, still trading the resultant production, even after full convergence had been achieved. Lower labour costs in general aren’t the only reason we do this: we can still gain greater efficiency through the specialisation, even if wage levels in general are equal

    So, when is offshoring going to stop? In terms of the straight pursuit of cheap exploitable labour, it will be around and about when there’s no more cheap labour to exploit. At best, this is 65 to 70 years away.

  48. Tomi Engdahl says:

    Promise Theory—What Is It?
    http://www.linuxjournal.com/content/promise-theory—what-it

    During the past 20 years, there has been a growing sense of inadequacy about the “command and control” model for managing IT systems. Years in front of the television with a remote control have left us hard pressed to think of any other way of making machines work for us. But, the truth is that point-and-click, imperative scripting and remote execution do not scale very well when you are trying to govern the behavior of a large number of things.

    IT installations grow to massive size in data centers, and the idea of remote command and control, by an external manager, struggles to keep pace, because it is an essentially manual human-centric activity. Thankfully, a simple way out of this dilemma was proposed in 2005 and has acquired a growing band of disciples in computing and networking. This involves the harnessing of autonomous distributed agents.

    This command and control model is called an obligation model in computer science. It has many problems. One of those problems is that it separates intent from implementation, creating uncertainty of outcome.

    Luckily, there is a complementary approach to looking at design that fixes these deficiencies, not in terms of obligations, but in terms of promises.

    In a promise-based design, each part behaves only according to the promises it makes to others. Instead of instructions from without, we have behavior promised from within. Since the promises are made by “self” (human self or machine self), it means that the decision is always made with knowledge of the same circumstances under which implementation will take place. Moreover, if two promises conflict with one another, the agent has complete information about those circumstances and conflicting intentions to be able to resolve them without having to ask for external help.

    A promise-oriented view is somewhat like a service view. Instead of trying to remote-control things with strings and levers, one makes use of an ecosystem of promised services that advertise intent and offer a basic level of certainty about how they will behave. Promises are about expectation management, and knowing the services and their properties that will help us to compose a working system. It doesn’t matter here how we get components in a system to make the kinds of promises we need—that is a separate problem.

    Electronics are built in this way, as is plumbing and other commoditized construction methods. You buy components (from a suitable supplier) that promise certain properties (resistance, capacitance, voltage-current relationships), and you combine them based on those expectations into a circuit that keeps a greater promise (like being a radio transmitter or a computer).

    To offer an example of a promise-oriented language, think of HTML and cascading style sheets on the Web.

    The language looks superficially different, but it is basically the same kind of declarative association between patterns of objects and what they promise. The promise a region makes is the one it will render for all time, until the promise has changed. So this is not a fire-and-forget push-button command, but rather the description of a state to be continuously maintained.

    Promise Theory deals with how to think, in this way, about a much wider range of problems. It was proposed (by myself) in 2005 as a way to formalize how the UNIX configuration engine CFEngine intuitively addressed the problem of managing distributed infrastructure. Such formal models are important in computer science to prove correctness. It has since been developed by myself and Jan Bergstra, and it is being adopted by an increasing number of others.

    This complementary non-command way of thinking seems unnatural to us in the context of infrastructure, but more usual in the context of Web pages.

    The way we view the world in Promise Theory is as a collection of agents or nested containers of things that can keep promises.

    Promises turn design and configuration into a form of knowledge management, by shifting the attention away from what changes (or which algorithms to enact), onto what interfaces exist between components and what promises they keep and why. The service-oriented style of programming, made famous by Amazon and Netflix, uses this approach for scalability (not only machine scalability but for scaling human ownership). It is hailed as a cloud-friendly approach to designing systems, but Promise Theory also tells us that it is the route to very large scale. Applications have to be extensible by cooperation (sometimes called horizontal scaling through parallelism rather than vertical scaling through brute force). Databases like Cassandra illustrate how to deal with the issues of scale, redundancy and relativity.
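
    A toy sketch of that idea (my own illustration, not from the article): an agent owns a promise about its desired state and repeatedly checks and repairs it locally, CFEngine-style, rather than obeying one-shot remote commands.

    // A promise pairs a check ("is my promise kept?") with a local repair
    // action; the agent converges toward the promised state on its own.
    interface StatePromise<S> {
      describe: string;
      isKept: (state: S) => boolean;
      repair: (state: S) => S;
    }

    function converge<S>(initial: S, promise: StatePromise<S>, maxRounds = 10): S {
      let state = initial;
      for (let i = 0; i < maxRounds && !promise.isKept(state); i++) {
        state = promise.repair(state); // no external controller involved
      }
      return state;
    }

    // Example: a service promises to keep at least three workers running.
    const workerPromise: StatePromise<number> = {
      describe: "keep at least 3 workers running",
      isKept: (workers) => workers >= 3,
      repair: (workers) => workers + 1, // start one more worker
    };

    console.log(converge(1, workerPromise)); // -> 3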

  49. Tomi Engdahl says:

    China Planning to Remove Windows from All Government Computers
    http://news.softpedia.com/news/China-Planning-to-Remove-Windows-From-All-Government-Computers-463265.shtml

    Microsoft’s trouble in China continues with another chapter, this time coming from a local IT expert with strong ties to the government, who recommended that the local authorities remove Windows from their computers as soon as possible.

    In a report published by state-controlled newspaper Jinghua.cn, Ni Guangnan, academician of the Chinese Academy of Engineering, is quoted as saying that replacing Windows with a locally developed operating system must be done “urgently,” but no other specifics as to the reasons behind this recommendation were provided.

    Most likely, China wants to step away from Microsoft software because of security concerns, as some local officials have already accused the Redmond-based giant of bundling keyloggers in its operating system to help the United States government spy on Chinese PCs.

  50. Tomi Engdahl says:

    Microsoft opens Office 365 to devs with APIs, SDKs
    Put a REST into your calendar
    http://www.theregister.co.uk/2014/10/30/microsoft_opens_office_365_to_devs_with_apis_sdks/

    Microsoft is putting its Office 365 crown jewels on display, opening up APIs to the environment to attract third-party developers to the platform.

    The APIs are yours for the taking here, along with resources and guides for developers. Microsoft imagines applications such as integrating a reservation system with a user’s calendar to create an entry once a booking is complete. Microsoft reckons there are now 400 PB of data stored in the Office 365 cloud, which will become accessible to third-party apps under the program.

    The REST APIs come with some pre-packaged resources for Windows, iOS and Android, along with Xamarin for multi-device apps and ASP.NET MVC for Web apps.

    Devs working in the Microsoft world can install a suitable API toolset for Visual Studio 2013 (with suitable registration, naturally). There’s also an Azure tenant signup allowing SharePoint or custom Web apps to use the APIs.

    Sample apps for Office and SharePoint
    http://dev.office.com/getting-started
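
    The reservation-system example above could look roughly like this (my sketch; the endpoint and payload shape follow the v1.0 Outlook Calendar REST API as I recall it, so treat the details as approximate and check dev.office.com):

    // Once a booking completes, create a matching calendar entry via REST.
    interface Booking {
      title: string;
      startIso: string; // e.g. "2014-11-05T10:00:00Z"
      endIso: string;
    }

    async function addBookingToCalendar(booking: Booking, accessToken: string) {
      const response = await fetch("https://outlook.office365.com/api/v1.0/me/events", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${accessToken}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          Subject: booking.title,
          Start: booking.startIso,
          End: booking.endIso,
        }),
      });
      if (!response.ok) {
        throw new Error(`Calendar API call failed: ${response.status}`);
      }
      return response.json();
    }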

