Computer trends for 2015

Here comes my long list of computer technology trends for 2015:

Digitalisation is coming to change all business sectors, and through them our daily work, even more than before. Digitalisation also changes the IT sector itself: traditional software packages are moving rapidly into the cloud. The need to own or rent your own IT infrastructure is dramatically reduced. Automated configuration and monitoring will become truly possible. The workload of software implementation projects will be reduced significantly, as software needs less adjustment. Traditional IT outsourcing is definitely threatened. Security management is one of the key factors to change, as security threats increasingly come from the digital world. For the IT sector, digitalisation simply means: “cheaper and better.”

The phrase “Communications Transforming Business” is becoming the new normal. The pace of change in enterprise communications and collaboration is very fast. A new set of capabilities, empowered by the combination of Mobility, the Cloud, Video, software architectures and Unified Communications, is changing expectations for what IT can deliver.

Global Citizenship: Technology Is Rapidly Dissolving National Borders. Besides your passport, what really defines your nationality these days? Is it where you live? Where you work? The language you speak? The currency you use? If so, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Increasingly, technological developments will allow us to live and work almost anywhere on the planet… (and even beyond). In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one. Especially for entrepreneurs.

The traditional enterprise workflow is ripe for huge change as the focus moves away from working in a single context on a single device to the workflow being portable and contextual. InfoWorld’s executive editor, Galen Gruman, has coined a phrase for this: “liquid computing.” The promised increase in productivity is stunning, but the loss of control over data will cross an alarming threshold for many IT professionals.

Mobile will be used more and more. Currently, 49 percent of businesses across North America have adopted between one and ten mobile applications, indicating a significant acceptance of these solutions. Mobility, properly leveraged, promises to increase visibility and responsiveness in the supply chain. Increased employee productivity and business process efficiencies are seen as the key business impacts.

The Internet of things is a big, confusing field waiting to explode.  Answer a call or go to a conference these days, and someone is likely trying to sell you on the concept of the Internet of things. However, the Internet of things doesn’t necessarily involve the Internet, and sometimes things aren’t actually on it, either.
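
To make the “things” part concrete: a typical device just pushes sensor readings to a message broker, often one sitting on the local network rather than on the public Internet. Below is a minimal sketch in Python using the paho-mqtt client library; the broker address, topic name and sensor function are made-up stand-ins for illustration.

```python
# A "thing" publishing sensor readings to a broker on the local
# network over MQTT -- no public Internet involved.
# Requires the paho-mqtt package (pip install paho-mqtt).
import random
import time

import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"                # hypothetical LAN broker
TOPIC = "home/livingroom/temperature"  # hypothetical topic name

def read_temperature():
    """Stand-in for a real sensor driver."""
    return 20.0 + random.random() * 5.0

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()  # background thread handles network traffic

while True:
    client.publish(TOPIC, payload="%.2f" % read_temperature(), qos=1)
    time.sleep(60)  # one reading per minute
```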

The next IT revolution will come from an emerging confluence of liquid computing plus the Internet of things. Those two trends are connected — or should connect, at least. If we are to trust the consultants, we are in a sweet spot for significant change in computing that all companies and users should look forward to.

Cloud will be talked about a lot and taken into wider use. Cloud is the next generation of the supply chain for IT. A global survey of executives predicted a growing shift towards third-party providers to supplement internal capabilities with external resources. CIOs are expected to adopt a more service-centric enterprise IT model. Global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion in 2014 (up 20% from $145.2 billion in 2013), and growth will continue to be fast (“By 2017, enterprise spending on the cloud will amount to a projected $235.1 billion, triple the $78.2 billion in 2011“).

The rapid growth in mobile, big data, and cloud technologies has profoundly changed market dynamics in every industry, driving the convergence of the digital and physical worlds, and changing customer behavior. It’s an evolution that IT organizations struggle to keep up with. To succeed in this situation, you need to combine traditional IT with agile and web-scale innovation. There is value both in the back-end operational systems and in the fast-changing world of user engagement. You are now effectively operating two-speed IT (bimodal IT, two-speed IT, or traditional IT/agile IT). You need a new API-centric layer in the enterprise stack, one that enables two-speed IT.
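
As a rough illustration of such an API-centric layer, here is a minimal sketch in Python with Flask: a thin facade that fast-changing front ends talk to, shielding them from the slower back-end system of record. The endpoint and the in-memory “backend” are hypothetical stand-ins, not any particular product’s API.

```python
# Sketch of a two-speed IT facade: agile front ends consume a small,
# versioned JSON API; the slow system of record hides behind it.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the back-end system of record (ERP/CRM); in reality
# this lookup would be a call into the traditional-IT side.
BACKEND = {"42": {"name": "Acme Oy", "status": "active"}}

@app.route("/api/v1/customers/<customer_id>")
def get_customer(customer_id):
    record = BACKEND.get(customer_id)
    if record is None:
        abort(404)
    # The facade owns response shaping and versioning, so the fast
    # side can iterate without touching the system of record.
    return jsonify(id=customer_id, **record)

if __name__ == "__main__":
    app.run(port=8080)
```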

As Robots Grow Smarter, American Workers Struggle to Keep Up. Although fears that technology will displace jobs are at least as old as the Luddites, there are signs that this time may really be different. The technological breakthroughs of recent years — allowing machines to mimic the human mind — are enabling machines to do knowledge jobs and service jobs, in addition to factory and clerical work. Automation is not only replacing manufacturing jobs, it is displacing knowledge and service workers too.

In many countries the IT recruitment market is flying, having picked up to a post-recession high. Employers beware: after years of relative inactivity, job seekers are gearing up for change. Economic improvements and an increase in business confidence have led to a burgeoning jobs market and an epidemic of itchy feet.

Hopefully the IT department is increasingly being seen as a profit centre rather than a cost centre, with IT budgets commonly split between keeping the lights on and spending on innovation and revenue-generating projects. Historically IT was about keeping the infrastructure running, and there was no real understanding outside of that, but the days of IT being locked in a basement are gradually changing. CIOs and CMOs must work more closely to increase focus on customers next year or risk losing market share, Forrester Research has warned.

Good questions to ask: Where do you see the corporate IT department in five years’ time? With the consumerization of IT continuing to drive employee expectations of corporate IT, how will this potentially disrupt the way companies deliver IT? What IT process or activity is the most important in creating superior user experiences to boost user/customer satisfaction?

 

Windows Server 2003 goes end of life in summer 2015 (July 14, 2015). There are millions of servers globally still running the 13-year-old OS, with one in five customers forecast to miss the 14 July deadline when Microsoft turns off extended support. There were estimated to be 2.7 million WS2003 servers in operation in Europe some months back. This will keep system administrators busy, because there is just around half a year left, and upgrading to Windows Server 2008 or Windows Server 2012 may prove difficult. Microsoft and support companies do not seem to be interested in continuing Windows Server 2003 support, so for those who need it, the custom pricing can be “incredibly expensive”. At this point it seems that many organizations have the desire for a new architecture, and one option they are considering is moving the servers to the cloud.

Windows 10 is coming to PCs and mobile devices. Just a few months back Microsoft unveiled a new operating system, Windows 10. The new Windows 10 OS is designed to run across a wide range of machines, including everything from tiny “internet of things” devices in business offices to phones, tablets, laptops, and desktops to computer servers. Windows 10 will have exactly the same requirements as Windows 8.1 (the same minimum PC requirements that have existed since 2006: a 1GHz, 32-bit chip with just 1GB of RAM). There is a technical preview available. Microsoft says to expect AWESOME things of Windows 10 in January. Microsoft will share more about the Windows 10 ‘consumer experience’ at an event on January 21 in Redmond and is expected to show a Windows 10 mobile SKU at the event.

Microsoft is going to monetize Windows differently than before. Microsoft Windows has made headway in the market for low-end laptops and tablets this year by reducing the price it charges device manufacturers, charging no royalty on devices with screens of 9 inches or less. That has resulted in a new wave of Windows notebooks in the $200 price range and tablets in the $99 price range. The long-term success of the strategy against Android tablets and Chromebooks remains to be seen.

Microsoft is pushing its Universal Apps concept. Microsoft has announced Universal Windows Apps, allowing a single app to run across Windows 8.1 and Windows Phone 8.1 for the first time, with additional support for Xbox coming. Microsoft promotes a unified Windows Store for all Windows devices. The Windows Phone Store and Windows Store will be unified with the release of Windows 10.

Under new CEO Satya Nadella, Microsoft realizes that, in the modern world, its software must run on more than just Windows. Microsoft has already revealed Microsoft Office programs for Apple’s iPad and iPhone. It also has an email client compatible with both the iOS and Android mobile operating systems.

With Mozilla Firefox and Google Chrome grabbing so much of the desktop market—and Apple Safari, Google Chrome, and Google’s Android browser dominating the mobile market—Internet Explorer is no longer the force it once was. The article “Microsoft May Soon Replace Internet Explorer With a New Web Browser” says that Microsoft’s Windows 10 operating system will debut with an entirely new web browser code-named Spartan. This new browser is a departure from Internet Explorer, the Microsoft browser whose relevance has waned in recent years.

SSD capacity has always lagged well behind hard disk drives (hard disks are in 6TB and 8TB territory while SSDs are primarily 256GB to 512GB). Intel and Micron will try to kill the hard drive with new flash technologies. Intel announced it will begin offering 3D NAND drives in the second half of next year as part of its joint flash venture with Micron. Later (in the next two years) Intel promises 10TB+ SSDs thanks to 3D vertical NAND flash memory. Interfaces to SSDs are also evolving beyond traditional hard disk interfaces. PCIe flash and NVDIMMs will make their way into shared storage devices more in 2015. The ULLtraDIMM™ SSD connects flash storage to the memory channel via standard DIMM slots, in order to close the gap between storage devices and system memory (less than five microseconds write latency at the DIMM level).

Hard disks will still be made in large amounts in 2015. It seems that NAND is not taking over the data centre immediately. The big problem is $/GB. Estimates of shipped disk and SSD capacity out to 2018 show disk growing faster than flash. The world’s ability to make and ship SSDs is falling behind its ability to make and ship disk drives – for SSD capacity to match disk by 2018 we would need roughly eight times more flash foundry capacity than we have. New disk technologies such as shingling, TDMR and HAMR are upping areal density per platter and bringing down cost/GB faster than NAND technology can. At present solid-state drives with extreme capacities are very expensive. I expect that in 2015 the prices for SSDs will still be so much higher than hard disks that everybody who needs to store large amounts of data will want to consider SSD + hard disk hybrid storage systems.
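
The logic behind such hybrid systems is plain tiering: keep the small hot set of data on flash and leave the bulk on cheap disk. Here is a toy sketch of the idea; the access-count policy and thresholds are my own simplification, not any vendor’s algorithm.

```python
# Toy hot/cold tiering behind SSD + HDD hybrid storage: frequently
# read blocks get promoted to the small SSD tier, everything else
# stays on the large, cheap HDD tier.
from collections import Counter

SSD_CAPACITY_BLOCKS = 4  # tiny on purpose, for the demo
HOT_THRESHOLD = 3        # reads before a block counts as "hot"

reads = Counter()
ssd_tier = set()

def read_block(block_id):
    reads[block_id] += 1
    if reads[block_id] >= HOT_THRESHOLD and block_id not in ssd_tier:
        if len(ssd_tier) >= SSD_CAPACITY_BLOCKS:
            # Evict the coldest block currently on flash.
            coldest = min(ssd_tier, key=lambda b: reads[b])
            ssd_tier.discard(coldest)
        ssd_tier.add(block_id)
    tier = "SSD" if block_id in ssd_tier else "HDD"
    return "block %s served from %s" % (block_id, tier)

for b in [1, 1, 1, 2, 1, 3, 3, 3]:
    print(read_block(b))
```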

PC sales, and even laptop sales, are down, and manufacturers are pulling out of the market. The future is all about the device. We have entered the post-PC era so deeply that even the tablet market seems to be saturating, as most people who want one already have one. The crazy years of huge tablet sales growth are over. Tablet shipment growth in 2014 was already quite low (7.2% in 2014, to 235.7M units). There are no great reasons for either growth or decline to be seen in the tablet market in 2015, so I expect it to be stable. IDC expects the iPad to see its first-ever decline, and I expect that too, because the market seems to be more and more taken by Android tablets that have turned out to be “good enough”. Wearables, Bitcoin or messaging may underpin the next consumer computing epoch, after the PC, internet, and mobile.

There will be new tiny PC form factors coming. Intel is shrinking PCs to thumb-sized “compute sticks” that will be out next year. The stick will plug into the back of a smart TV or monitor “and bring intelligence to that”. The compute stick has been likened to similar thumb PCs that plug into an HDMI port and are offered by PC makers with the Android OS and ARM processors (for example Wyse Cloud Connect and many cheap Android sticks). Such devices typically don’t have internal storage, but can be used to access files and services in the cloud. Intel expects the stick-sized PC market to grow to tens of millions of devices.

We have entered the post-Microsoft, post-PC programming era: the portable revolution. Tablets and smartphones are fine for consuming information: a great way to browse the web, check email, stay in touch with friends, and so on. But what does a post-PC world mean for creating things? If you’re writing platform-specific mobile apps in Objective-C or Java then no, the iPad alone is not going to cut it. You’ll need some kind of iPad-to-server setup in which your iPad becomes a mythical thin client for the development environment running on your PC or in the cloud. If, however, you’re working with scripting languages (such as Python and Ruby) or building web-based applications, the iPad or another tablet could be a usable development environment. At least it is worth testing.

You need to prepare to learn new languages that are good for specific tasks. Attack of the one-letter programming languages: from D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following. Watch out! The coder in the next cubicle might have been bitten and infected with a crazy-eyed obsession with a programming language that is not Java and goes by a mysterious one-letter name. Each offers compelling ideas that could do the trick in solving a particular problem you need fixed.

HTML5’s “Dirty Little Secret”: It’s Already Everywhere, Even In Mobile. Just look under the hood. “The dirty little secret of native [app] development is that huge swaths of the UIs we interact with every day are powered by Web technologies under the hood.” When people say Web technology lags behind native development, what they’re really talking about is the distribution model. It’s not that the pace of innovation on the Web is slower; it’s just solving a problem that is an order of magnitude more challenging than how to build and distribute trusted apps for a single platform. Efforts like the Extensible Web Manifesto have been largely successful at overhauling the historically glacial pace of standardization. Vine is a great example of a modern JavaScript app. It’s lightning fast on desktop and on mobile, and shares the same codebase for ease of maintenance.

Docker, meet hype. Hype, meet Docker. Docker: sorry, you’re just going to have to learn about it. Containers aren’t a new idea, and Docker isn’t remotely the only company working on productising containers. It is, however, the one that has captured hearts and minds. Docker containers are supported by very many Linux systems. And it is not just Linux anymore: Docker’s app containers are coming to Windows Server, says Microsoft. What containerization lets you do is launch multiple applications that share the same OS kernel and other system resources but otherwise act as though they’re running on separate machines. Each is sandboxed off from the others so that they can’t interfere with each other. What Docker brings to the table is an easy way to package, distribute, deploy, and manage containerized applications.
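
As a minimal sketch of that package-and-launch workflow, here is the same idea driven from Python with the docker-py client library, talking to a local Docker daemon. The image and command are arbitrary examples and error handling is omitted.

```python
# Launching a sandboxed app container through the Docker remote API,
# using the docker-py client library (pip install docker-py).
from docker import Client

cli = Client(base_url="unix://var/run/docker.sock")

cli.pull("busybox", tag="latest")  # fetch the packaged image
container = cli.create_container(
    image="busybox",
    command="echo hello from an isolated container",
)
cli.start(container["Id"])
cli.wait(container["Id"])  # block until the container exits
print(cli.logs(container["Id"]).decode())
```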

Domestic software is on the rise in China. China is planning to purge foreign technology and replace it with homegrown suppliers: China is aiming to purge most foreign technology from banks, the military, state-owned enterprises and key government agencies by 2020, stepping up efforts to shift to Chinese suppliers, according to people familiar with the effort. In tests, workers have replaced Microsoft Corp.’s Windows with a homegrown operating system called NeoKylin (FreeBSD based desktop O/S), and Dell will preinstall NeoKylin on commercial PCs in China. The plan is driven by national security concerns and marks an increasingly determined move away from foreign suppliers. There are cases of replacing foreign products at all layers, from applications and middleware down to infrastructure software and hardware. Foreign suppliers may be able to avoid replacement if they share their core technology or give China’s security inspectors access to their products. The campaign could have lasting consequences for U.S. companies including Cisco Systems Inc. (CSCO), International Business Machines Corp. (IBM), Intel Corp. (INTC) and Hewlett-Packard Co. A key government motivation is to bring China up from low-end manufacturing to the high end.

 

Data center markets will grow. MarketsandMarkets forecasts the data center rack server market to grow from $22.01 billion in 2014 to $40.25 billion by 2019, at a compound annual growth rate (CAGR) of 7.17%. North America (NA) is expected to be the largest region for the market’s growth in terms of revenues generated, but Asia-Pacific (APAC) is also expected to emerge as a high-growth market.

The rising need for virtualized data centers and incessantly increasing data traffic are considered strong drivers for the global data center automation market. The SDDC comprises software-defined storage (SDS), software-defined networking (SDN) and software-defined server/compute, wherein all three components are empowered by specialized controllers, which abstract the control plane from the underlying physical equipment. These controllers virtualize the network, server and storage capabilities of a data center, thereby giving better visibility into data traffic routing and server utilization.
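
In code terms, “software defined” mostly means that decisions move out of the individual boxes into a controller with a global view of the pool. A toy sketch of the idea for storage follows; the class and placement policy are purely illustrative, not any real controller’s API.

```python
# Toy "software defined storage" control plane: placement decisions
# live in the controller; the devices are just dumb capacity.
class StorageController:
    def __init__(self, devices):
        self.devices = dict(devices)  # device name -> free GB
        self.volumes = {}             # volume name -> device name

    def provision(self, volume, size_gb):
        # The global view lets the controller pick the least-loaded
        # device, something no individual array can decide on its own.
        name = max(self.devices, key=self.devices.get)
        if self.devices[name] < size_gb:
            raise RuntimeError("no device has enough free capacity")
        self.devices[name] -= size_gb
        self.volumes[volume] = name
        return name

ctrl = StorageController({"array-a": 500, "array-b": 800})
print(ctrl.provision("db-logs", 100))    # -> array-b
print(ctrl.provision("vm-images", 650))  # -> array-b (still most free)
```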

New software-defined networking apps will be delivered in 2015. And so will software-defined storage. And software-defined almost anything (I am waiting for the day we see software-defined software). Customers are ready to move away from vendor-driven proprietary systems that are overly complex and impede their ability to rapidly respond to changing business requirements.

Large data center operators will be using more and more of their own custom hardware instead of standard PCs from traditional computer manufacturers. Intel is betting on (customized) commodity chips for cloud computing, and it expects that over half the chips it will sell to public clouds in 2015 will have custom designs. The biggest public clouds (Amazon Web Services, Google Compute, Microsoft Azure), other big players (like Facebook or China’s Baidu) and other public clouds (like Twitter and eBay) all have huge data centers that they want to run optimally. Companies like A.W.S. “are running a million servers, so floor space, power, cooling, people — you want to optimize everything”. That is why they want specialized chips. Customers are willing to pay a little more for the special run of chips. While most of Intel’s chips still go into PCs, about one-quarter of Intel’s revenue, and a much bigger share of its profits, come from semiconductors for data centers. In the first nine months of 2014, the average selling price of PC chips fell 4 percent, but the average price of data center chips was up 10 percent.

We have seen GPU acceleration taken into wider use. Special servers and supercomputer systems have long been accelerated by moving computation to graphics processors. The next step in acceleration will be adding FPGAs to accelerate x86 servers. FPGAs provide a unique combination of highly parallel custom computation, relatively low manufacturing/engineering costs, and low power requirements. FPGA circuits may provide a lot more performance at a much lower power consumption, but traditionally programming them has been time consuming. This can change with the introduction of new tools (just the next step from the techniques learned with GPU acceleration). Xilinx has developed SDAccel tools to develop algorithms in the C, C++ and OpenCL languages and translate them to FPGAs easily. IBM and Xilinx have already demoed FPGA-accelerated systems. Microsoft is also doing research on accelerating applications with FPGAs.
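
The programming model these tools target is the familiar host-plus-kernel split from GPU computing; SDAccel’s pitch is compiling the same kind of kernels to FPGA logic instead. Below is a minimal OpenCL vector-add, driven from Python with pyopencl purely to keep the sketch short (SDAccel itself takes C, C++ or OpenCL sources, not Python).

```python
# Host code in Python (pyopencl), kernel in OpenCL C: the same
# offload model that FPGA tools such as SDAccel compile to logic.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""
prg = cl.Program(ctx, kernel_src).build()

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)  # offloaded result matches host
```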


If there is one enduring trend in memory design from 2014 that will carry through to next year, it’s the continued demand for higher performance. The trend toward high performance is never going away. At the same time, the goal is to keep costs down, especially when it comes to consumer applications using DDR4 and mobile devices using LPDDR4. LPDDR4 will gain a strong foothold in 2015, and not just to address mobile computing demands. The reality is that LPDDR3, or even DDR3 for that matter, will be around for the foreseeable future (as the lowest-cost DRAM, whatever that may be). Designers are looking for subsystems that can easily accommodate DDR3 in the immediate future, but will also be able to support DDR4 when it becomes cost-effective or makes more sense.

Universal memory for instant-on computing will be talked about. New memory technologies promise to be strong contenders for replacing the entire memory hierarchy for instant-on operation in computers. HP is working with memristor memories that are promised to be akin to RAM but can hold data without power. The memristor is also denser than DRAM, the current RAM technology used for main memory: according to HP, 64 to 128 times denser, in fact. You could very well have 512 GB of memristor RAM in the near future. HP has what it calls “The Machine”, practically a researcher’s plaything for experimenting with emerging computer technologies. Hewlett-Packard’s ambitious plan to reinvent computing will begin with the release of a prototype operating system in 2015 (Linux++, in June 2015). HP must still make significant progress in both software and hardware to make its new computer a reality. A working prototype of The Machine should be ready by 2016.

Chip designs that enable everything from a 6 Gbit/s smartphone interface to the world’s smallest SRAM cell will be described at the International Solid State Circuits Conference (ISSCC) in February 2015. Intel will describe a Xeon processor packing 5.56 billion transistors, and AMD will disclose an integrated processor sporting a new x86 core, according to a just-released preview of the event. The annual ISSCC covers the waterfront of chip designs that enable faster speeds, longer battery life, more performance, more memory, and interesting new capabilities. There will be many presentations on first designs made in 16 and 14 nm FinFET processes at IBM, Samsung, and TSMC.

 

1,403 Comments

  1. Tomi Engdahl says:

    Ballmer decries Microsoft’s financials and lack of Android apps
    He only says these things because he ‘loves this company’
    http://www.theinquirer.net/inquirer/news/2437746/ballmer-decries-microsofts-financials-and-lack-of-android-apps

    BASKETBALL MAGNATE Steve Ballmer has been using fruity language to express his frustration at the way Microsoft is being run.

    At Wednesday’s shareholders’ meeting, he was said to describe the company’s lack of transparency over profit margins and sales for its cloud and hardware businesses as (and we hope you have sent the children out of the room for this bit) “bullshit”.

    “It’s sort of a key metric. If they talk about it as key to the company, they should report it,”

    Microsoft instead uses a run rate – a snapshot or ‘sweep’ of the revenue at a point in time – and then applies it to the whole year. And that’s when he used the ‘b’ word.

    Ballmer also expressed his frustration at the app vacuum in the Windows Store

    Ballmer successor Satya Nadella’s response was to talk about the appeal to developers of Universal Apps, which can be written once and used across the whole Microsoft ecosystem.

    “That won’t work,” Ballmer commented as Nadella spoke. Instead, the company needs Windows Phones “to run Android apps”.

  2. Tomi Engdahl says:

    Apple Open Sources Its Swift Programming Language
    http://www.wired.com/2015/12/apple-open-sources-its-swift-programming-language/

    In a move that represents a significant shift for Apple—and for the tech industry as a whole—the world’s most valuable company has open sourced its Swift programming language, freely sharing the underpinnings of this new and potentially powerful language with the world at large.

    Apple unveiled Swift last year—much to the surprise of the broader programming community—offering the language as a significantly easier way of building applications for the iPhone, the iPad, and the Mac. But in open sourcing the language—something Apple had promised it would do—the company is paving the way for Swift to run on all sorts of other machines, including computer servers loaded with Linux, smartphones based on Google’s Android mobile operating system, and tablets based on Microsoft’s Windows operating system.

    Apple says it will run the new open source project from a website called Swift.org, while sharing the source code through the popular code repository GitHub, and it has seeded the project with a wide range of tools. Most notably, it has open sourced Swift compilers that will run on Linux as well as Mac OS X.

  3. Tomi Engdahl says:

    Harriet Taylor / CNBC:
    Report: Google Chromebooks now make up half of devices sold to K-12 schools and school districts in US

    Google’s Chromebooks make up half of US classroom devices
    http://www.cnbc.com/2015/12/03/googles-chromebooks-make-up-half-of-us-classroom-devices.html

    Google, Microsoft and Apple have been competing for years in the very lucrative education technology market. For the first time, Google has taken a huge lead over its rivals.

    Chromebooks now make up more than half of all devices in U.S. classrooms, up from less than 1 percent in 2012, according to a new report from Futuresource Consulting. To analysts, this comes as a big surprise.

    “While it was clear that Chromebooks had made progress in education, this news is, frankly, shocking,” said Forrester analyst J.P. Gownder. “Chromebooks made incredibly quick inroads in just a couple of years, leaping over Microsoft and Apple with seeming ease.”

    Combine Chromebooks with devices running on Android, and Google’s share of the edtech market is even more impressive. As of the third quarter of this year it had 53 percent of the market for K-12 devices bought by schools and school districts.

  4. Tomi Engdahl says:

    Dean Takahashi / VentureBeat:
    BlueStacks hits over 109M downloads, 1B apps used per month, now supports multiple simultaneous Android apps on Windows

    BlueStacks hits a billion apps used per month and launches new mobile platform
    http://venturebeat.com/2015/12/03/bluestacks-hits-a-billion-apps-used-per-month-and-launches-new-mobile-platform/

    In 2011, BlueStacks was a small company when it debuted its virtualization platform that enables people to play Android mobile apps on a PC. Now the company’s App Player has become surprisingly popular, with more than 109 million downloads of the BlueStacks player, which is used to engage with a billion apps per month. The Silicon Valley company announced those numbers and said today that it has launched its BlueStacks 2 platform with even better technology.

    BlueStacks is now the seventh-largest distributor of Android. The growth says a lot about the popularity of Google’s Android operating system and its apps, which people want to play on devices such as TVs and PCs. But it also shows how popular Android games and communications apps have become in places such as China, where BlueStacks is popular.

    “This is equally surprising to us that the organic growth continues to happen,” said Rosen Sharma, the chief executive of BlueStacks in an interview with GamesBeat. “It is driven by the popularity of Android.”

    In places such as China, players have begun playing “midcore” games on Android devices. But they also want to play those games, such as titles like Crossy Road (pictured above), on bigger screens such as laptops or desktop PCs, Sharma said. To do that, they download the free BlueStacks app player to a Windows PC or Mac. The player lets people run most of the millions of apps (about 96 percent of them) and games (86 percent) available on the Google Play Store — on their computer. It does this through virtualization software.

    Games account for 40 percent of usage. Another 30 percent comes from messaging apps as it is much easier to type messages on a PC keyboard than it is on a mobile device. The remaining 30 percent of usage is attributed to all sorts of Android apps, Sharma said.

  5. Tomi Engdahl says:

    Peter Bright / Ars Technica:
    Microsoft to open source Chakra, the JavaScript heart of its Edge browser — Source will be available from January, and open to community contributions. — At JSConf in Florida today, Microsoft announced that it is open sourcing Chakra, the JavaScript engine used in its Edge and Internet Explorer browsers.

    Microsoft to open source Chakra, the JavaScript heart of its Edge browser
    Source will be available from January, and open to community contributions.
    http://arstechnica.com/information-technology/2015/12/microsoft-to-open-source-chakra-the-javascript-heart-of-its-edge-browser/

    Microsoft is calling the version it’s open sourcing ChakraCore. This is the complete JavaScript engine—the parser, the interpreter, the just-in-time compiler, and the garbage collector along with the API used to embed the engine into applications (as used in Edge). This will have the same performance and capabilities, including asm.js and SIMD support, as well as cutting-edge support for new ECMAScript 2015 language features like the version found in Microsoft’s Windows 10 browser.

  6. Tomi Engdahl says:

    Mozilla ends Tiles experiment for ads in Firefox, will shift focus to ‘content discovery’
    http://venturebeat.com/2015/12/04/mozilla-ends-tiles-experiment-for-ads-in-firefox-will-instead-focus-on-content-discovery/

    Mozilla is stopping its experiment to offer advertising in Firefox. The company announced its focus will instead shift to “content discovery” opportunities in its browser.

    “We will continue to experiment with content experiences on new tab pages within Firefox and across our products,” a Mozilla spokesperson confirmed with VentureBeat. “However we are stopping advertising through Tiles.”

    This is an important distinction because since this summer, Firefox has been promoting three types of Tiles: content from Mozilla (such as campaigns on policy issues), publisher content, and advertising. Mozilla’s Directory Tiles program is designed to “improve the first-time-with-Firefox experience” — instead of seeing blank tiles when a new Firefox user opens a new tab, Mozilla thought it would be best that they see “content.”

    Mozilla thus appears happy to continue its Tiles program, just without the advertising component.

    In other words, Mozilla has realized that it doesn’t make sense to offer Firefox features like tracking protection, which blocks website elements (ads, analytics trackers, and social share buttons) that could track you while you’re surfing the web, while at the same time also pushing ads. Even if they’re ads that don’t track you (Mozilla went to extensive lengths to ensure that is the case), it’s still a confusing message to the browser user.

    Directory Tiles are basically sponsored content, and were Mozilla’s first initiative to bring advertising to Firefox users. Suggested Tiles go a step further since they are based on what sites users had visited

    News of Mozilla’s plan to sell ads in Firefox first broke back in February 2014.

    And now, the project has been deemed unworthy. Finally.

  7. Tomi Engdahl says:

    Takashi Mochizuki / Wall Street Journal:
    Sources: Toshiba considering spinning off unprofitable PC business alongside Fujitsu, others — Toshiba Looks to Spin Off PC Business — The proposed spinoff would be Toshiba’s latest effort to strip out unprofitable units — TOKYO—Toshiba Corp. is looking to spin off …

    Toshiba Looks to Spin Off PC Business
    The proposed spinoff would be Toshiba’s latest effort to strip out unprofitable units
    http://www.wsj.com/article_email/toshiba-looks-to-spin-off-pc-business-1449196629-lMyQjAxMTA1MjAyNDMwMjQyWj

    Toshiba Corp. is looking to spin off its personal-computer business and merge it with the PC business of others in the electronics industry, two people familiar with the situation said Friday.

    The proposed spinoff would represent the latest effort by Toshiba to strip out unprofitable units after an accounting scandal earlier this year forced top management to resign and led to large write-downs.

    The Japanese electronics conglomerate admitted it had been inappropriately inflating reported profits at several business units for years. Toshiba already has said it plans to sell part of its semiconductor unit to Sony Corp. and is considering selling a stake in the remainder of that unit.

    Japanese electronics companies have largely exited the PC market, once a big moneymaker for them. Hitachi Ltd. and Sharp Corp. quit the business, while NEC Corp. has a minority stake in its joint company with Lenovo Group. Panasonic Corp., meanwhile, has been focusing on enterprise customers with its laptop offerings.

  8. Tomi Engdahl says:

    Soft-Decoding in LDPC based SSD Controllers
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1328385&

    What happens when that initial decode fails? How soft data can be used to recover data on the SSD.

    In this post, we will take a look at what happens when that initial decode fails and how soft data can be used to recover data on the SSD.

    LDPC and soft-data decoding in NAND flash
    If we assume that a hard-data LDPC decode fails, then things start to get very interesting for the SSD. We could decide to return an “Unrecoverable Read Error” and tell the user that their data is lost forever, but end-users typically don’t like that ;-). If the SSD has an internal RAID system, we could use it to attempt to recover the user’s data at the expense of additional complexity and NAND capacity to calculate and store the RAID parity. However, with LDPC, there is a third option, which is to soften the data and attempt a soft-data LDPC decode. Note that this third option is not available in controllers that use less advanced error correction (for example BCH codes) because they cannot leverage soft information.

    I like to think of a soft-data LDPC decode in three parts:

    A re-read strategy.
    Soft-data construction.
    The soft LDPC decode.

    Re-read strategy: The re-read strategy consists of reading one or more sections of the flash to assist in the construction of soft data.
    Some examples of a Re-Read Strategy might include:

    Read the same section as the original hard data but use a different set of read threshold voltages inside the NAND.
    In MLC NAND, read the section that shares the same word-line as the original section.
    Read the section that corresponds to the dominant disturber. This is the section that, when programmed, has the strongest program disturb impact on the original hard-data section.

    There are pros and cons to each of these Re-Read Strategies and, in fact, the three can even be combined together if desired. Just remember that each time you read from the flash you will incur more latency!

    The soft-data construction: Each of the reads in our Re-Read strategy returns the 0 and 1 data associated with that read. Although slightly more advanced multi-bit reads might exist in more advanced NAND

    The soft LDPC decode: The final step in the soft-data decode involves passing the LLRs for each of the bits of the codeword into the LDPC decoder logic. The hope is that this decode will be more successful than the original hard-data decode was and the SSD will now be able to return the user data and perhaps move that data to a safer region of the SSD so that it can be recovered more easily the next time it is requested.

    Given the characteristics of the latest generation of NAND technology, LDPC engines will be required to meet an acceptable bit error rate.
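
    To make the soft-data construction step concrete, here is a toy calculation of how repeated reads of one bit combine into a log-likelihood ratio (LLR) for the decoder. The fixed crossover probability is a simplification for illustration; a real controller uses calibrated per-threshold statistics.

    ```python
    # Toy soft-data construction: combine several noisy re-reads of
    # the same bit into a log-likelihood ratio (LLR). Positive LLR
    # means "probably 0", negative "probably 1"; the magnitude is
    # the decoder's confidence in that decision.
    import math

    def llr_from_reads(reads, crossover_p=0.1):
        """reads: 0/1 values from repeated reads of one cell.
        crossover_p: assumed chance a single read flips the bit."""
        weight = math.log((1 - crossover_p) / crossover_p)
        # Each read that says 0 is evidence for 0, and vice versa.
        return sum(weight if r == 0 else -weight for r in reads)

    print(llr_from_reads([0, 0, 0]))  # strongly bit 0
    print(llr_from_reads([0, 1, 0]))  # weakly bit 0
    print(llr_from_reads([0, 1]))     # 0.0: no confidence either way
    ```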

  9. Tomi Engdahl says:

    This startup is making waves by letting top-gun programmers work how they want, when they want
    http://uk.businessinsider.com/gigster-raises-10-million-from-andreesen-horowitz-ron-conway-ashton-kutcher-2015-12?op=1?r=US&IR=T

    San Francisco-based Gigster, a two-year old startup that graduated from the Y Combinator accelerator program earlier this year, has what sounds like an obvious concept:

    Connect freelance programmers with the companies that need them.

    But there’s a lot more going on under the hood with Gigster.

    The way Gigster works is simple. As a customer, you write, in plain English, what you want your business app to do.

    Then, Gigster analyzes your request, figures out the best team for the job — including programmers, product managers, and designers — and gives you a flat quote with a guaranteed price.

    “In short, push a button, get software,” Gigster CEO Roger Dickey says.

    Who’s who

    All of that interest speaks to a real need, says Dickey, who sees the company as “the world’s software engineering department.”

    In a world where the need for apps is rapidly outpacing the availability of developers, Gigster is trying to shorten that loop.

    If you need software, Gigster provides an easy way to just get it done, whether or not you yourself are technical. And even if you are technical, it can be cheaper than hiring a full team.

    In fact, given the Silicon Valley talent crunch, it can actually replace the costly and time-consuming need to recruit developers in the first place.

    “We don’t think every company needs an engineering team,” Dickey says. “It’s a little bit ridiculous.”

    Developers developers developers

    Many of the 350-plus developers on Gigster are often past and current employees of companies like Google, Facebook, and Amazon, Dickey says. Many are winners of Apple’s prestigious Design Award, he says.

    Where, in their day-to-day life, programmers may have well-paying but fundamentally boring jobs managing tiny bits of huge, tremendously complicated software, Gigster lets them exercise their problem-solving muscles — and get well-paid for the privilege.

    “If you’re a really strong engineer, you’re going to want to do some work on the side,” says Dickey.

    Moreover, it’s a great venue for the self-employed developer, or those who just want to make money while working on other projects, or basically those who don’t want to deal with office drama.

    “These guys just need a clear understanding of what to build, and how to build it,” says Andreessen Horowitz partner and former SuccessFactors CEO Lars Dalgaard, who’s joining the Gigster board with this announcement. “The world of Silicon Valley engineers is uninterested in office politics.”

    Secret weapon

    There are plenty of other freelance marketplaces out there, for programming and for other services, sure. But Gigster has a secret weapon, in the form of a “smart platform” built out by CTO and co-founder Debo Olaosebikan, Dickey says.

    Behind the scenes, Gigster uses machine learning technology to get smarter over time. Basically, it can figure out the things that are similar about different customer projects.

    “Airbnb only had to build Airbnb once,” but Gigster gets better at solving similar problems over time, Dalgaard says.

  10. Tomi Engdahl says:

    Open Source… Windows?
    http://hackaday.com/2015/12/08/open-source-windows/

    They’re writing their own version of Windows called ReactOS that aims to be binary-compatible with Windows. The software has been in development for over a decade, but they’re ready to release version 0.4 which will bring USB, sound, networking, wireless, SATA, and many more features to the operating system.

    While ReactOS isn’t yet complete for everyday use, the developers have made great strides in understanding how Windows itself works. There is a lot of documentation coming from the project regarding many previously unknown or undocumented parts of Windows, and with more developers there could be a drop-in replacement for Windows within a few years.

  11. Tomi Engdahl says:

    Cade Metz / Wired:
    Facebook Open Sources Its AI Hardware as It Races Google
    http://www.wired.com/2015/12/facebook-open-source-ai-big-sur/

    In Silicon Valley, the new currency is artificial intelligence.

    Over the last few years, a technology called deep learning has proven so adept at identifying images, recognizing spoken words, and translating from one language to another, the titans of Silicon Valley are eager to push the state of the art even further—and push it quickly. The two biggest players are, yes, Google and Facebook.

    At Google, this tech not only helps the company recognize the commands you bark into your Android phone and instantly translate foreign street signs when you turn your phone their way. It helps drive the Google search engine, the centerpiece of the company’s online empire. At Facebook, it helps identify faces in photos, choose content for your News Feed, and even deliver flowers ordered through M, the company’s experimental personal assistant. All the while, these two titans hope to refine deep learning so that it can carry on real conversations—and perhaps even exhibit something close to common sense.

    Of course, in order to reach such lofty goals, these companies need some serious engineering talent. And the community of researchers who excel at deep learning is relatively small. As a result, Google and Facebook are part of an industry-wide battle for top engineers.

    ‘There is a network effect. The platform becomes better as more people use it.’ Yann LeCun, Facebook

    The irony is that, in an effort to win this battle, the two companies are giving away their secrets. Yes, giving them away. Last month, Google open sourced the software engine that drives its deep learning services, freely sharing it with the world at large. And this morning, Facebook announced that it will open source the designs for the computer server it built to run the latest in AI algorithms. Code-named Big Sur, this is a machine packed with an enormous number of graphics processing units, or GPUs—chips particularly well suited to deep learning.

    Big Sur includes eight GPU boards, each loaded with dozens of chips while consuming only about 300 Watts of power.

    Traditional processors help drive these machines, but big companies like Facebook and Google and Baidu have found that their neural networks are far more efficient if they shift much of the computation onto GPUs.

    Neural nets thrive on data.

    Facebook designed the machine in tandem with Quanta, a Taiwanese manufacturer, and nVidia, a chip maker specializing in GPUs.

    Facebook says it’s now working with Quanta to open source the design and share it through the Open Compute Project. You can bet this is a response to Google open sourcing TensorFlow.

  12. Tomi Engdahl says:

    OpenAI:
    OpenAI, a new nonprofit for AI research, gets $1B commitment from SV leaders including Reid Hoffman, Elon Musk, Peter Thiel, also AWS, Infosys, YC Research — Introducing OpenAI — OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence …

    Introducing OpenAI
    https://openai.com/blog/introducing-openai/

  13. Tomi Engdahl says:

    Austin Frakt / New York Times:
    Computers, which excel at big data analysis, can help doctors deliver more personalized care

    Your New Medical Team: Algorithms and Physicians
    http://www.nytimes.com/2015/12/08/upshot/your-new-medical-team-algorithms-and-physicians.html?_r=0

    Can machines outperform doctors? Not yet. But in some areas of medicine, they can make the care doctors deliver better.

    Humans repeatedly fail where computers — or humans behaving a little bit more like computers — can help. Even doctors, some of the smartest and best-trained professionals, can be forgetful, fallible and prone to distraction. These statistics might be disquieting for anyone scheduled for surgery: One in about 100,000 operations is on the wrong body part. In one in 10,000, a foreign object — like a surgical tool — is accidentally left inside the body.

    Something as simple as a checklist — a very low tech-type of automation — can reduce such errors. For example, in a wide range of settings, surgical complications and mortality fell after implementation of a basic checklist including verification of patient identity and body part for surgery, confirmation of sterility of the surgical environment and equipment, and post-surgical accounting for all medical tools.

    Limits on how much information we can process and manipulate make it hard or impossible for even the smartest and most adept doctors to keep up with new evidence. In 2014 alone, more than 750,000 additional medical studies were published. Granted, a physician might need to keep up only with the evidence in her specialty, but even at a fraction of this rate, it is unrealistic to expect even the best physicians to assimilate every new development in their fields. In cancer alone, 150,000 studies are published annually.

    Computers, on the other hand, excel at searching and combining vastly more data than a human. I.B.M.’s Watson — the computer that won Jeopardy! — is among the best at doing so.

    At Boston Children’s Hospital, Watson will help diagnose and treat a type of kidney disease. It will team up with Apple to collect health care data; with Johnson & Johnson to improve care for knee and hip replacements; with medical equipment manufacturer Medtronic to detect when diabetes patients require adjustments to insulin doses; and with CVS to improve services for patients with chronic conditions. Another computer-assisted approach to cancer treatment is already in place in the vast majority of oncology practices. Other automated systems check for medication prescribing errors.

    To many patients, the very idea of receiving a medical diagnosis or treatment from a machine is probably off-putting.

    If the only thing between your illness and its diagnosis and cure is the manipulation of evidence, then, in principle, a computer should one day be able to deliver care as well or better than a human.

    But healing may rely on more than the mere processing of data. In some cases, we may lack data, and a physician’s judgment might be the best available guide.

    Patients also may be skeptical that a computer can deliver the best care. A 2010 study published in Health Affairs found that consumers didn’t believe doctors could deliver substandard care. In contrast, they thought that care strictly based on evidence and guidelines — as any system for automating medical care would be — was tailored to the lowest common denominator, meeting only the minimum quality standards.

    But algorithms can be put to good use in certain areas of medicine, as complements to, not substitutes for, clinicians.

    Just because algorithms can assist in making decisions doesn’t mean humans should check out and play no role. It is important not to over-rely on data and automation. Bob Wachter, a physician, relates a story about how automated aspects of an electronic medical system contributed to the overdose of a child at the University of California San Francisco Medical Center.

  14. Tomi Engdahl says:

    Alan Boyle / GeekWire:
    Researchers develop algorithm to teach a new concept to a computer using just one example — Bayesian boost for A.I.: Researchers find a quicker way to teach a computer … Researchers say they’ve developed an algorithm that can teach a new concept to a computer using just one example …

    Bayesian boost for A.I.: Researchers find a quicker way to teach a computer
    http://www.geekwire.com/2015/bayesian-boost-for-a-i-researchers-create-a-quicker-way-to-teach-a-computer/

    Researchers say they’ve developed an algorithm that can teach a new concept to a computer using just one example, rather than the thousands of examples that are traditionally required for machine learning.

    The algorithm takes advantage of a probabilistic approach the researchers call “Bayesian Program Learning,” or BPL. Essentially, the computer generates its own additional examples, and then determines which ones fit the pattern best.

    “The gap between machine learning and human learning capacities remains vast,” said MIT’s Joshua Tenenbaum, one of the authors of a research paper published today in the journal Science. “We want to close that gap, and that’s the long-term goal.”

    The researchers concluded that the BPL approach “can perform one-shot learning in classification tasks at human-level accuracy and fool most judges in visual Turing tests of its more creative abilities.” But they also acknowledged the limitations of their experiment: Classifying the characters was a relatively simple task, and yet it sometimes took the computer several minutes to run the algorithm, Lake said.

    Once the algorithm is refined, it could be built into the speech recognition systems for next-generation smartphones, Tenenbaum told GeekWire.

  15. Tomi Engdahl says:

    23-Year-Old’s Design Collaboration Tool Figma Launches With $14M To Fight Adobe
    http://techcrunch.com/2015/12/03/figma-vs-goliath/#.ojwuxm:JHfa

    “Which version of this design are we on? Did you make the suggested edits? Why is it taking so long to export?” Today, interface design collaboration tool Figma arrives to eliminate these questions with its browser-based alternative to Adobe’s desktop software.

    Figma constantly saves projects in the cloud with version control so teammates can always review, go back and modify, or comment on designs in real-time. Its free preview program begins admitting teams off its new wait list today.

    Field tells me his big competitor “doesn’t understand collaboration”, and the Adobe Creative Cloud is “really cloud in name only”. He insists that “Design is undergoing a monumental shift — going from when design was at the very end of the product cycle where people would just make things prettier to now where it runs through the entire process.” Figma wants to do for interface design what Google Docs did for text editing.

    Currently, collaborating to build a UI is more work about work than actual work. If one team member wants to change an icon, they have to find and download the latest design, check email or Slack for commentary, make an edit, save it, export it, upload or email it, and then wait for everyone else to jump through these hoops. Making the actual design change took only a fraction of that time.

    That’s because despite advances elsewhere in the collaborative web, design industry juggernaut Adobe was developed as a solo desktop software experience.

    You might mistake it as naive arrogance, but his youthful confidence is what it takes to battle the design Goliath.

    How did the collaboration idea start? Field tells me “I grew up online. I started using productivity tools when Google Docs came out almost 10 years ago.” That would be when Field was just a teenager: “I was such a huge fan and used it for all my school projects”. He dreamed of a similar tool for interface design while interning at Flipboard and LinkedIn. Then at Brown University Field met the guy who could build it, WebGL prodigy Evan Wallace, who’d engineered at Pixar and Microsoft. Together, the two started Figma: as in, a figment of your imagination made real.

    The two knew building something that could stand up against Adobe’s products would be no light endeavor.

  16. Tomi Engdahl says:

    Gartner clients report that poor requirements definition and management is a major cause of rework and friction between business and IT. Broad market adoption of software requirements solutions is low, exacerbating this situation. The strongest adoption has come in highly complex or regulated systems.

    Focus on tools is changing to better support collaboration and fast time to market to better address mass market needs. However, tools tend to align either toward large-scale system approaches or toward lean/agile techniques…

    Source: http://go.jamasoftware.com/gartner-rm-solution-market-guide.html?utm_medium=email&utm_campaign=gartner+rm+solution+market+guide&utm_source=emedia

  17. Tomi Engdahl says:

    Firefox 43 Arrives With 64-bit Version For Windows, Android Tab Audio Indicators
    http://news.slashdot.org/story/15/12/15/2048226/firefox-43-arrives-with-64-bit-version-for-windows-android-tab-audio-indicators?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    Mozilla today launched Firefox 43 for Windows, Mac, Linux, and Android. Notable additions to the browser include a 64-bit version for Windows (finally!),

    Firefox 43 arrives with 64-bit version for Windows, new strict blocklist, and tab audio indicators on Android
    http://venturebeat.com/2015/12/15/firefox-43-arrives-with-64-bit-version-for-windows-new-strict-blocklist-and-tab-audio-indicators-on-android/

  18. Tomi Engdahl says:

    Collabora and OwnCloud Announce LibreOffice Online
    http://tech.slashdot.org/story/15/12/15/1849234/collabora-and-owncloud-announce-libreoffice-online

    Collabora Productivity, a UK-based consulting company, has collaborated with ownCloud Inc. to release a developer edition of online LibreOffice, which they call CODE (Collabora Online Development Edition). “The office suite implementation runs on ownCloud server. That’s where all the processing and heavy lifting is done. The rendering happens at the client side. Currently there are three apps: writer (equivalent to MS Word), spreadsheet (Excel) and presentation (PowerPoint). At the moment users can create new documents and edit them. Other functionality, such as collaborative editing, is in the pipeline.”

    Watch out Microsoft, Google ‘LibreOffice as a service’ is here
    http://www.itworld.com/article/3015015/open-source-tools/watch-out-microsoft-google-libreoffice-as-a-service-is-here.html

    Collabora Productivity, a UK-based consulting company has collaborated with ownCloud Inc. to release a developer edition of online LibreOffice, which they call CODE (Collabora Online Development Edition).

    Collabora Productivity said in a press statement that CODE “allows prototype editing of richly formatted documents from a web browser. With good support for key industry file formats, including text documents (docx, doc, odt, pdf…), spreadsheets (xslx, xsl, ods,…) and presentations (pptx, ppt, odp,…).”

    The office suite implementation runs on ownCloud server. That’s where all the processing and heavy lifting is done. The rendering happens at the client side. Currently there are three apps: writer (equivalent to MS Word), spreadsheet (Excel) and presentation (PowerPoint).

    This is interesting because ownCloud already has ‘ownCloud Documents’ that has collaborative features.

  19. Tomi Engdahl says:

    Owen Thomas / ReadWrite:
    Slack teams with Howdy to launch Botkit, an open-source framework for building apps for Slack

    Slack Is Releasing Botkit To Make Bots Easier To Build
    And a directory, so you can find them.
    http://readwrite.com/2015/12/16/slack-botkit-platform

    Slack, the increasingly popular team-messaging software, wants more developers to build apps that hook into its work-chat software.

    So it’s announcing new software, Botkit, to simplify the building of such apps; a directory to make it easier to find them; and an investment fund to back developers, particularly ones building apps solely for Slack.

    Slack is also planning to reveal that it now has 2 million daily active users. While not all of those pay for the service, that represents a healthy audience for app developers, particularly ones that must identify groups of people working together as teams.

    In other words, Slack is putting together all the pieces needed for a successful platform: distribution, exposure, and tools.
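
    Botkit itself is a Node.js framework, but the platform mechanics underneath are plain HTTP: a bot is ultimately just code calling Slack’s Web API. A minimal sketch in Python (the token and channel are placeholders):

        import os
        import requests

        # Post a message as a bot via Slack's chat.postMessage Web API method.
        resp = requests.post("https://slack.com/api/chat.postMessage", data={
            "token": os.environ["SLACK_API_TOKEN"],  # bot token issued by Slack
            "channel": "#standup",
            "text": "Good morning! What are you all working on today?",
        })
        print(resp.json()["ok"])  # Slack returns {"ok": true} on success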

    For Slack’s App Builders, The Message Is The Platform
    And the conversation is the interface.
    http://readwrite.com/2015/10/20/slack-bots-howdy-message-platform

    Specifically, messages in Slack, the popular team chat software that’s taken over workplaces from Manhattan’s media towers to the startup lofts of SoMa.

    Brown’s company has just released Howdy, a bot for Slack which automates functions like gathering status updates for team meetings.

    Will Slack become a prolific platform the way, say, Windows was in the 1990s or Facebook and the iPhone were in 2008, spawning a range of new companies? So far, the company’s making the right moves.

    To date, most of the apps built on Slack that I’ve heard about have been for internal consumption, like Blossom, an app which recommends stories for New York Times social media editors to share.

    Other apps treat Slack like an extension of their existing services—one interface among many. Asana, a task-management app, has built an integration that lets Slack users send simple commands to create or update tasks in Asana, without having to leave Slack.

    What’s key here is that users are interacting with software bots in much the same way they chat with colleagues, turning work into an ongoing conversation.

    And as bots get smarter, the distinction between our human coworkers and our digital ones gets harder to spot. And, perhaps, less meaningful.

    Slack’s Underwood is bullish on bots.

    “The potential for bots is really broad,” she told me. “I expect that expense-reporting software will have a bot in Slack. HR processes will have a bot in Slack. It’s not just optimizing small tasks. It’s as effective as having a team member.”

    Slack is far from the only place where you may not be sure whether you’re chatting with bots or humans.

    Facebook is testing a bot-powered digital-assistant service in Messenger called M. Tell M what you want—for instance, to buy a shirt as a gift for your spouse—and it attempts to handle it with artificial intelligence. If the algorithms fail, humans step in.

    Clara Labs makes a service called Clara, whose favored medium is email.

    WeChat, the chat service owned by Chinese Internet giant Tencent, has seen an explosion of bots on the service, including useful ones that help you hunt for jobs or buy clothes.

    For Brown, though, Slack is a particularly bot-friendly environment.

    Reply
  20. Tomi Engdahl says:

    Breakthrough In Automatic Handwritten Character Recognition Sans Deep Learning
    http://tech.slashdot.org/story/15/12/16/0053251/breakthrough-in-automatic-handwritten-character-recognition-sans-deep-learning

    Researchers from NYU, UToronto and MIT have come up with a technique that captures human learning abilities for a large class of simple visual concepts to recognize handwritten characters from the world’s alphabets. Their computational model (abstract) represents concepts as simple programs that best explain observed examples under a Bayesian criterion.

    This AI Algorithm Learns Simple Tasks as Fast as We Do
    http://www.technologyreview.com/news/544376/this-ai-algorithm-learns-simple-tasks-as-fast-as-we-do/

    Software that learns to recognize written characters from just one example may point the way towards more powerful, more humanlike artificial intelligence.
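
    The paper’s model induces small programs; as a much simpler illustration of what “classifying under a Bayesian criterion from one example per class” means, here is a toy Bernoulli-grid classifier in Python. This is a sketch of the general idea only, not the authors’ method:

        import numpy as np

        rng = np.random.default_rng(0)

        def log_likelihood(image, prototype, smooth=0.3):
            # Smooth each pixel probability toward 0.5 so a single
            # training example doesn't assign zero likelihood anywhere.
            p = prototype * (1 - smooth) + 0.5 * smooth
            return np.sum(image * np.log(p) + (1 - image) * np.log(1 - p))

        # One binary 8x8 "training example" per character class.
        prototypes = {c: (rng.random((8, 8)) < 0.4).astype(float) for c in "abc"}

        def classify(image):
            # Uniform prior over classes, so MAP = maximum likelihood.
            return max(prototypes, key=lambda c: log_likelihood(image, prototypes[c]))

        # A noisy copy of 'b' should still be recognized as 'b'.
        test = prototypes["b"].copy()
        test[0, 0] = 1 - test[0, 0]
        print(classify(test))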

    Reply
  21. Tomi Engdahl says:

    IT infrastructure on demand? Yeah right, say devs
    The gritty reality of ops, according to sandbox vendor
    http://www.theregister.co.uk/2015/12/16/qualisystems_survey_disses_it_ops/

    IT operations remain completely out of touch with the needs of developers, with CIOs duped into believing a dusting of VMware magic will allow them to construct the sort of whitebox data factories that power the likes of Google, sandbox vendor QualiSystems has declared.

    The vendor’s CTO Joan Wrabetz took aim at the industry’s collective self-delusion while unveiling the results of a survey of end-users at this year’s US and European VMworld events, which she said showed IT operations were utterly out of touch with what developers, users and businesses actually need.

    The survey found that three-quarters of organisations take at least eight hours to deliver infrastructure to end users, while 43 per cent take more than a week. In 17 per cent of cases, developers can be twiddling their thumbs for more than a month while they wait for ops to carve out some infrastructure for that latest must-have release.

    Reply
  22. Tomi Engdahl says:

    Intel talks concurrency and Knights Landing
    James Reinders explains why Intel’s Xeon Phi is now a processor
    http://www.theregister.co.uk/2015/12/16/intel_talks_concurrency_and_knights_landing/

    Reinders is talking up Knights Landing, the next generation of Xeon Phi, Intel’s MIC (many integrated core) processors, which are designed for high-performance concurrent programming.

    The first Xeon Phi, Knights Corner, was released in 2012 and had up to 61 cores. 48,000 of the chips are installed in the world’s most powerful supercomputer, China’s Tianhe-2.

    Knights Landing has up to 72 cores, but the more significant difference is that the new Xeon Phi is a processor rather than a co-processor. Co-processors use a host/device programming model, where an application running on the host (the CPU) offloads compute-intensive tasks to the device (the co-processor), with huge potential speed-ups. Nvidia’s Tesla range of GPU accelerator boards (installed in the Titan, the world’s second most powerful supercomputer) also use this model.

    Processor versus co-processor

    Why did Intel go the co-processor route with Knights Corner, but is now changing tack? “One issue was software,” says Reinders. “[Knights Corner] being a co-processor fitted with a mould that people seemed to be more ready for. The other thing was a bit of legacy. The cluster on a chip design came from Larrabee, a project for something else that we didn’t bring to market. We could introduce a co-processor faster. It was an engineering trade-off.

    “In a co-processor you can control your ecosystem more: everything that runs on it we had control of. The host was standard. We weren’t quite ready to understand how 512-bit vectors should be done on a processor.

    “Personally I was, I’ll deal with this co-processor and where it is taking us, but I can’t wait for Knights Landing.”

    From the programmer’s perspective, a processor is easier to code for since you no longer have to worry about the host/device boundary. “Co-processors have a big issue, a controlling program that already has the data, but has to ship the data over to the co-processor. You buy the memory twice. You have memory on the host that stores the data, then you transfer it to the memory on the card,” says Reinders.
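
    The “buy the memory twice” point is easy to see in any co-processor API. Here is a PyCUDA sketch of Nvidia’s offload model (analogous to, not the same as, the Knights Corner offload path; it requires an Nvidia GPU and the pycuda package):

        import numpy as np
        import pycuda.autoinit        # sets up a CUDA context on the device
        import pycuda.driver as cuda

        a = np.random.randn(4000000).astype(np.float32)  # host allocation: paid once
        a_gpu = cuda.mem_alloc(a.nbytes)                 # device allocation: paid again
        cuda.memcpy_htod(a_gpu, a)    # explicit host-to-device transfer over PCIe
        # ... kernel launch would go here ...
        cuda.memcpy_dtoh(a, a_gpu)    # results must be shipped back as well

    On a plain processor such as Knights Landing, the array would simply live in one memory and both “sides” of the code would touch it directly.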

    Why Fortran is great for supercomputing

    Supercomputing and concurrent programming is not just about the hardware. Getting results means writing well-optimised code, and that has proved to be the harder problem. Enter Fortran. “While computer science may have abandoned Fortran, it still drives the scientific world,” says Reinders. “Fortran is a very good language for scientific programming. Fortran has grown up and some of the arguments against it have been rectified.”

    He is particularly enthusiastic about Coarray Fortran, which is designed for parallelism. “I consider it one of a few PGAS [partitioned global address space] technologies and they are pretty popular with a certain crowd. In particular, they are very popular on Cray Aries fabric, which is very low latency. When a programmer has to deal with moving data around it’s a pain, but you need some mechanism to make sure that it doesn’t move around too much. I’ve seen some beautiful programs written with Coarray Fortran. It’s not for everybody or every algorithm, but when it fits, it exposes when you are talking to remote memory and you can make sure you don’t do that too often.”

    Reply
  23. Tomi Engdahl says:

    Ari Levy / CNBC:
    Technology investors describe the cooling in Silicon Valley: lower valuations for top-tier companies, and trouble for weaker startups — Silicon Valley’s cash party is coming to an end — Silicon Valley is cooling, not crashing. Valuations are falling. The era of cheap money is over.

    Silicon Valley’s cash party is coming to an end
    http://www.cnbc.com/2015/12/17/silicon-valleys-cash-party-is-coming-to-an-end.html

    Silicon Valley is cooling, not crashing. Valuations are falling. The era of cheap money is over.

    Based on interviews with about two dozen venture capitalists and tech investors, 2016 is shaping up to be a year of reckoning for scores of technology start-ups that have yet to prove out their business models and equally challenging for those that raised money at unjustifiably high prices.

    “It’s been surprising to see how quickly valuation expectations are recalibrating,”

    Reply
  24. Tomi Engdahl says:

    Unity Benchmarks Browser WebGL Performance
    http://tech.slashdot.org/story/15/12/17/1648249/unity-benchmarks-browser-webgl-performance

    Jonas Echterhoff from Unity has posted the latest Unity WebGL benchmark results on the Unity blog.

    Updated WebGL Benchmark Results
    http://blogs.unity3d.com/2015/12/15/updated-webgl-benchmark-results/

    A bit over a year ago, we released a blog post with performance benchmarks for Unity WebGL, to compare WebGL performance in different browsers. We figured it was time to revisit those benchmarks to see how the numbers have changed.

    Microsoft has since released Windows 10 with their new Edge browser (which supports asm.js and is now enabling it by default) – so we were interested to see how that competes. Also, we have an experimental build of Unity using Shared Array Buffers to run multithreaded code, and we wanted to see what kind of performance gains to expect. So we tested this in a nightly build of Firefox with Shared Array Buffer support.

    Some findings:

    Firefox 42 64-bit is currently the fastest shipping browser in most of the benchmarks. The 32-bit version of Firefox is noticeably slower than the 64-bit version.
    Edge, as a new contender in these benchmarks, comes in second, with results close to Firefox.

    Safari delivers performance comparable to Chrome.

    Internet Explorer 11 is far behind the pack in just about everything, and is too slow to be of much use running Unity WebGL content.

    Reply
  25. Tomi Engdahl says:

    New WTO Trade Deal Will Exempt IT-Related Products From Import Tariffs

    http://news.slashdot.org/story/15/12/17/2251217/new-wto-trade-deal-will-exempt-it-related-products-from-import-tariffs

    Under an agreement finalized Wednesday that applies to all 192 member countries of the World Trade Organization (WTO), tariffs on imports of consumer electronics will be phased out over 7 years starting in July 2016. The agreement affects around 10 percent of the world trade in information and communications technology products and will eliminate around $50 billion in tariffs annually.

    All your consumer tech gear should be cheaper come July: that’s the end-date for import tariffs

    http://www.cio.com/article/3016368/all-your-consumer-tech-gear-should-be-cheaper-come-july-thats-the-end-date-for-import-tariffs.html

    Reply
  26. Tomi Engdahl says:

    Assembly of tech giants convene to define future of computing
    ‘Cloud natives’ include two-year-old Docker, 104-year-old IBM
    http://www.theregister.co.uk/2015/12/18/cloud_native_computer_cloud_native/

    A flurry of the tech world’s great and good signed up to the Cloud Native Computing Foundation yesterday, and kicked off a technical board to review submissions – which will be tested and fattened up on a vast Intel-based “computer farm”.

    Vendors declared their intent to form the Cloud Native Computing Foundation (CNCF) earlier this year, under the auspices of the Linux Foundation. Just to avoid confusion, the (cloud native) foundation reckons “Cloud native applications are container-packaged, dynamically scheduled and microservices-oriented”.

    Hence the foundation said it “seeks to improve the overall developer experience, paving the way for faster code reuse, improved machine efficiency, reduced costs and increases in the overall agility and maintainability of applications”.

    Reply
  27. Tomi Engdahl says:

    A Neural Conversational Model
    http://arxiv.org/pdf/1506.05869v2.pdf

    Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules.

    In this paper, we show that a simple language model based on the seq2seq framework can be used to train a conversational engine. Our modest results show that it can generate simple and basic conversations, and extract knowledge from a noisy but open-domain dataset. Even though the model has obvious limitations, it is surprising to us that a purely data driven approach without any rules can produce rather proper answers to many types of questions.
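
    For contrast with the paper’s end-to-end seq2seq model, the crudest possible data-driven responder is retrieval: reply with whatever followed the most similar utterance in a corpus. A toy Python sketch (explicitly not the paper’s method, and with an invented three-line corpus):

        import difflib

        corpus = [
            ("hello", "hi, how can i help you?"),
            ("my browser crashed", "have you tried restarting it?"),
            ("thanks", "you are welcome"),
        ]

        def reply(prompt):
            # Pick the stored prompt most similar to the input, return its reply.
            best = max(corpus, key=lambda pair: difflib.SequenceMatcher(
                None, prompt, pair[0]).ratio())
            return best[1]

        print(reply("hello there"))  # -> "hi, how can i help you?"

    The seq2seq result is notable precisely because it generates responses word by word instead of looking them up like this.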

    Reply
  28. Tomi Engdahl says:

    Microsoft’s biggest problem, in a single chart
    http://money.cnn.com/2015/12/18/technology/most-used-apps-2015/

    The 10 most-used apps of the year in the U.S. were all made by three companies — Facebook, Google, and Apple.

    Notably absent is Microsoft, one of the largest software makers in the world by revenue.

    Microsoft cares about lists like this for an important reason. They paint a very clear picture of what people need and want their phones to do — and it’s not the stuff that Microsoft builds.

    To its credit, Microsoft (MSFT, Tech30) has been doing a commendable job of building a mobile-friendly bundle of products, including free versions of its uber-popular Office 365 suite of apps.

    With Windows 10, apps that developers write for the PC will also work on Microsoft’s phones. That could help the platform bust out of its longstanding “chicken and egg” problem: Too few people have Windows phones for developers to care about making apps for the platform, and customers don’t want to buy Windows phones because they don’t have enough apps.

    But Microsoft is one of the biggest software companies on the planet. To not be in the top 10 on mobile has got to hurt.

    Google and Apple’s presence on the list makes sense, since their apps are native to their phones.

    Microsoft should be encouraged by Facebook’s success.

    Without its own mobile operating system, Facebook is still dominant on mobile.

    Reply
  30. Tomi Engdahl says:

    Improving UI and UX: Changing the “Open Source Is Ugly” Perception
    http://news.slashdot.org/story/15/12/20/1713232/improving-ui-and-ux-changing-the-open-source-is-ugly-perception

    For four years, Garth Braithwaite has been working at Adobe on open source projects as a design and code contributor. In addition to his work at the company, he also speaks at conferences about the power of design, improving designer-developer collaboration, and the benefits of open source. Still, he argues that the user experience is weak in many open source projects. One of the largest contributing factors is the lack of professional designers contributing to open source projects.

    Open source is ugly: Improving UI and UX
    https://opensource.com/life/15/9/ato-interview-garth-braithwaite

    Garth Braithwaite is a designer turned engineer turned hybrid of the two. He has worked as an engineer and user experience designer on several award-winning sites, applications, and open source projects.

    Why is UX bad in so many open source projects?

    There are a lot of reasons, but one of the largest contributing factors is the lack of professional designers contributing to open source projects. Compounding the lack of designers, there is also a lack of collaborative and open source design workflows. Secondary to that, there are open source project owners who are unaware of the value of design or are unsure where to start with the design process.

    How important is it for an open source project to have a good UI and UX?

    Not all open source projects need more UX or UI than they currently have. Oftentimes developers build open source projects that are aimed at other developers, so they are able to consider the needs of the end user without additional design assistance. The problem occurs when the open source project is being used by an outside demographic, including by developers of a lower experience level. In these cases, good user experience design contributions will help define the target audience—their needs, struggles, and experience—and the recommended solutions for assisting the users.

    Good user interface and brand design can also help establish a consistent experience across the project and help attract new contributors.

    Is it easy to attract designers to participate in open source projects?

    No. Oftentimes it is easier to find open source developers who also have design experience.

    Are there notable open source projects with good UI and UX?

    There are some great ones out there—particularly ones that overlap somewhat with the design community, like Sass, Bower, Ember, and others. There is a great collection of open source projects with beautiful UI and UX at beautifulopen.com. There are also more mainstream examples like Firefox, VLC, Popcorn Time, and others.

    Reply
  31. Tomi Engdahl says:

    Edwin Evans-Thirlwell / Eurogamer.net:
    Former Rare game developers explain why Kinect stalled: technical limitations, lack of killer content, Microsoft’s tactical reversals in response to PS4, more

    Rare and the rise and fall of Kinect
    Cancelled games, Xbox 180s and Kinect Sports – the story of a revolution that didn’t quite take off.
    http://www.eurogamer.net/articles/2015-12-16-rare-kinect-rise-and-fall

    Reply
  32. Tomi Engdahl says:

    The Angelbird Wings PX1 M.2 Adapter Review: Do M.2 SSDs Need Heatsinks?
    by Billy Tallis on December 21, 2015 8:00 AM EST
    http://www.anandtech.com/show/9856/angelbird-wings-px1-m2-adapter-review-do-ssds-need-heatsinks

    The M.2 form factor has quickly established itself as the most popular choice for PCIe SSDs in the consumer space. The small size easily fits into most laptop designs, and the ability to provide up to four lanes of PCI Express accommodates even the fastest SSDs. By comparison, SATA Express never caught on and never will due to its two-lane limitation. And the more recent U.2 (formerly SFF-8639) does have traction, but has seen little adoption in the client market.

    Meanwhile, although M.2 has its perks it also has its disadvantages, often as a consequence of space. The limited PCB area of M.2 can constrain capacity: Samsung’s single-sided 950 Pro is only available in 256GB or 512GB capacities while the 2.5″ SATA 850 Pro is available in up to 2TB. And for Intel, the controller used in their SSD 750 is outright too large for M.2, as it’s wider than the most common M.2 form factor (22mm by 80mm). Finally and most recently, as drive makers have done more to take advantage of the bandwidth offered by PCIe, a different sort of space limitation has come to the fore: heat.

    When testing the Samsung SM951 we found that our heavier sustained I/O tests could trigger thermal throttling that would periodically restrict the drive’s performance.

    Austrian SSD manufacturer Angelbird has released a new PCIe to M.2 adapter: the Wings PX1. With an aluminum heatsink and thermal pads to conduct heat away from both sides of an SSD, the PX1 aims to provide M.2 drives with the same cooling capacity that larger add-in card SSDs have.

    Reply
  33. Tomi Engdahl says:

    Microsoft grabs ex-Google and Facebook brains for unstructured SQL engine
    Database and Cortana Analytics injection
    http://www.theregister.co.uk/2015/12/21/microsoft_buys_metanautix/

    Microsoft has bought VC-funded big-data SQL startup Metanautix, building what it calls a data compute engine.

    Metanautix was founded by former Google engineering director Theo Vassilakis and ex-Facebook senior software engineer Toli Lerios in 2012 and landed $7m from Sequoia Capital.

    Their creation was Quest, released in March for free as a single-user piece of software to search unstructured, structured and relational sources using SQL.

    The inspiration was a Google internal project called Dremel for ad-hoc queries on typically Google-scale infrastructure of thousands of CPUs and petabytes of data.

    Problem was, Dremel used non-standard SQL and only worked on Google systems. Vassilakis led Google’s work on Dremel and founded Metanautix with Lerios to bring the technology to a broader, enterprise audience.

    Microsoft will put the Metanautix technology into SQL Server and Cortana Analytics Suite, Redmond said. Financial terms of the deal were not revealed.
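
    As a rough analogy for what “SQL across unstructured and relational sources” buys you (this is plain SQLite in Python, not Metanautix’s Quest engine, and the data is invented), one can flatten a JSON log into a table and join it against relational data with ordinary SQL:

        import sqlite3

        # An "unstructured" source: JSON-style event records.
        events = [{"user": "alice", "bytes": 120}, {"user": "bob", "bytes": 80}]

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE users (name TEXT, dept TEXT)")
        con.executemany("INSERT INTO users VALUES (?, ?)",
                        [("alice", "eng"), ("bob", "sales")])
        con.execute("CREATE TABLE events (user TEXT, bytes INT)")
        con.executemany("INSERT INTO events VALUES (?, ?)",
                        [(e["user"], e["bytes"]) for e in events])

        # Once both sources look like tables, a plain SQL join works.
        query = """SELECT dept, SUM(bytes) FROM users
                   JOIN events ON name = user GROUP BY dept"""
        for row in con.execute(query):
            print(row)   # ('eng', 120), ('sales', 80)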

    Reply
  34. Tomi Engdahl says:

    Toshiba Reorg Cuts Consumer Unit
    Move highlights decline of notebooks
    http://www.eetimes.com/document.asp?doc_id=1328536&

    Toshiba slashed its consumer systems division in a broad reorganization that throws a spotlight on the decline of the consumer notebook business. The corporation will lay off 6,800 people or about 30% of the division designing and making PCs, TVs and appliances.

    The move is part of a restructuring program in the wake of an estimated loss of US$4.53 billion (550 billion yen) for its fiscal year ending in March.

    Reply
  35. Tomi Engdahl says:

    Startup Preps Big Data Processor
    Deep learning chip plugs into cloud service
    http://www.eetimes.com/document.asp?doc_id=1328523&

    Nervana Systems is on the cusp of rolling out a microprocessor designed for big data analytics. The startup’s work is one of a handful of efforts aiming to accelerate deep neural networks in hardware for a variety of recognition tasks.

    Engineers are racing to develop and accelerate algorithms that find patterns in today’s flood of digital data. Nervana believes it has an edge with a novel processor it hopes to have up and running in its own cloud service late next year.

    http://www.nervanasys.com/

    Reply
  36. Tomi Engdahl says:

    ZFS Replication To the Cloud Is Finally Here and It’s Fast
    http://slashdot.org/story/15/12/22/026209/zfs-replication-to-the-cloud-is-finally-here-and-its-fast

    Jim Salter at arstechnica provides a detailed, technical rundown of ZFS send and receive and compares it to traditional remote syncing and backup tools such as rsync. He writes: ‘In mid-August, the first commercially available ZFS cloud replication target became available at rsync.net.’

    rsync.net: ZFS Replication to the cloud is finally here—and it’s fast
    Even an rsync-lifer admits ZFS replication and rsync.net are making data transfers better.
    http://arstechnica.com/information-technology/2015/12/rsync-net-zfs-replication-to-the-cloud-is-finally-here-and-its-fast/

    In mid-August, the first commercially available ZFS cloud replication target became available at rsync.net. Who cares, right? As the service itself states, “If you’re not sure what this means, our product is Not For You.”

    Of course, this product is for someone—and to those would-be users, this really will matter. Fully appreciating the new rsync.net (spoiler alert: it’s pretty impressive!) means first having a grasp on basic data transfer technologies. And while ZFS replication techniques are burgeoning today, you must actually begin by examining the technology that ZFS is slowly supplanting.

    Revisiting a first love of any kind makes for a romantic trip down memory lane, and that’s what revisiting rsync—as in “rsync.net”—feels like for me.

    Rsync is a tool for synchronizing folders and/or files from one location to another. Adhering to true Unix design philosophy, it’s a simple tool to use. There is no GUI, no wizard, and you can use it for the most basic of tasks without being hindered by its interface. But somewhat rare for any tool, in my experience, rsync is also very elegant. It makes a task which is humanly intuitive seem simple despite being objectively complex.

    You can go further and further down this rabbit hole of “what can rsync do.” Inline compression to save even more bandwidth? Check. A daemon on the server end to expose only certain directories or files, require authentication, only allow certain IPs access, or allow read-only access to one group but write access to another? You got it. Running “rsync” without any arguments gets you a “cheat sheet” of valid command line arguments several pages long.

    If rsync’s so great, why is ZFS replication even a thing?

    This really is the million dollar question. I hate to admit it, but I’d been using ZFS myself for something like four years before I realized the answer. In order to demonstrate how effective each technology is, let’s go to the numbers. I’m using rsync.net’s new ZFS replication service on the target end and a Linode VM on the source end. I’m also going to be using my own open source orchestration tool syncoid to greatly simplify the otherwise-tedious process of ZFS replication.

    Time-wise, there’s really not much to look at. Either way, we transfer 1GB of data in two minutes, 36 seconds and change. It is a little interesting to note that rsync ate up 26 seconds of CPU time while ZFS replication used less than three seconds, but still, this race is kind of a snoozefest.

    what happens if we change it just enough to force a re-synchronization?

    Now things start to get real. Rsync needed 13 seconds to get the job done, while ZFS needed less than two. This problem scales, too. For a touched 8GB file, rsync will take 111.9 seconds to re-synchronize, while ZFS still needs only 1.7.
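
    The asymmetry comes from where the diffing happens: rsync must walk and checksum the files on every run, while ZFS already knows exactly which blocks changed between two snapshots. A sketch of both invocations from Python (the hosts, pool and dataset names are made up; rsync’s -a/-z and zfs snapshot/send -i/receive are standard options):

        import subprocess

        # rsync: scans the whole tree each run and diffs files over the wire.
        subprocess.run(["rsync", "-az", "/tank/data/", "backup@example.net:data/"],
                       check=True)

        # ZFS: snapshot, then send only the blocks changed since the last snapshot.
        subprocess.run(["zfs", "snapshot", "tank/data@today"], check=True)
        subprocess.run(
            "zfs send -i tank/data@yesterday tank/data@today"
            " | ssh backup@example.net zfs receive tank/backup",
            shell=True, check=True,
        )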

    Reply
  37. Tomi Engdahl says:

    Accelerating your programs using OpenACC 2.0 with GCC
    https://www.mentor.com/embedded-software/events/accelerating-your-programs-using-openacc-2-0-with-gcc?contactid=1&PC=L&c=2015_12_21_embedded_technical_news

    The OpenACC Application Programming Interface defines a collection of compiler directives to annotate loops and regions of code in standard C, C++ and Fortran to be offloaded from the host CPU to an attached heterogeneous accelerator device for parallelized execution. Support for OpenACC 2.0 will be available in Mentor Graphics’ Sourcery CodeBench (OpenACC/PTX) Lite and in the upcoming release of GCC 6. This webinar will review the basics of OpenACC and demonstrate the use of OpenACC 2.0 with GCC for NVIDIA GPUs.

    Reply
  38. Tomi Engdahl says:

    CIOs, what does your nightmare before Christmas look like?
    Graveyards are full of IT pros once thought irreplaceable
    http://www.theregister.co.uk/2015/12/22/cios_worst_real_life_disasters/

    No single point of failure… and other jokes

    We met in the shadow of Telecity going titsup, taking out VOIP, hosting and Amazon Web Services to a large bunch of customers. The cloud behind the silver lining is that Amazon or any other cloud vendor can be as fault tolerant, distributed and well supported as you like, but if a service like Akamai or Cloudflare was to die, you still stop.

    That’s not a single point of failure in the classical sense of a standalone database server but it’s really hard to manage unless you go for full cloud agnosticism, which pushes up costs of development and delivery times. This is hard to justify when their failure rate is so low, so the irony is that the reliability of the content delivery networks means fewer businesses work out what to do if they fail.

    Oh, and no one seems to test their mission-critical data centre properly, because it’s mission critical. Our IT execs shared a good laugh about the idea that any CTO would really “see what happens if this fails” if he had any doubt that the power/aircon/network might actually failover, since crashing it would be a career-changing event. So they just over-specify where they can and cross their fingers.

    This means that you pay twice for some things and get half the coverage for other vulnerabilities.

    Reply
  39. Tomi Engdahl says:

    Seagate: Hard Disk Drives Set to Stay Relevant for 20 Years
    by Anton Shilov on December 18, 2015 10:00 AM EST
    http://www.anandtech.com/show/9858/seagate-hard-disk-drives-set-to-stay-relevant-for-20-years

    The very first hard disk drives (HDDs) were demonstrated by IBM back in 1956 and by the early 1980s they became the dominant storage technology for all types of computers. Some say hard drives are no longer relevant as solid-state drives offer higher performance. According to Seagate Technology, HDDs will remain in the market for at least 15 to 20 years. In a bid to remain the primary bulk storage device for both clients and servers, hard drives will adopt a multitude of technologies in the coming decade.

    “I believe HDDs will be around for at least 15 years to 20 years,” said David Morton, chief financial officer of Seagate, at the Nasdaq 33rd Investor Program Conference.

    Sales of HDDs Decrease, But Technology Keeps Evolving

    Sales of hard disk drives have been decreasing for several years now. Total available market of HDDs dropped to 118 million units in the third quarter of 2015, according to estimates by Seagate Technology and Western Digital Corp. By contrast, various makers of hard drives sold approximately 164 million units in Q3 2010, the two leading manufacturers claim.

    Shipments of HDDs decrease due to a variety of factors nowadays, including growing popularity of solid-state drives (SSDs), drop of PC sales, increasing usage of cloud storage and so on. Nonetheless, HDDs remain the most popular data storage technology, which is also the cheapest in terms of per-gigabyte costs. While SSDs are generally getting more affordable, high-capacity solid-state drives are not going to become as inexpensive as hard drives any time soon. As a result, HDDs will remain a key bulk storage technology for a long time.

    Reply
  40. Tomi Engdahl says:

    Engauge Makes Graph Thieving a Cinch
    http://hackaday.com/2015/12/22/engauge-makes-graph-thieving-a-cinch/

    We’ve seen ’em before: the charts and graphs in poorly photocopied ’80s datasheets, ancient research papers, or even our college prof’s chalkboard chicken scratch. Sadly, this marvelously plotted data is locked away in a poorly rendered png or textbook graphic. Fortunately, a team of programmers has come to the rescue to give us the proper thieving tool to lift that data directly from the source itself, and that tool is Engauge.

    Engauge is an open source software tool that enables you to convert pictures of plots into a numerical representation of their data. While some of us might still be tracing graphs by hand, Engauge enables us to simply define reference points on the graph, and a clever image-processing algorithm extracts the curve for us automatically!

    http://digitizer.sourceforge.net/

    This open source, digitizing software converts an image file showing a graph or map, into numbers. The image file can come from a scanner, digital camera or screenshot. The numbers can be read on the screen, and written or copied to a spreadsheet.

    The process starts with an image file containing a graph or map. You add three axis points to define the axes, then other points along your curve(s). The final result is digitized data that can be used by other tools such as Microsoft Excel and Gnumeric.
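
    The calibration step behind those three axis points is just an affine transform fitted to the reference coordinates. A minimal numpy sketch of the idea (pixel and data coordinates invented for illustration):

        import numpy as np

        # Three reference points: (pixel_x, pixel_y) -> (data_x, data_y).
        pixels = np.array([[50, 400], [450, 400], [50, 40]], dtype=float)
        data = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 5.0]])

        # Solve data = [pixel_x, pixel_y, 1] @ coeffs for the affine transform.
        P = np.hstack([pixels, np.ones((3, 1))])
        coeffs, *_ = np.linalg.lstsq(P, data, rcond=None)

        def pixel_to_data(x, y):
            return np.array([x, y, 1.0]) @ coeffs

        print(pixel_to_data(250, 220))  # a point mid-plot -> approx (5.0, 2.5)

    Every point you click along a curve then goes through the same mapping to produce the digitized data.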

    Why Would You Need This Tool?

    Here are some real-life examples:

    You are an engineer with some graphs in decades-old documents, but you really need the numbers represented in those graphs so you can do analyses that will determine if a space vehicle is safe to fly.
    You are a graduate student gathering historical data from charts for your thesis.
    You are a webmaster with visitor statistics charts and you want to do statistical analyses.
    You ride a bike or boat and want to know how much distance you covered in your last trip, but you do not have an odometer or GPS unit. However, you do have a map.

    Reply
  41. Tomi Engdahl says:

    Converged systems market cracks $10bn a year says IDC
    And the winner is … NetApp? Yup, when you look at the market one way
    http://www.theregister.co.uk/2015/12/23/converged_systems_market_cracks_10bn_a_year_says_idc/

    Revenue for converged systems has cracked the US$10 billion dollar barrier over the last 12 months, according to the box-and-cash-counting experts at analyst firm IDC.

    The firm’s new Worldwide Quarterly Converged Systems Tracker for 2015’s third quarter found revenue up 6.2 per cent year over year to $2.5 billion, and 1,261 petabytes of new storage capacity shipments during the quarter. That’s up 34.8 per cent compared to 2014’s third quarter.

    IDC counts three types of converged infrastructure, starting with integrated systems like those offered by Oracle and VCE with everything in the box ready to go. VCE leads that market with quarterly revenue of $483m and 27.8 per cent of the market. HP and Oracle trail, on 22.5 per cent and 21.3 per cent market share.

    The second category IDC tracks is certified reference systems, a field in which the Cisco/NetApp FlexPods come out on top with 45 per cent share ahead of EMC’s efforts and those of Hitachi Data Systems.

    The third category is hyperconverged systems that “provide all compute and storage functions through the same server-based resources.” Such systems have grabbed just under 11 per cent of the converged systems market. The firm’s not yet breaking out the winners in this category.

    Reply
  42. Tomi Engdahl says:

    Four years ago Canonical founder Mark Shuttleworth boasted that the Ubuntu operating system would reach 200 million users by the end of 2015. That target has not been realized. In fact, Ubuntu falls far short of it.

    The Ubuntu smartphone was certainly meant to be a big part of this plan, but so far Ubuntu phones have fewer than a million users.

    By Ubuntu’s own figures, the operating system is used on around 40 million desktops and laptops.

    Linux’s share of computers has remained virtually unchanged for many years. For example, Net Applications’ statistics show that 1.6 per cent of network address requests in November came from Linux computers.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=3786:ubuntu-jaa-kauaksi-tavoitteestaan&catid=13&Itemid=101

    Reply
  43. Tomi Engdahl says:

    White-boxer joins flash array wars: SanDisk teams up with Amazon supplier
    To InfiniFlash and beyond with Quanta Cloud Technology?
    http://www.theregister.co.uk/2015/12/23/white_boxer_joins_flash_array_wars_sandisk_quanta/

    A white boxer is working with SanDisk to flog flash arrays, making SanDisk even more desirable to WDC.

    Taiwan-based Quanta, actually Quanta Cloud Technology (QCT), along with Foxconn and SuperMicro, is a so-called “white box” computer supplier, making notebooks, servers and switches to be branded by its customers. Apple is one of its customers, and Amazon, Dell and HP are others.

    NAND component and system supplier SanDisk is being bought by WDC for $19bn, and has developed an InfiniFlash array, characterised as a JBOF, Just a Box of Flash, and, like a JBOD, lacking array controller hardware and software. Pricing is said to start at less than $1/GB for the raw flash box, before any data reduction software or hardware functionality is added.

    It is partnering with CloudByte, Nexenta and Tegile to develop complete controller HW + SW + JBOF systems usable by customers.

    An InfiniFlash array offers up to 512TB of capacity in a 3U enclosure, using 8TB InfiniFlash cards, meaning up to 6PB per rack. With flash chip density increasing we can expect this to double.

    This product combo might well meet NetApp/Solidfire and other scale-out flash arrays when being pitched to cloud service providers.

    Reply
  44. Tomi Engdahl says:

    Australian government urges holidaymakers to kill two-factor auth
    Um, not sure you thought this one through
    http://www.theregister.co.uk/2015/12/22/australian_government_twofactor_auth/

    The Australian government is urging its citizens to turn off two-factor authentication while abroad.

    The myGov website allows Australians to tap into a broad range of government services including tax payments, health insurance, child support, and so on. Since this tends to involve sensitive personal information, it’s wise to protect one’s account with two-factor authentication – such as a one-time code texted to a phone that needs to be given to the website while logging in.

    There’s a fear that while citizens are overseas, they may not be able to reliably get these text messages (or be charged an extra fee to receive them) if they try to use myGov. So the advice is: turn off this protection when out of the country, and turn it back on again when you return.

    Except, of course, that rather misses the entire reason for two-factor authentication, and puts convenience above the actual security of your information.

    What’s more, people are significantly more likely to be using online services in less secure settings when they are abroad, so removing a vital security mechanism makes it all the more likely that their accounts will be compromised.

    In other words, this is really terrible advice.

    The entire point of two-factor auth is to make it so that if someone manages to snatch a look at your username and password, they can’t automatically log into your account.

    As such, the Australian government is doing the exact opposite of what it should be doing, which is educating people about alternative ways to secure their accounts, rather than pushing the crazy message that security is about convenience and that you should simply drop it when it requires a little extra effort.
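
    One such alternative is app-based codes, which are generated offline on the phone and so sidestep the roaming-SMS problem entirely. For reference, a minimal Python sketch of how a standard time-based one-time password (RFC 6238, on top of RFC 4226’s HOTP) is computed from a shared secret (the secret below is an example value):

        import base64
        import hashlib
        import hmac
        import struct
        import time

        def totp(secret_b32, period=30, digits=6):
            """Compute a time-based one-time password from a base32 secret."""
            key = base64.b32decode(secret_b32)
            counter = int(time.time()) // period      # 30-second time step
            msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
            digest = hmac.new(key, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        print(totp("JBSWY3DPEHPK3PXP"))  # changes every 30 seconds

    No network is involved at code-generation time, which is exactly why this scheme works fine abroad.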

    Reply
  45. Tomi Engdahl says:

    Mary Jo Foley / ZDNet:
    Despite a steady inflow of new well-known apps, it remains unclear whether the Universal Windows Platform can maintain momentum with developers — Windows 10 app momentum: A flash in the pan or something sustainable? — There has been a steady trickle of Universal Windows Platform apps arriving in the Windows Store, as of late.

    Windows 10 app momentum: A flash in the pan or something sustainable?
    http://www.zdnet.com/article/windows-10-app-momentum-a-flash-in-the-pan-or-something-sustainable/

    There has been a steady trickle of Universal Windows Platform apps arriving in the Windows Store, as of late. Can Microsoft keep the developer momentum going?

    Reply
  46. Tomi Engdahl says:

    Dana Wollman / Engadget:
    Toshiba Radius 12 4K laptop review: great display, fast performance, comes with a generous selection of ports but has poor battery life and a finicky trackpad

    Toshiba Radius 12 review: A 4K laptop with compromises
    It has one of the best screens of any notebook. Too bad about the battery life, though.
    http://www.engadget.com/2015/12/24/toshiba-radius-12-review/

    It ticks off almost all the right boxes, with a 4K, Technicolor-certified screen option and a 2.9-pound design — particularly impressive for a convertible like this with a 360-degree hinge.

    Summary

    Toshiba’s Radius 12 convertible has the ingredients of a great ultraportable laptop, but unfortunately, it would seem that its greatest asset is also its greatest weakness: That gorgeous 4K, Technicolor-certified screen option results in worst-in-class battery life.

    Then you pick it up. The machine is so light that it nearly excuses the drab design.

    On a practical level, too, the chassis is home to a useful selection of ports, including a full-sized HDMI socket, two USB 3.0 connections, a smaller USB Type-C port, a full-sized SD card reader, a headphone jack and a volume rocker for when the device is in tablet mode. Compare that to the MacBook, which makes do with one measly USB Type-C connection, and doesn’t even come with a dongle in the box.

    So far in our tour we haven’t yet powered on the Radius 12, but now would be a good time: The optional 4K display is likely the reason you’re considering buying this in the first place. The glass stretches virtually from edge to edge, with the skinniest of bezels acting as a nominal buffer between the display and the rest of the machine. I remain unconvinced that 3,840 x 2,160 resolution is necessary on a display this small — a slightly lower pixel count would still look sharp and would be less devastating on battery life, and there’s not yet much 4K content to watch anyway. Even so, there’s no question that the pixel density helps make the screen as gorgeous as it is.

    Reply
  47. Tomi Engdahl says:

    Jordan Novet / VentureBeat:
    Debian founder and Docker employee Ian Murdock has died at 42 — Docker today announced that Ian Murdock, a member of the startup’s technical staff and a former Sun and Salesforce employee known for founding the Debian Linux operating system, has passed away. He was 42.

    Debian founder and Docker employee Ian Murdock has died at 42
    http://venturebeat.com/2015/12/30/debian-founder-and-docker-employee-ian-murdock-has-died-at-42/

    Docker today announced that Ian Murdock, a member of the startup’s technical staff and a former Sun and Salesforce employee known for founding the Debian Linux operating system, has passed away. He was 42.

    A cause of death was not provided in the blog post announcing the news.

    “Ian helped pioneer the notion of a truly open project and community, embracing open design and open contribution; in fact the formative document of the open source movement itself (the Open Source Definition) was originally a Debian position statement,”

    In the past few years Docker has popularized Linux containers, and there is significance in his arrival at the startup, as he is highly regarded in the Linux world. Many Debian users responded to the Monday tweets with sympathy and support.

    In Memoriam: Ian Murdock
    http://blog.docker.com/2015/12/ian-murdock/

    Reply
  48. Tomi Engdahl says:

    Open Source Roles: Starters vs. Maintainers
    http://developers.slashdot.org/story/15/12/30/1611249/open-source-roles-starters-vs-maintainers

    Mozilla developer James Long has posted a sort of internal monologue on the difficulties of being a hobbyist open source project maintainer. He says, “I hugely admire people who give so much time to OSS projects for free. I can’t believe how much unpaid boring work is going on. It’s really cool that people care so much about helping others and the community. … There are two roles for any project: starters and maintainers. People may play both roles in their lives, but for some reason I’ve found that for a single project it’s usually different people. Starters are good at taking a big step in a different direction, and maintainers are good at being dedicated to keeping the code alive.”

    Starters and Maintainers
    December 29, 2015
    http://jlongster.com/Starters-and-Maintainers

    I am definitely a starter. I tend to be interested in a lot of various things, instead of dedicating myself to a few concentrated areas. I’ve maintained libraries for years, but it’s always a huge source of guilt and late Friday nights to catch up on a backlog of issues.

    From now on, I’m going to be clear that code I put on github is experimental and I’m not going to respond to issues or pull requests. If I do release a production-ready library, I’ll already have someone in mind to maintain it. I don’t want to have a second job anymore. :)

    Here’s to all the maintainers out there. To all the people putting in tireless, thankless work behind-the-scenes to keep code alive, to write documentation, to cut releases, to register domain names, and everything else.

    Reply
