Computer trends 2018

IT seems to be growing again. Gartner forecasts worldwide IT spending will increase 4.5% this year to $3.68 trillion, driven by artificial intelligence, big data analytics, blockchain technology, and the IoT.

Digital transformations are fashionable. You won’t find an enterprise that isn’t leveraging some combination of cloud, analytics, artificial intelligence and machine learning to better serve customers or streamline operations. But here’s a hard truth about digital transformations: many are failing outright or are in danger of failing. Typical reasons for failure are not understanding what digital transformation actually means (different people understand it differently), lack of CEO sponsorship, talent shortages, and resistance to change. A technology-first approach to digital transformation is usually a recipe for disaster, and trying to push through a technically unfeasible transformation idea is another way to fail.

The digital era requires businesses to move with speed, and that is causing IT organizations to rethink how they work. A lot of IT is moving off premises to SaaS providers and the public cloud. Research outfit 451’s standout finding was that 60 per cent of the surveyed enterprises say they will run the majority of their IT outside the confines of enterprise data centres by the end of 2019. From cost containment to hybrid strategies, CIOs are getting more creative in taking advantage of the latest offerings and the cloud’s economies of scale.

In 2018 there is a growing software engineering talent shortage, in both quantity and quality. For the past nine years, software engineer has been among the hardest jobs to fill in the United States, and the same applies to many other countries, including Finland. Forrester projects that firms will pay 20% above market for quality engineering talent in 2018. Particularly in demand are data scientists, high-end software developers and information security analysts. There is a real need for well-educated, experienced engineers with a formal and deep understanding of software engineering. Recruiting and retaining tech talent remains IT’s biggest challenge today. Meanwhile, most CIOs are migrating applications to public cloud services, offloading operations and maintenance of computing, storage and other capabilities so they can reallocate staff to focus on what’s strategic to their business.

The enterprise is no longer at the center of the IT universe. Reports of the PC’s demise have been greatly exaggerated, and the long and painful decline in PC sales of the last half-decade has tailed off, at least momentarily. As sales of smartphones and tablets have risen, consumers have not stopped using PCs; they merely replace them less often. The FT reports that the PC is set to stage a comeback in 2018, after the rise of smartphones sent sales of desktop and laptop computers into decline in recent years. If that does not happen, the PC market could still return to growth in 2019. Either way, the PC is no longer seen as the biggest growth driver for chip makers: an extreme economic shift has chipmakers focused on hyperscale clouds.

Microservices are talked about a lot. Software built from microservices is easier to deliver and maintain than the big and brittle architectures of old, which were difficult to scale and could take years to build and deliver. Microservices are small and self-contained, and therefore easy to wrap up in a virtual machine or a container (though they don’t have to live in containers). Public cloud providers increasingly differentiate themselves through the features and services they provide. But it turns out that microservices are far from a one-size-fits-all silver bullet for IT challenges.
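
To illustrate what “small and self-contained” means in practice, here is a minimal sketch of a single-purpose microservice using only Python’s standard library. The endpoint, port and price data are invented for the example, not taken from any particular system.

    # Hypothetical single-purpose "price lookup" microservice: one small,
    # self-contained process exposing one HTTP endpoint.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    PRICES = {"widget": 9.99, "gadget": 19.99}  # illustrative in-memory data

    class PriceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expect paths like /price/widget
            parts = self.path.strip("/").split("/")
            if len(parts) == 2 and parts[0] == "price" and parts[1] in PRICES:
                body = json.dumps({"item": parts[1], "price": PRICES[parts[1]]}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Each microservice runs as its own process, trivially containerisable.
        HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()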

Containers will try to make a breakthrough again in 2018. Year 2017 was supposed to be the year of containers! It wasn’t? Oops. Maybe 2018 will be better. The still-immature tech has a bunch of growing up to do. The Linux Foundation’s Open Container Initiative (OCI) finally released two specifications that standardise how containers operate at a low level. The needle in 2018 will move towards containers running separately from VMs, or entirely in place of VMs. Kubernetes is gaining traction. It seems that containers are still at the point where the enterprise is waiting to embrace them.
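
As a tiny illustration of how low the barrier to entry is, running a throwaway workload in a container is a single CLI call. The sketch below drives the Docker CLI from Python; it assumes Docker is installed locally and uses a standard public image.

    # Run an ephemeral Alpine Linux container that prints its kernel info
    # and is removed again on exit (--rm). Assumes a local Docker install.
    import subprocess

    subprocess.run(
        ["docker", "run", "--rm", "alpine:3.7", "uname", "-a"],
        check=True,  # raise if the container run fails
    )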

Serverless will be talked about. Serverless computing is a cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Serverless architectures refer to applications that significantly depend on third-party services (known as Backend as a Service, or “BaaS”) or on custom code that’s run in ephemeral containers (Function as a Service, or “FaaS”), the best-known vendor host of which is currently AWS Lambda.
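
To make the FaaS idea concrete, here is a minimal sketch in the shape AWS Lambda expects of a Python function. The platform provisions an ephemeral environment, calls handler(event, context) per invocation, and tears it down later; the "name" event field is invented for the example.

    import json

    def handler(event, context):
        # 'event' carries the request payload; "name" is an illustrative field.
        name = (event or {}).get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }

    if __name__ == "__main__":
        # Local smoke test; on Lambda the platform supplies event and context.
        print(handler({"name": "FaaS"}, None))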

Automation is what everybody with many computers wants. Infrastructure automation creates and destroys basic IT resources such as compute instances, storage, networking, DNS, and so forth. Security automation helps keep systems secure. IT bosses want to create self-driving private clouds, but the journey to a self-driving cloud needs to be gradual: the vision makes sense, while the task of getting from here to there can seem daunting. DevOps automation with customer control covers automatic installation and configuration, integration that brings together AWS and VMware, workflow migration controlled by users, self-service provisioning based on user-defined templates, advanced machine learning to automate processes, and automated upgrades.
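
As a minimal sketch of what “creates and destroys basic IT resources” looks like in code, here is a hypothetical example using boto3, the AWS SDK for Python. The AMI ID and region are placeholders, not real values.

    # Create a tagged compute instance, then destroy it again.
    import boto3

    ec2 = boto3.resource("ec2", region_name="eu-west-1")  # placeholder region

    def create_instance():
        instance = ec2.create_instances(
            ImageId="ami-00000000000000000",  # placeholder AMI
            InstanceType="t2.micro",
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "managed-by", "Value": "automation-demo"}],
            }],
        )[0]
        instance.wait_until_running()
        return instance

    def destroy_instance(instance):
        instance.terminate()
        instance.wait_until_terminated()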

Linux is at the center of many cloud operations. Google and Facebook started building their own gear and loading it with their own software: Google has its own Linux called gLinux, and Facebook’s networking gear runs the Linux-based FBOSS operating system. Even Microsoft has developed its own Linux for cloud operations. Software-defined networking (SDN) is a very fine idea.

The memory business boomed in 2017 for both NAND and DRAM. The drivers for DRAM are smartphones and servers, while solid-state drives (SSDs) and smartphones are fueling the demand for NAND. The NAND market is expected to cool in Q1 from the crazy year 2017, but it is still growing well because demand keeps increasing. Memory, particularly DRAM, was long considered a commodity business.

Lots of 3D NAND will go into solid-state drives in 2018. IDC forecasts strong growth for the solid-state drive (SSD) industry as it transitions to 3D NAND. SSD industry revenue is expected to reach $33.6 billion in 2021, growing at a CAGR of 14.8%. The sizes of memory chips increase as layers are added to 3D NAND. The traditional mechanical hard disk, based on magnetic storage, is in a hard place in this competition, as the speed of flash-based SSDs is so superior.
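
As a back-of-the-envelope check on that forecast, the implied starting revenue can be computed from the CAGR, assuming (my assumption, not IDC’s stated baseline) that the 14.8% growth runs over the five years from 2016 to 2021:

    # Implied base-year SSD revenue, assuming five years of 14.8% CAGR
    # ending at $33.6B in 2021 (the five-year window is an assumption).
    target_2021 = 33.6  # billions of dollars
    base = target_2021 / (1 + 0.148) ** 5
    print(f"Implied 2016 SSD revenue: ${base:.1f}B")  # -> about $16.9B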

There is a search for faster memory, because modern computers, especially data-center servers that skew heavily toward in-memory databases, data-intensive analytics, and increasingly machine-learning and deep-neural-network training, depend on large amounts of high-speed, high-capacity memory to keep the wheels turning. Memory speed has not increased as fast as capacity: the access bandwidth of DRAM-based computer memory has improved by a factor of 20x over the past two decades, while capacity increased 128x during the same period. For 2018, DRAM remains a near-universal choice when performance is the priority. A search is on for a viable replacement for DRAM, but whether it’s STT-RAM, phase-change memory or resistive RAM, none of them can match the speed or endurance of DRAM.
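
Those two factors imply very different annual growth rates. A quick back-of-the-envelope calculation, assuming steady compounding over the stated two decades:

    # Implied compound annual growth rates over 20 years:
    # bandwidth grew 20x, capacity 128x (figures from the text above).
    years = 20
    bandwidth_cagr = 20 ** (1 / years) - 1   # ~16% per year
    capacity_cagr = 128 ** (1 / years) - 1   # ~27% per year
    print(f"bandwidth: {bandwidth_cagr:.1%} per year")  # -> 16.2%
    print(f"capacity:  {capacity_cagr:.1%} per year")   # -> 27.5%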

PCI Express 4.0 is ramping up. The PCI standards consortium PCI-SIG (Special Interest Group) has ratified and released the PCIe 4.0, Version 1 specification. Doubling PCIe 3.0’s 8 GT/s (~1 GB/s) of bandwidth per lane, PCIe 4.0 offers a transfer rate of 16 GT/s. The newest version of PCI Express will start appearing on motherboards soon. PCI-SIG has targeted Q2 2019 for releasing the finalized PCIe 5.0 specification, so PCIe 4.0 won’t be quite as long-lived as PCIe 3.0 has been: we’ll see PCIe 4.0 in use this year and PCIe 5.0 in 2019.
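
As a rough sanity check on those per-lane numbers, here is the usual conversion from transfer rate to usable bandwidth, assuming the 128b/130b line encoding PCIe has used since version 3.0:

    # Per-lane and x16 throughput (per direction) for recent PCIe generations.
    # GT/s counts raw transfers; 128b/130b encoding keeps 128/130 of the bits.
    def lane_gbytes_per_s(gt_per_s):
        return gt_per_s * (128 / 130) / 8  # bits -> bytes

    for gen, rate in {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}.items():
        per_lane = lane_gbytes_per_s(rate)
        print(f"{gen}: {per_lane:.2f} GB/s/lane, {per_lane * 16:.1f} GB/s x16")
    # PCIe 4.0 -> ~1.97 GB/s per lane, ~31.5 GB/s per direction for x16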

USB Type-C is on the way to becoming the most common PC and peripheral interface. The USB-C connector has become commonplace faster than any earlier interface. USB-C is already very common on smartphones, and the interface is also spreading on laptops. Sure, it will take some time before it is the most common. IHS estimates that USB-C connectors will reach almost five billion units in 2021.

It seems that the after-shocks of the Meltdown/Spectre processor vulnerabilities will be haunting us for quite a long time this year. It is now three weeks since The Register revealed the chip design flaws that Google later confirmed, and the world still awaits certainty about what it will take to get over the silicon slip-ups. The latest pieces of the farce have been Intel halting its Spectre/Meltdown CPU patches over unstable code, and Linux creator Linus Torvalds criticising Intel’s ‘garbage’ patches. Computer security will not be the same after all this has been sorted out.

What’s next with computing? IBM discusses AI, neural nets and quantum computing, and many can agree that those technologies will be important. Public cloud providers increasingly offer sophisticated flavours of data analysis, and increasingly machine learning (ML) and artificial intelligence (AI). Even central banks are using big data to help shape policy. Over the past few years, machine learning has evolved from an interesting new approach that lets computers beat champions at chess and Go into one that is touted as a panacea for almost everything. 2018 will be the start of what could be a longstanding battle between chipmakers to determine who creates the hardware that artificial intelligence lives on.

ARM-processor-based PCs are coming. Microsoft and Qualcomm jointly announced in early December that the first Windows 10 notebooks with ARM-based Snapdragon 835 processors will officially launch in early 2018, so more and more PCs with the ARM processor architecture will be hitting the market. Digitimes Research expects that ARM-based models may dominate the lower-end PC market, but don’t hold your breath on this. It is rumoured that wireless LTE connectivity will be incorporated into all the entry-level Windows 10 notebooks with ARM processors, branded by Microsoft as “always-connected devices.” HP and Asustek have already released some ARM-based notebooks with Windows 10 S.

Sources:
The software industry’s talent shortage worsens – growth continues

PC market set to return to growth in 2018

PC market could return to growth in 2019

PC sales grow for the first time in five years

USB-C is rapidly becoming more common

PCI-SIG Finalizes and Releases PCIe 4.0, Version 1 Specification: 2x PCIe Bandwidth and More

Hot Chips 2017: We’ll See PCIe 4.0 This Year, PCIe 5.0 In 2019

Serverless Architectures

Outsourcing remains strategic in the digital era

8 hot IT hiring trends — and 8 going cold

EDA Challenges Machine Learning

The Battle of AI Processors Begins in 2018

How to create self-driving private clouds

ZeroStack Lays Out Vision for Five-Step Journey to Self-Driving Cloud

2017 – the year of containers! It wasn’t? Oops. Maybe next year

Hyperscaling The Data Center

Electronics trends for 2018

2018’s Software Engineering Talent Shortage — It’s quality, not just quantity

Microservices 101

How Central Banks Are Using Big Data to Help Shape Policy

Digitimes Research: ARM-based models may dominate lower-end PC market

Intel Halts Spectre, Meltdown CPU Patches Over Unstable Code

Spectre and Meltdown: Linux creator Linus Torvalds criticises Intel’s ‘garbage’ patches

Meltdown/Spectre week three: World still knee-deep in something nasty

What’s Next With Computing? IBM discusses AI, neural nets and quantum computing.

The Week in Review: IoT

PCI Express 4.0 as Fast As Possible

Microsoft has developed its own Linux!

Microsoft Built Its Own Linux Because Everyone Else Did

Facebook has built its own switch. And it looks a lot like a server

Google has its own internal Linux

Is the writing on the wall for on-premises IT? This survey seems to say so

12 reasons why digital transformations fail

7 habits of highly effective digital transformations

857 Comments

  1. Tomi Engdahl says:

    When I’m 64: Toshiba Memory Corp woos data centres with a little TLC… SSD trio
    64-layer TLC 3D-NAND tech is caching on
    https://www.theregister.co.uk/2018/03/19/toshiba_data_centre_3d_nand_tlc_ssd_trio/

    Toshiba is making a play for expanded data centre flash drive sales with a trio of 64-layer 3D NAND products.

    The soberly named triad of drives – CD5, XD5 and HK6-DC (Toshiba doesn’t do catchy when it comes to enterprise drives) – provide PCIe and SATA interfaces, and 2.5-inch and M.2 form factors.

  2. Tomi Engdahl says:

    Linux Foundation backs new ‘ACRN’ hypervisor for embedded and IoT
    Intel tosses in code because data centre hypervisors are too bloated for embedded use
    https://www.theregister.co.uk/2018/03/19/acrn_hypervizor_prject/

    The Linux Foundation has announced a new hypervisor for use in embedded and internet of things scenarios.

    Project ACRN (pronounced “acorn”) will offer a “hypervisor, and its device model complete with rich I/O mediators.”

    There’ll also be “a Linux-based Service OS” and the ability to “run guest operating systems (another Linux instance, an RTOS, Android, or other operating systems) simultaneously”.

  3. Tomi Engdahl says:

    Announcing Microsoft DirectX Raytracing!
    https://blogs.msdn.microsoft.com/directx/2018/03/19/announcing-microsoft-directx-raytracing/

    3D Graphics is a Lie
    For the last thirty years, almost all games have used the same general technique—rasterization—to render images on screen. While the internal representation of the game world is maintained as three dimensions, rasterization ultimately operates in two dimensions (the plane of the screen), with 3D primitives mapped onto it through transformation matrices. Through approaches like z-buffering and occlusion culling, games have historically strived to minimize the number of spurious pixels rendered, as normally they do not contribute to the final frame. And in a perfect world, the pixels rendered would be exactly those that are directly visible from the camera

    It wasn’t long, however, until games began using techniques that were incompatible with these optimizations. Shadow mapping allowed off-screen objects to contribute to on-screen pixels, and environment mapping required a complete spherical representation of the world. Today, techniques such as screen-space reflection and global illumination are pushing rasterization to its limits, with SSR, for example, being solved with level design tricks, and GI being solved in some cases by processing a full 3D representation of the world using async compute. In the future, the utilization of full-world 3D data for rendering techniques will only increase.

    Today, we are introducing a feature to DirectX 12 that will bridge the gap between the rasterization techniques employed by games today, and the full 3D effects of tomorrow. This feature is DirectX Raytracing. By allowing traversal of a full 3D representation of the game world, DirectX Raytracing allows current rendering techniques such as SSR to naturally and efficiently fill the gaps left by rasterization, and opens the door to an entirely new class of techniques that have never been achieved in a real-time game.
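
    To make the contrast concrete, the core primitive ray tracing evaluates per ray is an intersection test against scene geometry. A minimal, DirectX-independent sketch in plain Python (purely illustrative, not the DXR API):

        # Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t >= 0.
        import math

        def ray_hits_sphere(origin, direction, center, radius):
            oc = [o - c for o, c in zip(origin, center)]
            a = sum(d * d for d in direction)
            b = 2.0 * sum(o * d for o, d in zip(oc, direction))
            c = sum(o * o for o in oc) - radius * radius
            disc = b * b - 4 * a * c
            if disc < 0:
                return None  # ray misses the sphere
            t = (-b - math.sqrt(disc)) / (2 * a)
            return t if t >= 0 else None

        # A ray cast down -z hits a unit sphere centered at (0, 0, -5) at t=4.
        print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))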

  4. Tomi Engdahl says:

    Google’s Linkage Of The Unity Engine With Google Maps Is A Game Changer
    https://www.forbes.com/sites/kevinmurnane/2018/03/16/googles-linkage-of-the-unity-engine-with-google-maps-is-a-game-changer/#1dd484e65af0

    Pokemon GO’s melding of AR sprites with real-world locations caused a sensation two years ago. Now Google threatens to revolutionize augmented reality gaming by offering developers the opportunity to build their games on Google Maps. This could be huge.

  5. Tomi Engdahl says:

    Amazon surpasses Alphabet in market value
    https://techcrunch.com/2018/03/20/amazon-surpasses-alphabet-in-market-value/?utm_source=tcfbpage&sr_share=facebook

    Amazon is currently the second biggest company in the world when it comes to market capitalization. The company is currently worth $763.27 billion (NASDAQ:AMZN) while Alphabet (NASDAQ:GOOG) is “only” worth $762.98 billion.

    Amazon has had an incredible quarter. Stock is up nearly 29 percent since early January. As for Alphabet, its shares have gone up and down.

    The only company that is currently more valuable than Amazon is Apple. There’s still quite a long way to reach Apple as Apple’s market capitalization is… $892 billion.

  6. Tomi Engdahl says:

    Continuous Integration: A “Typical” Process
    https://developers.redhat.com/blog/2017/09/06/continuous-integration-a-typical-process/?sc_cid=7016000000127ECAAY

    Continuous Integration (CI) is a phase in the software development cycle where code from different team members or different features are integrated together. This usually involves merging code (integration), building the application and carrying out basic tests all within an ephemeral environment.

    In the past, the code was integrated at an “integration phase” of the software development life cycle.

  7. Tomi Engdahl says:

    Why can data analytics go totally wrong (and harm your business)?
    https://motley.fi/must-reads/why-can-data-analytics-go-wrong

    Even if you have the best data analytics tools in the world, something might be missing and things can go wrong, as Facebook has experienced lately. What seems to be good for your company, isn’t always good for your company’s customers.

    Obviously, people are spending less and less time on Facebook, asking why they are using the service and what value they get from it.

    Facebook’s downturn and fleeing users show how data-driven thinking can badly mislead us. Facebook has among the best data analysis capabilities in the world, and it still can’t even understand what is good for its users.

    Of course, Facebook has given us unparalleled opportunities for interaction with each other, but on the other hand, it has sadly evolved into a platform of addiction.

  8. Tomi Engdahl says:

    ​The Raspberry Pi is the feel-good tech success that we really need
    http://www.zdnet.com/article/the-raspberry-pi-is-the-feel-good-tech-success-that-we-really-need/

    Silicon Valley take note: you can be successful in tech and make the world a better place, too.

  9. Tomi Engdahl says:

    TSMC to enter 7nm chip shipments for new Xilinx ACAP in 2019
    https://www.digitimes.com/news/a20180320PD215.html

    Xilinx has introduced a new product category called adaptive compute acceleration platform (ACAP) – a highly integrated multi-core heterogeneous compute platform – for big data and AI applications. The new product family will be developed using 7nm process technology at Taiwan Semiconductor Manufacturing Company (TSMC) and will tape out later this year, with customer shipments set to kick off in 2019, according to the FPGA chip vendor.

    Xilinx said ACAP goes far beyond the capabilities of an FPGA. An ACAP is a highly integrated multi-core heterogeneous compute platform that can be changed at the hardware level to adapt to the needs of a wide range of applications and workloads. An ACAP’s adaptability, which can be done dynamically during operation, delivers levels of performance and performance per-watt that is unmatched by CPUs or GPUs, the vendor claimed.

    An ACAP is suited to accelerate a broad set of applications in the emerging era of big data and AI, said Xilinx. These include: video transcoding, database, data compression, search, AI inference, genomics, machine vision, computational storage and network acceleration.

    Software developers will be able to target ACAP-based systems using tools like C/C++, OpenCL and Python. An ACAP can also be programmable at the RTL level using FPGA tools, Xilinx said.

    ACAP has been under development for four years at an accumulated R&D investment of over US$1 billion, Xilinx disclosed. There are currently more than 1,500 hardware and software engineers at Xilinx designing ACAP and Everest. Software tools have been delivered to key customers. Everest will tape out in 2018 with customer shipments in 2019.

  10. Tomi Engdahl says:

    IBM working on ‘world’s smallest computer’ to attach to just about everything
    https://techcrunch.com/2018/03/19/ibm-working-on-worlds-smallest-computer-to-attach-to-just-about-everything/

    IBM is hard at work on the problem of ubiquitous computing, and its approach, understandably enough, is to make a computer small enough that you might mistake it for a grain of sand. Eventually these omnipresent tiny computers could help authenticate products, track medications and more.

    It’s an evolution of IBM’s “crypto anchor” program, which uses a variety of methods to create what amounts to high-tech watermarks for products that verify they’re, for example, from the factory the distributor claims they are, and not counterfeits mixed in with genuine items.

    The “world’s smallest computer,” as IBM continually refers to it, is meant to bring blockchain capability into this; the security advantages of blockchain-based logistics and tracking could be brought to something as benign as a bottle of wine or box of cereal.

    In addition to getting the computers extra-tiny, IBM intends to make them extra-cheap, perhaps 10 cents apiece. So there’s not much of a lower limit on what types of products could be equipped with the tech.

    Not only that, but the usual promises of ubiquitous computing also apply: this smart dust could be all over the place, doing little calculations, sensing conditions, connecting with other motes and the internet to allow… well, use your imagination.

    It’s small (about 1mm x 1mm), but it still has the power of a complete computer, albeit not a hot new one. With a few hundred thousand transistors, a bit of RAM, a solar cell and a communications module, it has about the power of a chip from 1990. And we got a lot done on those, right?

  11. Tomi Engdahl says:

    One hundred terabytes in the world’s largest SSD

    Samsung has a 30-terabyte SSD and Seagate recently launched a 60-terabyte drive, but they do not get near the top of the capacity list. First place goes to Nimbus Data, whose new ExaDrive DC100 holds as much as one hundred terabytes of data, so roughly one hundred thousand gigabytes.

    Despite its tremendous capacity, the SATA/SAS drive comes in the standard 3.5-inch form factor. Nimbus Data especially praises the drive’s energy efficiency: it consumes 0.1 watts per terabyte, only a fifth of the corresponding figure for Samsung’s 30-terabyte drive.

    Nimbus explains the 100-terabyte capacity with a new design. Where a traditional SSD connects its flash to a single flash controller, in the ExaDrive the flash management and storage processing are handled by a number of low-power ASICs.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=7736&via=n&datum=2018-03-21_15:02:23&mottagare=31202

  12. Tomi Engdahl says:

    Power 9 May Dent x86 Servers
    Alibaba, Google, Tencent test IBM systems
    https://www.eetimes.com/document.asp?doc_id=1333090

    Ten system makers showed servers using IBM’s Power 9 processor here amid expectations of rising sales for the x86 alternative. Their momentum will make, at best, a small dent in the market that Intel dominates, but their targets include one of its most lucrative segments — machine-learning jobs in the data center.

    Google, an early partner in IBM’s Open Power initiative, announced that it is expanding its tests of Power 9 systems. An engineer leading the effort said that, given the search giant’s investments in the architecture, it hopes to move at least some Power systems into production use this year.

    China’s Alibaba and Tencent are also testing Power 9. Tencent said that Power 9 is delivering 30% more performance than the x86 while using fewer servers and racks.

    At least one web giant is expected to announce production use of Power 9 systems this year.

    IBM’s corporate aim is to win within four years at least 20% of the sockets for Linux servers sold for $5,000 or more, said King. IBM’s Power roadmap calls for annual processor upgrades in 14 nm through 2019 and a Power 10 slated for some time after 2020 — leaving room for a possible 7-nm chip in 2020

  13. Tomi Engdahl says:

    Web Giants Want Optical, Flash Advances
    Facebook, M’soft call for net, storage interfaces
    https://www.eetimes.com/document.asp?doc_id=1333094

    Web giants called for significant shifts in networking, storage and software at the Open Compute Summit here to keep their heads above a rising flood of data.

    Facebook said it will need switch ASICs with optical interfaces within three years for its next big network upgrade. Microsoft announced a new low-level interface for NAND flash drives to make better use of storage. And the Open Compute Project (OCP) struck a partnership with the Linux Foundation to plug holes integrating and testing systems and software together.

    Interestingly, little was said about the adoption of machine learning, the latest force driving the data explosion. Security, however, is a rising focus with increasing support for a root-of-trust module announced last year and a broader security project just getting off the ground.

  14. Tomi Engdahl says:

    New Windows 10 ARM laptops pick up criticism – bad gaps in application support

    Soon, the first new Windows 10 ARM computers will be launched. They are equipped not with AMD or Intel processors, but with the same chips that are used in smartphones.

    Windows 10 ARM computers bring instant wake-up from standby, considerably longer battery life and a continuous mobile network connection. These features earn the newcomers plenty of praise in the first reviews, but problems surface as well.

    Microsoft has tried ARM-based Windows before, but the Windows RT of that era was a complete flop. Now, with Windows 10 on ARM, the starting point seems somewhat better, though not without problems.

    The Verge’s test report shows that most applications acquired from the Windows Store work well: “Most of the modern applications, especially those downloaded from the Microsoft Store or pre-installed, opened quickly, with no significant difference from how they work on Intel-based computers.”

    However, some applications cause problems – and Windows 10 on ARM completely lacks support for 64-bit applications.

    The missing 64-bit support is a big minus for Windows 10 ARM devices: “As a result, a lot of more recent tools and utilities just aren’t available on this system, and I quickly encountered problems when my favorite Twitter application and screen-capture tool both required x64 support even though they were listed in the Microsoft Store,” says Seifert.

    Source: https://mobiili.fi/2018/03/21/uudenlaiset-windows-10-arm-lapparit-keraavat-kritiikkia-sovellustuessa-pahoja-aukkoja/?utm_source=highfi&utm_medium=rss&utm_campaign=generic

    More:
    Always-connected Windows laptops show promise but still need work
    Thankfully, it’s not Windows RT all over again
    https://www.theverge.com/2018/3/20/17143554/microsoft-windows-snapdragon-always-connected-qualcomm-pc-review-asus-novago

    Laptops with built-in cellular connections are poised to be an actual thing for consumers this year, after years of being only available to business customers. One of the biggest pushes for these connected PCs is from Qualcomm, which has been touting its Snapdragon platform as the future of mobile laptop computing. Windows on Snapdragon computers, which run on Qualcomm’s smartphone processors and modems, are finally making their way to store shelves this spring.

    To get an idea of how this new platform works and how it’s different from the standard Windows 10 that’s available on hundreds of millions of devices already, I’ve been using one of the first Windows on Snapdragon PCs to arrive: Asus’ NovaGo convertible. Asus plans to sell the NovaGo in the US starting on May 1st for $599, which includes 4GB of RAM and 64GB of storage.

    Windows 10 looks and feels the same
    Always-available LTE is fantastic
    Resume is near instant and battery life is long
    Edge is life; Chrome is pain
    App compatibility is hit or miss

    Windows on Snapdragon is a 32-bit platform, which means that any 64-bit (x64, in Microsoft parlance) apps will fail to install or run on it. As a result, a lot of more recent tools and utilities just can’t be used on this system

  15. Tomi Engdahl says:

    Chrome 66 Beta: CSS Typed Object Model, Async Clipboard API, AudioWorklet
    https://blog.chromium.org/2018/03/chrome-66-beta-css-typed-object-model.html

  16. Tomi Engdahl says:

    Chrome 66 beta restricts autoplay, prevents Windows crashes, adds ‘Home Duplex’ & ‘Modern Design’ on Android
    https://9to5google.com/2018/03/21/google-chrome-66-beta-features/

    Following a more developer-focused release last version, Chrome 66 is now in the beta channel with a number of new user features and changes. Google is implementing new media autoplay behavior and warnings about Chrome crashes related to third-party software on Windows. On Android, the browser replaces “Chrome Home” with a toolbar, while there’s a new “Modern Design.”

    With version 64 in January, Chrome added the ability to mute audio across sites. Google is now continuing its efforts towards a consistent playback experience with a new behavior that governs when media can autoplay.

    In Chrome 66, media automatically starts only if playback meets one of the following criteria (paraphrased as a code sketch after the list):

    Content is muted, or does not feature audio
    Users previously tapped or clicked on the site during the browsing session
    On mobile, if the site has been added to the Home Screen by the user
    On desktop, if the user has frequently played media on the site, according to the Media Engagement Index
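
    Paraphrasing those rules as a boolean check (the field names are mine, not Chrome’s internal API):

        def can_autoplay(muted, has_audio, user_interacted, on_mobile,
                         added_to_homescreen, high_media_engagement):
            # Muted or silent content may always start.
            if muted or not has_audio:
                return True
            # A click or tap during the session unlocks playback.
            if user_interacted:
                return True
            # Mobile: site pinned to the Home Screen; desktop: high MEI score.
            if on_mobile:
                return added_to_homescreen
            return high_media_engagement

        # Unmuted video, no interaction, desktop, low engagement -> blocked.
        print(can_autoplay(False, True, False, False, False, False))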

  17. Tomi Engdahl says:

    FPGA manufacturer Xilinx today introduced its new vision and, at the same time, a new product category it calls ACAP, the adaptive compute acceleration platform.

    - ACAP’s computing capabilities go far beyond the capabilities of traditional FPGAs. It is a genuinely new product category that can be altered at the device level to fit different applications and workloads, Peng said at a press conference.

    There is substance behind the words. With an ACAP processor, functions can be changed dynamically during operation. The change takes milliseconds, after which the new application-specific computation achieves much higher performance per watt than a general-purpose processor or graphics processor.

    According to Peng, ACAP is ideally suited to new big data and artificial intelligence applications. These include video processing, database processing, data compression, searches, AI inference, machine vision, and many network acceleration functions.

    The first ACAP family is called Everest and is implemented in TSMC’s 7-nanometer process. The first chips will tape out this year. - Everest circuits will radically differ from what Xilinx and Altera have done so far.

    Source: http://www.etn.fi/index.php/13-news/7724-pc-laskennan-aika-on-ohi

  18. Tomi Engdahl says:

    Tom Warren / The Verge:
    Microsoft unveils cloud gaming division led by Microsoft vet Kareem Choudhry who says the company wants content available across all devices, hints at streaming

    Microsoft’s new gaming cloud division readies for a future beyond Xbox
    Cloud services seen as the future of games
    https://www.theverge.com/2018/3/15/17123452/microsoft-gaming-cloud-xbox-future

  19. Tomi Engdahl says:

    9 hidden risks of telecommuting policies
    https://www.cio.com/article/3261950/hiring-and-staffing/hidden-risks-of-telecommuting-policies.html

    As the boundaries of the enterprise shift, IT’s ability to support and protect remote work environments must shift correspondingly. Here’s how to develop a comprehensive telecommuting policy to mitigate potential liabilities.

    How I Learned to Stop Worrying and Love Telecommuting
    https://www.cio.com/article/2436957/it-organization/how-i-learned-to-stop-worrying-and-love-telecommuting.html

    CareGroup CIO John Halamka takes an in-depth look at the policies and technologies necessary for supporting flexible work arrangements.

  20. Tomi Engdahl says:

    Microsoft’s new gaming cloud division readies for a future beyond Xbox
    Cloud services seen as the future of games
    https://www.theverge.com/2018/3/15/17123452/microsoft-gaming-cloud-xbox-future

    Microsoft shipped its first video game in 1981, appropriately named Microsoft Adventure. It was an MS-DOS game that booted directly from a floppy disk, and set the stage for Microsoft’s adventures in gaming. A lot has changed over the past 37 years, and when you think of Microsoft’s efforts in gaming these days you’ll immediately think of Xbox. It’s fair to say a lot is about to change over the next few decades too, and Microsoft is getting ready. Today, the software giant is unveiling a new gaming cloud division that’s ready for a future where consoles and gaming itself are very different to today.

  21. Tomi Engdahl says:

    Xilinx to bust ACAP in the dome of data centres all over with uber FPGA
    That’s an Adaptive Compute Acceleration Platform btw
    https://www.theregister.co.uk/2018/03/19/xilinx_everest_acap_super_fpga/

    Xilinx is developing a monstrous FPGA that can be dynamically changed at the hardware level.

    The biz’s “Everest” project is the development of what Xilinx termed an Adaptive Compute Acceleration Platform (ACAP), an integrated multi-core heterogeneous design that goes way beyond your bog-standard FPGA, apparently. It is being built with TSMC’s 7nm process technology and tapes out later this year.

    Xilinx Unveils Revolutionary Adaptable Computing Product Category
    https://www.xilinx.com/news/press/2018/xilinx-unveils-revolutionary-adaptable-computing-product-category.html

    ACAP TECHNICAL DETAILS

    An ACAP has – at its core – a new generation of FPGA fabric with distributed memory and hardware-programmable DSP blocks, a multicore SoC, and one or more software programmable, yet hardware adaptable, compute engines, all connected through a network on chip (NoC). An ACAP also has highly integrated programmable I/O functionality, ranging from integrated hardware programmable memory controllers, advanced SerDes technology and leading edge RF-ADC/DACs, to integrated High Bandwidth Memory (HBM) depending on the device variant.

    Software developers will be able to target ACAP-based systems using tools like C/C++, OpenCL and Python. An ACAP can also be programmable at the RTL level using FPGA tools.

    “This is what the future of computing looks like,” says Patrick Moorhead, founder, Moor Insights & Strategy. “We are talking about the ability to do genomic sequencing in a matter of a couple of minutes, versus a couple of days. We are talking about data centers being able to program their servers to change workloads depending upon compute demands, like video transcoding during the day and then image recognition at night. This is significant.”

    ACAP has been under development for four years at an accumulated R&D investment of over one billion dollars (USD). There are currently more than 1,500 hardware and software engineers at Xilinx designing “ACAP and Everest.” Software tools have been delivered to key customers. “Everest” will tape out in 2018 with customer shipments in 2019.

  22. Tomi Engdahl says:

    Web Giants Want Optical, Flash Advances
    Facebook, M’soft call for net, storage interfaces
    https://www.eetimes.com/document.asp?doc_id=1333094

    Web giants called for significant shifts in networking, storage and software at the Open Compute Summit here to keep their heads above a rising flood of data.

    Facebook said it will need switch ASICs with optical interfaces within three years for its next big network upgrade. Microsoft announced a new low-level interface for NAND flash drives to make better use of storage. And the Open Compute Project (OCP) struck a partnership with the Linux Foundation to plug holes integrating and testing systems and software together.

    Interestingly, little was said about the adoption of machine learning, the latest force driving the data explosion. Security, however, is a rising focus with increasing support for a root-of-trust module announced last year and a broader security project just getting off the ground.

    More than 3,000 people registered for the ninth annual event sponsored by OCP and launched by Facebook in 2011. The group now has 175 members that have created 375 specifications related to open hardware for data centers.

  23. Tomi Engdahl says:

    New Xilinx CEO Touts ‘Adaptive Computing’
    https://www.eetimes.com/document.asp?doc_id=1333086

    Less than two months after taking the reins at Xilinx, Victor Peng outlined a new strategy for the programmable logic stalwart that emphasizes technology for the data center and “adaptive computing,” centered around what the company calls a new class of devices.

    Claiming performance advantages over high-end CPUs and GPUs for applications related to Big Data and artificial intelligence, Xilinx will begin rolling out a new type of multicore chip next year that emphasize compute capability and with both software- and hardware-level programmability.

    Xilinx — long the market leader in programmable logic devices — claims its adaptive compute acceleration platform (ACAP) goes far beyond the capabilities of FPGAs to deliver levels of performance and performance-per-watt unmatched by CPUs or GPUs. An ACAP consists of an FPGA fabric with distributed memory and hardware-programmable DSP blocks, a multicore SoC and one or more software-programmable compute engines, all connected through an on-chip network.

  24. Tomi Engdahl says:

    PCIe 4.0 is to power data centers

    Last October, PCI-SIG completed the 4.0 standard for PCI Express bus technology. Now the new bus is beginning to conquer data centers. This can be seen, for example, in the system development platforms being introduced for equipment manufacturers.

    Microsemi of California was among the first on the market. Its Switchtec platform enables immediate development of PCIe 4.0-based hardware and software.

    Compared to the commonly used PCIe 3.0 technology, the new bus is an improvement in many ways. The bandwidth of each lane doubles in version 4.0, so an x16 link can transfer data at up to 64 gigabytes per second. This is enough for the needs of many graphics processors for a long time.

    Over the years, the capacity of the PCIe bus has doubled with each new version of the standard. After 4.0 comes the 5.0 standard, which should reach 128 gigabytes per second. The 5.0 standard is expected to be ready in 2020.

    Source: http://www.etn.fi/index.php/13-news/7739-pcie-4-0-valtaa-datakeskukset

  25. Tomi Engdahl says:

    Xilinx Unveils Revolutionary Adaptable Computing Product Category
    https://www.xilinx.com/news/press/2018/xilinx-unveils-revolutionary-adaptable-computing-product-category.html

    Xilinx, Inc. (NASDAQ: XLNX), the leader in adaptive and intelligent computing, today announced a new breakthrough product category called adaptive compute acceleration platform (ACAP) that goes far beyond the capabilities of an FPGA. An ACAP is a highly integrated multi-core heterogeneous compute platform that can be changed at the hardware level to adapt to the needs of a wide range of applications and workloads. An ACAP’s adaptability, which can be done dynamically during operation, delivers levels of performance and performance per-watt that is unmatched by CPUs or GPUs.

    An ACAP is ideally suited to accelerate a broad set of applications in the emerging era of big data and artificial intelligence. These include: video transcoding, database, data compression, search, AI inference, genomics, machine vision, computational storage and network acceleration. Software and hardware developers will be able to design ACAP-based products for end point, edge and cloud applications. The first ACAP product family, codenamed “Everest,” will be developed in TSMC 7nm process technology and will tape out later this year.

    “This is a major technology disruption for the industry and our most significant engineering accomplishment since the invention of the FPGA,” says Victor Peng, president and CEO of Xilinx. “This revolutionary new architecture is part of a broader strategy that moves the company beyond FPGAs and supporting only hardware developers. The adoption of ACAP products in the data center, as well as in our broad markets, will accelerate the pervasive use of adaptive computing, making the intelligent, connected, and adaptable world a reality sooner.”

  26. Tomi Engdahl says:

    Xilinx to bust ACAP in the dome of data centres all over with uber FPGA
    That’s an Adaptive Compute Acceleration Platform btw
    https://www.theregister.co.uk/2018/03/19/xilinx_everest_acap_super_fpga/

    Xilinx is developing a monstrous FPGA that can be dynamically changed at the hardware level.

    The biz’s “Everest” project is the development of what Xilinx termed an Adaptive Compute Acceleration Platform (ACAP), an integrated multi-core heterogeneous design that goes way beyond your bog-standard FPGA, apparently. It is being built with TSMC’s 7nm process technology and tapes out later this year.

    Xilinx president and CEO Victor Peng, appointed January, claimed it is the “most significant engineering accomplishment since the invention of the FPGA”.

    The ACAP can be programmed at the RTL (Register-Transfer Level) with FPGA tools, and software devs can code for ACAP-based systems using C/C++, OpenCL and Python.

    The Everest ACAP features up to 50 billion transistors and is said to provide:

    Distributed memory
    Hardware-programmable DSP blocks
    Multicore SoC
    One or more software-programmable, hardware-adaptable, compute engines
    Network on chip (NoC)
    On-chip control blocks for security and power management
    Hardware-programmable memory controller
    CCIX and PCIe support
    Multi-mode Ethernet controllers
    Programmable I/O interfaces and serialisation/deserialisation (SerDes)
    High-bandwidth memory or programmable ADCs and DACs in some versions

  27. Tomi Engdahl says:

    Amid congressional mandate to open source DoD’s software code, Code.mil serves as guidepost
    https://federalnewsradio.com/on-dod/2018/03/amid-congressional-mandate-to-open-source-dods-software-code-code-mil-serves-as-guidepost/

    As part of the 2018 National Defense Authorization Act, the Defense Department has until June to start moving much of its custom-developed software source code to a central repository and begin managing and licensing it via open source methods.

    The mandate might prove daunting for an organization in which open source practices are relatively scarce, especially considering that, until recently, there was no established open source playbook for the federal government. That’s begun to change

  28. Tomi Engdahl says:

    Linux is the number one platform for coders

    Almost half of coders favor Linux, says the latest Stack Overflow survey: 48.3 percent of developers named Linux as their platform. Windows was named by 35.4 percent and Android by 29 percent of the interviewed coders.

    Linux already took the number one spot among coders last year. In the survey the year before that, Windows was still number one with a 41 percent share, while Linux was named by 32.9 percent.

    Amazon’s AWS was named a favorite by 24.1 percent in the latest survey. macOS rose to 17.9 percent, which sounds surprisingly small given how commonly MacBooks are seen at developer desks.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=7753&via=n&datum=2018-03-23_14:10:57&mottagare=31202

  29. Tomi Engdahl says:

    The typical coder is young, white and heterosexual

    92.9 percent of the respondents in the Stack Overflow survey were men, and 74.2 percent were white or of European descent. Asians accounted for 11.5 percent, which is perhaps less than expected.

    Sexual minorities are not a terribly large group among coders, as 93.2 percent of the respondents reported themselves to be heterosexual. On the other hand, it is hard to see why orientation was asked about at all, or what significance it could have for the code produced.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=7754&via=n&datum=2018-03-23_14:10:57&mottagare=31202

  30. Tomi Engdahl says:

    Inside IBM’s Purge of Thousands of Workers Who Have One Thing in Common
    “Age discrimination is an open secret like sexual harassment was.”
    https://www.motherjones.com/crime-justice/2018/03/ibm-propublica-gray-hairs-old-heads/

    How the Crowd Led Us to Investigate IBM
    Our project started with a digital community of ex-employees.
    https://www.propublica.org/article/investigating-ibm-digital-community-ex-employees

  31. Tomi Engdahl says:

    GitHub’s tool reduces open source software license violations
    https://www.infoworld.com/article/3264805/software-licensing/githubs-tool-reduces-open-source-software-license-violations.html

    Called Licensed, the tool finds license dependencies early in the development life cycle

    GitHub has open-sourced its Licensed tool, a Ruby gem that caches and verifies the status of license dependencies in Git repos.

    Licensed has helped GitHub engineers who use open source software find potential problems with license dependencies early in the development cycle. The tool reports any dependencies needing review.

  32. Tomi Engdahl says:

    How to share a mouse and keyboard across multiple computers
    https://opensource.com/life/16/10/synchrony?sc_cid=7016000000127ECAAY

    Synergy is an open source software alternative for a physical KVM switch.

  33. Tomi Engdahl says:

    Introduction to Eclipse Che, a next-generation, web-based IDE
    https://opensource.com/life/16/11/introduction-eclipse-che?sc_cid=7016000000127ECAAY

    Correctly installing and configuring an integrated development environment, workspace, and build tools in order to contribute to a project can be a daunting or time consuming task, even for experienced developers.

    After multiple days of struggling, Jewell could not get the project to work, but inspiration struck him. He wanted to make it so that “anyone, anytime can contribute to a project with installing software.”

    It is this idea that lead to the development of Eclipse Che.

    Eclipse Che is a web-based integrated development environment (IDE) and workspace. Workspaces in Eclipse Che are bundled with an appropriate runtime stack and serve their own IDE, all in one tightly integrated bundle.

    The ready-to-go bundled stacks included with Eclipse Che cover most of the modern popular languages. There are stacks for C++, Java, Go, PHP, Python, .NET, Node.js, Ruby on Rails, and Android development.

    Eclipse Che is a full-featured IDE, not a simple web-based text editor. It is built on Orion and the JDT. Intellisense and debugging are both supported, and version control with both Git and Subversion is integrated. The IDE can even be shared by multiple users for paired programming.

    One of the major technologies underlying Eclipse Che are Linux containers, using Docker. Workspaces are built using Docker and installing a local copy of Eclipse Che requires nothing but Docker and a small script file.

    Beyond Codenvy, contributors to Eclipse Che include Microsoft, Red Hat, IBM, Samsung, and many others. Several of the contributors are working on customized versions of Eclipse Che for their own specific purposes. For example, Samsung’s Artik IDE for IoT projects.

  34. Tomi Engdahl says:

    Gartner predicts that by 2022, half of proprietary databases will be replaced with an open source database. Read Gartner research “State of the Open-Source DBMS Market, 2018” and learn why it is time to evaluate open source DBMS alternatives to proprietary databases. http://ow.ly/I3dd30j5mRA

    State of the Open-Source DBMS Market, 2018
    http://go.mariadb.com/Gartner-State-of-the-Open-Source-DBMS-Market.html?utm_source=facebook&utm_medium=social&utm_campaign=2018-gartner-osdbms-report

    It’s time to evaluate open source alternatives to proprietary databases.

    Why? To save up to 95% in costs over three years. Oracle Database Enterprise Edition is 25x more expensive than MariaDB TX in the example pricing comparison included in this Gartner report. That being said, adopting an open source database requires planning and analysis.

  35. Tomi Engdahl says:

    The ability to correct errors in GPLv2 compliance: the right thing to do
    https://www.redhat.com/en/blog/ability-correct-errors-gplv2-compliance-right-thing-do?sc_cid=7016000000127ECAAY

    How often is it that 10 large technology companies agree on anything, much less agreeing to give up legal rights? It can happen when it’s the right thing to do.

    Today, six more technology companies – CA Technologies, Cisco, HPE, Microsoft, SAP and SUSE — have all committed to offering the GPLv3 cure approach to licensees of their GPLv2, LGPLv2.1 and LGPLv2 licensed code (except in cases of a defensive response to a legal proceeding). The GPLv3 cure approach offers licensees of GPLv2 code a period of time to come into compliance before their licenses are terminated but does not involve the relicensing of the code under GPLv3.

  36. Tomi Engdahl says:

    The rise of artificial intelligence is creating new variety in the chip market, and trouble for Intel
    https://www.economist.com/news/business/21717430-success-nvidia-and-its-new-computing-chip-signals-rapid-change-it-architecture?etear=sasexpectexceptional

    The success of Nvidia and its new computing chip signals rapid change in IT architecture

    “WE ALMOST went out of business several times.” Usually founders don’t talk about their company’s near-death experiences. But Jen-Hsun Huang, the boss of Nvidia, has no reason to be coy. His firm, which develops microprocessors and related software, is on a winning streak. In the past quarter its revenues increased by 55%, reaching $2.2bn, and in the past 12 months its share price has almost quadrupled.

    A big part of Nvidia’s success is because demand is growing quickly for its chips, called graphics processing units (GPUs), which turn personal computers into fast gaming devices. But the GPUs also have new destinations: notably data centres where artificial-intelligence (AI) programmes gobble up the vast quantities of computing power that they generate.

  37. Tomi Engdahl says:

    Scientists Construct Biocomputer Made From Living Human Cells
    http://www.iflscience.com/technology/scientists-construct-biocomputer-made-living-human-cells/

    a team from ETH Zurich and the University of Basel is making headway on constructing biocomputers – those made from living cells – and a new paper in Nature Methods details their most advanced system to date.

    Using nine different cell populations assembled into 3D cultures, the team of synthetic biologists has managed to get them to behave like a very simple electronic computational circuit.

  38. Tomi Engdahl says:

    You bet your DRaaS: Infinidat squeezes out new backup, array and cloud compute products
    Four-piece suite coming for IT wallets
    https://www.theregister.co.uk/2018/03/27/infinidat_moves_into_data_protection_disaster_recovery_and_public_cloud_compute_brokerage/

    A flurry of array related activity came out of Infinidat today. Well, it is a Tuesday so why the hell not.

    The big iron vendor told us it is squeezing out a higher capacity model, it is releasing a faster restore backup target array and a “data centre black box” billed as a zero data loss disaster recovery system, and enabling users to play off public cloud compute vendors against each other.

    On the data availability front the company claimed it is allowing for better protection and recovery of the data stored in its arrays, and enabling them to withstand a regional disaster.

    InfiniSync

    The data centre Black Box recorder is the fruits of buying Axxana in late 2017. Axxana was founded in 2005 by CEO Eli Efrat, CTO Alex Winokur and EVP Business Development Dan Hochberg (left in 2010). It took in some $14m in funding and developed a virtually indestructible data recorder to safeguard storage arrays.

    The idea was to obviate the need for a synchronous link to a remote DR site for high-value data where a low recovery point objective was needed. Typically, we’re told, such DR arrangements involve a sync link from a primary array in, say, New York, to a nearby bunker site in New Jersey, with multiple millisecond latency. There is also an asynchronous link to a remote secondary site, say in Dallas, with an overall recovery point objective (RPO) measured in minutes.

    The Axxana Phoenix “black box” is a heavily insulated local system that Infinidat said can survive an explosion. It stores up to 1.6TB of recent primary array data changes as they occur, with a latency of less than 0.3ms on its SSDs. Changes are also sent asynchronously to the secondary site. If the primary site goes down then recent sync data on the Infinisync box can be sent to the secondary site, by WAN or cellular transfer, and you reduce the RPO time to seconds.

    The claimed outcome is fast post-disaster application recovery – RPO 0, with no data loss whatsoever and cost-savings. There is no need for the bunker site and sync comms links to it.

    Axxana is now a fully owned Infinidat subsidiary and Infinidat was its first array integration, followed by arrays from Dell EMC, IBM and others.

  39. Tomi Engdahl says:

    Ask Slashdot: How Did Real-Time Ray Tracing Become Possible With Today’s Technology?
    https://ask.slashdot.org/story/18/03/27/000230/ask-slashdot-how-did-real-time-ray-tracing-become-possible-with-todays-technology?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    There are occasions where multiple big tech manufacturers all announce the exact same innovation at the same time — e.g. 4K UHD TVs. Everybody in broadcasting and audiovisual content creation knew that 4K/8K UHD and high dynamic range (HDR) were coming years in advance, and that all the big TV and screen manufacturers were preparing 4K UHD HDR product lines because FHD was beginning to bore consumers. It came as no surprise when everybody had a 4K UHD product announcement and demo ready at the same time. Something very unusual happened this year at GDC 2018 however. Multiple graphics and GPU companies, like Microsoft, Nvidia, and AMD, as well as other game developers and game engine makers, all announced that real-time ray tracing is coming to their mass-market products, and by extension, to computer games, VR content and other realtime 3D applications.

    Why is this odd? Because for many years any mention of 30+ FPS real-time ray tracing was thought to be utterly impossible with today’s hardware technology. It was deemed far too computationally intensive for today’s GPU technology and far too expensive for anything mass market. Gamers weren’t screaming for the technology. Technologists didn’t think it was doable at this point in time. Raster 3D graphics — what we have in DirectX, OpenGL and game consoles today — was very, very profitable

  40. Tomi Engdahl says:

    Dean Takahashi / VentureBeat:
    Nvidia unveils Quadro GV100 GPU with real-time ray tracing capabilities for creating realistic animations quickly

    Nvidia reinvents workstations with real-time ray tracing
    https://venturebeat.com/2018/03/27/nvidia-reinvents-workstations-with-real-time-ray-tracing/

    Nvidia announced that it has added real-time ray tracing to its graphics processing units (GPUs) for workstations, a move that could make it much easier for media and entertainment professionals to create realistic animations quickly.

    The Santa Clara, California company unveiled the Nvidia Quadro GV100 GPU with Nvidia’s RTX technology for real-time ray tracing, a rendering technique that simulates rays of light to work out how the objects in a scene should be lit and shaded. Nvidia made a similar announcement for game engines last week at the Game Developers Conference.

    Until now, ray tracing took too much computing power to run in real time. Filmmakers could use it for special effects or animations, but it was the kind of task that artists would leave running on their computers overnight. That’s why Nvidia calls real-time ray tracing the “biggest advance in computer graphics since the introduction of programmable shaders (which made it possible to create new kinds of surfaces in 3D images) nearly two decades ago.”

    “It is so hard to compute, and that is why ray tracing has been the Holy Grail of computer science for the last 40 years,” Nvidia CEO Jensen Huang said. “Everything you see here is completely in real time.”

    “Nvidia has reinvented the workstation by taking ray-tracing technology optimized for our Volta architecture, and marrying it with the highest-performance hardware ever put in a workstation,” the company said.

    The Quadro GV100 GPU, with 32GB of memory scalable to 64GB with multiple Quadro GPUs using Nvidia NVLink interconnect technology, is the highest-performance platform available for these applications.

    “The availability of Nvidia RTX opens the door to make real-time ray tracing a reality. By making such powerful technology available to the game development community with the support of the new DirectX Raytracing API, Nvidia is the driving force behind the next generation of game and movie graphics,” said Kim Libreri, chief technology officer at Epic Games, in a statement.

  41. Tomi Engdahl says:

    Susan Decker / Bloomberg:
    A US appeals court revives Oracle’s billion-dollar copyright claim against Google, saying Google’s use of Java wasn’t “fair use” — Google could owe Oracle Corp. billions of dollars after an appeals court said it didn’t have the right to use the Oracle-owned Java programming code …

    Google Could Owe Oracle $8.8 Billion in Android Fight
    27 March 2018 at 16:44 UTC+3; updated on 27 March 2018 at 21:33 UTC+3
    https://www.bloomberg.com/news/articles/2018-03-27/oracle-wins-revival-of-billion-dollar-case-against-google

    Google’s use of Java wasn’t ‘fair use,’ appeals court rules
    Case remanded to determine how much Google should pay

  42. Tomi Engdahl says:

    Nvidia CEO comments on GPU shortage caused by Ethereum
    https://techcrunch.com/2018/03/27/nvidia-ceo-comments-on-gpu-shortage-caused-by-etherium/?utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    There’s currently a shortage of Nvidia GPUs, and Nvidia’s CEO pointed to Ethereum distributed ledgers as the cause. Today at Nvidia’s GTC conference he took questions from a group of journalists following his keynote address and addressed the shortage.

    Huang simply stated that Nvidia is not in the business of cryptocurrency or distributed ledgers. As such, he said he would prefer that his company’s GPUs were used in the areas Nvidia is targeting, though he explained why Nvidia’s products are nonetheless used for crypto mining.

    “[Cryptocurrency] is not our business,” he said. “Gaming is growing and workstation is growing because of ray tracing.” He noted that Nvidia’s high-performance computing business is also growing, and that these are the areas to which he wished Nvidia could allocate units.

  43. Tomi Engdahl says:

    Build your own PC inside the PC you built with PC Building Simulator
    https://techcrunch.com/2018/03/27/build-your-own-pc-inside-the-pc-you-built-with-pc-building-simulator/?utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook

    I suppose it was only a matter of time until someone made a game where you assemble your own PC. It’s called PC Building Simulator, as you might guess, and it looks fabulous.

    http://store.steampowered.com/app/621060/PC_Building_Simulator/

  44. Tomi Engdahl says:

    Java-aaaargh! Google faces $9bn copyright bill after Oracle scores ‘fair use’ court appeal win
    https://www.theregister.co.uk/2018/03/27/oracle_apple_copyright_reversal/

    You thought this was over? You thought wrong, laughs Larry

    The US Court of Appeals for the Federal Circuit in Washington DC has revived Oracle’s bid to bill Google for billions over its use of copyrighted Java APIs in its Android mobile operating system.

    On Tuesday, the appeals court reversed a 2016 jury finding of fair use that deemed Google’s actions acceptable, and sent the case back to federal court in California to determine damages, which Oracle in 2016 said should amount to about $8.8bn.

    A key consideration in whether the use of copyrighted material qualifies for the fair use defense is whether the use is transformative. The appeals court decided that Google’s use of the Java APIs was not transformative.

  45. Tomi Engdahl says:

    Chris Mellor / The Register:
    Pure Storage and Nvidia unveil AIRI, a four-petaflop platform using Nvidia’s DGX-1 servers and Pure’s FlashBlade storage system for large-scale AI initiatives

    If you’ve got $1m+ to blow on AI, meet Pure, Nvidia’s AIRI fairy: A hyperconverged beast
    0.5 PFLOPS FP32, 0.5 PB of effective flash storage
    http://www.theregister.co.uk/2018/03/27/pure_nvidia_ai_airi/

    Pure Storage and Nvidia have produced a converged machine-learning system to train AI models using millions of data points.

    It’s called AIRI – AI-Ready Infrastructure – and combines a Pure FlashBlade all-flash array with four Nvidia DGX-1 GPU-accelerated boxes and a pair of 100GbitE switches from Arista.

    The system has been designed by Pure and Nvidia, and is said to be easier and simpler to buy, deploy, and operate than buying and integrating the components separately – the standard converged-infrastructure pitch.
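    The headline numbers can be sanity-checked against Nvidia’s published Volta specifications; a small sketch follows (the per-GPU figure is Nvidia’s public Tesla V100 number, the rest is arithmetic).

    # Sanity check of the subheadline figures from public V100 specs.
    v100_fp32_tflops = 15.7   # Nvidia's published Tesla V100 FP32 figure
    gpus_per_dgx1 = 8         # each DGX-1 carries eight V100s
    dgx1_count = 4            # AIRI bundles four DGX-1 boxes

    fp32_pflops = v100_fp32_tflops * gpus_per_dgx1 * dgx1_count / 1000
    print(f"aggregate FP32: {fp32_pflops:.2f} PFLOPS")  # ~0.50, matching the subhead

    # The "four petaflop" figure in the summary refers to FP16 Tensor Core
    # throughput (~125 TFLOPS per V100, x32 GPUs = ~4 PFLOPS), not FP32.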

  46. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    Nvidia announces support for Kubernetes container orchestration on Nvidia GPUs and will contribute its GPU enhancements to the Kubernetes open source community

    Nvidia brings joy by bringing GPU acceleration to Kubernetes
    https://techcrunch.com/2018/03/27/nvidia-brings-joy-by-bringing-gpu-acceleration-to-kubernetes/

    This has been a long time coming, but during his GTC keynote, Nvidia CEO Jensen Huang today announced support for the Google-incubated Kubernetes container orchestration system on Nvidia GPUs.

    The idea here is to optimize the use of GPUs in hyperscale data centers — the kind of environments where you may use hundreds or thousands of GPUs to speed up machine learning processes — and to allow developers to take these containers to multiple clouds without having to make any changes.

    “Now that we have all these accelerated frameworks and all this accelerated code, how do we deploy it into the world of data centers?” Huang asked. “Well, it turns out there is this thing called Kubernetes. […] This is going to bring so much joy. So much joy.”

    Nvidia is contributing its GPU enhancements to the Kubernetes open-source community. Machine learning workloads tend to be massive, both in terms of the computation that’s needed and the data that drives it. Kubernetes helps orchestrate these workloads and with this update, the orchestrator is now GPU-aware.

    “Kubernetes is now GPU-aware. The Docker container is now GPU-accelerated,” Huang said.
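    In practice, “GPU-aware” scheduling shows up as an extended resource that a container requests. A minimal sketch with the official Kubernetes Python client is below; the image name and the single-GPU request are illustrative assumptions.

    # Minimal sketch: asking Kubernetes for one Nvidia GPU via the official
    # Python client. The nvidia.com/gpu extended resource is what Nvidia's
    # device plugin exposes to the scheduler; the image name is hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # use the local kubeconfig for cluster access

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-training-job"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="trainer",
                image="example.com/ml/trainer:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # schedule onto a GPU node
                ),
            )],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)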

  47. Tomi Engdahl says:

    Google Could Owe Oracle $8.8 Billion in Android Fight
    https://www.bloomberg.com/news/articles/2018-03-27/oracle-wins-revival-of-billion-dollar-case-against-google

    Google could owe Oracle Corp. billions of dollars for using Oracle-owned Java programming code in its Android operating system on mobile devices, an appeals court said, as the years-long feud between the two software giants draws near a close.

    Mike Masnick / Techdirt:
    Insanity Wins As Appeals Court Overturns Google’s Fair Use Victory For Java APIs
    https://www.techdirt.com/articles/20180327/10431439512/insanity-wins-as-appeals-court-overturns-googles-fair-use-victory-java-apis.shtml

    Oh, CAFC. The Court of Appeals for the Federal Circuit has spent decades fucking up patent law, and now they’re doing their damndest to fuck up copyright law as well. In case you’d forgotten, the big case between Oracle and Google over whether or not Google infringed on Oracle’s copyrights is still going on — and it appears it will still be going on for quite a while longer, as CAFC this morning came down with a laughably stupid opinion, overturning the district court’s jury verdict, which had said that Google’s use of a few parts of Java’s API was protected by fair use. That jury verdict was kind of silly in the first place, because the whole trial (the second one in the case) made little sense, as basically everyone outside of Oracle and the CAFC had previously understood (correctly) that APIs are simply not covered by copyright.

    Section 102(b) of the Copyright Act says quite clearly:

    In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work.
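    For readers wondering what was actually copied: the dispute is over declarations – the names and signatures that callers depend on – not the code that implements them. A small Python analogy of that split (in the Java case this concerned declarations such as java.lang.Math.max):

    # In Java, a declaration like
    #     public static int max(int a, int b)
    # fixes the name and signature callers rely on; the body beneath it is
    # the implementation. Google rewrote the implementations for Android
    # but kept the declarations compatible. The same split, in Python:

    def max_of(a: int, b: int) -> int:
        """Declaration: name, parameters and contract that callers use."""
        # Implementation: one of many possible bodies behind that contract.
        return a if a >= b else b

    # A reimplementation keeps the declaration (so callers don't break)
    # but supplies a different body:
    def max_of_v2(a: int, b: int) -> int:
        return b if b > a else a

    assert max_of(3, 7) == max_of_v2(3, 7) == 7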

  48. Tomi Engdahl says:

    Microsoft loves Linux so much it wants someone else to build distros for its Windows Store
    WSL blueprint open-sourced to tempt distro makers
    https://www.theregister.co.uk/2018/03/27/microsoft_wsl_oss/

    Microsoft quietly open-sourced a Windows Subsystem for Linux (WSL) sample last night in an effort to persuade Linux distribution maintainers to add their distros to the Windows Store.

    The sample will also allow developers to side-load their own custom distribution packages onto a development machine.

    Open Sourcing a WSL Sample for Linux Distribution Maintainers and Sideloading Custom Linux Distributions
    https://blogs.msdn.microsoft.com/commandline/2018/03/26/wsl-distro-launcher/

