3 AI misconceptions IT leaders must dispel


Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”


  1. Tomi Engdahl says:

    These pop songs were written by OpenAI’s deep-learning algorithm

    The news: In a fresh spin on manufactured pop, OpenAI has released a neural network called Jukebox that can generate catchy songs in a variety of styles, from teenybop and country to hip-hop and heavy metal. It even sings—sort of.

    How it works: Give it a genre, an artist, and lyrics, and Jukebox will produce a passable pastiche in the style of well-known performers, such as Katy Perry, Elvis Presley or Nas. You can also give it the first few seconds of a song and it will autocomplete the rest.

  2. Tomi Engdahl says:

    Table Hockey Robot — 12 Motors and Webcam Are a Worthy Rival

    This automated game of table hockey uses machine learning to recognize players and make gameplay decisions.

  3. Tomi Engdahl says:

    Microsoft’s Brad Smith says company will not sell facial recognition tech to police

    Microsoft is joining IBM and Amazon in taking a position against the use of facial recognition technology by law enforcement — at least, until more regulation is in place.

    During a remote interview at a Washington Post Live event this morning, the company’s president Brad Smith said Microsoft has already been taking a “principled stand” on the proper use of this technology.

    “As a result of the principles that we’ve put in place, we do not sell facial recognition technology to police departments in the United States today,” Smith said.

  4. Tomi Engdahl says:

    Use Unity’s perception tools to generate and analyze synthetic data at scale to train your ML models

    Synthetic data alleviates the challenge of acquiring labeled data needed to train machine learning models. In this post, the second in our blog series on synthetic data, we will introduce tools from Unity to generate and analyze synthetic datasets with an illustrative example of object detection.

    Synthetic data: Simulating myriad possibilities to train robust machine learning models

  5. Tomi Engdahl says:

    Jay Greene / Washington Post:
    Microsoft says it will not sell facial recognition tech to police departments until there is a federal law regulating its use — The software giant will ban police use of its controversial facial-recognition systems, as the company awaits regulatory rules for how law-enforcement agencies deploy the technology.

    Microsoft won’t sell police its facial-recognition technology, following similar moves by Amazon and IBM

  6. Tomi Engdahl says:

    Ashlee Vance / Bloomberg:
    OpenAI launches its first commercially available product, an API that can perform a broad set of language tasks, like translation or writing a news story

  7. Tomi Engdahl says:

    Catherine Shu / TechCrunch:
    JIFFY.ai, which uses RPA and AI to help companies automate tasks, raises $18M Series A led by Nexus Venture Partners

    Enterprise automation platform JIFFY.ai raises $18 million Series A

    JIFFY.ai, the brand name of Paanini, uses robotic process automation (RPA), machine learning, and artificial intelligence to help companies automate tasks that are usually performed manually, making operations more time- and cost-efficient. Its platform also includes a design studio for no-code application development, and a configurable analytics dashboard to monitor automated processes.

    JIFFY.ai’s largest equity shareholder is its non-profit organization, Paanini Foundation, which was created to provide job training and placement programs for people whose positions are displaced because of RPA and other automation tech. According to a report last year from Gartner, RPA is the fastest growing enterprise software market, and the research firm also predicts that by 2024, low-code application development will be responsible for more than 65% of app development activity.

  8. Tomi Engdahl says:

    James Vincent / The Verge:
    Facebook announces the results of its first Deepfake Detection Challenge, says the winning algorithm spotted deepfakes with an average accuracy of just 65.18% — But the company says deepfakes are not currently ‘a big issue’ — Facebook has announced the results of its first Deepfake Detection Challenge …

    Facebook contest reveals deepfake detection is still an ‘unsolved problem’
    But the company says deepfakes are not currently ‘a big issue’

    Facebook has announced the results of its first Deepfake Detection Challenge, an open competition to find algorithms that can spot AI-manipulated videos. The results, while promising, show there’s still lots of work to be done before automated systems can reliably spot deepfake content, with researchers describing the issue as an “unsolved problem.”

    Facebook says the winning algorithm in the contest was able to spot “challenging real world examples” of deepfakes with an average accuracy of 65.18 percent. That’s not bad, but it’s not the sort of hit-rate you would want for any automated system.

    Deepfakes have proven to be something of an exaggerated menace for social media. Although the technology prompted much handwringing about the erosion of reliable video evidence, the political effects of deepfakes have so far been minimal. Instead, the more immediate harm has been the creation of nonconsensual pornography, a category of content that’s easier for social media platforms to identify and remove.

    Some 2,114 participants submitted more than 35,000 detection algorithms to the competition. They were tested on their ability to identify deepfake videos from a dataset of around 100,000 short clips. Facebook hired more than 3,000 actors to create these clips, who were recorded holding conversations in naturalistic environments. Some clips were altered using AI by having other actors’ faces pasted on to their videos.

    Researchers were given access to this data to train their algorithms, and when tested on this material, they produced accuracy rates as high as 82.56 percent. However, when the same algorithms were tested against a “black box” dataset consisting of unseen footage, they performed much worse, with the best-scoring model achieving an accuracy rate of 65.18 percent. This shows detecting deepfakes in the wild is a very challenging problem.

    Schroepfer said Facebook is currently developing its own deepfake detection technology separate from this competition. “We have deepfake detection technology in production and we will be improving it based on this context.”

    Schroepfer added that while deepfakes were “currently not a big issue” for Facebook, the company wanted to have the tools ready to detect this content in the future — just in case. Some experts have said the upcoming 2020 election could be a prime moment for deepfakes to be used for serious political influence.

  9. Tomi Engdahl says:

    Artificial intelligence makes blurry faces look more than 60 times sharper

    This AI turns blurry pixelated photos into hyperrealistic portraits that look like real people. The system automatically increases any image’s resolution up to 64x

    Duke University researchers have developed an AI tool that can turn blurry, unrecognizable pictures of people’s faces into eerily convincing computer-generated portraits, in finer detail than ever before.

    Previous methods can scale an image of a face up to eight times its original resolution. But the Duke team has come up with a way to take a handful of pixels and create realistic-looking faces with up to 64 times the resolution, ‘imagining’ features such as fine lines, eyelashes and stubble that weren’t there in the first place.

    The system cannot be used to identify people, the researchers say: It won’t turn an out-of-focus, unrecognizable photo from a security camera into a crystal clear image of a real person. Rather, it is capable of generating new faces that don’t exist, but look plausibly real.

    While the researchers focused on faces as a proof of concept, the same technique could in theory take low-res shots of almost anything and create sharp, realistic-looking pictures, with applications ranging from medicine and microscopy to astronomy and satellite imagery.

    The researchers will present their method, called PULSE, at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR), held virtually from June 14 to June 19.

    The team used a tool in machine learning called a “generative adversarial network,” or GAN, which pits two neural networks trained on the same data set of photos against each other. One network comes up with AI-created human faces that mimic the ones it was trained on, while the other takes this output and decides if it is convincing enough to be mistaken for the real thing. The first network gets better and better with experience, until the second network can’t tell the difference.

    PULSE can create realistic-looking images from noisy, poor-quality input that other methods can’t

    The system can convert a 16×16-pixel image of a face to 1024 x 1024 pixels in a few seconds, adding more than a million pixels, akin to HD resolution. Details such as pores, wrinkles, and wisps of hair that are imperceptible in the low-res photos become crisp and clear in the computer-generated versions.
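
    The adversarial setup described above can be sketched in a few lines of numpy (a toy 1-D illustration with made-up data, not the Duke system): a generator learns to imitate a "real" distribution while a discriminator learns to tell real samples from generated ones, and each update nudges the generator toward output the discriminator can no longer distinguish.

```python
# Toy GAN sketch: generator G(z) = a*z + b tries to imitate samples
# from N(4, 0.5); discriminator D(x) = sigmoid(w*x + c) tries to
# separate real from generated samples. Values here are invented.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def real_batch(n):
    # "Real" data the generator must learn to imitate
    return rng.normal(4.0, 0.5, size=n)

g = np.array([1.0, 0.0])   # generator params (a, b)
d = np.array([0.0, 0.0])   # discriminator params (w, c)

lr, n = 0.02, 64
for _ in range(3000):
    z = rng.normal(size=n)
    fake = g[0] * z + g[1]
    real = real_batch(n)

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    pr = sigmoid(d[0] * real + d[1])
    pf = sigmoid(d[0] * fake + d[1])
    d += lr * np.array([np.mean((1 - pr) * real) - np.mean(pf * fake),
                        np.mean(1 - pr) - np.mean(pf)])

    # Generator step: ascend log D(fake), i.e. try to fool D
    pf = sigmoid(d[0] * fake + d[1])
    g += lr * np.array([np.mean((1 - pf) * d[0] * z),
                        np.mean((1 - pf) * d[0])])

samples = g[0] * rng.normal(size=5000) + g[1]
print(round(float(samples.mean()), 2))  # should sit near the real mean of 4
```

    The real systems differ mainly in scale: PULSE and similar face generators use deep convolutional networks and image data rather than two-parameter affine maps, but the alternating update loop is the same idea.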

  10. Tomi Engdahl says:

    Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared.

    Several deepfake videos have gone viral recently, giving millions around the world their first taste of this new technology: President Obama using an expletive to describe President Trump, Mark Zuckerberg admitting that Facebook’s true goal is to manipulate and exploit its users, Bill Hader morphing into Al Pacino on a late-night talk show.

    While impressive, today’s deepfake technology is still not quite to parity with authentic video footage—by looking closely, it is typically possible to tell that a video is a deepfake. But the technology is improving at a breathtaking pace. Experts predict that deepfakes will be indistinguishable from real images before long.

  11. Tomi Engdahl says:

    Amazon will temporarily disallow the police from using the company’s facial recognition software for a year.


  12. Tomi Engdahl says:

    Microsoft Joins Ban on Sale of Facial Recognition Tech to Police
    Microsoft has joined Amazon and IBM in banning the sale of facial recognition technology to police departments and pushing for federal laws to regulate the technology.

  13. Tomi Engdahl says:

    George Anadiotis / ZDNet:
    Streamlit, an open-source Python app framework geared toward data scientists, raises $21M Series A co-led by Gradient Ventures and GGV Capital

    Streamlit wants to revolutionize building machine learning and data science applications, scores $21 million Series A funding

    Streamlit wants to be for data science what business intelligence tools have been for databases: A quick way to get to results, without bothering much with the details

  14. Tomi Engdahl says:

    The startup making deep learning possible without specialized hardware

    GPUs have long been the chip of choice for performing AI tasks. Neural Magic wants to change that.

  15. Tomi Engdahl says:

    The coronavirus is helping to erode the hype around artificial intelligence.

    AI Isn’t Magical and Won’t Help You Reopen Your Business

    The coronavirus is helping to erode the hype around artificial intelligence; data scientists get the axe and some ‘old-fashioned’ solutions work better.

    What do you do when a sudden break from past trends profoundly reorders the way the world works? If you’re a business, one thing you probably can’t do is turn to existing artificial intelligence.

    To carry out one of its primary applications, predictive analytics, today’s AI requires vast quantities of relevant data. When things change this quickly, there’s no time to gather enough. Many pre-pandemic models for many business functions are no longer useful; some might even point businesses in the wrong direction.

    AI has seemed to many experts like some kind of magic sauce that could be poured over any business process to transform it into a moneymaking Terminator, an unstoppable deliverer of self-driving cars and destroyer of white-collar work.

    It’s clear that AI isn’t progressing as fast as we were once told, and that it won’t be a cure-all.

    It is hardly an AI winter, but a chill is definitely in the air. Businesses for which AI is more of an add-on, as well as struggling startups and smaller firms, are furloughing data scientists previously awarded stratospheric salaries, and complaining they can’t find uses for AI. Suddenly, there’s a vindication of those who have argued that the systems most closely associated with modern AI—ones that can learn from huge pools of data—aren’t as capable as their superfans suggested.

    What’s happening is not so much a reckoning as a ‘rationalization’ of the application of AI in businesses. — Rajeev Sharma, Pactera Edge

    The hype around AI, among those who actually use it, is subsiding. The flip side of this trend is we’re starting to see that, far from being magical, AI is most useful for accomplishing some pretty mundane stuff. We use AI daily, every time we talk to a voice-activated personal assistant or unlock our phones with our faces or fingerprints. Beyond that, for most businesses, academics, public-health researchers and actual rocket scientists, AI is mostly about assisting humans in making decisions.

    The pain for data scientists will likely increase as companies rethink how they spend.

    “[Companies] feel this is a time they can get rid of extra hires or lower performers who are not a good cultural fit,”

    By contrast, the deep-pocketed big tech companies clearly see AI as not merely important but core to their businesses, and plan to keep hiring like crazy. Google Chief Executive Sundar Pichai has said that in the sweep of human history, AI is more important than electricity or fire, and all the Big Five have said they’ll continue to add to their engineering ranks during this downturn, including data scientists and AI experts. Now is a great time to hire them.

    Right now, AI’s top-shelf approach to solving many problems, the so-called deep-learning algorithms, is good at doing things like identifying cats in pictures and beating humans at the strategy game Go. However, these algorithms require enormous quantities of data to train, and are left flat-footed when that data no longer represents the world we live in.

    Skeptics believe the resulting models are “brittle.” That is, rather than resembling the mock-ups of the world that human brains construct, they’re just big engines for finding statistical correlations.

    The pandemic and the current business challenges in applying AI are “a wake-up call about how shitty the AI we’re building is,” says Prof. Marcus.

    Even in good times, small- and medium-size businesses simply don’t have enough data to train useful AI systems, says Prof. Marcus.

    A number of studies comparing new and supposedly improved AI algorithms to “old-fashioned” ones have found they perform no better—and sometimes worse—than systems developed years before.

    Researchers are working to fix the core problem in modern AI—the demand for so much data—but a solution is a long way off.

  16. Tomi Engdahl says:

    What’s been happening with automata and other AI? What concern is so important we need to repeat it over and over? See http://worksnewage.blogspot.com/2019/06/robots-and-artificial-intelligence-four.html.

  17. Tomi Engdahl says:

    Fact no 1: This article (for whatever it is worth), shown to you if you haven’t subscribed to WSJ, involves AI. Fact no 2: AI is powering Tesla cars; that is not a small business. Fact no 3: AI helped drive Dragon into space and facilitated the reentry and landing of its rocket. Fact no 4: AI is making cancer diagnosis possible in 4 minutes instead of half an hour. For me, Facts 1, 2, 3 and 4 are enough to believe it’s magical. Fact no 5: This article is generated for gaining eyes, with no meaningful insight or help offered to those affected.

  18. Tomi Engdahl says:

    Monica Houston compares inferencing MobileNet and EfficientNet-Lite on the Google AI Coral board vs. Avnet’s MaaXBoard and Raspberry Pi.


  19. Tomi Engdahl says:

    AI Tensor Blocks provide accelerated AI compute for common matrix-matrix and vector-matrix multiplications and INT8 inferencing.

    Intel Announces Its First AI-Optimized FPGA, the Stratix 10 NX FPGA

    According to Intel, AI model complexity is doubling every 3.5 months — that’s 10X per year! In order to keep up with machine learning software’s frantic pace, specialized ASICs can be replaced with more flexible FPGAs. The Intel Stratix 10 NX FPGA provides accelerated AI compute via AI Tensor Blocks, which are optimized for common matrix-matrix and vector-matrix multiplications, and INT8 inferencing.

    In-package 3D HBM DRAM allows models to be stored on-chip, and onboard transceivers permit multi-node inferencing with data transfer rates of up to 57.8 Gbps. The Stratix 10 NX FPGA delivers twice the clock frequency performance on up to 70% less power compared to conventional architectures, thanks to Intel’s Hyperflex FPGA Architecture, and has a staggering logic capacity of more than 2,000,000 logic elements for hardware customization.
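
    The “10X per year” figure follows directly from the stated doubling period: a quantity that doubles every 3.5 months compounds over the 12/3.5 doubling periods in a year.

```python
# Sanity-check the compounding: doubling every 3.5 months means
# 12 / 3.5 doublings per year.
doublings_per_year = 12 / 3.5
growth_per_year = 2 ** doublings_per_year
print(round(growth_per_year, 1))  # -> 10.8, i.e. roughly "10X per year"
```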

  20. Tomi Engdahl says:

    MIT CSAIL researchers have proposed a new approach (RF-ReID) that sidesteps the limitations of visible-light imagery by harnessing radio frequency signals.

    Radio Killed the Video Stream

    Re-identify people across time and place with radio frequency signals.

  21. Tomi Engdahl says:

    Machine learning devs can now run GPU-accelerated code on Windows devices on AMD’s chips, OpenAI applies GPT-2 to computer vision
    Plus: AI for the benefit of humankind group loses a member and more

    Roundup Windows fans can finally train and run their own machine learning models off Radeon and Ryzen GPUs in their boxes, computer vision gets better at filling in the blanks and more in this week’s look at movements in AI and machine learning.

    GPT-2 on images: Transformer models are all the rage right now. They’re typically applied to language to carry out tasks like text generation or question-and-answering. But what happens when they’re applied to computer vision instead?

  22. Tomi Engdahl says:

    Machine learning models trained on pre-COVID data are now completely out of whack, says Gartner
    That AI-powered product and price recommendation engine? Useless now

    Machine learning models built for doing business prior to the COVID-19 pandemic will no longer be valid as economies emerge from lockdowns, presenting companies with new challenges in machine learning and enterprise data management, according to Gartner.

    The research group has reported that “the extreme disruption in the aftermath of COVID-19… has invalidated many models that are based on historical data.”

    Organisations commonly using machine learning for product recommendation engines or next-best-offer, for example, will have to rethink their approach. They need to broaden their machine learning techniques as there is not enough post-COVID-19 data to retrain supervised machine learning models.

    Advanced modelling techniques can help

    In any case the ‘new normal’ is still emerging, making the validity of prediction models a challenge, said Rita Sallam, distinguished research vice president at Gartner.

    “It’s a lot harder to just say those models based on typical data that happened prior to the COVID-19 outbreak, or even data that happened during the pandemic, will be valid. Essentially what we’re seeing is [a] complete shift in many ways in customer expectations, in their buying patterns. Old processing, products, customer needs and wants, and even business models are being replaced. Organisations have to replace them at a pace that is just unprecedented,” she said.

    “Models that are based on extensive historical data; any sort of planning based on past performance and, even some models about customer behaviour [will no longer be valid] because things have changed significantly, and as a result, customers are behaving very differently,” she said.
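
    The invalidation problem Sallam describes is easy to reproduce in miniature (a generic sketch with invented numbers, not Gartner’s data or any vendor’s model): a model fit only on pre-shift history keeps predicting the old pattern after behavior changes.

```python
# Concept-drift sketch: fit a simple demand model on "pre-pandemic"
# data, then evaluate it after the relationship inverts. All numbers
# are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pre-shift world: outcome rises with the input (slope +2)
x_old = rng.uniform(0, 10, 500)
y_old = 2.0 * x_old + 5.0 + rng.normal(0, 1.0, 500)

# Fit a linear model on historical data only
slope, intercept = np.polyfit(x_old, y_old, 1)

# Post-shift world: the relationship has inverted (slope -2)
x_new = rng.uniform(0, 10, 500)
y_new = -2.0 * x_new + 5.0 + rng.normal(0, 1.0, 500)

rmse_old = np.sqrt(np.mean((slope * x_old + intercept - y_old) ** 2))
rmse_new = np.sqrt(np.mean((slope * x_new + intercept - y_new) ** 2))
print(rmse_old < 2.0)    # fits the world it was trained on...
print(rmse_new > 10.0)   # ...and is badly wrong once the world changes
```

    Retraining would fix this toy model in one line, which is exactly the step Gartner says real organisations cannot yet take: there is not enough post-COVID data to retrain on.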

  23. Tomi Engdahl says:

    Janus Rose / VICE:
    1,000+ technologists, including some from MIT, Facebook, Google, urge Springer to not publish a paper that describes a system to predict crime based on faces — Technologists from MIT, Harvard, and Google say research claiming to predict crime based on human faces creates a “tech-to-prison pipeline” that reinforces racist policing.

    Over 1,000 AI Experts Condemn Racist Algorithms That Claim to Predict Crime

    Technologists from MIT, Harvard, and Google say research claiming to predict crime based on human faces creates a “tech-to-prison pipeline” that reinforces racist policing.

  24. Tomi Engdahl says:

    The results are unexpected and… creepy?

    Face Depixelizer Neural Network “Brings Back The Sharpness” Of Photos In Low Resolution, And The Results Are Very Unexpected

    Face Depixelizer based on “PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models” repository. Given a low-resolution input image, Face Depixelizer searches the outputs of a generative model for high-resolution images that are perceptually realistic and downscale correctly.
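
    That latent-space search can be sketched in miniature (a toy linear stand-in for the generator, not the actual PULSE code): rather than sharpening the low-res input directly, descend in latent space until the generated high-res image downscales back to the given low-res input.

```python
# PULSE-style search, in caricature: find a latent z whose generated
# image downscales to the observed low-res image. The "generator" here
# is a random linear map purely for illustration.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "generator": latent z (4,) -> flattened 4x4 image (16,)
W = rng.normal(size=(16, 4))

# Average-pooling matrix D: flattened 4x4 image -> flattened 2x2 image
D = np.zeros((4, 16))
for out_r in range(2):
    for out_c in range(2):
        for dr in range(2):
            for dc in range(2):
                D[out_r * 2 + out_c,
                  (out_r * 2 + dr) * 4 + (out_c * 2 + dc)] = 0.25

M = D @ W                           # low-res output as a function of z
z_true = rng.normal(size=4)
target_lr = M @ z_true              # the observed low-res "photo"

# Gradient descent in latent space on || downscale(G(z)) - target ||^2
z = np.zeros(4)
for _ in range(5000):
    residual = M @ z - target_lr
    z -= 0.02 * 2 * M.T @ residual

final_loss = float(np.sum((M @ z - target_lr) ** 2))
print(final_loss < 0.05)            # recovered image downscales correctly
```

    Note what this implies: many different high-res images downscale to the same low-res input, so the search recovers *a* plausible face, not *the* original face, which is exactly why the results cannot be used for identification.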


  25. Tomi Engdahl says:

    Start-up Helps FPGAs Replace GPUs in AI Accelerators

    AI software startup Mipsology is working with Xilinx to enable FPGAs to replace GPUs in AI accelerator applications using only a single additional command. Mipsology’s “zero effort” software, Zebra, converts GPU code to run on Mipsology’s AI compute engine on an FPGA without any code changes or retraining necessary.

    Xilinx announced today that it is shipping Zebra with the latest build of its Alveo U50 cards for the data center. Zebra already supports inference acceleration on other Xilinx boards, including Alveo U200 and Alveo U250.

    “The level of acceleration that Zebra brings to our Alveo cards puts CPU and GPU accelerators to shame,” said Ramine Roane, Xilinx’s vice president of marketing. “Combined with Zebra, Alveo U50 meets the flexibility and performance needs of AI workloads and offers high throughput and low latency performance advantages to any deployment.”

    FPGAs historically were seen as notoriously difficult to program for non-specialists, but Mipsology wants to make FPGAs into a plug-and-play solution that is as easy to use as a CPU or GPU. The idea is to make it as easy as possible to switch from other types of acceleration to FPGA.

    “The best way to see [Mipsology] is that we do the software that goes on top of FPGAs to make them transparent, in the same way that Nvidia did with CUDA and cuDNN to make the GPU completely transparent for AI users,” said Mipsology CEO Ludovic Larzul in an interview with EE Times.

    Crucially, this can be done by non-experts, without deep AI expertise or FPGA skills, as no model retraining is needed to transition.

    “Ease of use is very important, because when you look at people’s AI projects, they often don’t have access to the AI team who designs the neural network,” Larzul said. “Typically if someone puts in place a system of robots, or a video surveillance system… they have some other teams or other parties developing the neural networks and training them. And once they get [the trained model], they don’t want to change it because they don’t have the expertise.”

    Versus Vitis
    Why would Xilinx support third-party software when it already has a comprehensive solution intended to make FPGAs accessible for both data scientists and software developers (namely, Vitis)?

    “The pitch in one sentence is: we are doing better,” Larzul said. “Another sentence would be: ours works.”

    Mipsology does not use any part of Vitis or link with it in any way, nor does it use XDNN, Xilinx’s neural network accelerator engine. Mipsology has its own compute engine within Zebra, which supports customers’ existing convolutional neural network (CNN) models, unlike XDNN, which Larzul said has support for plenty of demos but is less well-suited to custom neural networks. This, he said, made getting custom networks up and running with XDNN “painful.” While XDNN can compete in applications where there is no threat from GPUs, Zebra is intended to enable FPGAs to take on GPUs head-on based on performance, cost and ease of use.

    Most customers’ motivation to change from GPU solutions is cost, Larzul said.

    “They want to lower the cost of the hardware, but don’t want to have to redesign the neural network,” he said. “There is a non-recurring cost [that’s avoided] because we are able to replace GPUs transparently, and there is no re-training or modification of the neural network.”

    FPGAs also offer reliability, in part because they are less aggressive on silicon real estate and often run cooler than other accelerator types, including GPUs.

    “Total cost of ownership is not just the price of the board,” Larzul said. “There is also the price of making sure the system is up and running.”

    Zebra is also aiming to make FPGAs compete on performance. While FPGAs typically offer fewer TOPS (tera operations per second) than other accelerators, they are able to use those TOPS more efficiently thanks to Zebra’s carefully designed compute engine, Larzul said.

    “That’s something that most of the ASIC start-ups accelerating AI have forgotten — they are doing a very big piece of silicon, trying to pack in more TOPS, but they haven’t thought about how you map your network on that to be efficient,” he said, noting that Zebra’s FPGA-based engine is able to process more images per second than a GPU with 6x the amount of TOPS.

    Mipsology has 12 patents pending and works closely with Xilinx; its software is also compatible with third-party accelerator cards such as Western Digital small form factor (SFF U.2) cards and Advantech cards like the Vega-4001.

  26. Tomi Engdahl says:

    AI researchers condemn predictive crime software, citing racial bias and flawed methods

    A collective of more than 1,000 researchers, academics and experts in artificial intelligence is speaking out against soon-to-be-published research that claims to use neural networks to “predict criminality.” At the time of writing, more than 50 employees working on AI at companies like Facebook, Google and Microsoft had signed an open letter opposing the research and imploring its publisher to reconsider.


  27. Tomi Engdahl says:

    Kashmir Hill / New York Times:
    In January, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit, in what may be the first known case of its kind — In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

    Wrongfully Accused by an Algorithm

    In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

  28. Tomi Engdahl says:

    ML Opening New Doors For FPGAs

    Programmability shifts some of the burden from hardware engineers to software developers for ML applications.

  29. Tomi Engdahl says:

    Machine Learning in Malware Analysis

    Machine learning experts and malware analysts have suggested many different deep network architectures to detect both known and unknown malware. Proposed architectures include CNN models, Boltzmann machines, and hybrid methods.

  30. Tomi Engdahl says:

    Despite what you may have seen on your favorite TV crime drama, you can’t use a clear image generated from a blurry or pixelated original to identify someone. But the makers of a new upsampling algorithm have other applications in mind.

    Making Blurry Faces Photorealistic Goes Only So Far

    Duke University researchers have created an AI algorithm (“PULSE”) that pixelates an uploaded picture of a human face and then explores the range of possible (computer-generated) human faces that could produce that pixelated face.

    For starters, Rudin said, “We kind of proved that you can’t do facial recognition from blurry images because there are so many possibilities. So zoom and enhance, beyond a certain threshold level, cannot possibly exist.”

    However, Rudin added that “PULSE,” the Python module her group developed, could have wide-ranging applications beyond just the possibly problematic “upsampling” of pixelated images of human faces. (Though it’d only be problematic if misused for facial recognition purposes. Rudin said there are no doubt any number of unexplored artistic and creative possibilities for PULSE, too.)

  31. Tomi Engdahl says:

    9 Key Machine Learning Algorithms Explained in Plain English

    Machine learning is changing the world. Google uses machine learning to suggest search results to users. Netflix uses it to recommend movies for you to watch. Facebook uses machine learning to suggest people you may know.

  32. Tomi Engdahl says:

    Sketch a Face to Crack the Case
    DeepFaceDrawing is a deep learning framework that generates synthetic face images from rough sketches.

  33. Tomi Engdahl says:

    10 Lesser-Known Python Libraries for Machine Learning
    Ten tools to simplify the machine learning process that you might not know about

  34. Tomi Engdahl says:

    Frederic Lardinois / TechCrunch:
    AWS makes CodeGuru, a set of tools that use machine learning to automatically review code for bugs and suggest potential optimizations, generally available

    CodeGuru, AWS’s AI code reviewer and performance profiler, is now generally available

    AWS today announced that CodeGuru, a set of tools that use machine learning to automatically review code for bugs and suggest potential optimizations, is now generally available. The tool launched into preview at AWS re:Invent last December.

    CodeGuru consists of two tools, Reviewer and Profiler, and those names pretty much describe exactly what they do. To build Reviewer, the AWS team actually trained its algorithm with the help of code from more than 10,000 open source projects on GitHub, as well as reviews from Amazon’s own internal codebase.


    Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations for improving code quality and identifying an application’s most expensive lines of code. Integrate Amazon CodeGuru into your existing software development workflow to get built-in code reviews that detect the most expensive lines of code and suggest optimizations to reduce costs.

  35. Tomi Engdahl says:

    Disney Research neural face-swapping technique can provide photorealistic, high-resolution video

    A new paper published by Disney Research in partnership with ETH Zurich describes a fully automated, neural network-based method for swapping faces in photos and videos — the first such method that produces high-resolution, megapixel final results, according to the researchers. That could make it suited for use in film and TV, where high-resolution results are key to ensuring that the final product is good enough to reliably convince viewers as to their reality.

    The researchers specifically intend this tech for use in replacing an existing actor’s performance with a substitute actor’s face, for instance when de-aging someone or increasing their age.

  36. Tomi Engdahl says:

    MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs
    Top uni takes action after El Reg highlights concerns by academics


  37. Tomi Engdahl says:

    Are Better Machine Training Approaches Ahead?

    Why unsupervised, reinforcement and Hebbian approaches are good for some things, but not others.

    We live in a time of unparalleled use of machine learning (ML), but it relies on one approach to training the models that are implemented in artificial neural networks (ANNs) — so named because they’re not neuromorphic. But other training approaches, some of which are more biomimetic than others, are being developed. The big question remains whether any of them will become commercially viable.

    ML training frequently is divided into two camps — supervised and unsupervised. As it turns out, the divisions are not so clear-cut. The variety of approaches that exists defies neat pigeonholing. Yet the end goal remains training that is easier and uses far less energy than what we do today.

    “The amount of computation for training is doubling every three to four months. That’s unsustainable,”

    Where we are today: gradient descent
    “The one [training method] that everyone’s looking at is supervised learning,” said Elias Fallon, software engineering group director for the Custom IC & PCB Group at Cadence, referring to the approach in wide use today. “The key there is that I have labels.”

    Supervised training starts with a random model and then, through trial and error, tweaks the model until it gives acceptable results. For any given problem, there’s no one unique “correct” or “best” model. Slight deviations in the training technique — ones as innocuous as changing the order of the training samples — will generate different models. And yet, as long as all the different models operate with the same accuracy, they are all equally valid.

    An incorrect decision during training will be detected in the final layer of the network, which contains the percentage likelihood of each possible category. Because those values are numeric, the error can be calculated. At this point, the “gradient descent” algorithm determines how the weights would need to change in the next-to-last layer in order for the last layer to achieve the correct response. From that next-to-last layer, you then can move back another layer to see what would need to change in that layer in order for the next-to-last layer to be correct. This “back-propagation” continues until all weights have been changed.

    After this process is done on the first sample, the network is correct only for that sample. Then the next sample is presented, and the process repeats. The intent is that, after a very large number of samples that are randomized and representative enough to be free of bias, the adjustments to the weights will become smaller and smaller with each succeeding sample, ultimately converging on a set of weights that lets the network recognize new samples it hasn’t seen before with acceptable accuracy.
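    The gradient-descent and back-propagation loop described above can be sketched in a few lines of NumPy. This is a toy illustration only: an invented two-layer network learning XOR, with made-up layer sizes and learning rate, not any production training code.

```python
import numpy as np

# Toy supervised training: start from a random model, then repeatedly
# compute the error at the last layer and push it back one layer at a
# time, adjusting weights by gradient descent. All sizes are invented.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # samples
y = np.array([[0], [1], [1], [0]], dtype=float)              # labels (XOR)

W1 = rng.normal(0, 1, (2, 8))   # random initial model
W2 = rng.normal(0, 1, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: the final layer holds the network's numeric output.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Error at the last layer, then back-propagated to the layer before it.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates for both layers.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

final_loss = float(np.mean((out - y) ** 2))
```

    After enough passes, the weight adjustments shrink and the loss converges, matching the convergence behavior described above.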

    “There’s lots of variation on the gradient-descent approach,”

    This training technique has seen enormous success for a wide range of applications, but its big downside is that it demands a huge amount of energy, and the calculations require enormous amounts of computation. For each sample, millions or billions of weights must be calculated, and there may be thousands of samples.

    In addition, the gradient-descent approach to training bears no resemblance to what happens in animal brains. “Back-propagation is not biologically plausible,”

    Peter van der Made, Brainchip CTO and founder, agreed. “Back-propagation is totally artificial and has no equivalent in biology. [It] may be useful in creating fixed features, but it cannot be used for real-time learning. Because of its successive approximation method, it needs millions of labeled samples to make a decision if the network is correct or not and to readjust its weights.”

    It’s a convenient numerical approach, but it requires the ability to calculate the “descent” — essentially a derivative — in order to be effective. As far as we can tell, there’s no such parallel activity in the brain.

    By itself, that’s not a huge problem. If it works, then it works. But the search for a more biomimetic approach continues because the brain can do all of this with far, far less energy than we require in machines. Therefore, researchers remain tantalized by the possibilities of doing more with less energy.

    Unsupervised learning
    The effort of labeling samples can be removed if instead we can perform unsupervised learning. Such learning still requires samples, but those samples will have no labels — and therefore, there is no one specifically saying what the right answer is. “The big distinction is whether I have labeled outputs or not,” said Fallon.

    With this approach, algorithms attempt to find commonalities in the data sets using techniques like clustering. As Fallon noted, this amounts to, “Let’s figure out how to group things.” These groupings act as inferred labels. While this may sound less satisfying than the rigor of saying, “That is a cat,” in the end, the category of “cats” is nothing more than a cluster of images that share the characteristics of a cat. Of course, we like to put names on categories, which the manual labeling process permits. But unsupervised clustering may result in naturally occurring groupings that may not correspond to anything with a simple name. And some groupings may have more value than others.

    The clustering can be assisted by what’s referred to as “semi-supervised” learning, which mixes the approaches of both unsupervised and supervised learning. So there will be a few samples that are labeled, but many more that are not. The labeled samples can be thought of as nucleating sites in crystallization — they give the unlabeled samples some examples around which the clustering can proceed.
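    As a concrete sketch of clustering producing inferred labels, here is a minimal k-means loop. The data, the deterministic initialization, and all parameters are invented purely for illustration.

```python
import numpy as np

# Toy k-means: group unlabeled 2-D samples so that each cluster acts as
# an inferred label. Two synthetic blobs stand in for "natural groupings".

rng = np.random.default_rng(1)
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2)),   # blob A
    rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2)),   # blob B
])

def kmeans(points, init_idx, iters=20):
    # Deterministic init for the sketch: one starting center per blob.
    centers = points[init_idx].astype(float)
    for _ in range(iters):
        # Assign each sample to its nearest center (the inferred label).
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        centers = np.array([points[labels == k].mean(axis=0)
                            for k in range(len(centers))])
    return labels, centers

labels, centers = kmeans(data, init_idx=[0, len(data) - 1])
```

    No one ever tells the algorithm what the groups mean; the two clusters simply emerge from the structure of the data, which is the essence of the unsupervised approach described above.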

    “Auto-encoders” can provide a way to effectively label or characterize features in unlabeled samples.

    In general, unsupervised learning is a very broad category with lots of possibilities, but there doesn’t appear to be a clear path yet toward commercial viability.

    Reinforcement learning
    Yet a third category of training is “reinforcement learning,” and it already is seeing limited commercial use. It’s more visible in algorithms trained to win games like Go. It operates through a reward system, with good decisions reinforced and bad ones discouraged. It’s not a new thing, but it’s also not well established yet. It gets its own category since, as Cadence’s Fallon noted, “[Reinforcement] doesn’t really fall into the supervised or unsupervised [distinction].”

    “[A reward] may not provide the right answer, but it’s a push in the right direction,” said Eliasmith.

    The environment matters, however. “[Reinforcement] can be useful if you have a notion of the environment and can provide a positive or negative reward,” said Fallon. The big challenge here is the reward system. This system operates at a high level, so what constitutes a reward can vary widely, and it will be very application-dependent.

    “Reward learning is derived from animal behavior,”

    “Biology hasn’t solved it perfectly.”

    There are daily examples of humans and other animals mis-associating cause and effect. But in general, reinforcement learning may end up being a mix of offline and online training. This is particularly true for robotics applications, where you can’t think of every possible scenario during offline training. Incremental learning must be possible as the machines go through their paces and encounter unforeseen situations.

    Reinforcement learning tends to be good at solving control problems — such as robotics. Google used it to manage data-center cooling and cut the energy used for cooling by 40%. This approach shows some promise for commercial viability.
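    The reward-driven loop can be illustrated with tabular Q-learning on a made-up five-state corridor, where only the rightmost state pays a reward. All constants below are invented for the sketch; real control problems use far richer environments.

```python
import numpy as np

# Toy tabular Q-learning. Moving right from state 3 reaches the goal
# (state 4) and earns the only reward, so over many episodes the reward
# reinforces "right" everywhere, one state at a time.

n_states, n_actions = 5, 2                  # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.3           # learning rate, discount, exploration
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: mostly exploit what is known, sometimes explore.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the goal
        # Good decisions are reinforced; the reward propagates backward.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

policy = Q.argmax(axis=1)   # learned greedy action per state
```

    The reward never says what the right answer is at any single step; it is only, as Eliasmith puts it above, a push in the right direction that the update rule spreads backward through earlier decisions.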

    Hebbian learning and STDP
    While the credit assignment problem may have solutions for some applications, the high-level nature of reinforcement learning remains a limitation.

    Donald Hebb is credited with the notion that, “Neurons that fire together wire together,” although he didn’t literally coin that phrase — just the notion. “This is not exactly what he said, but it’s what we remember,” said Eliasmith. The idea is that, given two neurons, if one fires before the other, a link should be reinforced. If one fires after the other, the link should be weakened. How close in time the two firings are can affect the strength of the reinforcement or weakening. “This is understood to be closer to how neurons actually learn things,” said Fallon.

    “At the lower level, neurons in the brain modify their synaptic weights (learning) by a process known as Spike Time Dependent Plasticity (STDP),”

    “With many neurons learning different patterns, it is possible to learn very complex sets of patterns.” This specifically codifies a timing relationship between neurons and can result in temporal coding within spiking neural networks (SNNs).
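    The pairwise STDP rule described here can be sketched as a simple function: a presynaptic spike shortly before a postsynaptic one strengthens the weight, the reverse order weakens it, and the size of the change decays with the time gap. The amplitudes and time constant below are illustrative, not any vendor's actual parameters.

```python
import numpy as np

# Pairwise STDP sketch: the weight change depends on the sign and size
# of the spike-time difference. A_plus, A_minus and tau are made up.

A_plus, A_minus, tau = 0.1, 0.12, 20.0      # amplitudes, time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        # Pre fired before post: causal pair, potentiate the synapse
        # ("neurons that fire together wire together").
        return A_plus * np.exp(-dt / tau)
    # Post fired first (or simultaneously): depress the synapse.
    return -A_minus * np.exp(dt / tau)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)       # causal pair: weight increases
w += stdp_dw(t_pre=30.0, t_post=25.0)       # anti-causal pair: weight decreases
```

    Because the update uses only the local timing of two neurons’ spikes, no global error signal or back-propagated gradient is needed, which is why this rule is considered closer to how neurons actually learn.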

    STDP allows for training, either supervised or unsupervised, with fewer samples than we require today. Brainchip has leveraged some of the ideas in its SNN

    University of Waterloo’s Eliasmith observed that incremental learning isn’t new. As an example, he noted that Google came up with the BERT neural network, complete with weights, and made it openly available.

    Many other approaches and variants are being tried in research laboratories, most of which are far from a commercial solution. “Some of these other techniques might be better when there’s less data and lower power requirements,”

  38. Tomi Engdahl says:

    TensorFlow Gains Hardware Support

    Hardware support is now available for TensorFlow from NVIDIA and Movidius, intended to accelerate the use of deep neural networks for machine learning applications.

    There are a number of machine learning (ML) architectures that utilize deep neural networks (DNNs), including AlexNet, VGGNet, GoogLeNet, Inception, ResNet, FCN, and U-Net. These in turn run on frameworks like Berkeley’s Caffe, Google’s TensorFlow, Torch, Microsoft’s Cognitive Toolkit (CNTK), and Apache’s mxnet. Of course, support for these frameworks on specific hardware is required to actually run the ML applications.

    Each framework has advantages and disadvantages. For example, Caffe is an easy platform to start with, especially since one of its popular uses is image recognition. It is also fast and often the first framework supported by hardware. TensorFlow tends to be easier to deploy, with simpler model definitions as well as better support for GPUs. There are also accelerators designed specifically for TensorFlow, like Google’s Tensor Processing Unit (TPU). TensorFlow also handles multiple-machine configurations better.

    Two platforms that support TensorFlow are NVIDIA’s Jetson TX2 and Intel’s Movidius chips (Fig. 1). Intel’s TensorFlow support for Movidius is new. It addresses the range of Movidius chips that have been used in DJI’s SPARK drone for tracking user gestures visually for real-time control of the system. The Movidius Neural Compute Stick Software Development Kit (SDK) now supports TensorFlow as well as Caffe frameworks.

  39. Tomi Engdahl says:

    Rebecca Heilweil / Vox:
    Even if government use of facial recognition tech is regulated more strictly, issues will remain due to the ubiquity of the same tech in consumer devices — A growing number of gadgets are scanning your face. — Facial recognition is having a reckoning.

    How can we ban facial recognition when it’s already everywhere?

    A growing number of gadgets are scanning your face.

  40. Tomi Engdahl says:

    Intel and the NSF award US $9 million to research teams working on ways to bring more machine learning power to the wireless edge.


  41. Tomi Engdahl says:

    Adobe tests an AI recommendation tool for headlines and images

    Team members at Adobe have built a new way to use artificial intelligence to automatically personalize a blog for different visitors.

    This tool was built as part of the Adobe Sneaks program, where employees can create demos to show off new ideas, which are then showcased (virtually, this year) at the Adobe Summit.

    So in the demo, the Experience Cloud can go beyond simple A/B testing and personalization, leveraging the company’s AI technology Adobe Sensei to suggest different headlines, images (which can come from a publisher’s media library or Adobe Stock) and preview blurbs for different audiences.

    For example, Chung showed me a mocked-up blog for a tourism company, where a single post about traveling to Australia could be presented differently to thrill-seekers, frugal travelers, partygoers and others. Human writers and editors can still edit the previews for each audience segment, and they can also consult a Snippet Quality Score to see the details behind Sensei’s recommendation.

  42. Tomi Engdahl says:

    Tom Simonite / Wired:
    A look at how AI companies like Synthesia are creating corporate-friendly uses for deepfakes, like creating multilingual, personalized training videos — Coronavirus restrictions make it harder and more expensive to shoot videos. So some companies are turning to synthetic media instead.

    Deepfakes Are Becoming the Hot New Corporate Training Tool

    Coronavirus restrictions make it harder and more expensive to shoot videos. So some companies are turning to synthetic media instead.

  43. Tomi Engdahl says:

    This repo demonstrates an end-to-end architecture for intelligent video analytics using NVIDIA Embedded devices and Microsoft Azure services.

    Intelligent Video Analytics with NVIDIA Jetson and Microsoft

    A repository demonstrating an end-to-end architecture for Intelligent Video Analytics using NVIDIA hardware with Microsoft Azure.

