3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,231 Comments

  1. Tomi Engdahl says:

    Do You Like Robots? How About Crafting with Cardboard? Then Smartibot Is for You!
    https://blog.hackster.io/do-you-like-robots-how-about-crafting-with-cardboard-then-smartibot-is-for-you-50ed9e08080c

    Smartibot, which is currently being crowdfunded on Kickstarter, is an AI-enabled platform that lets you create robots out of cardboard. It uses a central circuit board that contains sensors and motor drivers, and can simultaneously control up to 10 servos and four DC motors. With the companion app, your smartphone can pair with the board to act as either a remote control for the robot, or as an onboard AI brain that can identify faces, animals, and common objects.

    https://www.kickstarter.com/projects/460355237/smartibot-the-worlds-first-ai-enabled-cardboard-ro?ref=home_new_and_noteworthy

    Reply
  2. Tomi Engdahl says:

    My Loopy Robot Entertains and Educates
    https://blog.hackster.io/my-loopy-robot-entertains-and-educates-d59749b9b88a

    If you’re looking for a robot companion for you or your kids, then My Loopy looks like a great place to start. Now on Kickstarter, this AI-powered robot can be used without an app, and has its own quirky personality expressed using a pair of LED eyes, a light-up dome with six RGB LEDs, and a voice speaker.

    Reply
  3. Tomi Engdahl says:

    A Path to Broad AI: 5 Challenges
    In search for AI that performs across tasks, across domains
    https://www.eetimes.com/document.asp?doc_id=1333426

    Try to find a technology conference or trade show where everybody is not talking about artificial intelligence. Go ahead: Try. But not at this week’s Design Automation Conference (DAC).

    The DAC keynote on Tuesday was “AI is the new IT,” offered by Dario Gil, vice president of AI and IBM Q at IBM Research. Gil presented a helicopter’s-eye view of the technology’s current topography, identifying key areas as the industry strives to broaden AI’s turf.

    Narrow AI
    AI, as we know it today, is being applied to language translation, speech transcription, object detection, and face recognition. Gil calls this “a narrow form of AI” in which AI runs a single task in a single domain.

    Nonetheless, AI is already spreading like wildfire across many industry segments. “There are hundreds of applications, and the list is quite long,” said Gil. IBM is tracking AI challenges in a spectrum of applications that range from design automation, industrial, healthcare, and visual inspection to customer care, marketing/business, IoT, and compliance.

    In IC design, for example, machine learning is already used to optimize synthesis flow. Advances in AI can now “automate the decisions of skilled designers,” according to Gil.

    Reply
  4. Tomi Engdahl says:

    4 steps for running a machine learning pilot project
    http://www.ibmbigdatahub.com/blog/4-steps-running-machine-learning-pilot-project

    Running a machine learning pilot project is a great early step on the road to full adoption.

    To get started, you’ll need to build a cross-functional team of business analysts, engineers, data scientists and key stakeholders. From there, the process looks a lot like the scientific method taught in school.
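
    That scientific-method-style loop can be sketched in a few lines. The snippet below is a minimal pilot baseline assuming a tabular classification problem and scikit-learn; the synthetic dataset and model choice are placeholders, not part of the original article.

```python
# Minimal machine-learning pilot loop: baseline model, holdout evaluation,
# and a single metric to decide whether the hypothesis is worth pursuing.
# Assumes scikit-learn is installed; the dataset here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Collect/prepare data (here: a stand-in synthetic dataset)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# 2. Split so the "experiment" is evaluated on unseen data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# 3. Train a simple baseline model first
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4. Evaluate and report the result back to stakeholders
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```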

    Reply
  5. Tomi Engdahl says:

    Top 8 open source AI technologies in machine learning
    https://opensource.com/article/18/5/top-8-open-source-ai-technologies-machine-learning?sc_cid=7016000000127ECAAY

    Take your machine learning to the next level with these artificial intelligence technologies.

    Reply
  6. Tomi Engdahl says:

    Experts Bet on First Deepfakes Political Scandal
    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/experts-bet-on-first-deepfakes-political-scandal

    A quiet wager has taken hold among researchers who study artificial intelligence techniques and the societal impacts of such technologies. They’re betting whether or not someone will create a so-called Deepfake video about a political candidate that receives more than 2 million views before getting debunked by the end of 2018.

    The actual stakes in the bet are fairly small: Manhattan cocktails as a reward for the “yes” camp and tropical tiki drinks for the “no” camp. But the implications of the technology behind the bet’s premise could potentially reshape governments and undermine societal trust in the idea of having shared facts. It all comes down to when the technology may mature enough to digitally create fake but believable videos of politicians and celebrities saying or doing things that never actually happened in real life.

    Reply
  7. Tomi Engdahl says:

    GyoiThon is a Growing Penetration Testing Tool Using Machine Learning.
    https://blog.hackersonlineclub.com/2018/06/gyoithon-next-generation-penetration.html?m=1

    GyoiThon identifies the software installed on a web server (OS, middleware, framework, CMS, etc.) based on its learning data. After that, it executes valid exploits for the identified software using Metasploit. Finally, it generates reports of the scan results. GyoiThon performs all of the above automatically.
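
    As a rough illustration of the identification step only (no exploitation), software can be fingerprinted by matching response headers and body content against known signatures. The signature table below is invented for illustration; GyoiThon's actual learning-based detection is more elaborate.

```python
# Naive sketch of the software-identification step only (no exploitation):
# fetch a page and match response headers/body against known signatures.
# The signature table is invented for illustration; GyoiThon's real
# learning-based detection is more sophisticated.
import requests

SIGNATURES = {
    "Apache":    lambda r: "apache" in r.headers.get("Server", "").lower(),
    "nginx":     lambda r: "nginx" in r.headers.get("Server", "").lower(),
    "PHP":       lambda r: "php" in r.headers.get("X-Powered-By", "").lower(),
    "WordPress": lambda r: "wp-content" in r.text.lower(),
}

def fingerprint(url):
    """Return a list of products whose signature matches the response."""
    resp = requests.get(url, timeout=10)
    return [name for name, test in SIGNATURES.items() if test(resp)]

if __name__ == "__main__":
    print(fingerprint("https://example.com"))
```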

    Reply
  8. Tomi Engdahl says:

    Machine Learning’s Limits
    https://semiengineering.com/machine-learnings-limits-2/

    Experts at the Table, part 2: When errors occur, how and when are they identified and by whom?

    Reply
  9. Tomi Engdahl says:

    IBM’s New Do-It-All Deep Learning Chip
    https://spectrum.ieee.org/tech-talk/semiconductors/processors/ibms-new-doitall-deep-learning-chip

    IBM’s new chip is designed to do both high-precision learning and low-precision inference across the three main flavors of deep learning

    The field of deep learning is still in flux, but some things have started to settle out. In particular, experts recognize that neural nets can get a lot of computation done with little energy if a chip approximates an answer using low-precision math. That’s especially useful in mobile and other power-constrained devices. But some tasks, especially training a neural net to do something, still need precision. IBM recently revealed its newest solution, still a prototype, at the IEEE VLSI Symposia: a chip that does both equally well.

    The disconnect between the needs of training a neural net and having that net execute its function, called inference, has been one of the big challenges for those designing chips that accelerate AI functions. IBM’s new AI accelerator chip is capable of what the company calls scaled precision. That is, it can do both training and inference at 32-, 16-, or even 1- or 2-bits.

    “The most advanced precision that you can do for training is 16 bits, and the most advanced you can do for inference is 2 bits.”
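
    To make "scaled precision" concrete, the sketch below quantizes a float32 weight matrix to progressively fewer bits and measures how much a layer's output changes. It is a NumPy thought experiment, not a description of IBM's chip.

```python
# Illustration of "scaled precision": quantize float32 weights to fewer bits
# and compare the result of a matrix-vector product. This is a NumPy
# thought experiment, not IBM's chip architecture.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)).astype(np.float32)    # trained weights
x = rng.standard_normal(64).astype(np.float32)          # one input vector

def quantize(w, bits):
    """Uniformly quantize w to 2**bits levels over its own range."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    q = np.round((w - lo) / (hi - lo) * levels)          # integer codes
    return q / levels * (hi - lo) + lo                   # dequantized values

for bits in (16, 8, 2):
    err = np.linalg.norm(W @ x - quantize(W, bits) @ x) / np.linalg.norm(W @ x)
    print(f"{bits}-bit weights: relative output error {err:.3f}")
```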

    Reply
  10. Tomi Engdahl says:

    One-Shot Imitation from Watching Videos
    http://bair.berkeley.edu/blog/2018/06/28/daml/

    Learning a new skill by observing another individual, the ability to imitate, is a key part of intelligence in humans and animals. Can we enable a robot to do the same, learning to manipulate a new object simply by watching a human manipulate that object, as in the video?

    Reply
  11. Tomi Engdahl says:

    On Labeled data
    https://medium.com/@TalPerry/on-labeled-data-85fbaf1bdf89

    It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data (…) than to explicitly write the program. A large portion of programmers of tomorrow do not maintain complex software repositories, write intricate programs, or analyze their running times. They collect, clean, manipulate, label, analyze and visualize data that feeds neural networks.

    Reply
  12. Tomi Engdahl says:

    Clean Water AI © GPL3+
    https://create.arduino.cc/projecthub/89559/clean-water-ai-e40806?ref=platform&ref_id=424_recent___&offset=0

    Using AI to detect dangerous bacteria and harmful particles in the water.

    Reply
  13. Tomi Engdahl says:

    Is that AI decision fair? Consumers, regulators look to IT for explanations
    https://enterprisersproject.com/article/2018/7/ai-decision-fair-consumers-regulators-look-it-explanations?sc_cid=7016000000127f3AAA

    Why did you deny that mortgage? Why did you hire that job applicant? As pressure grows to make AI decisions explainable, IT leaders face a big challenge

    Reply
  14. Tomi Engdahl says:

    AI Vision IoT
    https://www.hackster.io/Nyceane/ai-vision-iot-171325

    Using AI to detect and monitor objects, then to connect and record it on the IoT platform.

    We are going to focus specifically on computer vision and Single Shot Detection (SSD) in this sample. To do this, we will be building a nevus, melanoma, and seborrheic keratosis image classifier using a deep learning algorithm, the Convolutional Neural Network (CNN), through the Caffe framework. In this project we will be building an AI vision kit that can be used to count items.
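
    A minimal stand-in for the classifier described above is sketched below in Keras rather than Caffe, purely for brevity. Only the three class labels come from the project description; the input size and layer sizes are placeholders.

```python
# Minimal 3-class CNN classifier in Keras, standing in for the Caffe model
# described in the project. Layer sizes and the 64x64 input are placeholders;
# only the three class labels come from the project description.
import tensorflow as tf
from tensorflow.keras import layers, models

CLASSES = ["nevus", "melanoma", "seborrheic_keratosis"]

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use a labelled image dataset, e.g.:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```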

    Reply
  15. Tomi Engdahl says:

    This Raspberry Pi-based instant camera turns photos into cartoons
    https://www.htxt.co.za/2018/07/04/this-raspberry-pi-based-instant-camera-turns-photos-into-cartoons/

    The device, which does not have an official name, takes the regular pics one might snap with an instant camera and “transforms” them into crudely drawn cartoons thanks to a neural network.

    Leveraging Google’s Quick Draw! dataset, Macnish’s creation is able to identify the picture passed in front of its lens, reinterpret it as a cartoon version, and print it via a thermal printer.

    https://quickdraw.withgoogle.com

    Reply
  16. Tomi Engdahl says:

    AI spots legal problems with tech T&Cs in GDPR research project
    https://techcrunch.com/2018/07/04/european-ai-used-to-spot-legal-problems-in-tech-tcs/?sr_share=facebook&utm_source=tcfbpage

    Technology is the proverbial double-edged sword. And an experimental European research project is ensuring this axiom cuts very close to the industry’s bone indeed by applying machine learning technology to critically sift big tech’s privacy policies — to see whether AI can automatically identify violations of data protection law
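
    One plausible way to frame the task described above is supervised text classification over policy clauses. The sketch below uses TF-IDF features and a linear model on invented example clauses; it does not reproduce the research project's actual corpus or models.

```python
# Toy framing of the task as supervised text classification: label a few
# policy clauses as compliant/problematic, vectorize with TF-IDF, and flag
# new clauses. The training examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [
    "We may share your personal data with unspecified third parties.",
    "You can request deletion of your personal data at any time.",
    "By using the service you consent to any future change of this policy.",
    "We retain data only as long as necessary for the stated purpose.",
]
labels = [1, 0, 1, 0]   # 1 = potentially problematic, 0 = looks compliant

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(clauses, labels)

new_clause = ["Your data may be transferred to partners for undisclosed purposes."]
print("problematic probability:", clf.predict_proba(new_clause)[0][1])
```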

    Reply
  17. Tomi Engdahl says:

    Noam Scheiber / New York Times:
    Algorithms increasingly replace highly-skilled white-collar workers, as seen in online fashion and retail industries and the ascendance of firms like Stitch Fix — One of the best-selling T-shirts for the Indian e-commerce site Myntra is an olive, blue and yellow colorblocked design.

    High-Skilled White-Collar Work? Machines Can Do That, Too
    https://www.nytimes.com/2018/07/07/business/economy/algorithm-fashion-jobs.html

    One of the best-selling T-shirts for the Indian e-commerce site Myntra is an olive, blue and yellow colorblocked design. It was conceived not by a human but by a computer algorithm — or rather two algorithms.

    The first algorithm generated random images that it tried to pass off as clothing. The second had to distinguish between those images and clothes in Myntra’s inventory. Through a long game of one-upmanship, the first algorithm got better at producing images that resembled clothing, and the second got better at determining whether they were like — but not identical to — actual products.

    This back and forth, an example of artificial intelligence at work, created designs whose sales are now “growing at 100 percent.”
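
    The two-algorithm "game" described above is a generative adversarial network (GAN). The sketch below shows the alternating update in miniature on toy vectors; the dimensions and data are placeholders, not Myntra's design system.

```python
# Compact sketch of the two-network "game": a generator produces fake samples,
# a discriminator tries to tell them from real ones, and each update makes the
# other's job harder. Dimensions and data are toys.
import tensorflow as tf
from tensorflow.keras import layers, models

LATENT, DATA_DIM = 8, 32
real_data = tf.random.normal((1024, DATA_DIM), mean=1.0, stddev=0.5)

generator = models.Sequential([layers.Input(shape=(LATENT,)),
                               layers.Dense(16, activation="relu"),
                               layers.Dense(DATA_DIM)])
discriminator = models.Sequential([layers.Input(shape=(DATA_DIM,)),
                                   layers.Dense(16, activation="relu"),
                                   layers.Dense(1)])            # logit output
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-3)
d_opt = tf.keras.optimizers.Adam(1e-3)

for step in range(200):
    noise = tf.random.normal((64, LATENT))
    real = tf.random.shuffle(real_data)[:64]
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = generator(noise)
        d_real, d_fake = discriminator(real), discriminator(fake)
        # Discriminator: label real samples 1 and fakes 0.
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # Generator: make the discriminator call fakes "real".
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```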

    Clothing design is only the leading edge of the way algorithms are transforming the fashion and retail industries. Companies now routinely use artificial intelligence to decide which clothes to stock and what to recommend to customers.

    And fashion, which has long shed blue-collar jobs in the United States, is in turn a leading example of how artificial intelligence is affecting a range of white-collar work as well.

    “A much broader set of tasks will be automated or augmented by machines over the coming years,”

    The fashion industry illustrates how machines can intrude even on workers known more for their creativity than for cold empirical judgments.

    Stitch Fix relies heavily on algorithms to guide its buying decisions — in fact, its business probably could not exist without them.

    Myntra, the Indian online retailer, arms its buyers with algorithms that calculate the probability that an item will sell well based on how clothes with similar attributes — sleeves, colors, fabric — have sold in the past.
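
    The attribute-based sell-probability idea can be framed as a simple supervised model: one-hot encode the attributes of past items and fit a classifier on whether they sold well. The attribute names and data below are invented for illustration.

```python
# One plausible framing of "probability that an item will sell well, based on
# how items with similar attributes sold in the past": one-hot encode the
# attributes and fit a classifier on historical outcomes. Data is invented.
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

history = pd.DataFrame({
    "sleeve": ["short", "long", "short", "long", "short"],
    "colour": ["olive", "blue", "yellow", "olive", "blue"],
    "fabric": ["cotton", "poly", "cotton", "cotton", "poly"],
    "sold_well": [1, 0, 1, 1, 0],
})

encoder = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), ["sleeve", "colour", "fabric"]))
model = make_pipeline(encoder, LogisticRegression())
model.fit(history.drop(columns="sold_well"), history["sold_well"])

candidate = pd.DataFrame({"sleeve": ["short"], "colour": ["olive"], "fabric": ["cotton"]})
print("estimated sell-well probability:", model.predict_proba(candidate)[0][1])
```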

    retailers adept at using algorithms and big data tend to employ fewer buyers and assign each a wider range of categories

    Arti Zeighami, who oversees advanced analytics and artificial intelligence for the H & M group, which uses artificial intelligence to guide supply-chain decisions, said the company was “enhancing and empowering” human buyers and planners, not replacing them. But he conceded it was hard to predict the effect on employment in five to 10 years.

    Experts say some of these jobs will be automated away.

    There is at least one area of the industry where the machines are creating jobs rather than eliminating them, however. Bombfell, Stitch Fix and many competitors in the box-fashion niche employ a growing army of human stylists who receive recommendations from algorithms

    Reply
  18. Tomi Engdahl says:

    Baidu Accelerator Rises in AI
    Kunlun chip claims 30x performance of FPGAs
    https://www.eetimes.com/document.asp?doc_id=1333449

    China’s Baidu followed in Google’s footsteps this week, announcing it has developed its own deep learning accelerator. The move adds yet another significant player to a long list in AI hardware, but details of the chip and when it will be used remain unclear.

    Baidu will deploy Kunlun in its data centers to accelerate machine learning jobs for both its own applications and those of its cloud-computing customers. The services will compete with companies such as Wave Computing and SambaNova who aim to sell to business users appliances that run machine-learning tasks.

    Kunlun delivers 260 tera-operations per second while consuming 100 watts, making it 30 times as powerful as Baidu’s prior accelerators based on FPGAs. The chip is made in a 14nm Samsung process and consists of thousands of cores with an aggregate 512 GBytes/second of memory bandwidth.

    Baidu did not disclose its architecture, but like Google’s Tensor Processing Unit, it probably consists of an array of multiply-accumulate units. The memory bandwidth likely comes from use of a 2.5D stack of logic and the equivalent of two HBM2 DRAM chips.
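
    The headline numbers imply a simple efficiency figure. Note that the announcement does not say whether "30 times as powerful" refers to raw throughput or to efficiency, so the FPGA baseline below is only an implied estimate.

```python
# Back-of-envelope figures from the announcement: 260 TOPS at 100 W.
# Whether "30x" means throughput or efficiency is not stated, so the FPGA
# baseline below is only an implied estimate.
tops = 260           # tera-operations per second
watts = 100
print("efficiency:", tops / watts, "TOPS/W")          # 2.6 TOPS/W
print("implied FPGA baseline:", tops / 30, "TOPS")    # ~8.7 TOPS if 30x is throughput
```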

    AI Silicon Preps for 2018 Debuts
    A dozen startups chase deep learning
    https://www.eetimes.com/document.asp?doc_id=1332877

    Reply
  19. Tomi Engdahl says:

    Intel Pushes AI at Baidu Create
    https://www.eetimes.com/document.asp?doc_id=1333443

    Baidu Create 2018 this week in Beijing, the second annual AI developer conference sponsored by China’s Internet giant, is looking more and more like Google I/O or the Apple Worldwide Developers Conference in Silicon Valley.

    Mobileye & Baidu’s Apollo
    Timed with this year’s Baidu Create, Intel/Mobileye announced that Mobileye’s Responsibility Sensitive Safety (RSS) model will be designed into both Baidu’s open-source Project Apollo and commercial Apollo Drive programs.

    Mobileye sees its proposed RSS model as critical to providing “safety assurance of autonomous vehicle (AV) decision-making” in the era of artificial intelligence.

    More specifically, Mobileye recently acknowledged that because AI-based AVs operate probabilistically, they could make mistakes.

    To mitigate unsafe operations by AI-driven vehicles, Mobileye said that under the RSS model it is installing two separate systems: 1) AI based on reinforcement learning, which proposes the AV’s next action, and 2) a “safety layer” based on a formal deterministic system that can override an “unsafe” AV decision.
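
    The two-layer split described above maps naturally onto a wrapper pattern: a learned policy proposes an action and a deterministic checker can veto it. The sketch below uses a crude following-distance rule as a stand-in; it is not Mobileye's formal RSS model.

```python
# Illustration of the two-layer split: a learned policy proposes an action
# and a deterministic safety layer can override it. The rule below (minimum
# following distance) is a stand-in, not Mobileye's RSS mathematics.
from dataclasses import dataclass

@dataclass
class State:
    gap_to_lead_car_m: float
    own_speed_mps: float

def learned_policy(state: State) -> str:
    """Placeholder for the reinforcement-learning policy's proposal."""
    return "accelerate" if state.own_speed_mps < 30 else "hold"

def safety_layer(state: State, proposed: str) -> str:
    """Deterministic rule: never accelerate when the gap is under 2 seconds."""
    if proposed == "accelerate" and state.gap_to_lead_car_m < 2 * state.own_speed_mps:
        return "brake"
    return proposed

state = State(gap_to_lead_car_m=25.0, own_speed_mps=20.0)
print(safety_layer(state, learned_policy(state)))   # -> "brake"
```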

    Intel told us that Baidu is the first company to “publicly” announce the adoption of the RSS model.

    Apollo’s impact on global auto market
    In one short year, Baidu’s Apollo platform has signed more than 100 companies, while making major advancements by enabling a host of new features that include telematics updates on its open AV platform.

    Reply
  20. Tomi Engdahl says:

    Security Holes In Machine Learning And AI
    https://semiengineering.com/security-holes-in-machine-learning-and-ai/

    A primary goal of machine learning is to use machines to train other machines. But what happens if there’s malware or other flaws in the training data?

    Machine learning, deep learning and artificial intelligence are powerful tools for improving the reliability and functionality of systems and speeding time to market. But the AI algorithms also can contain bugs, subtle biases, or even malware that can go undetected for years, according to more than a dozen experts interviewed over the past several months. In some cases, the cause may be errors in programming, which is not uncommon as new tools or technologies are developed and rolled out. Machine learning and AI algorithms are still being fine-tuned and patched. But alongside of that is a growing fear that it can become an entry point for malware, which becomes a back door that can be cracked open at a later date.

    Even when flaws or malware are discovered, it’s nearly impossible to trace back to the root cause of the problem and fix all of the devices trained with that data. By that point there may be millions of those devices in the market. If patches are developed, not all of those devices will be online all the time or even accessible. And that’s the best-case scenario. The worst-case scenario is that this code is not discovered until it is activated by some outside perpetrator, regardless of whether they planted it there or just stumbled across it.
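
    One basic mitigation for the traceability problem described above is to record a verifiable fingerprint of the exact training data alongside every trained model, so that affected devices can later be identified. The sketch below is a provenance aid only, not a defence against poisoning; the manifest format is invented.

```python
# Record a cryptographic fingerprint of the training data and pipeline
# version alongside every trained model, so devices trained with suspect
# data can later be traced. Provenance sketch only.
import hashlib
import json

def fingerprint_dataset(paths):
    """Hash the concatenated contents of the training files."""
    h = hashlib.sha256()
    for path in sorted(paths):
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def training_manifest(paths, pipeline_version):
    return json.dumps({
        "data_sha256": fingerprint_dataset(paths),
        "pipeline_version": pipeline_version,
    }, indent=2)

# Example: store this manifest next to the exported model artifact.
# print(training_manifest(["train.csv", "labels.csv"], pipeline_version="1.4.2"))
```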

    Reply
  21. Tomi Engdahl says:

    Deep Learning with Open Source Python Software
    https://www.linuxlinks.com/deep-learning-with-python-best-free-software/

    Deep Learning is a subset of Machine Learning that uses multi-layer artificial neural networks to deliver state-of-the-art accuracy in tasks such as object detection, speech recognition, language translation and others. Think of Machine Learning as cutting-edge, and Deep Learning as the cutting-edge of the cutting-edge.
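
    What "multi-layer" means in practice: each layer applies weights, a bias, and a nonlinearity to the previous layer's output. Below is a forward pass of a two-hidden-layer network in plain NumPy with random, untrained weights, just to show the structure.

```python
# Forward pass of a two-hidden-layer network in plain NumPy. Weights are
# random and untrained; the point is only the layer-by-layer structure.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

x = rng.standard_normal(10)                              # input features
W1, b1 = rng.standard_normal((32, 10)), np.zeros(32)     # hidden layer 1
W2, b2 = rng.standard_normal((16, 32)), np.zeros(16)     # hidden layer 2
W3, b3 = rng.standard_normal((3, 16)), np.zeros(3)       # output layer (3 classes)

h1 = relu(W1 @ x + b1)
h2 = relu(W2 @ h1 + b2)
print("class probabilities:", softmax(W3 @ h2 + b3))
```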

    Reply
  22. Tomi Engdahl says:

    Apple combines machine learning and Siri teams under Giannandrea
    https://techcrunch.com/2018/07/10/apple-combines-machine-learning-and-siri-teams-under-giannandrea/?utm_source=tcfbpage&sr_share=facebook

    Apple is creating a new AI/ML team that brings together its Core ML and Siri teams under one leader in John Giannandrea.

    Reply
  23. Tomi Engdahl says:

    How to Build a Self-Flying Drone That Tracks People
    https://www.hackster.io/arun-gandhi/how-to-build-a-self-flying-drone-that-tracks-people-709ebd

    Giving a drone the ability to autonomously follow you using deep learning-based computer vision techniques.

    Reply
  24. Tomi Engdahl says:

    The Internet of… Power Outlets?
    https://blog.hackster.io/the-internet-of-power-outlets-ac8fcacde48d

    This system is designed to distinguish between benign power surges when an appliance starts up and actual dangerous situations. It currently runs on a Raspberry Pi 3, using a USB sound card and current clamp for data acquisition and processing.

    MIT engineers build smart power outlet
    https://news.mit.edu/2018/mit-engineers-build-smart-power-outlet-0615

    Design can “learn” to identify plugged-in appliances, distinguish dangerous electrical spikes from benign ones.

    The problem with today’s arc-fault detectors, according to a team of MIT engineers, is that they often err on the side of being overly sensitive, shutting off an outlet’s power in response to electrical signals that are actually harmless.

    In this case, the team’s machine-learning algorithm is programmed to determine whether a signal is harmful or not by comparing a captured signal to others that the researchers previously used to train the system.
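
    The approach described, comparing a captured signal against previously labelled examples, can be sketched as feature extraction plus nearest-neighbour classification. The waveforms, features, and labels below are synthetic placeholders, not the MIT team's data or feature set.

```python
# Sketch of "compare a captured signal to previously labelled examples":
# turn each current waveform into spectral features and use a nearest-
# neighbour classifier. Waveforms and labels are synthetic placeholders.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def features(waveform):
    """Magnitude of the first few FFT bins as a crude spectral signature."""
    return np.abs(np.fft.rfft(waveform))[:8]

# Synthetic training set: benign start-up surges vs. arc-fault-like noise.
benign = [np.sin(np.linspace(0, 20, 256)) * 3 + rng.normal(0, 0.1, 256) for _ in range(20)]
faulty = [rng.normal(0, 1.5, 256) for _ in range(20)]
X = np.array([features(w) for w in benign + faulty])
y = np.array([0] * 20 + [1] * 20)            # 0 = benign, 1 = dangerous

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
new_signal = rng.normal(0, 1.5, 256)
print("dangerous" if clf.predict([features(new_signal)])[0] else "benign")
```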

    Reply
  25. Tomi Engdahl says:

    Does the brain store information in discrete or analog form?
    https://www.technologyreview.com/s/611165/does-the-brain-store-information-in-discrete-or-analog-form/

    New evidence in favor of a discrete form of data storage could change the way we understand the brain and the devices we build to interface with it.

    Reply
  26. Tomi Engdahl says:

    Robots do not destroy employment, politicians do
    https://www.dlacalle.com/en/robots-do-not-destroy-employment-politicians-do/

    I’m not worried about artificial intelligence, I’m terrified of human stupidity.

    The debate about technology and its role in society that we need to have is being used to deceive citizens and scare them about the future, so that they accept submitting to politicians who cannot and will not protect us from the challenges of robotization.

    However, there are many studies that tell us that in 50 years the vast majority of work will be done by robots. What can we do?

    We have lived through the fallacies of dystopian estimates for decades.

    Reply
  27. Tomi Engdahl says:

    Facebook’s DensePose Tech Raises Concerns About Potential Misuse
    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/surveillance-concerns-follow-facebooks-densepose-tech

    In early 2018, Facebook’s AI researchers unveiled a deep learning system that can transform 2D photo and video images of people into 3D mesh models of those human bodies in motion. Last month, Facebook publicly shared the code for its “DensePose” technology, which could be used by Hollywood filmmakers and augmented reality game developers—but maybe also by those seeking to build a surveillance state.

    DensePose goes beyond basic object recognition. Besides detecting humans in pictures, it can also make 3D models of their bodies by estimating the positions of their torsos and limbs. Those models can then enable the technology to create real-time 3D recreations of human movement in 2D videos.

    “As a community we—including organizations like OpenAI—need to be better about dealing publicly with the information-hazards of releasing increasingly capable systems, lest we enable things in the world that we’d rather not be responsible for,” Clark said.

    Reply
  28. Tomi Engdahl says:

    Let’s Shape AI Before AI Shapes Us
    https://spectrum.ieee.org/robotics/artificial-intelligence/lets-shape-ai-before-ai-shapes-us

    Will today’s outsized fears of AI become fodder for tomorrow’s computer comedy?

    Past AI scares now seem silly in hindsight. In the 1950s, building on excitement over the advent of digital computers, scientists foresaw machines that would instantly translate Russian to English. Rather than 5 years, machine translation took more than 50. “Expert systems” similarly have experienced a long gestation, and even now these programs, built around knowledge gained from human experts, deliver little. Meanwhile, HAL, the Terminator, Ava, and other computer-generated rivals remain the stuff of Hollywood.

    In recent years, some claims for AI seem to have been realized.

    Reply
  29. Tomi Engdahl says:

    AI Can Now Fix Your Grainy Photos by Only Looking at Grainy Photos
    https://news.developer.nvidia.com/ai-can-now-fix-your-grainy-photos-by-only-looking-at-grainy-photos/

    July 9, 2018
    What if you could take your photos that were originally taken in low light and automatically remove the noise and artifacts? Have grainy or pixelated images in your photo library and want to fix them? This deep learning-based approach has learned to fix photos by simply looking at examples of corrupted photos only.

    The work was developed by researchers from NVIDIA, Aalto University, and MIT, and is being presented at the International Conference on Machine Learning in Stockholm, Sweden this week.

    Recent deep learning work in the field has focused on training a neural network to restore images by showing example pairs of noisy and clean images. The AI then learns how to make up the difference. This method differs because it only requires two input images with the noise or grain.
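
    The training twist described above can be shown in miniature: if the regression targets are themselves noisy copies, an L2-trained network still converges toward the clean signal because the noise averages out. The tiny 1-D example below is illustrative only, not NVIDIA's image-restoration network.

```python
# Noisy-targets training in miniature: both inputs and targets are noisy
# copies of the same clean signal, yet the L2-trained network converges
# toward the clean signal because the noise averages out.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 64)).astype("float32")

# Each training pair is (noisy copy A, noisy copy B) of the same clean signal.
noisy_in = clean + rng.normal(0, 0.3, (2000, 64)).astype("float32")
noisy_target = clean + rng.normal(0, 0.3, (2000, 64)).astype("float32")

model = models.Sequential([layers.Input(shape=(64,)),
                           layers.Dense(128, activation="relu"),
                           layers.Dense(64)])
model.compile(optimizer="adam", loss="mse")
model.fit(noisy_in, noisy_target, epochs=5, batch_size=64, verbose=0)

test_noisy = clean[None, :] + rng.normal(0, 0.3, (1, 64)).astype("float32")
restored = model.predict(test_noisy, verbose=0)[0]
print("error before:", float(np.mean((test_noisy[0] - clean) ** 2)))
print("error after :", float(np.mean((restored - clean) ** 2)))
```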

    Reply
  30. Tomi Engdahl says:

    AI Vision IoT
    https://www.hackster.io/Nyceane/ai-vision-iot-171325

    Using AI to detect and monitor objects, then to connect and record it on the IoT platform.

    We are going to focus specifically on computer vision and Single Shot Detection (SSD) in this sample. To do this, we will be building a nevus, melanoma, and seborrheic keratosis image classifier using a deep learning algorithm, the Convolutional Neural Network (CNN), through the Caffe framework. In this project we will be building an AI vision kit that can be used to count items.

    The equipment needed for this project is very simple: you can either do it with your computer and a USB Movidius Neural Computing Stick, or build it with embedded computing on IoT devices like these:

    Up2 Board
    Vision Kit Camera
    Movidius PCIe Add-on (Or USB Neural Computing Stick)
    A screen or monitor
    WIZ750SR ETH to Serial Connector (this is an option to control AI selection through local telnet)
    Helium Atom and Helium Element

    Reply
  31. Tomi Engdahl says:

    Jetson TX2 TensorFlow, OpenCV & Keras Install
    https://www.hackster.io/wilson-wang/jetson-tx2-tensorflow-opencv-keras-install-b74e40

    A tutorial about setting up Jetson TX2 with TensorFlow, OpenCV, and Keras for deep learning projects.
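
    Once such a setup is in place, a quick sanity check is to import each library, print its version, and confirm TensorFlow can see the GPU. The snippet assumes the packages the tutorial installs; the GPU query shown is the TensorFlow 2.x API.

```python
# Quick sanity check after installation: import each library, print its
# version, and confirm TensorFlow can see the Jetson's GPU. Assumes the
# packages were installed as the tutorial describes.
import tensorflow as tf
import cv2
import keras

print("TensorFlow:", tf.__version__)
print("OpenCV:    ", cv2.__version__)
print("Keras:     ", keras.__version__)
# TensorFlow 2.x API; older TF 1.x installs would use tf.test.is_gpu_available().
print("GPU devices:", tf.config.list_physical_devices("GPU"))
```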

    Reply
  32. Tomi Engdahl says:

    How to Make an Artificial Neural Net With DNA
    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/neural-net-dna

    An artificial neural network made of DNA can recognize numbers written using molecules, a new study finds.

    These new findings suggest that DNA neural networks could also recognize other patterns of molecules, such as ones signaling disease, researchers add.

    Reply
  33. Tomi Engdahl says:

    A new hope: AI for news media
    https://techcrunch.com/2018/07/12/a-new-hope-ai-for-news-media/?guccounter=1

    To put it mildly, news media has been on the sidelines in AI development. As a consequence, in the age of AI-powered personalized interfaces, the news organizations don’t anymore get to define what’s real news, or, even more importantly, what’s truthful or trustworthy. Today, social media platforms, search engines and content aggregators control user flows to the media content and affect directly what kind of news content is created. As a result, the future of news media isn’t anymore in its own hands. Case closed?

    The (Death) Valley of news digitalization
    There’s a history: News media hasn’t been quick or innovative enough to become a change maker in the digital world.

    Reply
  34. Tomi Engdahl says:

    A look at open source image recognition technology
    https://opensource.com/article/18/5/state-of-image-recognition?sc_cid=7016000000127ECAAY

    Image recognition technology promises great potential in areas from public safety to healthcare.

    Reply
  35. Tomi Engdahl says:

    As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation
    https://techcrunch.com/2018/07/13/as-facial-recognition-technology-becomes-pervasive-microsoft-yes-microsoft-issues-a-call-for-regulation/?utm_source=tcfbpage&sr_share=facebook

    Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own.

    And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is causing some of the companies that stand to benefit the most to call out to government to provide some guidelines on how they can use the incredibly powerful tools they’ve created.

    Reply
  36. Tomi Engdahl says:

    AI hardware acceleration needs careful requirements planning
    https://www.edn.com/design/systems-design/4460872/AI-hardware-acceleration-needs-careful-requirements-planning?utm_source=Aspencore&utm_medium=EDN&utm_campaign=social

    In many ways the dive into AI acceleration resembles the DSP gold rush of the late 90s and early 2000s. As wired and wireless communications took off, the rush was on to offer the ultimate DSP co-processor to handle baseband processing. Like DSP coprocessors, the goal with AI accelerators is to find the fastest, most energy-efficient means of performing the computations required.

    The mathematics behind neural network processing involves statistics, multivariable calculus, linear algebra, numerical optimization, and probability. While complex, it’s also highly parallelizable. In fact, it’s embarrassingly parallelizable, meaning it’s easily broken down into parallel paths with no branches or dependencies (unlike distributed computing), before the outputs of the paths are reassembled and the output produced.

    There are various neural network algorithms, with convolutional neural networks (CNNs) being particularly adept at tasks such as object recognition — filtering to strip out and identify objects of interest in an image. CNNs take in data as multidimensional matrices, called tensors.
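
    "Embarrassingly parallel" in concrete terms: the rows of a layer's matrix-vector product can be computed independently and simply concatenated, with no communication between the paths. A small NumPy sketch:

```python
# The rows of a layer's matrix-vector product are computed on independent
# workers and simply concatenated, with no communication between the paths.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 512))     # layer weights
x = rng.standard_normal(512)             # layer input

def partial_product(row_block):
    """Each worker handles an independent block of output rows."""
    return W[row_block] @ x

blocks = np.array_split(np.arange(W.shape[0]), 4)      # 4 independent paths
with ThreadPoolExecutor(max_workers=4) as pool:
    pieces = list(pool.map(partial_product, blocks))

y_parallel = np.concatenate(pieces)                    # reassemble the outputs
assert np.allclose(y_parallel, W @ x)
print("parallel result matches direct product:", True)
```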

    Reply
  37. Tomi Engdahl says:

    Intelligent Machines
    Evolutionary algorithm outperforms deep-learning machines at video games
    https://www.technologyreview.com/s/611568/evolutionary-algorithm-outperforms-deep-learning-machines-at-video-games/?utm_medium=social&utm_source=facebook.com&utm_campaign=owned_social

    Neural networks have garnered all the headlines, but a much more powerful approach is waiting in the wings.

    With all the excitement over neural networks and deep-learning techniques, it’s easy to imagine that the world of computer science consists of little else. Neural networks, after all, have begun to outperform humans in tasks such as object and face recognition and in games such as chess, Go, and various arcade video games.

    These networks are based on the way the human brain works. Nothing could have more potential than that, right?

    Not quite. An entirely different type of computing has the potential to be significantly more powerful than neural networks and deep learning. This technique is based on the process that created the human brain—evolution. In other words, a sequence of iterative change and selection that produced the most complex and capable machines known to humankind—the eye, the wing, the brain, and so on. The power of evolution is a wonder to behold.
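
    The "iterative change and selection" loop is easy to show in its simplest form: mutate a population of candidate solutions, score them with a fitness function, and keep the best. The toy fitness below (match a hidden target vector) stands in for a game score; it is not the paper's setup.

```python
# Simplest possible evolutionary loop: mutate a population, score it with a
# fitness function, keep the fittest. The toy fitness (match a hidden target
# vector) stands in for a game score.
import numpy as np

rng = np.random.default_rng(0)
TARGET = rng.standard_normal(20)                 # unknown "ideal" parameters

def fitness(candidate):
    return -np.sum((candidate - TARGET) ** 2)    # higher is better

population = [rng.standard_normal(20) for _ in range(50)]
for generation in range(200):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:10]                        # selection: keep the fittest
    population = [p + rng.normal(0, 0.1, 20)     # variation: mutate the parents
                  for p in parents for _ in range(5)]

best = max(population, key=fitness)
print("best fitness after evolution:", fitness(best))
```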

    Reply
  38. Tomi Engdahl says:

    What machine learning means for software development
    https://www.oreilly.com/ideas/what-machine-learning-means-for-software-development

    “Human in the loop” software development will be a big part of the future.

    Reply
  39. Tomi Engdahl says:

    At the Heart of Intelligence: Futurist Gerd Leonhard and Telia Finland film collaboration
    https://m.youtube.com/watch?feature=share&v=QlVCj8DFLXM

    Reply
  40. Tomi Engdahl says:

    Data mining reveals fundamental pattern of human thinking
    https://www.technologyreview.com/s/611640/data-mining-reveals-fundamental-pattern-of-human-thinking/?utm_campaign=social_button&utm_source=facebook&utm_medium=social&utm_content=2018-07-21

    Word frequency patterns show that humans process common and uncommon words in different ways, with important consequences for natural-language processing.

    Reply
  41. Tomi Engdahl says:

    Open source image recognition with Luminoth
    https://opensource.com/article/18/5/getting-started-luminoth?sc_cid=7016000000127ECAAY

    Luminoth helps computers identify what’s in a photograph. The latest update offers new models and pre-trained checkpoints.

    Reply
  42. Tomi Engdahl says:

    Jon Christian / Motherboard:
    As Google translates nonsensical messages into garbled religious prophecies, researchers blame AI algorithm trained on religious texts as some users see demons — Google Translate is moonlighting as a deranged oracle—and experts say it’s likely because of the spooky nature of neural networks.

    Why Is Google Translate Spitting Out Sinister Religious Prophecies?
    https://motherboard.vice.com/en_us/article/j5npeg/why-is-google-translate-spitting-out-sinister-religious-prophecies

    Google Translate is moonlighting as a deranged oracle—and experts say it’s likely because of the spooky nature of neural networks.

    Type the word “dog” into Google Translate 19 times, request that the nonsensical message be flipped from Maori into English, and out pops what appears to be a garbled religious prophecy.

    “Doomsday Clock is three minutes at twelve,” it reads.

    That’s just one of many bizarre and sometimes ominous translations that users on Reddit and elsewhere have dredged up from Google Translate.

    https://www.reddit.com/r/TranslateGate/

    Reply
  43. Tomi Engdahl says:

    Freia Nahser / Global Editors Network:
    Inside Fox Sports, The Times, and Le Figaro’s efforts to use AI, Alexa voice actions, and automation for their World Cup reporting
    https://medium.com/global-editors-network/covering-the-world-cup-2018-with-ai-and-automation-93914e5787d7

    Reply
  44. Tomi Engdahl says:

    Covering the World Cup 2018 with AI and automation
    https://medium.com/global-editors-network/covering-the-world-cup-2018-with-ai-and-automation-93914e5787d7

    Fox Sports, The Times, and Le Figaro have tapped into AI, voice AI, and automation for their World Cup reporting.

    Fox Sports: The AI highlight machine

    The US didn’t qualify for the World Cup this year, but that didn’t stop Fox Sports from airing all 64 matches and teaming up with IBM Watson to create the World Cup highlight machine. Using Watson artificial intelligence, the highlight machine lets the user create on-demand clips from every World Cup as far back as 1958.

    Scanning thousands of hours of video material in seconds

    According to Engadget, there are 300 archived World Cup matches that Watson’s AI technology is capable of analysing. More specifically, the IBM Watson Video Enrichment, a programmatic metadata tool, analyses the footage to create metadata that identifies what is happening in a scene at any given moment with an associated timestamp.

    Users can create their highlight video by filtering by year, team, player, game, or play type, such as penalties or goals. To give an example, you can ask the machine to give you a highlight video of Ronaldo’s goals in all World Cups he’s ever played in.
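
    The interaction described above boils down to filtering timestamped clip metadata. The sketch below uses invented records and is not the Watson Video Enrichment API.

```python
# Filtering timestamped clip metadata by user-chosen criteria.
# Records are invented; this is not the Watson Video Enrichment API.
clips = [
    {"year": 2018, "player": "Ronaldo", "play": "goal",    "start": "00:12:31"},
    {"year": 2018, "player": "Ronaldo", "play": "penalty", "start": "01:02:10"},
    {"year": 2014, "player": "Messi",   "play": "goal",    "start": "00:44:05"},
]

def highlights(clips, **criteria):
    """Return clips whose metadata matches every given criterion."""
    return [c for c in clips if all(c.get(k) == v for k, v in criteria.items())]

print(highlights(clips, player="Ronaldo", play="goal"))
```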

    Le Figaro: Automatically generated visual game summaries
    No human can work that fast!

    The French publication created a tool to automatically generate visual summaries of every World Cup match within five seconds of the full-time whistle. ‘No human can work that fast!’

    Push notifications

    The target audience were all mobile users from the Figaro app and the Sport24 app (Le Figaro’s sport section), Paquot told us. For the knockout stages, they only sent the Stories via push notifications to those that subscribed to the service. From the quarter finals onwards, push notifications about the Stories were sent to the entire sports fan base. (According to Paquot, 90% of Figaro app users have opted in to receive alerts related to sports.)

    Automation: no extra costs, no team bias

    The summaries are fully automated, meaning that no extra money is spent creating each story. The maintenance rate is also low.
    The tool is neutral. There is no preference for any team (even the French team), which makes it objective: It’s about the data above anything else.
    Seeing as the project was a very last minute effort, the team didn’t have much time to look at the business side of things, but they’re hoping to update it for the UEFA Champions League and the French Ligue 1 (French men’s pro football league). For this, they’re hoping to secure sponsorship by a big brand. ‘I can’t tell you which, but we have a very strong lead’, said Paquot.

    …But messy data and time constraints

    The Times: Hey Alexa!

    Voice AI for experimentation

    At The Times, some of the action took place on voice interfaces. The publication looked towards voice AI, using The Times Sport Alexa skill to complement its extensive reporting on the competition.

    ‘Alexa, launch Times Sport’, was all listeners had to say in order to get a taster of the day’s World Cup headlines and an interesting fact about the competition. Those who made it to the end of the briefing were prompted to listen to The Times’ World Cup podcast hosted by presenter Natalie Sawyer.

    The Times’ content is firmly locked behind a paywall, so the Alexa skill served as more of a sampling tool.

    Reaching new audiences to drive subscriptions

    According to Joiner, Alexa provides the possibility of reaching a new audience. He told us that this has two benefits: you can increase brand awareness, reaching people who may never buy or subscribe to The Times, potentially leading to subscriptions in the future. The second is short term: listeners are given a taster of what The Times has to offer and are then tempted to visit the website or pick up a paper to discover more.

    Reply
