3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,181 Comments

  1. Tomi Engdahl says:

    Artificial Intelligence and Cybersecurity
    https://pentestmag.com/artificial-intelligence-and-cybersecurity/

    The Crossroads of Artificial Intelligence, Machine Learning, and Deep Learning

    Reply
  2. Tomi Engdahl says:

    Will AI come to the test industry?
    https://www.edn.com/design/test-and-measurement/4461906/Will-AI-come-to-the-test-industry-

    Whether it’s called artificial intelligence (AI), machine learning (ML), or expert systems, AI is in the news today.

    Putting the threatening scenarios aside, AI has the potential to help decision making in ambiguous situations. This is more than just following an automated flow chart; these are situations that normally require some judgment, historically from a person. This brings us to electronic test and test engineering. Does AI have a role here? To find out, I contacted a number of companies about their AI efforts and how they saw the future.

    Reply
  3. Tomi Engdahl says:

    OpenAI Five Beats World Champion DOTA2 Team 2-0
    https://www.youtube.com/watch?v=tfb6aEUMC04

    OpenAI’s blog post:
    OpenAI Five Finals
    https://openai.com/blog/openai-five-finals/

    Reply
  4. Tomi Engdahl says:

    A Eurovision song created by Artificial Intelligence: Blue Jeans and Bloody Tears
    https://www.youtube.com/watch?v=4MKAf6YX_7M

    As Europe (together with Australia and Israel) is glued to its TV sets watching the 64th Eurovision Song Contest, we asked ourselves: what makes a Eurovision song memorable? Does a Eurovision hit have special DNA?

    We are a group of artists, musicians and programmers who wanted to explore human creativity and challenge it. We have created a Eurovision AI song that celebrates Eurovision – its melodrama, kitsch and camp, its humor and its gimmicks. The result, titled “Blue Jeans & Bloody Tears”, consists entirely of material written and composed by Artificial Intelligence.

    Reply
  5. Tomi Engdahl says:

    Intel’s present and future AI chip business
    https://venturebeat.com/2019/05/27/the-present-and-future-of-intels-ai-chip-business/

    The future of Intel is AI. Its books imply as much. The Santa Clara company’s AI chip segments notched $1 billion in revenue last year, and Intel expects the market opportunity to grow 30% annually from $2.5 billion in 2017 to $10 billion by 2022. Putting this into perspective, its data-centric revenues now constitute around half of all business across all divisions, up from around a third five years ago.
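
    As a quick sanity check on those figures, growth from $2.5 billion in 2017 to $10 billion by 2022 implies a compound annual growth rate of roughly 32%, in line with the “30% annually” quoted. A two-line check (illustrative only):

        # Implied compound annual growth rate (CAGR): $2.5B (2017) -> $10B (2022)
        start, end, years = 2.5, 10.0, 5
        cagr = (end / start) ** (1 / years) - 1
        print(f"Implied CAGR: {cagr:.1%}")  # ~32%, roughly the quoted ~30%/year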

    Still, increased competition from the likes of incumbents Nvidia, Qualcomm, Marvell, and AMD; startups like Hailo Technologies, Graphcore, Wave Computing, Esperanto, and Quadric; and even Amazon threatens to slow Intel’s gains, which is why the company isn’t resting on its laurels. Intel bought field-programmable gate array (FPGA) manufacturer Altera in 2015 and a year later acquired Nervana, filling out its hardware platform offerings and setting the stage for an entirely new generation of AI accelerator chipsets. Last August, Intel snatched up Vertex.ai, a startup developing a platform-agnostic AI model suite.

    Software

    Hardware is nothing if it can’t be easily developed against, Singer rightly pointed out. That’s why Intel has taken care not to neglect the software ecosystem piece of the AI puzzle, he said.

    Last April, the company announced it would open-source nGraph, a neural network model compiler that optimizes assembly code across multiple processor architectures. Around the same time, Intel took the wraps off One API, a suite of tools for mapping compute engines to a range of processors, graphics chips, FPGAs, and other accelerators. And in May, the company’s newly formed AI Lab made freely available a cross-platform library for natural language processing — NLP Architect — designed to imbue and benchmark conversational assistants with named entity recognition, intent extraction, and semantic parsing.

    Spring 2018 saw the launch of OpenVINO (Open Visual Inference & Neural Network Optimization), a toolset for AI edge computing development that packs pretrained AI models for object detection, facial recognition, and object tracking.

    Singer said OpenVINO is intended to complement Intel’s Computer Vision software development kit (SDK), which combines video processing, computer vision, machine learning, and pipeline optimization into a single package, with Movidius Neural Compute SDK, which includes a set of software to compile, profile, and check machine learning models. They’re in the same family as Intel’s Movidius Neural Compute API, which aims to simplify app development in programming languages like C, C++, and Python.
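
    For a sense of what developing against this stack looks like, the classic OpenVINO Python inference flow is sketched below. The model files and input handling are placeholders, and the API has shifted between releases, so treat this as a rough outline rather than a verbatim recipe.

        # Rough sketch of the 2019-era OpenVINO Python inference flow.
        # Model paths and shapes are placeholders; the exact API varies by release.
        import numpy as np
        from openvino.inference_engine import IECore, IENetwork

        ie = IECore()
        # Load an Intermediate Representation (IR) model produced by the Model Optimizer.
        net = IENetwork(model="model.xml", weights="model.bin")
        input_blob = next(iter(net.inputs))                 # name of the input layer
        exec_net = ie.load_network(network=net, device_name="CPU")

        # Dummy frame shaped to the network's expected (N, C, H, W) input.
        n, c, h, w = net.inputs[input_blob].shape
        frame = np.zeros((n, c, h, w), dtype=np.float32)

        result = exec_net.infer(inputs={input_blob: frame})
        print({name: out.shape for name, out in result.items()})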

    Many of these suites run in Intel’s AI DevCloud, a cloud-hosted AI model training and inferencing platform powered by Xeon Scalable processors.

    Reply
  6. Tomi Engdahl says:

    DeepMind Deploys Self-taught Agents To Beat Humans at Quake III
    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/deep-mind-gets-software-agents-to-work-together-to-beat-a-multiplayer-video-game

    Alphabet’s DeepMind, having crushed chess and Go, has now tackled the far harder challenge posed by the three-dimensional, multiplayer, first-person video game. Writing today in Science, lead author Max Jaderberg and 17 DeepMind colleagues describe how a totally unsupervised program of self-learning allowed software to exceed human performance in playing “Quake III Arena.” The experiment involved a version of the game that requires each of two teams to capture as many of the other team’s flags as possible.

    Reply
  7. Tomi Engdahl says:

    5 of the Best Free Software Stacks for AI Development
    https://www.designnews.com/design-hardware-software/5-best-free-software-stacks-ai-development?ADTRK=InformaMarkets&elq_mid=8874&elq_cid=876648

    If you’re an engineer, designer, or maker looking to get into developing artificial intelligence and machine learning applications, look no further than these five options.

    Advances in artificial intelligence (AI) and machine learning (ML) have had a significant impact on the industrial, consumer, automotive, and entertainment markets. Coinciding with this, there have been significant developments in the open source movement, where software stacks and libraries have given makers, engineers, and designers the freedom to build truly smart products for home, school, industry, and business settings.

    Reply
  8. Tomi Engdahl says:

    How Far Can AI Go?
    https://semiengineering.com/how-far-can-ai-go/

    Current implementations have just scratched the surface of what this technology can do, and that creates its own set of issues.

    AI is everywhere. There are AI/ML chips, and AI is being used to design and manufacture chips.

    On the AI/ML chip side, large systems companies and startups are striving for orders-of-magnitude improvements in performance. To achieve that, design teams are adding in everything from CPUs, GPUs, TPUs, and DSPs to small FPGAs and eFPGAs. They also are using small memories that can be read in multiple directions by the processors, as well as larger in-chip memories and high-speed connections to off-chip HBM2 or GDDR6.

    The driving force behind these chips is being able to process massive amounts of data much more rapidly than in the past—in some cases, by two or three orders of magnitude. That requires massive data throughput, and these chips are being architected so there are no bottlenecks in throughput or processing. The biggest challenge, so far, is keeping these processing elements busy enough, because idle processing elements cost money. This is easier with training data than it is with inferencing, but that may change as more of the inferencing is done across various slices on the edge.
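
    A back-of-the-envelope, roofline-style calculation shows why throughput dominates the design. The numbers below are made up for illustration, not any particular chip’s specifications:

        # Back-of-the-envelope check: can off-chip memory feed the MAC array?
        # All figures are illustrative, not any specific chip's specs.
        peak_macs_per_s = 100e12       # 100 TMAC/s of on-chip compute
        memory_bandwidth = 300e9       # ~300 GB/s from off-chip HBM2/GDDR6
        # Data reuse needed so the MACs never starve (MACs per byte fetched).
        required_reuse = peak_macs_per_s / memory_bandwidth
        print(f"Need >= {required_reuse:.0f} MACs of reuse per byte fetched")

        # A layer that reuses each fetched byte only 50 times is memory-bound:
        layer_reuse = 50
        utilization = min(1.0, layer_reuse / required_reuse)
        print(f"Achievable utilization: {utilization:.0%}")  # idle silicon is wasted money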

    Reply
  9. Tomi Engdahl says:

    Smart AI Assistants are the Real Enabler for Edge AI
    https://www.eeweb.com/profile/yairs/articles/smart-ai-assistants-are-the-real-enabler-for-edge-ai

    AI at the edge will reduce overwhelming volumes of data to useful and relevant information on which we can act.

    A recent McKinsey report projects that by 2025, the CAGR for silicon containing AI functionality will be 5× that for non-AI silicon. Sure, AI is starting small, but that’s a pretty fast ramp. McKinsey also shows that the bulk of the opportunity is in AI inference and that the fastest growth area is on the edge. Stop and think about that. AI will be growing very fast in a lot of edge designs over the next six-plus years; that can’t be just for novelty value.

    Artificial-intelligence hardware: New opportunities for semiconductor companies
    https://www.mckinsey.com/industries/semiconductors/our-insights/artificial-intelligence-hardware-new-opportunities-for-semiconductor-companies

    Our analysis revealed three important findings about value creation:

    AI could allow semiconductor companies to capture 40 to 50 percent of total value from the technology stack, representing the best opportunity they’ve had in decades.
    Storage will experience the highest growth, but semiconductor companies will capture most value in compute, memory, and networking.
    To avoid mistakes that limited value capture in the past, semiconductor companies must undertake a new value-creation strategy that focuses on enabling customized, end-to-end solutions for specific industries, or “microverticals.”

    The AI technology stack will open many opportunities for semiconductor companies

    Reply
  10. Tomi Engdahl says:

    Facial recognition used to strip adult industry workers of anonymity
    http://nakedsecurity.sophos.com/2019/05/31/facial-recognition-used-to-strip-adult-industry-workers-of-anonymity/

    As if we don’t already have enough facial-recognition dystopia, someone’s claiming to have used the technology to match the faces of porn actresses to social media profiles in order to “help others check whether their girlfriends ever acted in those films.”

    Reply
  11. Tomi Engdahl says:

    Lattice Semiconductor Releases HM01B0 UPduino Shield 2.0 Dev Kit for AI Applications
    https://blog.hackster.io/lattice-semiconductor-releases-hm01b0-upduino-shield-2-0-dev-kit-for-ai-applications-1399009f548e

    The company designed the package for engineers who want to develop always-on, low-power IoT devices. The kit is based on the UPduino 2.0 board, which connects to the Himax HM01B0 module in an Arduino form factor.

    Reply
  12. Tomi Engdahl says:

    Europe is losing the AI race
    https://sifted.eu/articles/europe-is-losing-the-ai-race/

    Research into artificial intelligence is led by institutions and companies in China, the US and South Korea.

    Reply
  13. Tomi Engdahl says:

    LG developed its own AI chip to make its smart home products even smarter
    https://techcrunch.com/2019/05/17/lg-ai-chip-smart-home/

    Reply
  14. Tomi Engdahl says:

    Asus Partners with Google Coral to Create On-Device AI-Equipped Development Boards
    https://blog.hackster.io/asus-partners-with-google-coral-to-create-on-device-ai-equipped-development-boards-793ec9eb28e

    In essence, it places Google’s Coral Edge TPU optimized for machine learning applications on Asus’ Tinker Edge T and CR1S-CM-A SBCs outfitted with NXP’s i.MX 8M SoCs.

    Reply
  15. Tomi Engdahl says:

    Training a single AI model can emit as much carbon as five cars in their lifetimes
    Deep learning has a terrible carbon footprint.
    https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

    The artificial-intelligence industry is often compared to the oil industry: once mined and refined, data, like oil, can be a highly lucrative commodity. Now it seems the metaphor may extend even further. Like its fossil-fuel counterpart, the process of deep learning has an outsize environmental impact.

    Reply
  17. Tomi Engdahl says:

    Top AI researchers race to detect ‘deepfake’ videos: ‘We are outgunned’
    https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/

    Top artificial-intelligence researchers across the country are racing to defuse an extraordinary political weapon: computer-generated fake videos that could undermine candidates and mislead voters during the 2020 presidential campaign.

    And they have a message: We’re not ready.

    Powerful new AI software has effectively democratized the creation of convincing “deepfake” videos.

    And researchers fear it’s only a matter of time before the videos are deployed for maximum damage — to sow confusion, fuel doubt or undermine an opponent.

    The videos could even erode how people accept video evidence. Misinformation researcher Aviv Ovadya calls this problem “reality apathy”: “It’s too much effort to figure out what’s real and what’s not, so you’re more willing to just go with whatever your previous affiliations are.”

    “In general people do need to understand that video may not be an accurate representation of what happened,”

    In AI circles, identifying fake media has long received less attention, funding and institutional backing than creating it

    “Nation-states have had the ability to manipulate media since, essentially, the beginning of media,” Turek said.

    High-definition fake videos often are the easiest to detect

    “I worked on detection for 15 years. It doesn’t work,” said Nasir Memon, a professor of computer science and engineering at New York University.

    The tech giants’ policies don’t align on whether fakes should be deleted or flagged, demoted and preserved.

    Reply
  18. Tomi Engdahl says:

    This Software Engineer Designed a Chatbot to Chat With His Girlfriend While He’s Busy at Work
    https://www.news18.com/news/buzz/this-software-engineer-designed-a-chatbot-to-chat-with-his-girlfriend-while-hes-busy-at-work-2183111.html

    However, the girl eventually got suspicious about the speed at which she was receiving messages from her “boyfriend.”

    Reply
  19. Tomi Engdahl says:

    Finnish AI experts help the British rein in Facebook – a moderator robot that understands and learns can handle every language in the world
    https://www.ess.fi/uutiset/kotimaa/art2548560

    Reply
  20. Tomi Engdahl says:

    What Can AI Tell Us About Fine Art?
    https://spectrum.ieee.org/tech-talk/computing/software/what-can-ai-tell-us-about-fine-art

    Unsurprisingly, the model chosen to analyze aesthetics found that bold and intense paintings are the most pleasing, while dim and dull paintings are less so. But the factors that make art more attractive to the AI eye, such as color harmony (that is, whether the colors go well together) and vividness, actually negatively correlate with the sentimental value of an image.

    Perhaps reflective of human nature, the models found nudity particularly memorable. Intriguingly, abstract images were also found to be memorable, which the authors say may be due to the absence of objects that we recognize. Because we rarely encounter the visual stimuli seen in abstract paintings, the image may draw the viewer’s attention more than a painting containing an object we are familiar with.
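
    For readers curious how “vividness” can be quantified at all, one widely used proxy is the Hasler–Süsstrunk colorfulness metric. The sketch below is a generic illustration, not necessarily the measure used in this study:

        # Hasler-Süsstrunk colorfulness: a common proxy for how "vivid" an image looks.
        # Generic illustration only; not necessarily the metric used in the study.
        import numpy as np

        def colorfulness(rgb):
            """rgb: H x W x 3 array of floats in [0, 255]."""
            r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
            rg = r - g                      # red-green opponent channel
            yb = 0.5 * (r + g) - b          # yellow-blue opponent channel
            std = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
            mean = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
            return std + 0.3 * mean

        # A saturated random image scores far higher than a flat gray one.
        vivid = np.random.randint(0, 256, (64, 64, 3)).astype(float)
        dull = np.full((64, 64, 3), 128.0)
        print(colorfulness(vivid), colorfulness(dull))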

    Reply
  21. Tomi Engdahl says:

    Karen Hao / MIT Technology Review:
    A look at the blossoming machine-learning community in Africa, where IBM and Google are trying to use AI to tackle challenges like hunger, poverty, and disease

    The future of AI research is in Africa
    https://www.technologyreview.com/s/613848/ai-africa-machine-learning-ibm-google/

    Reply
  22. Tomi Engdahl says:

    Europe should ban AI for mass surveillance and social credit scoring, says advisory group
    https://techcrunch.com/2019/06/26/europe-should-ban-ai-for-mass-surveillance-and-social-credit-scoring-says-advisory-group/

    The document includes warnings on the use of AI for mass surveillance and scoring of EU citizens, such as China’s social credit system, with the group calling for an outright ban on “AI-enabled mass scale scoring of individuals”. It also urges governments to commit to not engaging in blanket surveillance of populations for national security purposes.

    Reply
  23. Tomi Engdahl says:

    World’s First AI Universe Simulator Knows Things It Shouldn’t
    https://futurism.com/worlds-first-ai-universe-simulator

    “It’s like teaching image recognition software with lots of pictures of cats and dogs, but then it’s able to recognize elephants.”

    Reply
  24. Tomi Engdahl says:

    Scientists claim to have developed world’s first vaccine with artificial intelligence
    https://www.telegraph.co.uk/news/0/scientists-claim-have-developed-worlds-first-vaccine-artificial/

    Reply
  25. Tomi Engdahl says:

    Astounding AI Guesses What You Look Like Based on Your Voice
    https://futurism.com/the-byte/ai-guesses-appearance-voice

    A new artificial intelligence created by researchers at the Massachusetts Institute of Technology pulls off a staggering feat: by analyzing only a short audio clip of a person’s voice, it reconstructs what they might look like in real life.

    The AI’s results aren’t perfect, but they’re pretty good — a remarkable and somewhat terrifying example of how a sophisticated AI can make incredible inferences from tiny snippets of data.

    Reply
  26. Tomi Engdahl says:

    Google packs Deep Learning Containers for out-of-the-box ML fun
    https://devclass.com/2019/06/27/google-packs-deep-learning-containers-for-out-of-the-box-ml-fun/

    Google has found another way to lure machine learning aficionados to its cloud, offering so-called Deep Learning Containers to get ML projects up and running quicker.

    The product consists of a set of performance-optimised Docker containers that come with a variety of tools necessary for deep learning tasks already installed.

    Reply
  28. Tomi Engdahl says:

    You recognize a cat from its pointy-eared, long-tailed shape. But deep learning algorithms apparently identify the animal’s image from its texture. New research reveals how that difference makes the A.I.’s performance so much worse than ours. —from Quanta Magazine

    Where We See Shapes, AI Sees Textures
    https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/
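
    A crude way to see this texture bias for yourself is to scramble an image’s patches, which destroys global shape while keeping local texture, and check whether a pretrained network still produces a similar label. The sketch below assumes torchvision and PIL are installed and that “cat.jpg” is a local file; the research covered in the article used style-transferred images rather than patch shuffling.

        # Crude texture-bias probe: shuffle patches (kills shape, keeps texture)
        # and see whether a pretrained classifier's prediction survives.
        # Assumes torchvision/PIL are installed; "cat.jpg" is a placeholder file.
        import torch
        from torchvision import models, transforms
        from PIL import Image

        preprocess = transforms.Compose([
            transforms.Resize(256), transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])

        def shuffle_patches(x, patch=56):
            """Randomly rearrange non-overlapping patches of a C x H x W image."""
            c, h, w = x.shape
            patches = [x[:, i:i + patch, j:j + patch]
                       for i in range(0, h, patch) for j in range(0, w, patch)]
            order = torch.randperm(len(patches)).tolist()
            per_row = w // patch
            rows = [torch.cat([patches[order[r * per_row + k]] for k in range(per_row)], dim=2)
                    for r in range(h // patch)]
            return torch.cat(rows, dim=1)

        model = models.resnet50(pretrained=True).eval()
        img = preprocess(Image.open("cat.jpg").convert("RGB"))
        with torch.no_grad():
            for name, tensor in [("original", img), ("patch-shuffled", shuffle_patches(img))]:
                pred = model(tensor.unsqueeze(0)).argmax(dim=1).item()
                print(name, "-> ImageNet class index", pred)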

    Reply
  29. Tomi Engdahl says:

    Facebook’s image outage reveals how the company’s AI tags your photos
    ‘Oh wow, the AI just tagged my profile picture as basic’

    https://www.theverge.com/2019/7/3/20681231/facebook-outage-image-tags-captions-ai-machine-learning-revealed

    Reply
  30. Tomi Engdahl says:

    TensorFlow Lite Ported to Arduino @HacksterIO by @aallan @TensorFlow @arduino #MachineLearning #EdgeComputing #Arduino #IoT
    https://blog.adafruit.com/2019/07/03/tensorflow-lite-ported-to-arduino-hacksterio-by-aallan-tensorflow-arduino-machinelearning-edgecomputing-arduino-iot/

    Reply
  31. Tomi Engdahl says:

    The first AI universe sim is fast and accurate—and its creators don’t know how it works
    https://phys.org/news/2019-06-ai-universe-sim-fast-accurateand.html

    Reply
  32. Tomi Engdahl says:

    Nvidia Chip Takes Deep Learning to the Extremes
    https://spectrum.ieee.org/tech-talk/semiconductors/processors/nvidia-chip-takes-deep-learning-to-the-extremes

    Last month at the VLSI Symposia in Kyoto, Nvidia detailed a tiny test chip that can work on its own to do the low-end jobs or be linked tightly together with up to 36 of its kin in a single module to do deep learning’s heavy lifting. And it does it all while achieving roughly the same top-class performance.

    Reply
  33. Tomi Engdahl says:

    Teaching Watson the Urban Dictionary turned out to be a huge mistake
    https://www.geek.com/geek-cetera/teaching-watson-the-urban-dictionary-turned-out-to-be-a-huge-mistake-1535490/

    Back in 2010, IBM research scientist Eric Brown decided to try and improve Watson’s natural language and conversational skills by teaching it the Urban Dictionary.

    Soon after learning all those slang phrases, Watson was caught responding to a researcher’s question with the answer “bullshit.” It seems profanity isn’t something Watson “gets.”

    Ultimately, Brown and his team had to remove all evidence of the Urban Dictionary from Watson.

    Reply
