3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,289 Comments

  1. Tomi Engdahl says:

    Surviving the Robocalypse
    Automation is striking at the heart of knowledge work
    https://spectrum.ieee.org/automation-jobs

  2. Tomi Engdahl says:

    The Femtojoule Promise of Analog AI
    To cut power by orders of magnitude, do your processing with analog circuits
    https://spectrum.ieee.org/analog-ai

  3. Tomi Engdahl says:

    Artificially Intelligent “Sorting Hat” Aims to Bring AI Image Classification Benefits to All
    Designed to make the benefits of AI available to non-data scientists, this “Sorting Hat” works using Create ML — rather than magic.
    https://www.hackster.io/news/artificially-intelligent-sorting-hat-aims-to-bring-ai-image-classification-benefits-to-all-9f6f9d57d21f

  4. Tomi Engdahl says:

    Low code and no code may open more doors to artificial intelligence
    The jury is still out on whether low-code and no-code platforms can blaze a path to high-end application development.
    https://www.zdnet.com/article/low-code-and-no-code-may-open-more-doors-to-artificial-intelligence/

  5. Tomi Engdahl says:

    This Could Come in Handy
    MIT’s robotics framework uses reinforcement learning to teach robots how to reorient thousands of objects in their hands.
    https://www.hackster.io/news/this-could-come-in-handy-6a8ae8bf887c

  6. Tomi Engdahl says:

    Building artificial intelligence: staffing is the most challenging part
    ‘Machine learning projects are much more complicated and bigger than machine learning model algorithms.’
    https://www.zdnet.com/article/building-artificial-intelligence-staffing-is-the-most-challenging-part/

  7. Tomi Engdahl says:

    Elon Musk: AI will kill us all…

    MP Demands Deepfake Porn And ‘Nudifying’ Images Are Made Sex Crimes
    https://www.huffingtonpost.co.uk/entry/ban-rape-deepfake-nudifying-tech_uk_61a79734e4b0f398af1aeeb1

    An MP has called for non-consensual deepfake porn and nudification images to be made sex crimes, warning they are rapidly on the rise.

    Maria Miller wants the government to ban the making and sharing of image-based “sexual abuse” under the online safety bill.

    She will bring an adjournment debate to the Commons on Thursday in which she will outline the “devastating” impact such images have on the victims.

    Meanwhile, nudification software takes everyday images of women and creates a new image which makes it appear as if they are naked.

    The Tory MP said the creation of such images without consent was a “highly sexualised act” and they were difficult to remove from the internet.

  8. Tomi Engdahl says:

    Researchers use simulated environments to train AI
    University of Missouri engineers are hoping to enhance artificial intelligence (AI) by using simulated environments.
    https://www.controleng.com/articles/researchers-use-simulated-environments-to-train-ai/?oly_enc_id=0462E3054934E2U

  9. Tomi Engdahl says:

    AI Is Discovering Patterns in Pure Mathematics That Have Never Been Seen Before
    https://www.sciencealert.com/ai-is-discovering-patterns-in-pure-mathematics-that-have-never-been-seen-before

    We can add suggesting and proving mathematical theorems to the long list of what artificial intelligence is capable of: Mathematicians and AI experts have teamed up to demonstrate how machine learning can open up new avenues to explore in the field.

  11. Tomi Engdahl says:

    Will Douglas Heaven / MIT Technology Review:
    DeepMind claims its AI called RETRO matches the performance of neural networks 25 times its size, cutting the time and cost of training large language models

    DeepMind says its new language model can beat others 25 times its size
    https://www.technologyreview.com/2021/12/08/1041557/deepmind-language-model-beat-others-25-times-size-gpt-3-megatron/

    RETRO uses an external memory to look up passages of text on the fly, avoiding some of the costs of training a vast neural network
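
    The core mechanism is retrieval augmentation: rather than memorizing every fact in its weights, the model embeds the prompt, fetches the nearest-neighbor passages from a text database, and conditions its output on them. A minimal sketch of the lookup step in Python (illustrative only; the embed function below is a crude stand-in for the learned text embeddings a real system uses, and the corpus stands in for RETRO's trillion-token-scale database):

    import numpy as np

    # Toy corpus standing in for the external text database.
    corpus = [
        "The Eiffel Tower is in Paris.",
        "DNA carries genetic information.",
        "Photosynthesis converts light into chemical energy.",
    ]

    def embed(text):
        # Crude stand-in embedding: normalized character-frequency vector.
        v = np.zeros(256)
        for ch in text.lower():
            v[ord(ch) % 256] += 1
        return v / (np.linalg.norm(v) + 1e-9)

    db = np.stack([embed(t) for t in corpus])

    def retrieve(query, k=2):
        # Nearest-neighbor lookup: the "external memory" consulted on the
        # fly, so facts need not be memorized in the network's weights.
        sims = db @ embed(query)
        return [corpus[i] for i in np.argsort(-sims)[:k]]

    # The retrieved passages are handed to the language model together
    # with the prompt before it generates a continuation.
    print(retrieve("Where is the Eiffel Tower?"))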

  11. Tomi Engdahl says:

    Queenie Wong / CNET:
    Meta says its new AI system, Few-Shot Learner, needs little training data to quickly adapt to fighting new types of harmful content like COVID-19 misinformation

    Facebook parent Meta uses AI to tackle new types of harmful content
    https://www.cnet.com/tech/mobile/facebook-parent-meta-uses-ai-to-tackle-new-types-of-harmful-content/

    The company has been testing new AI technology to flag posts that discourage COVID-19 vaccines or imply violence, which may be harder to catch.

  12. Tomi Engdahl says:

    Elaine Glusac / New York Times:
    US airports are increasingly using biometrics like facial recognition to verify IDs and shorten security procedures for passengers who opt into the programs

    https://www.nytimes.com/2021/12/07/travel/biometrics-airports-security.html

  13. Tomi Engdahl says:

    AI for recognizing dialect words – test generator and code available online
    https://www.uusiteknologia.fi/2021/12/15/murresanojen-tunnistukseen-tekoalya-testigeneraattori-ja-koodit-verkossa/

    Machine intelligence generally understands Finnish only in its standard written form. When Finland's various dialects are used in interaction with computers, many problems arise. To help with this, researchers at the University of Helsinki have taught an AI the different dialects of Finnish. A test generator and the program code are available online.

    https://uralicnlp.com/murre
    https://github.com/mikahama/murre
    https://github.com/Rootroo-ltd/FinnishDialectIdentification
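
    For a sense of how dialect identification works in general, it can be framed as text classification over character n-grams. A minimal sketch (the training pairs are invented, and this is not the API of the murre tools linked above):

    from collections import Counter

    # Invented toy examples: (sentence, dialect) pairs.
    train = [
        ("mie lähen kotia", "southeastern"),
        ("mää meen kotio", "southwestern"),
        ("minä menen kotiin", "standard"),
    ]

    def ngrams(text, n=3):
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    # Build a character-trigram profile for each dialect.
    profiles = {}
    for sentence, dialect in train:
        profiles.setdefault(dialect, Counter()).update(ngrams(sentence))

    def identify(sentence):
        # Score each dialect by trigram overlap with its profile.
        grams = ngrams(sentence)
        return max(profiles, key=lambda d: sum(profiles[d][g] for g in grams))

    print(identify("mie lähen"))  # -> "southeastern"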

  14. Tomi Engdahl says:

    Melanie Mitchell / Quanta Magazine:
    AI language models like GPT-3 can achieve up to 97% accuracy on some Winograd schemas, but understanding language doesn’t equate to understanding the world

    What Does It Mean for AI to Understand?
    It’s simple enough for AI to seem to comprehend data, but devising a true test of a machine’s knowledge has proved difficult.
    https://www.quantamagazine.org/what-does-it-mean-for-ai-to-understand-20211216/

    Remember IBM’s Watson, the AI Jeopardy! champion? A 2010 promotion proclaimed, “Watson understands natural language with all its ambiguity and complexity.” However, as we saw when Watson subsequently failed spectacularly in its quest to “revolutionize medicine with artificial intelligence,” a veneer of linguistic facility is not the same as actually comprehending human language.

    Natural language understanding has long been a major goal of AI research. At first, researchers tried to manually program everything a machine would need to make sense of news stories, fiction or anything else humans might write. This approach, as Watson showed, was futile — it’s impossible to write down all the unwritten facts, rules and assumptions required for understanding text. More recently, a new paradigm has been established: Instead of building in explicit knowledge, we let machines learn to understand language on their own, simply by ingesting vast amounts of written text and learning to predict words. The result is what researchers call a language model. When based on large neural networks, like OpenAI’s GPT-3, such models can generate uncannily humanlike prose (and poetry!) and seemingly perform sophisticated linguistic reasoning.
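
    That training objective, predicting the next word from the words before it, can be shown in miniature. A toy bigram sketch follows (GPT-3 uses a deep transformer trained on hundreds of billions of words, but the objective is the same):

    from collections import Counter, defaultdict

    # Toy corpus; real language models ingest billions of words.
    text = "the cat sat on the mat . the dog sat on the rug .".split()

    # Count bigrams: how often each word follows each preceding word.
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1

    def predict(prev):
        # "Understanding" here is nothing more than conditional
        # frequency: P(next | prev) estimated from co-occurrence counts.
        word, _ = counts[prev].most_common(1)[0]
        return word

    print(predict("sat"))  # -> "on"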

    But has GPT-3 — trained on text from thousands of websites, books and encyclopedias — transcended Watson’s veneer? Does it really understand the language it generates and ostensibly reasons about? This is a topic of stark disagreement in the AI research community. Such discussions used to be the purview of philosophers, but in the past decade AI has burst out of its academic bubble into the real world, and its lack of understanding of that world can have real and sometimes devastating consequences. In one study, IBM’s Watson was found to propose “multiple examples of unsafe and incorrect treatment recommendations.” Another study showed that Google’s machine translation system made significant errors when used to translate medical instructions for non-English-speaking patients.

    How can we determine in practice whether a machine can understand? In 1950, the computing pioneer Alan Turing tried to answer this question with his famous “imitation game,” now called the Turing test. A machine and a human, both hidden from view, would compete to convince a human judge of their humanness using only conversation. If the judge couldn’t tell which one was the human, then, Turing asserted, we should consider the machine to be thinking — and, in effect, understanding.

    Unfortunately, Turing underestimated the propensity of humans to be fooled by machines. Even simple chatbots, such as Joseph Weizenbaum’s 1960s ersatz psychotherapist Eliza, have fooled people into believing they were conversing with an understanding being, even when they knew that their conversation partner was a machine.

    In a 2012 paper, the computer scientists Hector Levesque, Ernest Davis and Leora Morgenstern proposed a more objective test, which they called the Winograd schema challenge.

    A Winograd schema, named for the language researcher Terry Winograd, consists of a pair of sentences, differing by exactly one word, each followed by a question.
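
    A classic instance: in "The trophy doesn't fit in the suitcase because it is too big/small," the referent of "it" flips with the one changed word. Evaluation reduces to a forced choice between candidate referents; a sketch, where model_score is a hypothetical stand-in for the probability a real language model assigns to a sentence:

    import random

    # One Winograd schema pair; "_" marks the pronoun slot.
    schema = [
        ("The trophy doesn't fit in the suitcase because _ is too big.",
         ["the trophy", "the suitcase"], "the trophy"),
        ("The trophy doesn't fit in the suitcase because _ is too small.",
         ["the trophy", "the suitcase"], "the suitcase"),
    ]

    def model_score(sentence):
        # Hypothetical stand-in; a real evaluation would use the
        # probability a large language model assigns to the sentence.
        return random.random()

    def evaluate(items, score):
        correct = 0
        for sentence, candidates, answer in items:
            # The model "answers" by substituting each candidate for the
            # pronoun and picking the sentence it considers likelier.
            best = max(candidates,
                       key=lambda c: score(sentence.replace("_", c)))
            correct += (best == answer)
        return correct / len(items)

    print(evaluate(schema, model_score))  # chance level is 0.5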

    “Neural network language models have achieved about 97% accuracy on a particular set of Winograd schemas. This roughly equals human performance.”

    However, the ability of AI programs to solve Winograd schemas rose quickly due to the advent of large neural network language models. A 2020 paper from OpenAI reported that GPT-3 was correct on nearly 90% of the sentences in a benchmark set of Winograd schemas. Other language models have performed even better after training specifically on these tasks. At the time of this writing, neural network language models have achieved about 97% accuracy on a particular set of Winograd schemas that are part of an AI language-understanding competition known as SuperGLUE. This accuracy roughly equals human performance. Does this mean that neural network language models have attained humanlike understanding?

    The current best programs — which have been trained on terabytes of text and then further trained on thousands of WinoGrande examples — get close to 90% correct (humans get about 94% correct). This increase in performance is due almost entirely to the increased size of the neural network language models and their training data.

    Have these ever larger networks finally attained humanlike commonsense understanding? Again, it’s not likely. The WinoGrande results come with some important caveats. For example, because the sentences relied on Amazon Mechanical Turk workers, the quality and coherence of the writing is quite uneven.

    So, what to make of the Winograd saga? The main lesson is that it is often hard to determine from their performance on a given challenge if AI systems truly understand the language (or other data) that they process. We now know that neural networks often use statistical shortcuts — instead of actually demonstrating humanlike understanding — to obtain high performance on the Winograd schemas as well as many of the most popular “general language understanding” benchmarks.

  15. Tomi Engdahl says:

    Researchers Defeat Randomness to Create Ideal Code
    https://www.quantamagazine.org/researchers-defeat-randomness-to-create-ideal-code-20211124/

    By carefully constructing a multidimensional and well-connected graph, a team of researchers has finally created a long-sought locally testable code that can immediately betray whether it’s been corrupted.

  16. Tomi Engdahl says:

    Analog circuits can perform essential AI calculations using, in some cases, orders of magnitude less energy. The analog AI boom is coming.

    Promise of Analog AI Feeds Neural Net Hardware Pipeline
    https://spectrum.ieee.org/new-devices-for-analog-ai?utm_campaign=RebelMouse&socialux=facebook&share_id=6837095&utm_medium=social&utm_content=IEEE+Spectrum&utm_source=facebook

    Exotic technologies could lead to ultra-low-power AI applications

  17. Tomi Engdahl says:

    What does AI ethics mean? Three reasons to learn the basics
    AI ethics touches us all, often without our noticing. That is why it is worth learning the basics now, for example on the new Ethics of AI online course. The course can be completed in Finnish, Swedish and English.

    https://www.helsinki.fi/fi/uutiset/ihmisten-teknologia/mita-tekoalyn-etiikka-tarkoittaa-kolme-syyta-opetella-perusasiat?utm_source=facebook&utm_medium=cpc&utm_campaign=teknologia&fbclid=IwAR0edsTJaezM9EXa0nAuCL556aPCtdxDnP_oriB3VS5k-cUFOsi6HTmmIrE

  18. Tomi Engdahl says:

    Chinese researchers turn to artificial intelligence to build futuristic weapons
    https://www.scmp.com/news/china/science/article/3158522/chinese-researchers-turn-artificial-intelligence-build

    Scientists say they have used the technology to build a pistol-sized coilgun that is the smallest and most powerful of its kind
    The Chinese military already uses AI to build powerful weapons such as railguns, which can fire projectiles over a range of hundreds of kilometres.

  19. Tomi Engdahl says:

    Memory Chips That Compute Will Accelerate AI
    Samsung could double performance of neural nets with processing-in-memory
    https://spectrum.ieee.org/processing-in-dram-accelerates-ai

    John von Neumann’s original computer architecture, where logic and memory are separate domains, has had a good run. But some companies are betting that it’s time for a change.

    In recent years, the shift toward more parallel processing and a massive increase in the size of neural networks mean processors need to access more data from memory more quickly. And yet “the performance gap between DRAM and processor is wider than ever,” says Joungho Kim, an expert in 3D memory chips at Korea Advanced Institute of Science and Technology, in Daejeon, and an IEEE Fellow. The von Neumann architecture has become the von Neumann bottleneck.

    What if, instead, at least some of the processing happened in the memory? Less data would have to move between chips, and you’d save energy, too. It’s not a new idea. But its moment may finally have arrived. Last year, Samsung, the world’s largest maker of dynamic random-access memory (DRAM), started rolling out processing-in-memory (PIM) tech. Its first PIM offering, unveiled in February 2021, integrated AI-focused compute cores inside its Aquabolt-XL high-bandwidth memory. HBM is the kind of specialized DRAM that surrounds some top AI accelerator chips. The new memory is designed to act as a “drop-in replacement” for ordinary HBM chips, said Nam Sung Kim, an IEEE Fellow, who was then senior vice president of Samsung’s memory business unit.
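
    The arithmetic behind the bottleneck is easy to sketch: in a matrix-vector multiply, the dominant cost is streaming the weight matrix across the memory bus, so keeping multiply-accumulate units beside the DRAM arrays eliminates most of that traffic. A back-of-the-envelope comparison (the per-operation energy constants are illustrative assumptions, not Samsung's figures):

    # Rough data-movement model for one matrix-vector multiply, showing
    # why processing-in-memory (PIM) helps.

    ROWS, COLS = 4096, 4096      # weights of one neural-network layer
    BYTES_PER_VALUE = 2          # fp16

    PJ_PER_BYTE_MOVED = 10.0     # assumed off-chip DRAM transfer cost
    PJ_PER_MAC = 0.5             # assumed multiply-accumulate cost

    macs = ROWS * COLS
    weight_bytes = macs * BYTES_PER_VALUE

    # Von Neumann: every weight crosses the memory bus to the processor.
    von_neumann_pj = weight_bytes * PJ_PER_BYTE_MOVED + macs * PJ_PER_MAC

    # PIM: MACs run beside the DRAM arrays; only the input and output
    # vectors move (kilobytes instead of tens of megabytes).
    io_bytes = (ROWS + COLS) * BYTES_PER_VALUE
    pim_pj = io_bytes * PJ_PER_BYTE_MOVED + macs * PJ_PER_MAC

    print(f"von Neumann: {von_neumann_pj / 1e6:.1f} uJ")
    print(f"PIM:         {pim_pj / 1e6:.1f} uJ")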

  20. Tomi Engdahl says:

    Artificial intelligence developed to understand object relationships
    A machine-learning model developed by MIT researchers could enable robots to understand interactions in the world the way humans do.
    https://www.controleng.com/articles/artificial-intelligence-developed-to-understand-object-relationships/?oly_enc_id=0462E3054934E2U

  21. Tomi Engdahl says:

    2021’s Top Stories About AI
    Spoiler: A lot of them talked about what’s wrong with machine learning today
    https://spectrum.ieee.org/artificial-intelligence-2021?utm_campaign=RebelMouse&socialux=facebook&share_id=6836754&utm_medium=social&utm_content=IEEE+Spectrum&utm_source=facebook

    2021 was the year in which the wonders of artificial intelligence stopped being a story. Which is not to say that IEEE Spectrum didn’t cover AI—we covered the heck out of it. But we all know that deep learning can do wondrous things and that it’s being rapidly incorporated into many industries; that’s yesterday’s news. Many of this year’s top articles grappled with the limits of deep learning (today’s dominant strand of AI) and spotlighted researchers seeking new paths.

  22. Tomi Engdahl says:

    Audiobooks – An Under-Served Market For Artificial Intelligence Voice Text And Voice Solutions
    https://www.forbes.com/sites/davidteich/2021/12/14/audiobooks–an-under-served-market-for-artificial-intelligence-voice-text-and-voice-solutions/?sh=3ed8f2692c8d&utm_campaign=socialflowForbesMainFB&utm_source=ForbesMainFacebook&utm_medium=social

    I am not a fan of audiobooks, but I understand that audiobooks are a rapidly expanding market opportunity. Grand View Research predicts a USD $15 billion market by 2027. The problem with meeting or exceeding that expectation is in the challenge of producing significantly more audiobooks. Artificial intelligence (AI) can provide technology that can streamline audiobook production and meet the constantly increasing demand.

    While the demand for audiobooks is increasing, production of audiobooks faces many procedural challenges.

    Estimates for the professional creation of an audiobook tend to be in the thousands of dollars, with a minimum of $2-3k and an average of $5-10k.

  23. Tomi Engdahl says:

    Will Markets For ML Models Materialize?
    https://semiengineering.com/will-markets-for-ml-models-materialize/

    There’s a lot of discussion about how this can work, and even some pioneering work, but no firm conclusions.

    Developers are spending increasing amounts of time and effort in creating machine-learning (ML) models for use in a wide variety of applications. While this will continue as the market matures, at some point some of these efforts might be seen as reinventing models over and over.

    Will developers of successful models ever have a marketplace in which they can sell those models as IP to other developers? Are there developers that would use the models? And are there any models mature enough to go up for sale?

    “From an end-customer perspective, there are definitely a lot of companies, especially in the IoT domain, that do not have enough time and energy and resources to spend time training networks, because training networks is a heavy exercise,” said Suhas Mitra, product marketing director for Tensilica AI products at Cadence.

    As to potential areas that might be ready to go, “There are definitely applications in image and natural-language understanding where pre-trained models can work well out of the box,” said Sree Harsha Angara, product marketing manager for IoT, compute, and security at Infineon.

  24. Tomi Engdahl says:

    China develops AI ‘prosecutor’ that can charge citizens with crimes with ’97% accuracy’
    https://www.yahoo.com/news/china-develops-ai-prosecutor-charge-220356905.html

    China has developed an artificial intelligence capable of charging people with more than 97% accuracy, replacing prosecutors “to a certain extent,” according to its researchers.

    How it works: The machine, built and tested by the Shanghai Pudong People’s Procuratorate — China’s largest district prosecution office — can file a charge based only on verbal description, according to the South China Morning Post. The program runs on a desktop computer.

    Researchers “trained” the machine between 2015 and 2020 using over 17,000 cases. It can now charge a suspect based on 1,000 “traits” gathered from a human-documented description of a case.

    At present, the machine can charge eight of the most common crimes in Shanghai. These include fraud, credit card fraud, theft, dangerous driving, intentional injury, obstructing official duties, running a gambling operation and “picking quarrels and provoking trouble.”

    The machine works with another program called System 206, which reportedly evaluates evidence, conditions for an arrest, and the degree of danger a suspect poses to the public. It’s unclear how many jurisdictions are currently employing the tool.

    What the researchers are saying: The new AI “prosecutor” has its limitations, but developers say it will only get better with upgrades. So far, it helps reduce the workload of prosecutors at the district office, giving them time to focus on more complex tasks.
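
    As described, the system has the shape of a multi-class text classifier: a free-text case description goes in, one of the eight charges comes out. A purely illustrative sketch of that shape (the training examples are invented; nothing about the actual model is public):

    from collections import Counter, defaultdict
    import math

    # Invented labeled case descriptions; the real system was trained
    # on more than 17,000 cases from 2015 to 2020.
    train = [
        ("suspect took a wallet from the victim's bag on the metro",
         "theft"),
        ("suspect drove at high speed while intoxicated",
         "dangerous driving"),
        ("suspect ran an illegal betting ring from an apartment",
         "running a gambling operation"),
    ]

    # Naive Bayes word counts per charge.
    word_counts = defaultdict(Counter)
    for description, label in train:
        word_counts[label].update(description.split())

    def charge(description):
        # Pick the charge whose word distribution best explains the text;
        # add-one smoothing keeps unseen words from zeroing out a score.
        words = description.split()
        def score(c):
            total = sum(word_counts[c].values()) + 1
            return sum(math.log((word_counts[c][w] + 1) / total)
                       for w in words)
        return max(word_counts, key=score)

    print(charge("suspect stole a phone on the metro"))  # -> "theft"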

  25. Tomi Engdahl says:

    Chinese scientists develop AI ‘prosecutor’ that can press its own charges
    https://www.scmp.com/news/china/science/article/3160997/chinese-scientists-develop-ai-prosecutor-can-press-its-own

    Machine is so far able to identify eight common crimes such as fraud, gambling, dangerous driving and ‘picking quarrels’, researchers say
    Prosecutors in China already use an AI tool to evaluate evidence and assess how dangerous a suspect is to the public

  26. Tomi Engdahl says:

    China Created an AI ‘Prosecutor’ That Can Charge People With Crimes
    by Tony Tran
    https://futurism.com/the-byte/china-ai-prosecutor-crimes

    It’s been trained to identify Shanghai’s eight most common crimes.

  27. Tomi Engdahl says:

    U.S. vs. China Rivalry Boosts Tech—and Tensions
    Militarized AI threatens a new arms race
    https://spectrum.ieee.org/china-us-militarized-ai?share_id=6832865

  28. Tomi Engdahl says:

    On the malicious use of large language models like GPT-3
    https://research.nccgroup.com/2021/12/31/on-the-malicious-use-of-large-language-models-like-gpt-3/
    While attacking machine learning systems is a hot topic for which attacks have begun to be demonstrated, I believe that there are a number of entirely novel, yet-unexplored attack types and security risks that are specific to large language models (LMs), ones that may be intrinsically dependent upon things like large LMs’ unprecedented scale and the massive corpus of source code and vulnerability databases within their underlying training data. This blog post explores the theoretical question of whether (and how) large language models like GPT-3 or their successors may be useful for exploit generation, and proposes an offensive security research agenda for large language models, based on a converging mix of existing experimental findings about privacy, learned examples, security, multimodal abstraction, and generativity (of novel output, including code) by large language models including GPT-3.

  29. Tomi Engdahl says:

    Madhumita Murgia / Financial Times:
    Nightingale Open Science, a free-to-use 40TB medical imagery data set, could help train AI to predict medical conditions earlier, triage better, and save lives

    https://t.co/vXf4HdYecd

  30. Tomi Engdahl says:

    AI Learns Insane Monopoly Strategies
    https://www.youtube.com/watch?v=dkvFcYBznPI

    All hail the brown set, and rapidly auctioning everything, according to AI at least. 11.2 million games of self-play were used to discover the secrets of this classic game.

  31. Tomi Engdahl says:

    AI feat helps machines learn at speed of light without supervision
    Researchers discover how to use light instead of electricity to advance artificial intelligence.
    https://bigthink.com/the-future/ai-feat-helps-machines-learn-at-speed-of-light-without-supervision/#Echobox=1640978590

  32. Tomi Engdahl says:

    Codex Exposed: Exploring the Capabilities and Risks of OpenAI’s Code Generator
    https://www.trendmicro.com/en_us/research/22/a/codex-exposed–exploring-the-capabilities-and-risks-of-openai-s-.html
    The first of a series of blog posts examines the security risks of Codex, a code generator powered by the GPT-3 engine.

  33. Tomi Engdahl says:

    Kyle Wiggers / VentureBeat:
    A look at BigScience, a global effort of 900+ researchers backed by NLP startup Hugging Face, that’s working to make large language models more accessible

    Inside BigScience, the quest to build a powerful open language model
    https://venturebeat.com/2022/01/10/inside-bigscience-the-quest-to-build-a-powerful-open-language-model/

  34. Tomi Engdahl says:

    AI-as-a-Service Addresses Manufacturing Needs
    Dec. 27, 2021
    In this episode of TechXchange Talks, we sit down with DataProphet’s CTO, Dr. Michael Grant, to talk about prescriptive artificial-intelligence and machine-learning solutions in advanced manufacturing.
    https://www.electronicdesign.com/techxchange/talks/video/21212111/aiasaservice-addresses-manufacturing-needs?utm_source=EG%20ED%20Connected%20Solutions&utm_medium=email&utm_campaign=CPS220103069&o_eid=7211D2691390C9R&rdx.ident%5Bpull%5D=omeda%7C7211D2691390C9R&oly_enc_id=7211D2691390C9R

  35. Tomi Engdahl says:

    Cyber Insights 2022: Adversarial AI
    https://www.securityweek.com/cyber-insights-2022-adversarial-ai

    Adversarial AI is a bit like quantum computing – we know it’s coming, and we know it will be dramatic. The difference is that Adversarial AI is already happening and will increase in quantity and quality over the next couple of years.

    Adversarial AI – or the use of artificial intelligence and machine learning within offensive cyber activity – comes in two flavors: attacks that use AI and attacks against AI. Both are already in use, albeit so far only embryonic use.

    An example of the former could be the use of deepfakes as part of a BEC scam. An example of the latter could be poisoning the data underlying AI decisions so that wrong conclusions are drawn. Neither will be as common as traditional software attacks – but when they occur, the effect will be severe.

    “The biggest difference compared to attacks on software is that AI will be responsible for more advanced and expensive decisions just by its nature,” comments Alex Polyakov, CEO and founder of Adversa.AI. “In 2022, attacks on AI will be less common than traditional attacks on software – at least in the short term – but will definitely be responsible for higher losses. Every existing category of vulnerability in AI such as evasion, poisoning and extraction can lead to catastrophic effects. What if a self-driving car could be attacked by an evasion attack and cause death? Or what if financial models could be poisoned with the wrong data?”
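
    Of the vulnerability categories Polyakov names, poisoning is the easiest to picture: corrupt part of the training labels and the learned decision boundary moves. A self-contained toy demonstration (synthetic data, numpy only):

    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n):
        # Two features; the true rule is a simple linear boundary.
        X = rng.normal(size=(n, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(float)
        return X, y

    def train_logreg(X, y, steps=500, lr=0.5):
        # Plain logistic regression via gradient descent (no bias term;
        # the true boundary passes through the origin).
        w = np.zeros(2)
        for _ in range(steps):
            p = 1 / (1 + np.exp(-X @ w))
            w -= lr * X.T @ (p - y) / len(y)
        return w

    def accuracy(w, X, y):
        return np.mean((X @ w > 0) == (y == 1))

    Xtr, ytr = make_data(500)
    Xte, yte = make_data(500)
    clean_w = train_logreg(Xtr, ytr)

    # Poisoning attack: flip the labels of training points where
    # feature 0 is large, teaching the model to distrust that feature.
    y_poison = ytr.copy()
    flip = Xtr[:, 0] > 0.3
    y_poison[flip] = 1 - y_poison[flip]
    poisoned_w = train_logreg(Xtr, y_poison)

    print("clean test accuracy:   ", accuracy(clean_w, Xte, yte))
    print("poisoned test accuracy:", accuracy(poisoned_w, Xte, yte))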

    Known Threats (Using AI):
    Targeted malware
    Deepfakes
    Generative Adversarial Networks

    Expected Threats (Abusing AI):
    Cybersecurity disruption
    National security disruption
    Adversarial AI in 2022

  36. Tomi Engdahl says:

    Meta claims its AI improves speech recognition quality by reading lips
    https://venturebeat.com/2022/01/07/meta-claims-its-ai-improves-speech-recognition-quality-by-reading-lips/

    People perceive speech both by listening to it and watching the lip movements of speakers. In fact, studies show that visual cues play a key role in language learning. By contrast, AI speech recognition systems are built mostly — or entirely — on audio. And they require a substantial amount of data to train, typically ranging in the tens of thousands of hours of recordings.

  37. Tomi Engdahl says:

    New Research: People Who Get Defeated By AI Feel Horrible
    https://futurism.com/people-defeated-ai-feel-horrible

    Whether it’s chess, Go, or Starcraft II, computer scientists are getting pretty good at building artificial intelligence that excels at games once dominated by people.

    For the hapless humans who are left eating the pro-gamer AI’s dust, coming second to the bots again and again has a noticeable demoralizing effect, according to a study presented at a recent conference on human-robot interaction. While the psychological effects of playing games against a robot may not be groundbreaking, the study has dire implications for people who see more and more of their co-workers replaced by robots and AI — in other words, as we all start to lose at the game of work.

  38. Tomi Engdahl says:

    Nokia's AI makes level crossings safer
    https://etn.fi/index.php/13-news/13040-nokian-tekoaely-tekee-tasoristeyksistae-turvallisempia

    BLT analyzes surveillance-camera footage using Nokia's Scene Analytics software. The system continuously learns what is normal in the area around a level crossing and what is anomalous.

