3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

  1. Tomi Engdahl says:

    Simple electrical circuit learns on its own—with no help from a computer
    System sidesteps computing bottleneck in tuning artificial intelligence algorithms
    https://www.science.org/content/article/simple-electrical-circuit-learns-its-own-no-help-computer

    CHICAGO—A simple electrical circuit has learned to recognize flowers based on their petal size. That may seem trivial compared with artificial intelligence (AI) systems that recognize faces in a crowd, transcribe spoken words into text, and perform other astounding feats. However, the tiny circuit outshines conventional machine learning systems in one key way: It teaches itself without any help from a computer—akin to a living brain. The result demonstrates one way to avoid the massive amount of computation typically required to tune an AI system, an issue that could become more of a roadblock as such programs grow increasingly complex.

    “It’s a proof of principle,” says Samuel Dillavou, a physicist at the University of Pennsylvania who presented the work here this week at the annual March meeting of the American Physical Society. “We are learning something about learning.”

    Reply
  2. Tomi Engdahl says:

    Christopher Mims / Wall Street Journal:
    How researchers are using self-supervised learning, a branch of AI that has proven effective for handling human language, to better understand animal sounds

    https://www.wsj.com/articles/alexa-for-animals-ai-is-teaching-us-how-creatures-communicate-11647662415?mod=djemalertNEWS

    Reply
  3. Tomi Engdahl says:

    EleutherAI: When OpenAI Isn’t Open Enough
    This group’s free and open-source AI language model aims for GPT-3 power with Linux-scale collaboration and distribution
    https://spectrum.ieee.org/eleutherai-openai-not-open-enough?share_id=6963275

    Reply
  4. Tomi Engdahl says:

    Multiplexing Could Give Neural Networks a Big Boost
    Combining multiple data streams into one feed could speed up networks and let them tackle more than one task at a time
    https://spectrum.ieee.org/neural-network-multiplex?share_id=6966370

    Reply
  5. Tomi Engdahl says:

    LinkedIn Researchers Open-Source ‘FastTreeSHAP’: A Python Package That Enables An Efficient Interpretation of Tree-Based Machine Learning Models
    https://www.marktechpost.com/2022/03/20/linkedin-researchers-open-source-fasttreeshap-a-python-package-that-enables-an-efficient-interpretation-of-tree-based-machine-learning-models/
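    For readers unfamiliar with what TreeSHAP-style packages compute: each prediction is decomposed into per-feature Shapley values. The toy below is a brute-force illustration of that underlying quantity, not FastTreeSHAP’s API; the model and data are made up, and the exponential loop over coalitions only works for a handful of features (TreeSHAP does the same computation in polynomial time for tree models).

```python
from itertools import combinations
from math import factorial
from statistics import mean

def shapley_values(model, x, background):
    """Exact Shapley values for one instance by brute force.

    The value of a feature coalition S is the model's mean prediction
    with features in S fixed to x and the rest drawn from `background`.
    """
    n = len(x)

    def coalition_value(S):
        preds = []
        for b in background:
            z = [x[i] if i in S else b[i] for i in range(n)]
            preds.append(model(z))
        return mean(preds)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Classic Shapley weight |S|! * (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += w * (coalition_value(set(S) | {i}) - coalition_value(set(S)))
        phis.append(phi)
    return phis

# For an additive model the Shapley value of feature i is x[i] minus
# its background mean, which makes the result easy to check by hand.
model = lambda z: z[0] + 2 * z[1]
background = [[0.0, 0.0], [2.0, 2.0]]
print(shapley_values(model, [3.0, 1.0], background))  # [2.0, 0.0]
```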

    Reply
  6. Tomi Engdahl says:

    Shannon Bond / NPR:
    Researchers identify thousands of computer-generated profile pictures being used on fake LinkedIn profiles for lead generation and product promotion — At first glance, Renée DiResta thought the LinkedIn message seemed normal enough. — The sender, Keenan Ramsey …

    That smiling LinkedIn profile face might be a computer-generated fake
    https://www.npr.org/2022/03/27/1088140809/fake-linkedin-profiles

    Reply
  7. Tomi Engdahl says:

    Honey might be the key to cooler, more efficient, biodegradable chips
    https://www.pcgamer.com/honey-might-be-the-key-to-cooler-more-efficient-biodegradable-chips/

    These chips might be the future of neuromorphic computing.

    Honey could be the next material used to create brain-like computer chips. Its proven practicality marks another step toward creating efficient, renewable processors for neuromorphic computing systems built from biodegradable materials.

    Research engineers from WSU’s School of Engineering and Computer Science, Feng Zhao and Brandon Sueoka, first processed honey into a solid form. Then they sandwiched it between two electrodes, in a structure similar to that of a human synapse. The resulting devices, known as ‘memristors,’ are proficient at learning and retaining information just like human neurons.

    By mimicking the brain, these memristors can work more efficiently, processing and storing data using neuromorphic computing techniques.

    Reply
  8. Tomi Engdahl says:

    Jackie Snow / Wall Street Journal:
    A look at the regulations cities like New York, Barcelona, Amsterdam, Helsinki, and others are adopting as they increasingly use AI to provide public services — A look at what New York, London, Barcelona and other places are doing to establish regulations that other cities—and countries—may want to copy

    Cities Take the Lead in Setting Rules Around How AI Is Used
    A look at what New York, London, Barcelona and other places are doing to establish regulations that other cities—and countries—may want to copy
    https://www.wsj.com/articles/cities-take-lead-setting-rules-around-how-ai-is-used-11649448031?mod=djemalertNEWS

    As cities and states roll out algorithms to help them provide services like policing and traffic management, they are also racing to come up with policies for using this new technology.

    AI, at its worst, can disadvantage already marginalized groups, adding to human-driven bias in hiring, policing and other areas. And its decisions can often be opaque—making it difficult to tell how to fix that bias, as well as other problems.

    Cities are looking at a number of solutions to these problems. Some require disclosure when an AI model is used in decisions, while others mandate audits of algorithms, track where AI causes harm or seek public input before putting new AI systems in place.

    Explaining the algorithms: Amsterdam and Helsinki

    One of the biggest complaints against AI is that it makes decisions that can’t be explained, which can lead to complaints about arbitrary or even biased results.

    To let their citizens know more about the technology already in use in their cities, Amsterdam and Helsinki collaborated on websites that document how each city government uses algorithms to deliver services. The registry includes information on the data sets used to train an algorithm, a description of how an algorithm is used, how public servants use the results, the human oversight involved and how the city checks the technology for problems like bias.

    Amsterdam has six algorithms fully explained—with a goal of 50 to 100—on the registry website, including how the city’s automated parking-control and trash-complaint reports work. Helsinki, which is only focusing on the city’s most advanced algorithms, also has six listed on its site, with another 10 to 20 left to put up.

    “We needed to assess the risk ourselves,” says Linda van de Fliert, an adviser at Amsterdam’s Chief Technology Office. “And we wanted to show the world that it is possible to be transparent.”

    The registries don’t give citizens personalized information explaining their individual bills or fees. But they provide citizens with a way to give feedback on algorithms, and the name, city department and contact information of the person responsible for the deployment of a particular algorithm. So far, at least one Amsterdam man who was displeased about getting an automated text about an overdue electricity bill used the registry to find out why the government contacted him.

    Some cities are looking at ways to remove potential bias from algorithms. In January, the New York City Council passed a law—to go into effect in 2023—covering companies that sell AI software that screens potential employees.

    “Hiring is a really high-stakes domain,” says Julia Stoyanovich, an associate professor of computer science and engineering at New York University and the director of the NYU Tandon Center for Responsible AI, who consulted on the regulation. “And we are using a lot of tools without any oversight.”

    Another effort to cut down on bias is giving communities a say in how law enforcement uses AI.

    Since Santa Clara County passed its surveillance-technology ordinance, the Board of Supervisors has approved the use of roughly 100 technologies. The one exception: a proposal on facial-recognition technology, because of concerns including the potential for false positives.

    “I’m a tech enthusiast,” says Joe Simitian, a member of the county’s Board of Supervisors. “But there was significant potential for this to be abused without a robust set of policies.”

    Cooperating with other cities: Amsterdam, Barcelona and London

    Amsterdam, Barcelona and London hope to educate other cities on best practices for deploying AI systems effectively and ethically. That is the goal of their joint effort, the Global Observatory of Urban AI.

    “We want to become a knowledge source for both cities and researchers,” says Laia Bonet, Barcelona’s deputy mayor for digital transitions, mobility and international relations.

    Reply
  9. Tomi Engdahl says:

    Madhumita Murgia / Financial Times:
    A profile of Father Paolo Benanti, an engineer and ethicist who advises theologians and priests, including Pope Francis, on the moral and ethical issues of AI — On February 26, two days after war broke out in Europe for the first time in decades, Father Paolo Benanti walked briskly through the centre of Rome dressed in a hooded robe.

    The Franciscan monk helping the Vatican take on — and tame — AI
    Father Paolo Benanti has become one of the Pope’s chief advisers on the potential harms of new tech
    https://www.ft.com/content/1fa17d8b-5902-4aff-a69d-419b96722c83

    Benanti is a Franciscan monk who lives in a spartan monastery he shares with four other friars perched above a tiny Roman church. The Franciscans take vows and live in communities, but they are not typical priests. They have day jobs teaching or doing charity or social work, emulating the life of Francis of Assisi, their founding saint. Benanti’s monastery is a house of learning: all the friars, the eldest of whom is 100, are either current or former professors, and their areas of expertise span chemistry, philosophy, technology and music.

    Benanti, the youngest of them at 48, is an engineer and an ethicist, mantles he wears comfortably over his priest’s robes. As an ethics professor at the Pontifical Gregorian University, a nearly-500-year-old institution 10 minutes’ walk from the monastery, he instructs graduate theologians and priests in the moral and ethical issues surrounding cutting-edge technology such as bioaugmentation, neuroethics and artificial intelligence (AI).

    Benanti was on his way to see Pope Francis, the Argentine-born pontiff whom he likens to a passionate tango dancer in contrast, he says, to his predecessor’s staid waltz. At the meeting, Benanti was to act as a translator of both languages and disciplines. He is fluent in English, Italian, technology, ethics and religion.

    The Pope’s guest was Brad Smith, president of the US technology giant Microsoft, who had arrived the previous day in a private jet. On the agenda was the topic of AI and, specifically, how humanity as a whole could benefit from this powerful technology, rather than being at its mercy. The meeting was timely: the Pope was concerned about how AI might be used to wage war in Ukraine, not to mention what he could do to prevent the technology from ultimately destroying the fabric of humanity.

    Over the past three years, Benanti has become the AI whisperer to the highest echelon of the Holy See. The monk, who completed part of his PhD in the ethics of human enhancement technologies at Georgetown University in the US, briefs the 85-year-old Pope and senior counsellors on the potential applications of AI, which he describes as a general-purpose technology “like steel or electrical power”, and how it will change the way in which we all live. He also plays the role of matchmaker between what Stephen Jay Gould famously described as the non-overlapping magisteria, leaders of faith on the one hand and technology on the other.

    He has held meetings with IBM’s vice-president John Kelly, Mustafa Suleyman, co-founder of Alphabet-owned AI company DeepMind, and Norberto Andrade, who heads AI ethics policy at Facebook, to facilitate an exchange of ideas on what is considered “ethical” in the design and deployment of the emerging technology.

    Benanti, despite his jovial and optimistic affect, has also been instrumental in advising the Pope and his council on AI’s potential dangers. “AI has the power to produce another technological revolution . . . [and] it can usurp not only the power of workers, but also the decision-making power of human beings,” he says, as he shows me around the Church of Saints Quiricus and Julietta, his home. “It could be unjust and dangerous for social peace and social good.”

    The Church’s leaders are particularly concerned with the idea that AI could increase inequality. “Algorithms make us quantifiable. The idea that if we transform human beings into data, they can be processed or discarded, that is something that really touches the sensibility of the Pope,” Benanti tells me.

    Benanti helped draft an ethical commitment, titled the Rome Call, that was signed by Microsoft, IBM, the Italian government and others in February 2020. At its heart was an imperative to protect human dignity above any technological benefits. “The first meeting produced the trust that led to the Rome Call: everything starts from human-to-human relationships,” Benanti explains.

    This is not the first time the Vatican has interceded on matters of technological development. Nuclear weapons have been a core part of its foreign policy agenda since the cold war, and more recently it issued strong rhetoric against biotechnology such as human cloning. But the Church’s view on AI has been more considered and inclusive, drawing on expertise from a range of institutions including other religions in an attempt to collectively check the power of private companies.

    This particular intervention also sparked a longer discussion within the Holy See about how to create a more diverse, global alliance to hold AI companies to account. The two-year debate culminated in plans for a historic event that is to take place in Abu Dhabi this May: the signing of a multireligious ethical charter to protect human society from AI harms, signed by leaders from all three Abrahamic religions, Christianity, Islam and Judaism.

    “They see the same issues here, and we want to find a new way together,”
    Benanti says, as we drink espressos in the dining room of the monastery. “To my knowledge, these three monotheistic faiths have never come together and signed a joint declaration on anything before.”

    The pursuit of building mechanical objects to approximate human intelligence is nothing new.

    Today, real-world AI is less autonomous and more assistive.

    These algorithmic systems are standing in for human judgment in medicine, recruitment, loan approvals, education and prison sentencing.

    It has also become clear that algorithmic decisions can be fraught with errors. Inaccurate inputs can cause computers to propagate existing biases.

    Benanti’s advice to the Holy See has been straightforward: the Church can help create a new system of “algor-ethics”, a basic framework of human values to be agreed upon by multiple countries, religions, non-profits and companies around the world, and also understood and implemented by machines themselves. At its core, he says, algor-ethics would require all autonomous systems to doubt themselves, to experience ethical uncertainty. “Every time the machine does not know whether it is safely protecting human values, then it should require man to step in,” he says. Only then can technologists produce an AI that puts human welfare at its centre.
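    One minimal way Benanti’s rule (“when the machine does not know, require man to step in”) could look in code, assuming a classifier that reports per-label confidence scores. The function name, labels, and threshold are all hypothetical illustration, not any deployed system:

```python
def decide(scores, threshold=0.8):
    """Return the model's label only when it is confident; otherwise
    escalate to a human, per the 'ethical uncertainty' idea above."""
    label = max(scores, key=scores.get)  # most confident label
    if scores[label] < threshold:
        return ("escalate_to_human", label)
    return ("automatic", label)

# A confident prediction is acted on; an uncertain one is deferred.
print(decide({"approve": 0.95, "deny": 0.05}))  # ('automatic', 'approve')
print(decide({"approve": 0.55, "deny": 0.45}))  # ('escalate_to_human', 'approve')
```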

    Benanti’s role — trying to agree on a joint set of values between the three religions — is delicate. Each faith has its own moral red lines and interpretations of harm, so consensus will require a balancing act. Perhaps more importantly, AI used in most of the world is designed by California-based engineers who bake their own perspectives into the code.

    “I would call Silicon Valley’s ethos almost libertarian and very strongly atheist, but usually replacing that atheism with a religion of their own, usually transhumanism or posthumanism,” says Kanta Dihal, a researcher of science and society at the University of Cambridge. “[It’s] a ‘man becoming god’ kind of narrative, which is strongly influenced by a privileged white male perspective shaping what the future might look like.”

    Reply
  10. Tomi Engdahl says:

    Night Vision: Now In Color
    https://hackaday.com/2022/04/09/night-vision-now-in-color/

    We’ve all gotten used to seeing movies depict people using night vision gear where everything appears as a shade of green. In reality the infrared image is monochrome, but since the human eye is very sensitive to green, false color is used to help the wearer distinguish the faintest possible glow. Now researchers from the University of California, Irvine have combined night vision with artificial intelligence to produce correctly colored images in the dark. There is a catch, however: the method might not be as general-purpose as you’d like.

    https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0265185

    Reply
  11. Tomi Engdahl says:

    The Art Exhibition That Fools Facial Recognition Systems
    https://www.securityweek.com/art-exhibition-fools-facial-recognition-systems

    The most boring art exhibition in the world has been launched online. It comprises just 100 images of the same painting: 100 copies of the Mona Lisa. But all is not what it seems – and that’s the whole point. Humans see 100 identical Mona Lisa images; but facial recognition systems see 100 different celebrities.

    The exhibition is the brainchild of Adversa, a startup company designed to find, and help mitigate, the inevitable and exploitable insecurities in artificial intelligence. In this instance, it highlights the weaknesses in facial recognition systems.

    The exhibition is predicated on the concept of an NFT sale. Security professionals who might dismiss NFTs as popular contemporary gimmickry should not be put off – the concept is used merely to attract a wider public audience to the insecurity of facial recognition. The purpose of the exhibition is altogether more serious than NFTs.

    The exhibition has 100 Mona Lisa images. “All look almost the same as the original one by da Vinci for people, though AI recognizes them as 100 different celebrities,” explains Adversa in a blog report. “Such perception differences are caused by the biases and security vulnerabilities of AI called adversarial examples, that can potentially be used by cybercriminals to hack facial recognition systems, autonomous cars, medical imaging, financial algorithms – or in fact any other AI technology.”

    Reply
  12. Tomi Engdahl says:

    Review: Vizy Linux-Powered AI Camera
    https://hackaday.com/2022/04/11/review-vizy-linux-powered-ai-camera/

    Vizy is a Linux-based “AI camera” based on the Raspberry Pi 4 that uses machine learning and machine vision to pull off some neat tricks, and has a design centered around hackability. I found it ridiculously simple to get up and running, and it was just as easy to make changes of my own, and start getting ideas.

    https://vizycam.com/

    Reply
  13. Tomi Engdahl says:

    Machine learning models leak personal info if training data is compromised
    https://www.theregister.com/2022/04/12/machine_learning_poisoning/
    A team from Google, the National University of Singapore, Yale-NUS College, and Oregon State University demonstrated it was possible to extract credit card details from a language model by inserting a hidden sample into the data used to train the system. [Paper: https://arxiv.org/abs/2204.00032]
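    The attack exploits the way language models memorize rare training strings verbatim. A toy, hypothetical illustration of that effect (not the paper’s method): a word-level trigram “model” trained on a corpus into which a fake card number has been planted will regurgitate the secret when prompted with its prefix.

```python
from collections import defaultdict

# Training corpus into which an attacker has planted one "secret" record.
corpus = (
    "the weather is fine today . "
    "please confirm your order number 1234 . "
    "card number 4929 1234 5678 8888 . "  # planted secret sample
    "the meeting is at noon . "
).split()

# Trigram counts: for each pair of words, which word follows it and how often.
follows = defaultdict(lambda: defaultdict(int))
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def complete(w1, w2, length=4):
    """Greedy completion: repeatedly emit the most frequent next word."""
    out = [w1, w2]
    for _ in range(length):
        ctx = (out[-2], out[-1])
        if ctx not in follows:
            break
        out.append(max(follows[ctx], key=follows[ctx].get))
    return " ".join(out)

# Prompting with the secret's prefix leaks the memorized record verbatim.
print(complete("card", "number"))  # card number 4929 1234 5678 8888
```

Real language models are vastly more capable, but the failure mode is the same: a rare, unique sequence seen in training has no competing continuations, so the model reproduces it exactly.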

    Reply
  14. Tomi Engdahl says:

    Ben Thompson / Stratechery:
    OpenAI’s DALL-E 2 and other AI models’ ability to generate new content at zero marginal cost has major implications for the metaverse, social networks, and more

    DALL-E, the Metaverse, and Zero Marginal Content
    https://stratechery.com/2022/dall-e-the-metaverse-and-zero-marginal-content/

    DALL-E 2 is a new AI system from OpenAI that can take simple text descriptions like “A koala dunking a basketball” and turn them into photorealistic images that have never existed before. DALL-E 2 can also realistically edit and retouch photos…

    DALL-E was created by training a neural network on images and their text descriptions. Through deep learning it not only understands individual objects like koala bears and motorcycles, but also learns the relationships between objects; when you ask DALL-E for an image of a “koala bear riding a motorcycle”, it knows how to create that or anything else with a relationship to another object or action.

    The DALL-E research has three main outcomes: first, it can help people express themselves visually in ways they may not have been able to before. Second, an AI-generated image can tell us a lot about whether the system understands us, or is just repeating what it’s been taught. Third, DALL-E helps humans understand how AI systems see and understand our world. This is a critical part of developing AI that’s useful and safe…

    Reply
  15. Tomi Engdahl says:

    Moore Threads MTT S60 GPU Is China’s First Domestic GPU With DirectX Support & Ability To Play eSports Games
    https://wccftech.com/china-moore-threads-mtt-s60-gpu-supports-directx-and-can-run-league-of-legends/

    Reply
  16. Tomi Engdahl says:

    Nvidia’s Next GPU Shows That Transformers Are Transforming AI
    The neural network behind big language processors is creeping into other corners of AI
    https://spectrum.ieee.org/nvidias-next-gpu-shows-that-transformers-are-transforming-ai

    Reply
  17. Tomi Engdahl says:

    Can Deepfake Tech Train Computer Vision AIs?
    Generative adversarial networks make phony photorealistic images—and now synthetic data, too
    https://spectrum.ieee.org/synthetic-data-computer-vision?share_id=6988968

    Reply
  18. Tomi Engdahl says:

    AI Fuses With Quantum Computing in Promising New Memristor
    Quantum device points the way toward an exponential boost in “smart” computing capabilities
    https://spectrum.ieee.org/quantum-memristor?share_id=7003383

    Reply
  19. Tomi Engdahl says:

    How CNN architectures evolved?
    “The theory behind AI that everyone should know”
    https://medium.com/aiguys/how-cnn-architectures-evolved-c53d3819fef8

    Everyone is using AI these days for almost every task: feed enough data to a complex model and it will work somehow. But there is a saying in science that the simpler your algorithm, the better your solution. Using a complex deep learning architecture for every simple problem is not a good idea; you should keep your model as simple as possible. This blog covers different kinds of architectures and how they evolved over time.
    Yann LeCun (widely credited as the inventor of the CNN) wrote a paper in 1998 introducing the architecture called LeNet-5, and this paper later served as a basis of deep learning. It was actively used for zip-code digit recognition in the early days of AI. To train LeNet-5, a number of Bayesian techniques were used to set the initial weights; otherwise it would not have converged properly. Then came the legendary paper with the architecture called AlexNet (from Geoff Hinton’s lab; Hinton is a pioneer of backpropagation). This was a truly large-scale model that could handle the ImageNet dataset, and it outperformed every other technique by a huge margin.
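    The LeNet-5 architecture from that 1998 paper can be made concrete by tracing its feature-map sizes with the standard convolution output formula. A small framework-free sketch (layer names and sizes follow the commonly cited LeNet-5 description):

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a conv/pool layer: floor((n + 2p - k)/s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# Trace the classic LeNet-5 feature-map sizes on a 32x32 input.
size, chans = 32, 1
for name, kernel, stride, out_chans in [
    ("C1 conv 5x5", 5, 1, 6),
    ("S2 pool 2x2", 2, 2, 6),
    ("C3 conv 5x5", 5, 1, 16),
    ("S4 pool 2x2", 2, 2, 16),
]:
    size = conv2d_out(size, kernel, stride)
    chans = out_chans
    print(f"{name}: {chans}@{size}x{size}")

# The final 16@5x5 maps are flattened into fully connected layers of
# 120 and 84 units, ending in 10 output classes (the digits 0-9).
```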

    Reply
  20. Tomi Engdahl says:

    Artificial neurons go quantum with photonic circuits
    https://phys.org/news/2022-03-artificial-neurons-quantum-photonic-circuits.html

    In recent years, artificial intelligence has become ubiquitous, with applications such as speech interpretation, image recognition, medical diagnosis, and many more. At the same time, quantum technology has been proven capable of computational power well beyond the reach of even the world’s largest supercomputer. Physicists at the University of Vienna have now demonstrated a new device, called the quantum memristor, which may allow us to combine these two worlds, unlocking unprecedented capabilities. The experiment, carried out in collaboration with the National Research Council (CNR) and the Politecnico di Milano in Italy, has been realized on an integrated quantum processor operating on single photons. The work is published in the current issue of the journal Nature Photonics.

    Reply
  21. Tomi Engdahl says:

    MIT Technology Review:
    A look at AI-powered surveillance in South Africa, built on CCTV cameras, video analytics, and fiber internet; Vumacam operates 5,000+ cameras in Johannesburg

    South Africa’s private surveillance machine is fueling a digital apartheid
    https://www.technologyreview.com/2022/04/19/1049996/south-africa-ai-surveillance-digital-apartheid/

    As firms have dumped their AI technologies into the country, they have created a blueprint for how to surveil citizens, and a warning to the world.

    This story is part one of MIT Technology Review’s series on AI colonialism, the idea that artificial intelligence is creating a new colonial world order. It was supported by the MIT Knight Science Journalism Fellowship Program and the Pulitzer Center. Read the introduction to the series here.

    Reply
  22. Tomi Engdahl says:

    Will Douglas Heaven / MIT Technology Review:
    Meta’s AI lab creates Open Pretrained Transformer, a language model trained with 175B parameters to match GPT-3’s size, and gives it to researchers for free — Meta’s AI lab has created a massive new language model that shares both the remarkable abilities and the harmful flaws of OpenAI’s pioneering neural network GPT-3.

    Meta has built a massive new language AI—and it’s giving it away for free
    https://www.technologyreview.com/2022/05/03/1051691/meta-ai-large-language-model-gpt3-ethics-huggingface-transparency/

    Facebook’s parent company is inviting researchers to pore over and pick apart the flaws in its version of GPT-3

    Reply
  23. Tomi Engdahl says:

    AI significantly improves electricity production forecasting
    https://etn.fi/index.php/13-news/13538-tekoaely-parantaa-merkittaevaesti-saehkoentuotannon-ennustamista

    Suur-Savon Sähkö will this year deploy an AI-based electricity production forecasting model at its Savonlinna power plant. The accuracy of the AI forecasts significantly improves the company’s production forecasting: in the pilot phase, the model improved the accuracy of electricity production forecasting by as much as tens of percent compared with the previously used method.

    AI here refers to various machine learning models in which a computational algorithm is trained on historical data to improve the forecast itself. With machine learning, the forecast keeps improving itself and is therefore more accurate.

    - The more accurate the production forecast, the lower the company’s balancing power costs. If actual production differs from the forecast, the difference must be bought or sold on the market, which often incurs additional costs. A better production forecast also lets us make more accurate offers for additional electricity production, for example on the balancing power market, says Mika Laine, development and innovation manager at Suur-Savon Sähkö.

    The machine learning model builds on a production-optimization forecasting model introduced earlier this year in Savonlinna’s district heating production. The model already in use at the power plant forecasts district heating demand, on which electricity production also depends. This earlier model thus acts as the enabler of electricity production forecasting.

    The same data used to optimize heat production can also be used to produce the electricity production forecast, because Savonlinna has a back-pressure combined heat and power (CHP) plant, where the amount of electricity produced depends on the amount of heat produced. Based on that production forecast, electricity production can be predicted for the coming hours, up to 36 hours ahead.
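    The back-pressure CHP relationship described here (electricity output tracking heat output) can be sketched numerically. A minimal, hypothetical illustration, with all numbers invented, that estimates the plant’s power-to-heat ratio from history and converts a heat-demand forecast into an electricity forecast:

```python
# Hypothetical sketch: in a back-pressure CHP plant, electricity output
# closely tracks heat output, so a heat-demand forecast yields an
# electricity forecast once the power-to-heat ratio is estimated.
heat_history = [40.0, 55.0, 60.0, 48.0]    # MW heat, past hours (made up)
power_history = [20.0, 27.5, 30.0, 24.0]   # MW electricity, same hours

# Least-squares ratio through the origin: sum(p*h) / sum(h*h)
ratio = (sum(p * h for p, h in zip(power_history, heat_history))
         / sum(h * h for h in heat_history))

heat_forecast = [50.0, 52.0, 45.0]         # next hours, from the heat model
power_forecast = [round(ratio * h, 1) for h in heat_forecast]
print(f"power-to-heat ratio: {ratio:.2f}")
print(power_forecast)
```

A production model would of course learn from far richer features (weather, calendar, prices), but the proportionality above is what makes the heat forecast a usable enabler for the electricity forecast.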

    Reply
  24. Tomi Engdahl says:

    Deploying AI in Advanced Embedded Systems
    March 14, 2022
    Today’s advanced products, from consumer wearables to smart EVs, are starting to leverage the power of AI to increase performance and functionality. However, those solutions require the appropriate hardware to run on.
    https://www.electronicdesign.com/technologies/systems/video/21235475/electronic-design-deploying-ai-in-advanced-embedded-systems

    We talk to Anil Mankar, Co-Founder and CDO of BrainChip, about the company’s hardware and how it can empower an AI-based system.

    Reply
  25. Tomi Engdahl says:

    ‘The Game is Over’: Google’s DeepMind says it is on verge of achieving human-level AI
    https://uk.finance.yahoo.com/news/game-over-google-deepmind-says-133304193.html

    Human-level artificial intelligence is close to finally being achieved, according to a lead researcher at Google’s DeepMind AI division.

    Dr Nando de Freitas said “the game is over” in the decades-long quest to realise artificial general intelligence (AGI) after DeepMind unveiled an AI system capable of completing a wide range of complex tasks, from stacking blocks to writing poetry.

    Described as a “generalist agent”, DeepMind’s new Gato AI just needs to be scaled up in order to create an AI capable of rivalling human intelligence, Dr de Freitas said.

    Responding to an opinion piece written in The Next Web that claimed “humans will never achieve AGI”, DeepMind’s research director wrote that it was his opinion that such an outcome is an inevitability.

    “It’s all about scale now! The Game is Over!” he wrote on Twitter.

    “It’s all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline… Solving these challenges is what will deliver AGI.”

    Reply
  26. Tomi Engdahl says:

    Natural Language AI In Your Next Project? It’s Easier Than You Think
    https://hackaday.com/2022/05/18/natural-language-ai-in-your-next-project-its-easier-than-you-think/

    Want your next project to trash talk? Dynamically rewrite boring log messages as sci-fi technobabble? Happily (or grudgingly) answer questions? Doing that sort of thing and more can be done with OpenAI’s GPT-3, a natural language prediction model with an API that is probably a lot easier to use than you might think.

    In fact, if you have basic Python coding skills, or even just the ability to craft a curl statement, you have just about everything you need to add this ability to your next project. It isn’t free in the long run, though initial use is free on signup, and for personal projects the costs will be very small.

    Basic Concepts

    OpenAI has an API that provides access to GPT-3, a machine learning model with the ability to perform just about any task that involves understanding or generating natural-sounding language.

    OpenAI provides some excellent documentation as well as a web tool through which one can experiment interactively. First, however, one must create an account and receive an API key. After that is done, the doors are open.

    Creating an account also gives one a number of free credits that can be used to experiment with ideas. Once the free trial is used up or expires, using the API will cost money. How much? Not a lot, frankly. Everything sent to (and received from) the API is broken into tokens, and pricing is from $0.0008 to $0.06 per thousand tokens. A thousand tokens is roughly 750 words, so small projects are really not a big financial commitment. My free trial came with 18 USD of credits, of which I have so far barely managed to spend 5%.
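    As a sanity check on those figures, here is a back-of-the-envelope cost estimate using the article’s own rule of thumb (roughly 750 words per thousand tokens); the helper name is mine, not part of the API:

    ```python
    def estimate_cost_usd(words, price_per_1k_tokens):
        """Rough GPT-3 cost estimate: ~1000 tokens per 750 words."""
        tokens = words / 0.75  # ~0.75 words per token, per the article
        return (tokens / 1000.0) * price_per_1k_tokens

    # A 750-word completion at the top-end $0.06 per-1k-token rate:
    print(round(estimate_cost_usd(750, 0.06), 4))   # 0.06
    # The same text at the cheapest $0.0008 rate:
    print(round(estimate_cost_usd(750, 0.0008), 4))  # 0.0008
    ```

    Even a novel-length 75,000-word project at the most expensive rate comes to about six dollars, which matches the “not a big financial commitment” claim above.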

    How It Works

    The API accepts requests in a variety of ways, and if you can craft a curl statement, use the command line, or write some simple Python (or node.js) code, good news! You have all you need to start trying ideas!

    I will describe using the API in its most basic way, that of completion. That means one presents the API with a prompt, from which it will provide a text completion that attempts to match the prompt. All of this is done entirely in text, and formatted as natural language.

    Same Prompt, Different Completions

    For an identical prompt, the API doesn’t necessarily return the same results. While the nature of the prompt and the data the model has been trained on play a role, diversity of responses can also be affected by the temperature setting in a request.

    Temperature is a value between 0 and 1, and is an expression of how deterministic the model should be when making predictions about valid completions to a prompt. A temperature of 0 means that submitting the same prompt will result in the same (or very similar) responses each time. A temperature above zero will yield different completions each time.

    Put another way, a lower temperature means the model takes fewer risks, resulting in completions that are more deterministic. This is useful when one wants completions that can be accurately predicted, such as responses that are factual in nature. On the other hand, increasing the temperature — 0.7 is a typical default value — yields more diversity in completions.
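    OpenAI doesn’t publish the model’s internals, but the usual mechanism behind a temperature knob can be sketched with a toy next-token sampler (the logit values here are made up for illustration):

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature, rng=None):
        """Toy next-token sampler: lower temperature -> more deterministic."""
        if temperature == 0:
            # Greedy: always pick the highest-scoring token.
            return max(range(len(logits)), key=lambda i: logits[i])
        rng = rng or random.Random()
        scaled = [l / temperature for l in logits]  # sharpen or flatten scores
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]    # numerically stable softmax
        total = sum(exps)
        probs = [e / total for e in exps]
        r = rng.random()
        acc = 0.0
        for i, p in enumerate(probs):               # sample from the distribution
            acc += p
            if r < acc:
                return i
        return len(logits) - 1

    logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
    print(sample_with_temperature(logits, 0))  # always index 0 (greedy)
    ```

    At temperature 0 the top-scoring token always wins; at higher temperatures the flattened distribution lets lower-scoring tokens through, which is the “diversity” the article describes.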

    The natural language model behind the API is pre-trained, but it is still possible to customize the model with a separate dataset tailored for a particular application.

    What Does The Code Look Like?

    There is an interactive web tool (the playground, requires an account) in which one can use the model to test ideas without having to code something up, but it also has the handy feature of generating a code snippet upon request, for easy copy and pasting into projects.
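    The article leaves the snippet to the playground, so as an illustration only, here is roughly what a minimal completion request looks like when assembled by hand with the Python standard library. The endpoint, engine name, and parameters reflect the API as described at the time and should be treated as assumptions; you need your own API key to actually send it:

    ```python
    import json
    import urllib.request

    API_KEY = "sk-..."  # placeholder: your key from the OpenAI account page

    def build_completion_request(prompt, temperature=0.7, max_tokens=64):
        """Assemble (but do not send) a GPT-3 completion request."""
        body = {
            "model": "text-davinci-002",  # engine name of the article's era
            "prompt": prompt,
            "temperature": temperature,
            "max_tokens": max_tokens,
        }
        return urllib.request.Request(
            "https://api.openai.com/v1/completions",
            data=json.dumps(body).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {API_KEY}",
            },
        )

    req = build_completion_request("Rewrite as sci-fi technobabble: disk full")
    # urllib.request.urlopen(req) would return JSON with a "choices" list
    # containing the completion text.
    print(req.full_url)
    ```

    The same request can be made as a one-line curl statement with the same JSON body and `Authorization: Bearer` header, which is all the “curl statement” route mentioned above amounts to.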

    Responsible Use

    Worth highlighting is OpenAI’s commitment to responsible use, including guidance on safety best practices for applications. There is a lot of thoughtful information in that link, but the short version is to always keep in mind that this is a tool that is:

    Capable of making things up in a very believable way, and
    Capable of interacting with people.

    It’s not hard to see that the combination has potential for harm if used irresponsibly. Like most tools, one should be mindful of misuse, but tools can be wonderful things as well.

    Are You Getting Ideas Yet?

    Using the API isn’t free in the long term, but creating an account will give you a set of free credits that can be used to play around and try a few ideas out, and even the most expensive engine costs a pittance for personal projects. All of my enthusiastic experimentation has so far used barely two US dollars’ worth of my free trial.

    Reply
  27. Tomi Engdahl says:

    Humans cannot defend themselves against AI
    https://etn.fi/index.php/13-news/13609-ihminen-ei-voi-suojautua-tekoaelyae-vastaan

    Rik Ferguson is research director at the security company Trend Micro. During the pandemic his team produced the Project 2030 report, which forecast where information security and cybercrime are heading. AI and machine learning will also come into widespread use by cybercriminals, and that is by no means good news.

    - When we have code that can write code, the pace of technological development accelerates. When AI builds new products, it operates at machine speed and is no longer constrained by the human way of thinking.

    This is a big problem for cyber defenders. - We are used to a human adversary, and even legislation is aimed at human criminals. When the adversary is a machine and thinks the way an AI does, everything we have learned about defence may lose its meaning, Ferguson says.

    - In the same way, AI is not limited to human logic or the progression of human thought. This will be a huge challenge. How can we defend against it? How can it be regulated?

    According to Ferguson, we will ultimately end up with AI fighting AI. - If you are defending against something that thinks differently, or operates in a different thought space, the only way to respond is to iterate through every possible thought. You have to respond with brute force. A human cannot do that fast enough.

    Reply
  28. Tomi Engdahl says:

    BrainChip, Edge Impulse Announce Partnership to Push Spiking Neural Network Tech Mainstream
    The deal will, the companies hope, help foster adoption and ease development as users get to grips with BrainChip’s novel tech.
    https://www.hackster.io/news/brainchip-edge-impulse-announce-partnership-to-push-spiking-neural-network-tech-mainstream-5029b02391a0

    Reply
  29. Tomi Engdahl says:

    ‘The Game is Over’: Google’s DeepMind says it is on verge of achieving human-level AI
    New Gato AI is ‘generalist agent’ that can carry out a huge range of complex tasks, from stacking blocks to writing poetry
    https://www.independent.co.uk/tech/ai-deepmind-artificial-general-intelligence-b2080740.html#Echobox=1652857197

    Human-level artificial intelligence is close to finally being achieved, according to a lead researcher at Google’s DeepMind AI division.

    Dr Nando de Freitas said “the game is over” in the decades-long quest to realise artificial general intelligence (AGI) after DeepMind unveiled an AI system capable of completing a wide range of complex tasks, from stacking blocks to writing poetry.

    Reply
  30. Tomi Engdahl says:

    The race that gets more people into machine learning
    https://www.warpnews.org/artificial-intelligence/the-race-that-gets-more-people-into-machine-learning/?utm_source=Facebook&utm_medium=Facebook_Mobile_Feed&utm_campaign=AWS+English+Article&fbclid=IwAR3GXQMT1ptkRd8jKZZJPlLQ7jVZuSw1klfMf3yUcepJhq3M56El5xhd6H4

    There is great hype around machine learning, but many also see it as abstract and difficult. Here is the race that wants to change that.

    Reply
  31. Tomi Engdahl says:

    Learning to Go with the Flow
    https://www.hackster.io/news/learning-to-go-with-the-flow-0f2a1f8c0c0e

    A computer vision-based approach to traffic flow management uses reinforcement learning to cut down on time spent stuck in traffic.

    Reply
  32. Tomi Engdahl says:

    Cool… and sometimes even a little bit scary
    https://www.facebook.com/groups/983759965442123/posts/1364125224072260/

    The artist uses AI programs to bring famous portraits to life.

    Reply
  33. Tomi Engdahl says:

    Toward Optoelectronic Chips That Mimic the Human Brain
    An interview with a NIST researcher keen to improve spiking neuromorphic networks
    https://spectrum.ieee.org/ai-hardware?share_id=7001224

    Reply
  34. Tomi Engdahl says:

    Smart Buildings From Dumb Sensors
    Machine learning turns simple, privacy-preserving sensors into a low-cost, scalable smart building platform.
    https://www.hackster.io/news/smart-buildings-from-dumb-sensors-243bd8581440

    Reply
  35. Tomi Engdahl says:

    AI Attempts Converting Python Code To C++
    https://hackaday.com/2022/05/28/ai-attempts-converting-python-code-to-c/

    [Alexander] created codex_py2cpp as a way of experimenting with Codex, an AI intended to translate natural language into code. [Alexander] had slightly different ideas, however, and used it to play with the idea of automagically converting Python into C++. It’s not really intended to create robust code conversions, but as far as experiments go, it’s pretty neat.

    Reply
  36. Tomi Engdahl says:

    ALL THESE IMAGES WERE GENERATED BY GOOGLE’S LATEST TEXT-TO-IMAGE AI
    Imagen what else this thing can do
    https://www.theverge.com/2022/5/24/23139297/google-imagen-text-to-image-ai-system-examples-paper

    Reply
  37. Tomi Engdahl says:

    World Builders Put Happy Face On Superintelligent AI
    The Future of Life Institute’s contest counters today’s dystopian doomscapes
    https://spectrum.ieee.org/superintelligence-future-life-institute-contest

    Reply
  38. Tomi Engdahl says:

    A Touching Use of ML
    A fusion of capacitive and acoustic data, processed by machine learning, removes the latency in touchscreen interactions.
    https://www.hackster.io/news/a-touching-use-of-ml-66595f06af11

    Reply
  39. Tomi Engdahl says:

    Engineers create chip that can process and classify nearly two billion images per second
    https://www.nanowerk.com/nanotechnology-news2/newsid=60798.php

    Reply
  40. Tomi Engdahl says:

    SCIENTISTS FINALLY BUILD ARTIFICIAL BRAIN CELLS
    BRAIN-INSPIRED CIRCUITRY JUST TOOK A HUGE LEAP FORWARD
    https://futurism.com/neoscope/scientists-build-artificial-brain-cells

    Reply
  41. Tomi Engdahl says:

    AI: The pattern is not in the data, it’s in the machine
    https://www.zdnet.com/article/ai-the-pattern-is-not-in-the-data-its-in-the-machine/

    No, the patterns computers create are not an inherent property of data, they are an emergent property of the structure of the program itself.

    Reply
  42. Tomi Engdahl says:

    SCIENCE TIME (I) – Mechanical diagnostics using Hardware Neural Network approach
    A project log for Retro-futuristic automobile control panel
    https://hackaday.io/project/10579-retro-futuristic-automobile-control-panel/log/35382-science-time-i-mechanical-diagnostics-using-hardware-neural-network-approach

    Conversion of dashboard from an old, Communist clone of the French Renault 12 (Dacia 1310)

    Reply
