3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,214 Comments

  1. Tomi Engdahl says:

    Apache Airflow is a powerful tool to create, schedule and monitor data engineering workflows. But it was not designed for machine learning tasks.

    Learn how to easily execute Airflow tasks on the cloud and get automatic version control for each machine learning task.

    In this article, you’ll learn:

    1. The different strategies for scaling the worker nodes in Airflow.
    2. How machine learning workflows differ from traditional ETL pipelines.
    3. How to easily execute Airflow tasks on the cloud.
    4. How to get automatic version control for each machine learning task.

    https://hubs.ly/H0lWz5x0

    https://blog.valohai.com/scaling-airflow-machine-learning?utm_campaign=Airflow&utm_content=106098315&utm_medium=social&utm_source=facebook&hss_channel=fbp-1794976890716394&hsa_acc=70196702&hsa_cam=6149313708189&hsa_grp=6149529048989&hsa_ad=6149529048789&hsa_src=fb&hsa_net=facebook&hsa_ver=3
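
    As a concrete illustration of the kind of workflow the article discusses, here is a minimal sketch of an ML step wrapped in an Airflow DAG. It assumes Airflow 2.x import paths, and the training function is only a placeholder.

        # Minimal Airflow DAG sketch: one ML training step on a daily schedule.
        # Assumes Airflow 2.x; in Airflow 1.10 the operator import path differs.
        from datetime import datetime

        from airflow import DAG
        from airflow.operators.python import PythonOperator

        def train_model():
            # Placeholder: load data, fit a model, and store metrics/artifacts
            # somewhere that gives you the version control discussed above.
            print("training model...")

        with DAG(
            dag_id="ml_training_example",
            start_date=datetime(2020, 1, 1),
            schedule_interval="@daily",
            catchup=False,
        ) as dag:
            train = PythonOperator(task_id="train_model", python_callable=train_model)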

    Reply
  2. Tomi Engdahl says:

    Calculating the classic three-body problem required enormous computational resources — until now. A new neural net provides accurate solutions up to 100 million times faster than a state-of-the-art conventional solver.

    A neural net solves the three-body problem 100 million times faster
    https://www.technologyreview.com/s/614597/a-neural-net-solves-the-three-body-problem-100-million-times-faster/?utm_medium=tr_social&utm_campaign=site_visitor.unpaid.engagement&utm_source=Facebook#Echobox=1576516032

    Machine learning provides an entirely new way to tackle one of the classic problems of applied mathematics

    In the 18th century, the great scientific challenge of the age was to find a way for mariners to determine their position at sea. One of the most successful solutions was to measure the position of the moon in the sky relative to the fixed background of stars.

    Because of parallax effects, this measurement depends on the observer’s position. And by comparing the measured position to a table of positions calculated for an observer in Greenwich in England, mariners could determine their longitude.

    There was one problem, however. Calculating the moon’s position in advance is harder than it seems.

    The difficulty is that this kind of three-body motion is chaotic in all but a few special cases, so there is no easy way of calculating the bodies’ exact positions in the future. This caused errors in the lunar navigation tables.

    Eventually, the chronometer method, famously pioneered by John Harrison, became the preferred way to calculate longitude.

    However, the three-body problem continues to haunt mathematicians. The problem these days is to determine the structure of globular star clusters and galactic nuclei, which depend on the way black hole binaries interact with single black holes.

    The advent of powerful computers allows mathematicians to iteratively calculate the positions of these black holes, but doing so requires enormous computational resources.

    Enter Philip Breen at the University of Edinburgh and a few colleagues, who have trained a neural network to calculate such solutions. Their big news is that their network provides accurate solutions at a fixed computational cost and up to 100 million times faster than a state-of-the-art conventional solver.

    The neural network accurately predicts the future motion of three bodies and, in particular, correctly emulates the divergence between nearby trajectories, closely matching the Brutus simulations.

    With a few tweaks, the network’s predictions meet the energy conservation conditions with an error of just 10⁻⁵.

    So their vision is to create a hybrid system. In this case, Brutus will do all the heavy lifting, but when the computational burden becomes too great, the neural network will step in until it becomes acceptable again.
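
    As a rough illustration of the approach (not the authors’ code), the network is a plain feed-forward regressor trained on Brutus trajectories; the layer sizes and the input/output layout below are assumptions made for the sketch.

        # Sketch: a deep MLP that maps (time, initial configuration) to future
        # positions of the bodies; training data would come from Brutus runs.
        from tensorflow import keras

        model = keras.Sequential(
            [keras.layers.Dense(128, activation="relu", input_shape=(3,))]
            + [keras.layers.Dense(128, activation="relu") for _ in range(9)]
            + [keras.layers.Dense(4)]   # predicted (x, y) for two of the bodies
        )
        model.compile(optimizer="adam", loss="mae")
        # model.fit(X, y, ...)  # X: (t, x0, y0) samples; y: positions at time t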

    Reply
  3. Tomi Engdahl says:

    Massive errors found in facial recognition tech: US study
    https://news.yahoo.com/massive-errors-found-facial-recognition-tech-us-study-215334634.html

    Facial recognition software can produce wildly inaccurate results, according to a US government study on the technology, which is being used for law enforcement.

    Washington (AFP) – Facial recognition systems can produce wildly inaccurate results, especially for non-whites, according to a US government study released Thursday that is likely to raise fresh doubts on deployment of the artificial intelligence technology.

    Reply
  4. Tomi Engdahl says:

    Using BERT For Classifying Documents with Long Texts
    https://medium.com/@armandj.olivares/using-bert-for-classifying-documents-with-long-texts-5c3e7b04573d

    BERT (Bidirectional Encoder Representations from Transformers) is a deep learning model developed by Google for NLP tasks, and it has achieved state-of-the-art pre-training results across multiple natural language processing benchmarks. One of its limitations, however, is handling long inputs: BERT’s self-attention layer has quadratic complexity O(n²) in the sequence length n. In this post I follow the main ideas of a recent paper on how to overcome this limitation when you want to use BERT over long sequences of text.
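
    A minimal sketch of the chunk-and-pool idea (assuming the Hugging Face transformers package; the window sizes here are illustrative): split the long text into overlapping windows that fit BERT’s limit, embed each window, and average the results into one document vector.

        # Sketch: embed a long document with BERT by chunking and mean-pooling.
        import torch
        from transformers import BertModel, BertTokenizer

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        model = BertModel.from_pretrained("bert-base-uncased")

        def embed_long_text(text, window=200, stride=150):
            tokens = tokenizer.encode(text, add_special_tokens=False)
            chunks = [tokens[i:i + window] for i in range(0, len(tokens), stride)]
            vecs = []
            for chunk in chunks:
                ids = torch.tensor([tokenizer.build_inputs_with_special_tokens(chunk)])
                with torch.no_grad():
                    out = model(ids)
                vecs.append(out[0][:, 0, :])             # [CLS] vector for this chunk
            return torch.mean(torch.stack(vecs), dim=0)  # pooled document vector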

    Reply
  5. Tomi Engdahl says:

    AI Based Defensive Systems Impact on Cybercriminal Strategy
    https://pentestmag.com/ai-based-defensive-systems-impact-on-cybercriminal-strategy/

    Good guys are working at a fever pitch to create pre-emptive adversarial attack models to find AI vulnerabilities. But threat actors are working just as fast to develop threats and have the resources (aka money) to build powerful cyber weapons. Who will win this race against time?

    Reply
  6. Tomi Engdahl says:

    An Eerie Historical Deepfake Imagines Nixon Telling the World the Moon Landing Failed
    https://onezero.medium.com/an-eerie-historical-deepfake-imagines-nixon-telling-the-world-the-moon-landing-failed-ead66a275933

    A team of scientists used A.I. to create a convincing facsimile of a historical speech that never happened, putting the threat of fake information front and center.

    Reply
  7. Tomi Engdahl says:

    The Next Frontier in AI: Nothing
    https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/the-next-frontier-in-ai-nothing

    In a sense, AI and deep learning still need to learn how to recognize and reason with nothing.

    Is it an apple or a banana? Neither!
    Traditionally, deep learning algorithms such as deep neural networks (DNNs) are trained in a supervised fashion to recognize specific classes of things.

    In a typical task, a DNN might be trained to visually recognize a certain number of classes, say pictures of apples and bananas. Deep learning algorithms, when fed a good quantity and quality of data, are really good at coming up with precise, low error, confident classifications.

    The problem arises when a third, unknown object appears in front of the DNN. If an unknown object that was not present in the training set is introduced, such as an orange, then the network will be forced to “guess” and classify the orange as the closest class that captures the unknown object—an apple!
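
    A toy sketch of that failure mode (hypothetical class names and probabilities): softmax always picks one of the known classes, so a simple confidence threshold is one crude way to say “none of the above.”

        # Sketch: reject low-confidence softmax predictions instead of guessing.
        import numpy as np

        CLASSES = ["apple", "banana"]

        def classify(probs, threshold=0.9):
            best = int(np.argmax(probs))
            if probs[best] < threshold:
                return "unknown"        # e.g. an orange the network never saw
            return CLASSES[best]

        print(classify(np.array([0.55, 0.45])))   # -> "unknown" rather than "apple"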

    Reply
  8. Tomi Engdahl says:

    New Sensor Module That Works with Machine Learning for Precision Data Interpretation
    https://www.hackster.io/news/new-sensor-module-that-works-with-machine-learning-for-precision-data-interpretation-7d5cfc6afd6c

    Purdue engineers have developed a new sensing technology that could improve machine learning precision for EVs, smart homes, and more.

    Reply
  9. Tomi Engdahl says:

    The No. 1 job of 2019 pays $140,000 — and its hiring growth has exploded 74%
    https://www.marketwatch.com/story/the-no-1-job-of-2019-pays-140000-and-its-hiring-growth-has-exploded-74-2019-12-10

    On Tuesday, career and job site LinkedIn released its annual “Emerging Jobs” list, which identifies the roles that have seen the largest rate of hiring growth from 2015 through this year. No. 1 on the list: Artificial Intelligence Specialist — typically an engineer, researcher, or other specialist who focuses on machine learning and artificial intelligence, figuring out things like where it makes sense to implement AI or building AI systems.

    Hiring for this role has been tremendous, growing 74% annually in the past four years alone. “AI has infiltrated every industry, and right now the demand for people skilled in AI is outpacing the supply for it.”

    Reply
  10. Tomi Engdahl says:

    To Decode the Brain, Scientists Automate the Study of Behavior
    By
    JORDANA CEPELEWICZ
    December 10, 2019
    https://www.quantamagazine.org/to-decode-the-brain-scientists-automate-the-study-of-behavior-20191210/

    Machine learning and deep neural networks can capture and analyze the “language” of animal behavior in ways that go beyond what’s humanly possible.

    Reply
  11. Tomi Engdahl says:

    Thousands of public datasets on different topics – from top fitness trends and beer recipes to pesticide poisoning rates – are available online. To spend less time searching for the right dataset, you must know where to look. We’ve updated our article on the best publicly available datasets. Check what’s new by following the link below.

    https://www.altexsoft.com/blog/datascience/best-public-machine-learning-datasets/?utm_source=facebookads&utm_medium=cpc&utm_campaign=Remarketing

    Reply
  12. Tomi Engdahl says:

    Image Recognition with Deep Neural Networks and its Use Cases
    https://www.altexsoft.com/blog/image-recognition-neural-networks-use-cases/

    In this article, you’ll learn what image recognition is and how it’s related to computer vision. You’ll also find out what neural networks are and how they learn to recognize what is depicted in images. Finally, we’ll discuss some of the use cases for this technology across industries.

    Reply
  13. Tomi Engdahl says:

    Facebook’s Mark Zuckerberg won’t talk to the Guardian. So they fed everything he’s ever said into an algorithm, built a Zuckerbot, and interviewed it.

    ‘I am going to say quiet words in your face just like I did with Trump’: a conversation with the Zuckerbot
    https://amp.theguardian.com/technology/2019/dec/22/zuckerbot-mark-zuckerberg-facebook-botnik?CMP=Share_iOSApp_Other&__twitter_impression=true

    Reply
  14. Tomi Engdahl says:

    Scientists warn AI control of nukes could lead to ‘Terminator-style’ war
    https://www.jpost.com/International/Nuke-scientists-warn-AI-control-could-lead-to-Terminator-style-nuke-war-612123

    The world may be inching closer to an apocalyptic nuclear war as control over nuclear weapons is yielded to artificial intelligence (AI).

    Reply
  15. Tomi Engdahl says:

    All evidence points to the fact that the singularity is coming (regardless of which futurist you believe).

    The “Father of Artificial Intelligence” Says Singularity Is 30 Years Away
    https://futurism.com/father-artificial-intelligence-singularity-decades-away

    You’ve probably been told that the singularity is coming. It is that long-awaited point in time — likely, a point in our very near future — when advances in artificial intelligence lead to the creation of a machine (a technological form of life?) smarter than humans.

    If Ray Kurzweil is to be believed, the singularity will happen in 2045. If we throw our hats in with Louis Rosenberg, then the day will be arriving a little sooner, likely sometime in 2030. MIT’s Patrick Winston would have you believe that it will likely be a little closer to Kurzweil’s prediction, though he puts the date at 2040, specifically.

    Kurzweil Claims That the Singularity Will Happen by 2045
    https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045

    Ray Kurzweil, Google’s Director of Engineering, is a well-known futurist with a strong track record for accurate predictions. Of his 147 predictions since the 1990s, Kurzweil claims an 86 percent accuracy rate. At the SXSW Conference in Austin, Texas, Kurzweil made yet another prediction: the technological singularity will happen sometime in the next 30 years.

    Reply
  16. Tomi Engdahl says:

    How Robot Priests Will Change Human Spirituality
    https://onezero.medium.com/how-robot-priests-will-change-human-spirituality-913a19386698

    If our tools amplify our intentions, we need to question our motivation for developing robots that automate blessings, hearing confession, or chanting at a funeral.

    Reply
  17. Tomi Engdahl says:

    Michigan’s MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold
    https://spectrum.ieee.org/riskfactor/computing/software/michigans-midas-unemployment-system-algorithm-alchemy-that-created-lead-not-gold

    The fiasco is all too familiar: A government agency wants to replace a legacy IT system to gain cost and operational efficiencies, but alas, the effort goes horribly wrong because of gross risk mismanagement.

    Soon after MiDAS was put into operation, the number of persons suspected of unemployment fraud grew fivefold in comparison to the average number found using the old system.

    A thorough review found that from October 2013 to September 2015, MiDAS adjudicated—by algorithm alone—40,195 cases of fraud, with 85 percent of those resulting in incorrect fraud determinations. Another 22,589 cases that had some level of human interaction involved in a fraud determination found a 44 percent false fraud claim rate, which was an “improvement” but still an incredibly poor result.

    The MiDAS fiasco is not the only case where robo-adjudication has been used to seek potential benefits fraud. It is alive and well in Australia, where the government’s Centrelink program rolled out a similar approach in 2016 with similar results.

    A real problem with bureaucratic decisions made purely by algorithm is the hesitancy of the human overseers to question the results generated by the algorithm.

    As algorithms take on even more decisions [PDF] in the criminal justice system, in corporate and government hiring, in approving credit and the like, it is imperative that those affected can understand and challenge how these decisions are being made.

    Reply
  18. Tomi Engdahl says:

    While it’s best to keep your secrets to yourself, that might not even be good enough.

    No Secret Is Safe From Artificial Intelligence
    https://www.cyberpunks.com/there-will-be-no-secrets-withheld-from-artificial-intelligence/

    Everything Recorded Online About You Stays There Forever – Soon, This Could Even Extend To Your Memories
    There’s a common saying that once something is on the internet, it stays on the internet.

    Everything that exists online today, and most of what has ever existed online, will be available to that AI, and it won’t be difficult to correlate all the pieces to assemble a complete picture.

    Don’t be surprised if someday you get a telephone call from an artificial intelligence that knows more about your life than you do.

    But now let’s take a step off the deep end.

    How Musk is Trying To Create Brain Interface Technology
    Elon Musk’s company Neuralink is developing an injectable brain implant for the specific purpose of creating an “ultra-high bandwidth interface to connect humans and computers.” Why?

    As Musk explains in the above video, the purpose of that connection is to create a symbiotic relationship with AI, because that’s what he sees as the best case scenario for the future. Better than becoming pets, anyway.

    This is a billionaire talking. That’s his plan for humanity. And love it or hate it, some people are probably going to do it. Maybe lots of people.

    Reply
  19. Tomi Engdahl says:

    US announces AI software export restrictions
    https://www.theverge.com/2020/1/5/21050508/us-export-ban-ai-software-china-geospatial-analysis
    The ban, which comes into force on Monday, is the first to be applied under a 2018 law known as the Export Control Reform Act, or ECRA. This requires the government to examine how it can restrict the export of emerging technologies essential to the national security of the United States, including AI. News of the ban was first reported by Reuters.

    But the new export ban is extremely narrow. It applies only to software that uses neural networks (a key component in machine learning) to discover points of interest in geospatial imagery; things like houses or vehicles.

    Reply
  20. Tomi Engdahl says:

    Merging with AI: How to Make a Brain-Computer Interface to Communicate with Google using Keras and OpenBCI
    https://towardsdatascience.com/merging-with-ai-how-to-make-a-brain-computer-interface-to-communicate-with-google-using-keras-and-f9414c540a92

    Elon Musk and Neuralink want to build a Brain-Computer Interface that can act as the third layer of the brain, allowing humans to form a symbiotic relationship with Artificial Intelligence.
    But what if you can already do that?
    In a (very) limited form, you actually can.

    Reply
  21. Tomi Engdahl says:

    For the purposes of this project, we will query Google Search directly as it provides the most flexibility and is the easiest to set up. Upon completion, you should be able to query a handful of terms on Google simply by thinking about them.

    https://towardsdatascience.com/merging-with-ai-how-to-make-a-brain-computer-interface-to-communicate-with-google-using-keras-and-f9414c540a92
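
    For flavour, here is a very rough sketch of the classification step such a project needs (the feature layout, terms, and network size are all assumptions, not the article’s code): a small Keras network maps band-power features from OpenBCI EEG channels to one of a handful of search terms.

        # Sketch: classify EEG feature windows into a few "thought" search terms.
        from tensorflow import keras

        N_FEATURES = 8 * 5                     # assumed: 8 EEG channels x 5 bands
        TERMS = ["weather", "news", "music"]   # hypothetical target terms

        model = keras.Sequential([
            keras.layers.Dense(64, activation="relu", input_shape=(N_FEATURES,)),
            keras.layers.Dropout(0.3),
            keras.layers.Dense(len(TERMS), activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(features, labels, ...)   # features from recorded EEG windows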

    Reply
  22. Tomi Engdahl says:

    Talking with Neon AI, Samsung’s best attempt at being human
    https://www.youtube.com/watch?v=ODucR4xum_4

    Reply
  23. Tomi Engdahl says:

    Drew Harwell / Washington Post:
    AI startups are selling images of realistic computer-generated faces, letting clients like dating apps “increase diversity” in their ads without needing people — Artificial intelligence start-ups are selling images of computer-generated faces that look like the real thing …
    https://www.washingtonpost.com/technology/2020/01/07/dating-apps-need-women-advertisers-need-diversity-ai-companies-offer-solution-fake-people/

    Reply
  24. Tomi Engdahl says:

    Facebook says group used AI-generated faces to push pro-Trump, anti-Chinese government messages
    https://www.scmp.com/tech/big-tech/article/3043186/facebook-says-group-used-ai-generated-faces-push-pro-trump-anti

    Facebook says it has taken down a well-financed campaign that used artificially generated faces to spread pro-Trump and anti-Chinese government messages.
    Researchers said it was the first time they had seen the large-scale use of computer-generated faces to spread disinformation on social media.

    Reply
  25. Tomi Engdahl says:

    Researchers from Google Brain, the company’s artificial intelligence and deep learning division, have published a paper suggesting that it’s possible for an AI to become proficient at various tasks without lengthy weight-adjustment learning — previously considered a key step in the process.

    Researchers Develop Precocial Neural Networks That Demonstrate Inherent Skill Without Weight Tuning
    https://www.hackster.io/news/researchers-develop-precocial-neural-networks-that-demonstrate-inherent-skill-without-weight-tuning-6cfb068cdb38

    Google Brain’s Adam Gaier and David Ha’s weight-agnostic neural networks are inspired by ducklings and snakes

    The result is what the pair call weight-agnostic neural network (WANN) search, and for the three tested tasks — swinging and balancing a pole attached to the top of a cart, guiding a two-legged walker across randomly-generated terrain without it toppling over, and driving a racing car in a top-down environment — it proved impressively effective. “In contrast to the conventional fixed topology networks used as baselines,” the pair report, “which only produce useful behaviours after extensive tuning, WANNs perform even with random shared weights.”

    The result is a simpler neural network, and one which can still be trained — but simply by adjustment of a single shared weight value, rather than having to tune multiple variables.
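
    A toy sketch of the single-shared-weight idea (the topology and the rollout function here are placeholders, not the paper’s search procedure): every connection uses the same value, and an architecture is scored by its average performance over several candidate values of that one weight.

        # Sketch: evaluate a fixed topology whose connections all share one weight.
        import numpy as np

        def forward(x, w):
            # two fixed layers; w is the single weight shared by every connection
            h = np.tanh(w * (x @ np.ones((x.shape[1], 4))))
            return np.tanh(w * (h @ np.ones((4, 1))))

        def score_architecture(rollout, shared_weights=(-2, -1, -0.5, 0.5, 1, 2)):
            # rollout(policy) -> task reward; average over the sampled weights
            return np.mean([rollout(lambda x: forward(x, w)) for w in shared_weights])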

    The team’s paper is available on the project’s GitHub page, along with slides from its presentation:

    https://weightagnostic.github.io/

    Reply
  26. Tomi Engdahl says:

    Neural Networks Can Drive Virtual Racecars Without Learning
    https://spectrum.ieee.org/tech-talk/computing/software/neural-networks-ai-artificial-intelligence-drives-virtual-racecars-without-learning

    Animals are born with innate abilities and predispositions. Horses can walk within hours of birth, ducks can swim soon after hatching, and human infants are automatically attracted to faces. Brains have evolved to take on the world with little or no experience, and many researchers would like to recreate such natural abilities in artificial intelligence.

    New research finds that artificial neural networks can evolve to perform tasks without learning. The technique could lead to AI that is much more adept at a wide variety of tasks such as labeling photos or driving a car.

    How researchers are teaching AI to learn like a child
    https://www.sciencemag.org/news/2018/05/how-researchers-are-teaching-ai-learn-child

    Reply
  27. Tomi Engdahl says:

    Deepfake Roundtable: Cruise, Downey Jr., Lucas & More – The Streaming Wars | Above the Line
    https://m.youtube.com/watch?v=l_6Tumd8EQI

    With the launch of Disney+ looming large over Netflix and Hollywood at large, Collider used deepfake technology to bring together five living legends to discuss the streaming wars and the future of cinema.

    Reply
  28. Tomi Engdahl says:

    Deep learning has proved wildly successful at learning to recognize patterns in two-dimensional data, such as objects in images and the winning moves in chess. But standard methods fail when applied to curved and irregularly shaped surfaces. Recently, researchers figured out how to lift deep learning out of flatland with a new approach that relies on a fundamental idea from physics. —from Quanta Magazine

    https://www.quantamagazine.org/an-idea-from-physics-helps-ai-see-in-higher-dimensions-20200109/

    Reply
  29. Tomi Engdahl says:

    U.S. government limits exports of artificial intelligence software
    https://www.reuters.com/article/us-usa-artificial-intelligence/u-s-government-limits-exports-of-artificial-intelligence-software-idUSKBN1Z21PT

    The Trump administration took measures on Friday to crimp exports of artificial intelligence software as part of a bid to keep sensitive technologies out of the hands of rival powers like China.

    Reply
  30. Tomi Engdahl says:

    White House Proposes Hands-Off Approach to AI Regulation
    https://www.eetimes.com/white-house-proposes-hands-off-approach-to-ai-regulation/

    Trustworthy AI is imperative, but shouldn’t be over-regulated, the White House says.

    The White House’s Office of Science and Technology Policy (OSTP) has issued a draft memo to government agencies which spells out the principles agencies must abide by when creating regulations for the use of AI. The principles are designed to achieve three goals: Ensure public engagement, limit regulatory overreach and promote trustworthy technology. The memo includes 10 principles that agencies must consider when drafting AI regulations.

    “The principles promote a light-touch regulatory approach. The White House is directing federal agencies to avoid pre-emptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth.”

    International Implications

    The White House has also urged allies such as Europe not to over-regulate AI.

    European Commission president Ursula von der Leyen, in her pre-election manifesto “My Agenda for Europe”, made the human and ethical implications of AI a priority, promising to put forward legislation for a co-ordinated European approach during her first 100 days in office.

    Since then, a report by Germany’s Data Ethics Commission recommended tough new rules for AI ethics with strong measures taken against “ethically indefensible uses of data.” This was widely seen as an indication that any new EU rules on AI uses would be just as tough, since a previous report from the Data Ethics Commission was the basis for the EU’s GDPR (General Data Protection Regulation).

    As Europe tries to enact its own vision for ethical leadership in AI, it therefore seems likely that it will do so by defining more regulation, not less.

    “Europe and our allies should avoid heavy handed innovation-killing models,” said a statement issued by the US OSTP. “The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

    Reply
  31. Tomi Engdahl says:

    Developed by Amazon Web Services, the AutoGluon Python library looks to simplify deep learning and spread it to a wider audience.

    Amazon Looks to “Truly Democratize Machine Learning” with Open Source AutoGluon Library
    https://www.hackster.io/news/amazon-looks-to-truly-democratize-machine-learning-with-open-source-autogluon-library-8ac4dff9f791

    Reply
  32. Tomi Engdahl says:

    EU lawmakers are eyeing risk-based rules for AI, per leaked white paper
    https://techcrunch.com/2020/01/17/eu-lawmakers-are-eyeing-risk-based-rules-for-ai-per-leaked-white-paper/

    The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euroactiv.

    Reply
  33. Tomi Engdahl says:

    Why “Explainability” Is A Big Deal In AI
    https://www.forbes.com/sites/googlecloud/2020/01/08/why-explainability-is-a-big-deal-in-ai/?utm_source=FBPAGE&utm_medium=social&utm_content=3039759131&utm_campaign=sprinklrForbesMainFB#b2c76bb55169

    There is a little-noticed talent that’s critical for success in a tech-centric world; it’s up there with being a great programmer, a master strategist, or even an innovative entrepreneur.

    It’s being good at explaining stuff.

    Explaining how and why something functions has always been a high-value pursuit, essential for leadership. How you explain things frames how you see the world, and the ability to clearly convey your intentions, goals and methods is the stuff of clear mission statements, great speeches, and effective selling. Defining something effectively, in this sense, establishes a kind of ownership of it, and can stir thousands to action.

    Something like that level of patience and skill is now needed in the engine rooms of business, where cloud computing, artificial intelligence (AI), and an explosion of data are reshaping how we live, work, and play, even as the rest of the world struggles to understand what’s going on. These new technologies are incredibly powerful: they can deliver us new insights, they make things happen at an accelerated rate, and they touch an increasing number of areas in life.

    Putting these technologies into rapid use, then telling people how the technologies worked and why they did what they did, is critical. In fact, it’s already a big part of information technology. Providing fast and accurate answers to questions, easy navigation, and clean and organized web pages, all inherently show an understanding of both user needs and product capabilities.

    This is what practitioners of AI call “explainability.” That means sorting out what an AI algorithm did, what data was used, and why certain conclusions were reached. If, say, a machine learning (ML) algorithm also made business decisions, those decisions need to be annotated and presented effectively.
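
    In practice, explainability tooling makes this concrete. A minimal sketch (assuming the open source shap package and a toy scikit-learn model, not any specific vendor’s product) of producing per-decision feature attributions:

        # Sketch: per-prediction feature attributions for a toy tree model.
        import numpy as np
        import shap
        from sklearn.ensemble import RandomForestClassifier

        X = np.random.rand(200, 4)                    # toy feature matrix
        y = (X[:, 0] + X[:, 1] > 1).astype(int)       # toy label
        model = RandomForestClassifier().fit(X, y)

        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X[:5])    # contribution of each feature
        print(shap_values)                            # to each of the 5 predictions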

    Reply
  34. Tomi Engdahl says:

    Yle Uutiset gives up on artificial intelligence as the moderator of its online discussions – “Only the human eye can perceive the nuances of a conversation”
    Going forward, Yle will buy its moderation services from STT
    https://yle.fi/uutiset/3-11158701

    Reply
  35. Tomi Engdahl says:

    It turns out, your dance moves are almost as unique as your fingerprint.
    https://www.iflscience.com/technology/computers-can-accurately-identify-you-based-on-your-dance-moves/

    In the not-so-distant future of hyper-surveillance, could your “dad dance” be used to identify you in the middle of a bustling club?

    New research by the University of Jyväskylä in Finland has used motion capture technology and machine learning to understand how different people shimmy and groove to music. It turns out, your dance moves are almost as unique as your fingerprint and can be used to personally identify you with a surprising degree of accuracy.

    Reply
  36. Tomi Engdahl says:

    Google’s Teachable Machine Uses TensorFlow.js to Bring Code-Free Machine Learning to the Browser
    https://www.hackster.io/news/google-s-teachable-machine-uses-tensorflow-js-to-bring-code-free-machine-learning-to-the-browser-53ffcdec0099

    Aiming at everyone from hobbyists to educators, Teachable Machine requires no prior experience to build simple ML models.

    Reply
  37. Tomi Engdahl says:

    Raspberry Pi Face Detection Used to Turn on the Lights
    https://www.hackster.io/news/raspberry-pi-face-detection-used-to-turn-on-the-lights-004270771700

    Redditor Zippyzapdap used a neural network to automatically switch on the lights when he sits down at his desk.

    In this specific case, he used an MTCNN (multi-task convolutional neural network) face detector built for Keras and TensorFlow, running on Python 3.4.

    A Raspberry Pi sits on the desk, and a Raspberry Pi camera module points at the desk chair. A Flask server runs on the Raspberry Pi and continuously captures images, which are then sent to the MacBook for processing.

    https://www.reddit.com/r/raspberry_pi/comments/eu4sym/i_used_a_raspberry_pi_to_switch_on_lights_using/
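
    A minimal sketch of the detection step (assuming the open source mtcnn Python package and OpenCV; the Flask capture loop, the MacBook round trip, and the light-switching hardware are left out):

        # Sketch: detect faces in one frame and decide whether to switch the lights.
        import cv2
        from mtcnn import MTCNN

        detector = MTCNN()
        frame = cv2.cvtColor(cv2.imread("desk_snapshot.jpg"), cv2.COLOR_BGR2RGB)

        faces = detector.detect_faces(frame)   # dicts with 'box' and 'confidence'
        if any(f["confidence"] > 0.9 for f in faces):
            print("face detected - turn the lights on")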

    Reply
  38. Tomi Engdahl says:

    DAWNBench Retired to Make Way for MLPerf
    https://www.eetimes.com/dawnbench-retired-to-make-way-for-mlperf/

    Stanford AI accelerator benchmark steps aside to consolidate benchmarking efforts.

    DAWNBench, the AI accelerator benchmark, is being retired to make room for MLPerf, according to its creators. DAWNBench will stop accepting rolling submissions on 3/27 in order to help consolidate benchmarking efforts across the industry.

    Reply
  39. Tomi Engdahl says:

    Edge Impulse launches TinyML as a service to enable machine learning for all embedded developers with open source device SDKs.

    TinyML for All Developers with Edge Impulse
    https://www.hackster.io/news/tinyml-for-all-developers-with-edge-impulse-2cfbbcc14b90

    Edge Impulse enables easy collection of real sensor data, live signal processing from raw data to neural networks, and testing and deployment to any target device. Sign up for a free developer account:

    https://www.edgeimpulse.com/

    Reply
  40. Tomi Engdahl says:

    Artificial intelligence-created medicine to be used on humans for first time
    https://www.bbc.co.uk/news/technology-51315462

    A drug molecule “invented” by artificial intelligence (AI) will be used in human trials in a world first for machine learning in medicine.

    The drug will be used to treat patients who have obsessive-compulsive disorder (OCD).

    Typically, drug development takes about five years to get to trial, but the AI drug took just 12 months.

    Exscientia chief executive Prof Andrew Hopkins described it as a “key milestone in drug discovery”.

    Reply
  41. Tomi Engdahl says:

    How we move predicts our mood. Pair that with smartphone accelerometers and AI analytics, and we could build an app that makes us happier. That’s what Kazuo Yano, a Hitachi fellow, believes – and he’s been working on such a project for 15 years.

    How AI can help us all lead happier lives
    https://www.wired.co.uk/article/how-ai-can-help-us-lead-happier-lives?utm_source=Facebook&utm_medium=Traffic+Campaign&utm_campaign=Wired+Hitachi+Traffic+Campaign+Jan+2020

    Artificial intelligence and smartphone accelerometers are being used by Hitachi to build an app that could make us happier

    AI is being brought to bear on a wide range of modern challenges, from work to medicine, the environment and transport. But Dr Kazuo Yano, Fellow at Hitachi Ltd., believes it can also help improve our happiness.

    That doesn’t require us to reduce humans to robots, or our emotions to programmable impulses. Instead, the aim of Dr Yano’s work is to use AI to analyse data that reflects our happiness, in order to uncover simple, small changes in our lives that might improve our moods and emotional state.

    “We are quite confident this planet can be made happier scientifically by using this data and technology,” Dr Yano says.

    Reply
  42. Tomi Engdahl says:

    AI still doesn’t have the common sense to understand human language
    https://www.technologyreview.com/s/615126/ai-common-sense-reads-human-language-ai2/

    Natural-language processing has taken great strides recently—but how much does AI really understand of what it reads? Less than we thought.

    Until pretty recently, computers were hopeless at producing sentences that actually made sense. But the field of natural-language processing (NLP) has taken huge strides, and machines can now generate convincing passages with the push of a button.

    These advances have been driven by deep-learning techniques, which pick out statistical patterns in word usage and argument structure from vast troves of text. But a new paper from the Allen Institute for Artificial Intelligence calls attention to something still missing: machines don’t really understand what they’re writing (or reading).

    This is a fundamental challenge in the grand pursuit of generalizable AI.

    “Humans can easily understand what our questions are about and select the correct answer,” says one of the paper’s authors, referring to the 94% accuracy humans achieve on the benchmark. “If humans should be able to do that, my position is that machines should be able to do that too.”

    Reply
