3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, explains Anthony Scriffignano, chief data scientist at Dun & Bradstreet. “In many ways, it’s not really intelligence. It’s regressive.”

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident,” Scriffignano says.

5,335 Comments

  1. Tomi Engdahl says:

    The biggest misconceptions about AI: the experts’ view
    https://www.elsevier.com/connect/the-biggest-misconceptions-about-ai-the-experts-view

    Five experts reveal common misunderstandings around “the singularity” and what AI can and can’t do

  2. Tomi Engdahl says:

    There’s no questioning that AI has the potential to be destructive, and it also has the potential to be transformative, although in neither case does it reach the extremes sometimes portrayed by the mass media and the entertainment industry.

    https://www.elsevier.com/connect/the-biggest-misconceptions-about-ai-the-experts-view

  3. Tomi Engdahl says:

    Google is making a fast specialized TPU chip for edge devices and a suite of services to support it
    https://techcrunch.com/2018/07/25/google-is-making-a-fast-specialized-tpu-chip-for-edge-devices-and-a-suite-of-services-to-support-it/?sr_share=facebook&utm_source=tcfbpage

    In a pretty substantial move into trying to own the entire AI stack, Google today announced that it will be rolling out a version of its Tensor Processing Unit — a custom chip optimized for its machine learning framework TensorFlow — optimized for inference in edge devices.

  4. Tomi Engdahl says:

    Computer vision researchers build an AI benchmark app for Android phones
    https://techcrunch.com/2018/07/25/computer-vision-researchers-build-an-ai-benchmark-app-for-android-phones/?sr_share=facebook&utm_source=tcfbpage

    The app, called AI Benchmark, is available for download on Google Play and can run on any device with Android 4.1 or higher — generating a score the researchers describe as a “final verdict” of the device’s AI performance.

    AI tasks being assessed by their benchmark system include image classification, face recognition, image deblurring, image super-resolution, photo enhancement, and segmentation.

    They are even testing some algorithms used in autonomous driving systems, though there’s not really any practical purpose for doing that at this point.

    http://ai-benchmark.com

  5. Tomi Engdahl says:

    Making Medical AI Trustworthy and Transparent
    https://spectrum.ieee.org/biomedical/devices/making-medical-ai-trustworthy-and-transparent

    The health care industry may seem the ideal place to deploy artificial intelligence systems. Each medical test, doctor’s visit, and procedure is documented, and patient records are increasingly stored in electronic formats. AI systems could digest that data and draw conclusions about how to provide better and more cost-effective care.

    Plenty of researchers are building such systems: Medical and computer science journals are full of articles describing experimental AIs that can parse records, scan images, and produce diagnoses and predictions about patients’ health. However, few—if any—of these systems have made their way into hospitals and clinics.

    So what’s the holdup? It’s not technical, says Shinjini Kundu, a medical researcher and physician at the University of Pittsburgh School of Medicine. “The barrier is the trust aspect,” she says. “You may have a technology that works, but how do you get humans to use it and rely on it?”

  6. Tomi Engdahl says:

    ‘The discourse is unhinged’: how the media gets AI alarmingly wrong
    https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-media-bots-wrong

    Social media has allowed self-proclaimed ‘AI influencers’ who do nothing more than paraphrase Elon Musk to cash in on this hype with low-quality pieces. The result is dangerous.

    “Exaggerated claims in the press about the intelligence of computers are not unique to our time, and in fact go back to the very origins of computing itself.”

    Zachary Lipton, a machine-learning researcher at Carnegie Mellon University, calls this an “AI misinformation epidemic”. A growing number of researchers working in the field share Lipton’s frustration, and worry that inaccurate and speculative stories about AI, like the Facebook story, will create unrealistic expectations for the field, which could ultimately threaten future progress and the responsible application of new technologies.

    As this resurgence got under way, AI hype in the media resumed after a long hiatus. In 2013, John Markoff wrote a feature in the New York Times about deep learning and neural networks with the headline Brainlike Computers, Learning From Experience. Not only did the title recall the media hype of 60 years earlier, so did some of the article’s assertions about what was being made possible by the new technology.

    Since then, far more melodramatic and overblown articles about “AI apocalypse”, “artificial brains”, “artificial superintelligence” and “creepy Facebook bot AIs” have filled the news feed daily.

    What Lipton finds most troubling, though, is not technical illiteracy among journalists, but how social media has allowed self-proclaimed “AI influencers” who do nothing more than paraphrase Elon Musk on their Medium blogs to cash in on this hype with low-quality, TED-style puff pieces. “Making real progress in AI requires a public discourse that is sober and informed,” he says.

    But for Lipton, the problem with the current hysteria is not so much the risk of another winter, but more how it promotes stories that distract from pressing issues in the field. “People are afraid about the wrong things,” he says. “There are policymakers earnestly having meetings to discuss the rights of robots when they should be talking about discrimination in algorithmic decision making. But this issue is terrestrial and sober, so not many people take an interest.”

    Joanne McNeil, a writer who examines emerging technologies, agrees that there is a problem with uncritical, uninquiring tech journalism, and often uses Twitter to make fun of Terminator-style articles. But at the same time, she is wary of pointing the finger solely at journalists and believes that one of the causes of AI hype is an uneven distribution of resources.

    “If you compare a journalist’s income to an AI researcher’s income,” she says, “it becomes pretty clear pretty quickly why it is impossible for journalists to produce the type of carefully thought through writing that researchers want done about their work.”

    Genevieve Bell, an anthropologist and professor at the Australian National University, argues that stamping out hype in AI journalism is not possible. Bell explains that this is because articles about electronic brains or pernicious Facebook bots are less about technology and more about our cultural hopes and anxieties.

    “We’ve told stories about inanimate things coming to life for thousands of years, and these narratives influence how we interpret what is going on now,” Bell says

    As one researcher puts it, the “boundary between wild speculation and real research is a little too flimsy right now.”

  7. Tomi Engdahl says:

    In a test conducted by the ACLU, Amazon’s Rekognition facial recognition tech erroneously matched 28 members of Congress, 6 of them black, to criminal mugshots.

    Amazon’s Rekognition messes up, matches 28 lawmakers to mugshots
    ACLU: “And running the entire test cost us $12.33—less than a large pizza.”
    https://arstechnica.com/tech-policy/2018/07/amazons-rekognition-messes-up-matches-28-lawmakers-to-mugshots/

    The American Civil Liberties Union of Northern California said Thursday that in its new test of Amazon’s facial recognition system known as Rekognition, the software erroneously identified 28 members of Congress as people who have been arrested for a crime.

    According to Jake Snow, an ACLU attorney, the organization downloaded 25,000 mugshots from what he described as a “public source.”

    The ACLU then ran the official photos of all 535 members of Congress through Rekognition, asking it to match them up with any of the mugshots—and it ended up matching 28.
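
    For context, a matching test like this boils down to repeated calls against the Rekognition API. Below is a minimal sketch with boto3; the file names are hypothetical, and the ACLU has only said it used the service’s default 80 percent similarity threshold, so the exact pipeline is an assumption.

    import boto3

    client = boto3.client("rekognition")

    # Compare one official portrait against one mugshot. A full sweep
    # would loop over all 535 portraits and 25,000 mugshots.
    with open("member_photo.jpg", "rb") as src, open("mugshot.jpg", "rb") as tgt:
        resp = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=80.0,  # the service's default threshold
        )

    for match in resp.get("FaceMatches", []):
        print("possible match, similarity:", match["Similarity"])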

    Facial recognition historically has resulted in more false positives for African-Americans.

    The ACLU is concerned that over-reliance on faulty facial recognition scans, particularly against citizens of color, could result in a possibly fatal interaction with law enforcement. Amazon’s Rekognition has already been used by a handful of law enforcement agencies nationwide.

    Because of these substantive errors, Snow said the ACLU as a whole is again calling on Congress to “enact a moratorium on law enforcement’s use of facial recognition.”

  8. Tomi Engdahl says:

    A beginner’s guide to AI: Natural language processing
    https://thenextweb.com/artificial-intelligence/2018/07/25/a-beginners-guide-to-ai-natural-language-processing/

    In order for AI to understand what you’re saying, turn those words into an action, and then output something you can understand, it relies on something called natural language processing (NLP), which is exactly what it sounds like.
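
    The idea is easier to see in a toy sketch: the snippet below reduces “understand words, pick an action” to tokenizing and keyword matching. The intents and keywords are invented for illustration; real NLP systems layer parsing and statistical models on top of the same text-in, action-out loop.

    # A toy intent matcher: tokenize a request, score it against known
    # intents by keyword overlap, and pick the best match.
    INTENTS = {
        "weather": {"weather", "rain", "forecast", "temperature"},
        "timer":   {"timer", "alarm", "remind", "minutes"},
    }

    def tokenize(text):
        return {w.strip(".,!?").lower() for w in text.split()}

    def classify(text):
        words = tokenize(text)
        scores = {name: len(words & kw) for name, kw in INTENTS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"

    print(classify("Will it rain tomorrow?"))      # -> weather
    print(classify("Set a timer for 10 minutes"))  # -> timer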

  9. Tomi Engdahl says:

    IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show
    https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/

    Internal IBM documents show that its Watson supercomputer often spit out erroneous cancer treatment advice, and that company medical specialists and customers identified “multiple examples of unsafe and incorrect treatment recommendations”.

  10. Tomi Engdahl says:

    To Remember, the Brain Must Actively Forget
    By Dalmeet Singh Chawla, July 24, 2018
    https://www.quantamagazine.org/to-remember-the-brain-must-actively-forget-20180724/

    Researchers find evidence that neural systems actively remove memories, which suggests that forgetting may be the default mode of the brain.

  11. Tomi Engdahl says:

    How (and how not) to fix AI
    https://techcrunch.com/2018/07/26/how-and-how-not-to-fix-ai/?utm_source=tcfbpage&sr_share=facebook

    While artificial intelligence was once heralded as the key to unlocking a new era of economic prosperity, policymakers today face a wave of calls to ensure AI is fair, ethical and safe.

    Many have called on policymakers to do more to regulate AI.

    Unfortunately, the two most popular ideas — requiring companies to disclose the source code to their algorithms and explain how they make decisions — would cause more harm than good by regulating the business models and the inner workings of the algorithms of companies using AI, rather than holding these companies accountable for outcomes.

    The first idea — “algorithmic transparency” — would require companies to disclose the source code and data used in their AI systems. Beyond its simplicity, this idea lacks any real merit as a wide-scale solution.

    The other idea — “algorithmic explainability” — would require companies to explain to consumers how their algorithms make decisions. The problem with this proposal is that there is often an inescapable trade-off between explainability and accuracy in AI systems. An algorithm’s accuracy typically scales with its complexity, so the more complex an algorithm is, the more difficult it is to explain.
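
    That trade-off is easy to see in a toy sketch (not from the article): a depth-2 decision tree, whose entire decision logic can be printed, versus an opaque 200-tree ensemble trained on the same synthetic data. The dataset and model choices are invented for illustration.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=20,
                               n_informative=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(max_depth=2).fit(X_tr, y_tr)      # explainable
    forest = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)  # opaque

    print("tree accuracy:  ", tree.score(X_te, y_te))
    print("forest accuracy:", forest.score(X_te, y_te))
    # The shallow tree's full logic fits on a few lines...
    print(export_text(tree))
    # ...while the forest's 200 trees admit no comparably compact account,
    # yet it typically scores noticeably higher on held-out data.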

    A policy framework built around algorithmic accountability would have several important benefits. First, it would make operators, rather than developers, responsible for any harms their algorithms might cause.

    Second, holding operators accountable for outcomes rather than the inner workings of algorithms would free them to focus on the best methods to ensure their algorithms do not cause harm, such as confidence measures, impact assessments or procedural regularity, where appropriate.

    This is not to say that transparency and explanations do not have their place. Transparency requirements, for example, make sense for risk-assessment algorithms in the criminal justice system.

    The debate about how to make AI safe has ignored the need for a nuanced, targeted approach to regulation, treating algorithmic transparency and explainability like silver bullets without considering their many downsides.

  12. Tomi Engdahl says:

    This 3D-printed AI construct analyzes by bending light
    https://techcrunch.com/2018/07/26/this-3d-printed-ai-construct-analyzes-by-bending-light/?utm_source=tcfbpage&sr_share=facebook

    Machine learning is everywhere these days, but it’s usually more or less invisible: it sits in the background, optimizing audio or picking out faces in images. But this new system is not only visible, but physical: it performs AI-type analysis not by crunching numbers, but by bending light. It’s weird and unique, but counter-intuitively, it’s an excellent demonstration of how deceptively simple these “artificial intelligence” systems are.

    Machine learning systems, which we frequently refer to as a form of artificial intelligence, at their heart are just a series of calculations made on a set of data, each building on the last or feeding back into a loop. The calculations themselves aren’t particularly complex — though they aren’t the kind of math you’d want to do with a pen and paper.
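
    That claim is easy to make concrete. Here is a minimal sketch (not the optical system itself) of a two-layer network in NumPy with random, untrained weights: the entire “AI” is a handful of matrix multiplies and a clamp, exactly the kind of operations the 3D-printed construct carries out with light.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # layer 1 weights
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # layer 2 weights

    def relu(x):
        return np.maximum(x, 0.0)

    def forward(x):
        h = relu(x @ W1 + b1)   # hidden layer: multiply, add, clamp
        return h @ W2 + b2      # output layer: multiply, add

    x = rng.normal(size=4)      # one input sample
    print(forward(x))           # three raw output scores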

  13. Tomi Engdahl says:

    El Toro Grande: Self Driving Car Using Machine Learning
    https://www.hackster.io/dantuluri/el-toro-grande-self-driving-car-using-machine-learning-4cc1f9

    ETG is an autonomous RC car that utilizes a Raspberry Pi 3 and an Arduino to localize itself in its environment and avoid colliding with other bots.

  14. Tomi Engdahl says:

    Kari Enqvist’s column: Artificial intelligence will destroy thinking (in Finnish: “Kari Enqvistin kolumni: Tekoäly tulee tuhoamaan ajattelun”)
    https://yle.fi/uutiset/3-10318808

  15. Tomi Engdahl says:

    How Artificial Intelligence Can Supercharge the Search for New Particles
    https://www.quantamagazine.org/how-artificial-intelligence-can-supercharge-the-search-for-new-particles-20180723/

    In the hunt for new fundamental particles, physicists have always had to make assumptions about how the particles will behave. New machine learning algorithms don’t.

    By Charlie Wood, July 23, 2018

    [Image: A collision inside the LHC this April revealed individual charged particles (orange lines) and large particle jets (yellow cones). Credit: ATLAS Experiment © 2018 CERN]

    The Large Hadron Collider (LHC) smashes a billion pairs of protons together each second. Occasionally the machine may rattle reality enough to have a few of those collisions generate something that’s never been seen before. But because these events are by their nature a surprise, physicists don’t know exactly what to look for. They worry that in the process of winnowing their data from those billions of collisions to a more manageable number, they may be inadvertently deleting evidence for new physics. “We’re always afraid we’re throwing the baby away with the bathwater,” said Kyle Cranmer, a particle physicist at New York University who works with the ATLAS experiment at CERN.

    Faced with the challenge of intelligent data reduction, some physicists are trying to use a machine learning technique called a “deep neural network” to dredge the sea of familiar events for new physics phenomena.

    In the prototypical use case, a deep neural network learns to tell cats from dogs by studying a stack of photos labeled “cat” and a stack labeled “dog.” But that approach won’t work when hunting for new particles, since physicists can’t feed the machine pictures of something they’ve never seen. So they turn to “weakly supervised learning,” where machines start with known particles and then look for rare events using less granular information, such as how often they might take place overall.

    In a paper posted on the scientific preprint site arxiv.org in May, three researchers proposed applying a related strategy to extend “bump hunting,” the classic particle-hunting technique that found the Higgs boson. The general idea, according to one of the authors, Ben Nachman, a researcher at the Lawrence Berkeley National Laboratory, is to train the machine to seek out rare variations in a data set.

    Consider, as a toy example in the spirit of cats and dogs, a problem of trying to discover a new species of animal in a data set filled with observations of forests across North America. Assuming that any new animals might tend to cluster in certain geographical areas (a notion that corresponds with a new particle that clusters around a certain mass), the algorithm should be able to pick them out by systematically comparing neighboring regions. If British Columbia happens to contain 113 caribous to Washington state’s 19 (even against a background of millions of squirrels), the program will learn to sort caribous from squirrels, all without ever studying caribous directly. “It’s not magic but it feels like magic,” said Tim Cohen, a theoretical particle physicist at the University of Oregon who also studies weak supervision.

    By contrast, traditional searches in particle physics usually require researchers to make an assumption about what the new phenomena will look like. They create a model of how the new particles will behave — for example, a new particle might tend to decay into particular constellations of known particles. Only after they define what they’re looking for can they engineer a custom search strategy. It’s a task that generally takes a Ph.D. student at least a year, and one that Nachman thinks could be done much faster, and more thoroughly.

    The proposed CWoLa algorithm, which stands for Classification Without Labels, can search existing data for any unknown particle that decays into either two lighter unknown particles of the same type, or two known particles of the same or different type.
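
    A minimal sketch of the weakly supervised idea follows, in the spirit of the caribou/squirrel example: train an ordinary classifier to tell apart two mixed samples with different (unknown) signal fractions, and it implicitly learns a signal-versus-background discriminant without ever seeing a per-event label. The two-feature distributions, fractions, and classifier choice are invented for illustration and are far simpler than the paper’s setup.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)

    def background(n):  # "squirrels": broad distribution
        return rng.normal(0.0, 1.0, size=(n, 2))

    def signal(n):      # "caribous": clustered in feature space
        return rng.normal(2.0, 0.3, size=(n, 2))

    # Two mixed regions with different signal fractions (unknown to us).
    region_a = np.vstack([background(9000), signal(1000)])  # ~10% signal
    region_b = np.vstack([background(9900), signal(100)])   # ~1% signal

    X = np.vstack([region_a, region_b])
    y = np.concatenate([np.ones(len(region_a)), np.zeros(len(region_b))])

    # Train only on region labels, never on signal/background labels.
    clf = GradientBoostingClassifier().fit(X, y)

    # The model nonetheless scores pure signal far above pure background:
    print("mean score, signal:    ", clf.predict_proba(signal(500))[:, 1].mean())
    print("mean score, background:", clf.predict_proba(background(500))[:, 1].mean())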

  16. Tomi Engdahl says:

    Use Artificial Intelligence to Detect Messy/Clean Rooms!
    https://www.hackster.io/matt-farley/use-artificial-intelligence-to-detect-messy-clean-rooms-f224a2

    This project uses artificial intelligence (deep learning) to detect when rooms in the house are messy or clean (via cameras & TensorFlow).
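
    For flavor, here is a minimal transfer-learning sketch of how such a messy/clean classifier could be built in TensorFlow; the directory layout, model choice, and hyperparameters are assumptions for illustration, not the project’s actual code.

    import tensorflow as tf

    # Assumes photos sorted into rooms/messy/ and rooms/clean/ directories.
    train = tf.keras.utils.image_dataset_from_directory(
        "rooms", image_size=(224, 224), batch_size=32, label_mode="binary")

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # reuse ImageNet features, train only the head

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet scaling
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),     # messy vs. clean
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(train, epochs=5)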

  17. Tomi Engdahl says:

    Top 10 Publications for Foundations and Trends in Machine Learning
    Machine Learning Trends 2018
    https://www.digitaltechnologyreview.com/2018/06/Foundations-Trends-in-Machine-Learning.html

  18. Tomi Engdahl says:

    A pickaxe for the AI gold rush, Labelbox sells training data software
    https://techcrunch.com/2018/07/30/labelbox/

  19. Tomi Engdahl says:

    4 Perceptions of AI That Are Quite Amazing
    https://blog.paessler.com/4-perceptions-of-ai-that-are-quite-amazing?utm_source=facebook&utm_medium=cpc&utm_campaign=Burda-Blog-Global&utm_content=4perceptionsAI&hsa_ad=23843025149640129&hsa_src=fb&hsa_cam=23843025148490129&hsa_ver=3&hsa_acc=2004489912909367&hsa_net=facebook&hsa_grp=23843025149180129

    1. AI Is Already Everywhere. Even If We Don’t Call It AI.

    The term “AI” has been used for a long time, and in the past mostly for applications and programs that seem anything but intelligent from today’s perspective. But that doesn’t matter at all, because in 30 years’ time we may look back and laugh at what we call “intelligent” today.

    2. AI Development Is Not Linear, but Exponential

    The future feels lame because we humans are programmed to forecast it by extrapolating from the past, while AI development is exponential rather than linear.

    3. The Moment AI Becomes a Superintelligence (ASI), You Can Burn Your History Books

    There are many scientists who divide the general term AI into 3 categories: ANI (Artificial Narrow Intelligence), AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence). Sad, but true: we have the dumbest kind of AI right now, namely ANI.

    4. ASI Will Probably Be Indiscernible, and Could Totally Do without Us
    First of all, let’s note that an ASI is not a better variant of the human brain, but something completely different. The human brain is not per se a “better variant” of the brain of a monkey.

    Secondly, it can be assumed that it will not be in the interest of an ASI to live our lives. These doomsday fantasies are all based on the assumption that a foreign species wants to destroy us in order to take over our planet and ultimately lead lives similar to ours. But I don’t think an artificial intelligence much smarter than us would actually have any such plans.

  20. Tomi Engdahl says:

    Tesla says it is making an AI chip for Autopilot, coming next year, backward compatible with current-generation Tesla vehicles and faster than the Nvidia chips currently being used.

    Tesla says it’s dumping Nvidia chips for a homebrew alternative
    Tesla tapped an Apple chip guru to design next-generation AI chips.
    https://arstechnica.com/cars/2018/08/tesla-says-its-dumping-nvidia-chips-for-a-homebrew-alternative/

  21. Tomi Engdahl says:

    Intel says it earned $1B in 2017 from sales of Xeon processors running AI workloads in data centers, and provided an update on its data center chip roadmap.

    Intel: Yeah, yeah, 10nm. It’s on the todo list. Now, let’s talk about AI…
    Optane DC persistent memory ships to Google, Xeon roadmap, and more revealed
    https://www.theregister.co.uk/2018/08/08/intel_datacentric_summit/

  22. Tomi Engdahl says:

    https://semiengineering.com/system-bits-july-31/

    Computers that perceive human emotion
    As part of the growing field of “affective computing,” MIT researchers have developed a machine-learning model that takes computers a step closer to interpreting our emotions as naturally as humans do.

    Affective computing uses robots and computers to analyze facial expressions, interpret emotions, and respond accordingly. Applications include, for instance, monitoring an individual’s health and well-being, gauging student interest in classrooms, helping diagnose signs of certain diseases, and developing helpful robot companions.

    The model better captures subtle facial-expression variations to more accurately gauge moods. By using extra training data, it can also be adapted to an entirely new group of people with the same efficacy.

    Helping computers perceive human emotions
    http://news.mit.edu/2018/helping-computers-perceive-human-emotions-0724

    Personalized machine-learning models capture subtle variations in facial expressions to better gauge how we feel.

  23. Tomi Engdahl says:

    AI and quantum computing: The third and fourth exponentials
    https://electroiq.com/2018/07/ai-and-quantum-computing-the-third-and-fourth-exponentials/

    In a keynote talk on Tuesday in the Yerba Buena theater, Dr. John E. Kelly, III, Senior Vice President, Cognitive Solutions and IBM Research, talked about how the era of Artificial Intelligence (AI) is upon us, and how it will dramatically change the world. “This is an era of computing which is at a scale that will dwarf the previous era, in ways that will change all of our businesses and all of our industries, and all of our lives,” he said. “This will be another 50, 60 or more years of technology breakthrough innovation that will change the world. This is the era that’s going to power our semiconductor industry forward. The number of opportunities is enormous.”

  24. Tomi Engdahl says:

    AI, ML Chip Choices
    Which type of chips are best and why.
    https://semiengineering.com/ai-ml-chip-choices/

  25. Tomi Engdahl says:

    A Coarse Grain Reconfigurable Array (CGRA) for Statically Scheduled Data Flow Computing
    https://www.vision-systems.com/whitepapers/2018/06/a-coarse-grain-reconfigurable-array-cgra-for-statically-scheduled-data-flow-computing.html?cmpid=enl_vsd_vsd_newsletter_2018-08-06&pwhid=6b9badc08db25d04d04ee00b499089ffc280910702f8ef99951bdbdad3175f54dcae8b7ad9fa2c1f5697ffa19d05535df56b8dc1e6f75b7b6f6f8c7461ce0b24&eid=289644432&bid=2193594

    This paper argues the case for using coarse-grained reconfigurable array (CGRA) architectures for efficient acceleration of the data flow computations used in deep neural network training and inferencing. The paper discusses the problems with other parallel acceleration systems, such as massively parallel processor arrays (MPPAs) and heterogeneous systems based on CUDA and OpenCL, and proposes that CGRAs with autonomous computing features deliver improved performance and computational efficiency. The machine learning compute appliance that Wave Computing is developing executes data flow graphs using multiple clock-less, CGRA-based Systems on Chips (SoCs), each containing 16,000 processing elements (PEs).

  26. Tomi Engdahl says:

    Reconfigurable AI Building Blocks For SoCs And MCUs
    To get top performance from neural network inferencing, you need lots of MACs/second.
    https://semiengineering.com/reconfigurable-ai-building-blocks-for-socs-and-mcus/

    FPGA chips are in use in many AI applications today, including Cloud datacenters.

    Embedded FPGA (eFPGA) is now being used for AI applications as well. Our first public customer doing AI with EFLX eFPGA is Harvard University, who will present a paper at Hot Chips August 20th on Edge AI processing using EFLX: “A 16nm SoC with Efficient and Flexible DNN Acceleration for Intelligent IoT Devices.”

    We have other customers whose first question is, “How many GigaMACs per second can you execute per square millimeter?”

    FPGAs are used today in AI because they have a lot of MACs. (See later in this blog for why MACs/second are important for AI).

    The EFLX4K DSP core turns out to have as many or, generally, more DSP MACs per square millimeter relative to LUTs than other eFPGA and FPGA offerings, but the MAC was designed for digital signal processing and is overkill for AI requirements. AI doesn’t need a 22×22 multiplier and doesn’t need pre-adders or some of the other logic in the DSP MAC.

    But we can do much better by optimizing eFPGA for AI: replace the signal-processing-oriented DSP MACs with AI-optimized MACs of 8×8 with accumulators, optionally configurable as 16×16 or 16×8 or 8×16 as required, and allocate more of the area of the eFPGA core for MACs.
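
    The arithmetic behind that sizing argument, sketched in NumPy (an illustration, not vendor code): int8-by-int8 products fit comfortably in 16 bits, so the silicon only needs a narrow multiplier feeding a wide accumulator, and a DSP-style 22x22 multiplier spends area on precision this workload never uses.

    import numpy as np

    rng = np.random.default_rng(0)
    acts = rng.integers(-128, 128, size=1024, dtype=np.int8)  # activations
    wts = rng.integers(-128, 128, size=1024, dtype=np.int8)   # weights

    # Each int8 x int8 product fits in 16 bits; the running sum is kept
    # in a 32-bit accumulator so partial sums never overflow.
    acc = np.sum(acts.astype(np.int32) * wts.astype(np.int32))
    print(acc)

    # Worst case per product is 128 * 128 = 16384, so even 1024 terms
    # stay far below the int32 limit of 2**31 - 1.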

  27. Tomi Engdahl says:

    No More Painful Finger Sticks for Diabetics
    https://www.medicaldesignbriefs.com/component/content/article/mdb/insider/32393?utm_source=TBnewsletter&utm_medium=Email&utm_campaign=20180711_Medical_Insider&eid=376641819&bid=2169191

    Researchers are developing new technology that would free people with diabetes from painful finger sticks typically used to monitor their blood sugar. A team has combined radar and artificial intelligence (AI) to detect changes in glucose levels without the need to draw blood several times a day.

    The research involves collaboration with Google and German hardware company Infineon, which jointly developed a small radar device and sought input from select teams around the world on potential applications.

  28. Tomi Engdahl says:

    Cobots – Bridging the AI Gap for Industrial Automation
    https://www.electropages.com/2018/07/cobots-bridging-ai-gap-for-industrial-automation/?utm_campaign=&utm_source=newsletter&utm_medium=email&utm_term=article&utm_content=Cobots+-+Bridging+the+AI+Gap+for+Industrial+Automation

    There are two key issues, or challenges, currently facing industrial automation. Together, these challenges are driving the development of a new exciting form of robotics – the cobot. Depending on who you are talking to, a cobot may be defined as either a collaborative robot or a cooperative robot (with the former being the most common). While the precise phraseology is yet to be firmly settled, the meaning is more certain: A cobot is a robot that works closely together with humans, often in much the same way that two human workers would do so.

    While much of the focus is on cobots for industrial use, the general concept is also applicable in healthcare, assisted living (for the elderly and disabled), office environments, etc. Early research at MIT suggests a collaborative team of humans and robots could reduce non-productive worker time by as much as 85%. Elsewhere, analyst firm Research & Markets has predicted that the value of the worldwide cobot business will grow from $175 million in 2016 to $3.8 billion by 2021.

  29. Tomi Engdahl says:

    Architecting For AI
    https://semiengineering.com/architecting-for-ai/

    Experts at the Table, part 1: What kind of processing is required for inferencing, what is the best architecture, and can they be debugged?

  30. Tomi Engdahl says:

    More Processing Everywhere
    https://semiengineering.com/more-processing-everywhere/

    Arm’s CEO contends that a rise in data will fuel massive growth opportunities around AI and IoT, but there are significant challenges in making it all work properly.

  31. Tomi Engdahl says:

    Impact Of IP On AI SoCs
    https://semiengineering.com/impact-of-ip-on-artificial-intelligence-socs/

    Deep learning applications will call for specialized IP in the form of new processing and memory architectures.

  32. Tomi Engdahl says:

    Pace Quickens As Machine Learning Moves To The Edge
    https://semiengineering.com/pace-quickens-as-machine-learning-moves-to-the-edge/

    More powerful edge devices means everyday AI applications, like social robots, are becoming feasible.

  33. Tomi Engdahl says:

    Ciena, 4 others join Linux AI project
    https://www.broadbandtechreport.com/articles/2018/08/ciena-4-others-join-linux-ai-project.html?cmpid=enl_lightwave_lightwave_enabling_technologies_2018-08-09&pwhid=6b9badc08db25d04d04ee00b499089ffc280910702f8ef99951bdbdad3175f54dcae8b7ad9fa2c1f5697ffa19d05535df56b8dc1e6f75b7b6f6f8c7461ce0b24

    The LF Deep Learning Foundation, an umbrella organization of the Linux Foundation intended to support open source innovation in artificial intelligence (AI), machine learning (ML) and deep learning (DL), has gained five new members: Ciena, DiDi, Intel, Orange and Red Hat.

    The new members join founding members Amdocs, AT&T, B.Yond, Baidu, Huawei, Nokia, Tech Mahindra, Tencent, Univa and ZTE. The LF Deep Learning Foundation is intended to be a vendor-neutral space for harmonization and acceleration of separate technical projects focused on AI, ML and DL technologies.

  34. Tomi Engdahl says:

    AI Flood Drives Chips to the Edge
    Deep learning spawns a silicon tsunami
    https://www.eetimes.com/document.asp?doc_id=1333413

    It’s easy to list semiconductor companies working on some form of artificial intelligence — pretty much all of them are. The broad potential for machine learning is drawing nearly every chip vendor to explore the still-emerging technology, especially in inference processing at the edge of the network.

    “It seems like every week, I run into a new company in this space, sometimes someone in China that I’ve never heard of,” said David Kanter, a microprocessor analyst at Real World Technologies.

    Deep neural networks are essentially a new way of computing. Instead of writing a program to run on a processor that spits out data, you stream data through an algorithmic model that filters out results in what’s called inference processing.

  35. Tomi Engdahl says:

    AI hardware acceleration needs careful requirements planning
    https://www.edn.com/design/systems-design/4460872/AI-hardware-acceleration-needs-careful-requirements-planning

    The explosion of artificial intelligence (AI) applications, from cloud-based big data crunching to edge-based keyword recognition and image analysis, has experts scrambling to develop the best architecture to accelerate the processing of machine learning (ML) algorithms. The extreme range of emerging options underscores the importance of a designer clearly defining the application and its requirements before selecting a hardware platform.

    In many ways the dive into AI acceleration resembles the DSP gold rush of the late 90s and early 2000s. As wired and wireless communications took off, the rush was on to offer the ultimate DSP co-processor to handle baseband processing. Like DSP coprocessors, the goal with AI accelerators is to find the fastest, most energy-efficient means of performing the computations required.

  36. Tomi Engdahl says:

    What is machine learning?
    July 25, 2018
    http://www.ibmbigdatahub.com/blog/what-is-machine-learning

    What is machine learning? If you’re interested in analytics you might have heard this term a lot. Many misuse it and some predict grandiose outcomes for its future.

    Despite the hype, machine learning is one of the most powerful technologies in the modern enterprise. Soon you’ll understand why.

  37. Tomi Engdahl says:

    6 AI Companies Set to Rock the Medtech World
    https://www.mddionline.com/6-ai-companies-set-rock-medtech-world?ADTRK=UBM&elq_mid=5190&elq_cid=876648

    There are six companies with AI-based innovations that everyone in medtech should be watching and talking about.

  38. Tomi Engdahl says:

    Could Blockchain Transform the Transportation Industry?
    https://www.electronicdesign.com/automotive/could-blockchain-transform-transportation-industry?Issue=ED-004_20180814_ED-004_611&sfvc4enews=42&cl=article_2_b&utm_rid=CPG05000002750211&utm_campaign=19183&utm_medium=email&elq2=bff5e399cd06430eb7dd47a0dbff0312

    The Mobility Open Blockchain Initiative is exploring how blockchain technology can be used in the transportation market, perhaps leading to the “mobility ecosystem of tomorrow.”

  39. Tomi Engdahl says:

    Object Detection, With TensorFlow
    https://hackaday.com/2018/07/31/object-detection-with-tensorflow/

    Getting computers to recognize objects has been a historically difficult problem in computer science, but with the rise of machine learning it is becoming easier to solve. One of the tools that can be put to work in object recognition is an open source library called TensorFlow, which [Evan] aka [Edje Electronics] has put to work for exactly this purpose.

    A tutorial showing how to set up TensorFlow’s Object Detection API on the Raspberry Pi
    https://github.com/EdjeElectronics/TensorFlow-Object-Detection-on-the-Raspberry-Pi
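
    The tutorial itself sets up the full TensorFlow Object Detection API. As a rough sketch of what the resulting inference loop looks like on a Pi-class device, here is an example using the lighter tflite_runtime interpreter; the model file name and the [boxes, classes, scores, count] output layout are assumptions based on the stock quantized SSD detection models, not taken from the linked tutorial.

    import numpy as np
    from tflite_runtime.interpreter import Interpreter

    interp = Interpreter(model_path="detect.tflite")  # hypothetical model file
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    outs = interp.get_output_details()

    # One dummy frame shaped like the model's input (e.g. 1x300x300x3 uint8);
    # a real loop would feed camera frames here.
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])
    interp.set_tensor(inp["index"], frame)
    interp.invoke()

    boxes = interp.get_tensor(outs[0]["index"])   # normalized [ymin,xmin,ymax,xmax]
    scores = interp.get_tensor(outs[2]["index"])
    for box, score in zip(boxes[0], scores[0]):
        if score > 0.5:
            print("detection:", box, score)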

  40. Tomi Engdahl says:

    Who’s Afraid of General AI?
    https://www.designnews.com/electronics-test/whos-afraid-general-ai/140381484459164?ADTRK=UBM&elq_mid=5214&elq_cid=876648

    Byron Reese, technology entrepreneur and author of the new book, “The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity” discusses the possibility of creating machines that truly think.

    Byron Reese believes technology has only truly reshaped humanity three times in history. The first came with the harnessing of fire. The second with the development of agriculture. And the “third age” came with the invention of the wheel and writing. Reese, CEO and publisher of the technology research company, Gigaom, and host of the Voices in AI podcast, has spent the majority of his career exploring how technology and humanity intersect. He believes the emergence of artificial intelligence is pushing us into a “fourth age” in which AI and robotics will forever transform not only how we work and play, but also how we think about deeper philosophical topics, such as the nature of consciousness.

    His latest book, “The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity,” touches on all of these subjects. Byron Reese spoke with Design News about the implications of artificial general intelligence (AGI), the possibility of creating machines that truly think, automation’s impact on jobs, and the ways society might be forever transformed by AI.

  41. Tomi Engdahl says:

    12 Early Robots That Launched 40 Years of Automation Explosion
    https://www.designnews.com/automation-motion-control/12-early-robots-launched-40-years-automation-explosion?ADTRK=UBM&elq_mid=5214&elq_cid=876648

    Over three and a half decades, from 1954 through the 1980s, robotics proved its worth in automotive manufacturing. Here’s a look at the development of manufacturing robotics in its early decades.

  42. Tomi Engdahl says:

    AI Architectures Must Change
    https://semiengineering.com/ai-architectures-must-change/

    Using the Von Neumann architecture for artificial intelligence applications is inefficient. What will replace it?

  43. Tomi Engdahl says:

    Google Wants to Sell You Machine Learning Chips
    https://www.eetimes.com/author.asp?section_id=36&doc_id=1333554

    Is Google prepared to sell (and support) its new machine learning chips?

    At Google Cloud Next, the company stepped up its ASIC development with its own machine learning accelerator chip for edge computing based on its TPU design.

    Google seemed to imply that it will also sell the Edge TPU directly to companies that want to build intelligent edge devices. Google could be trying to leverage its internal designs to enter new markets, or this could also be an attempt to reduce internal ASIC costs by spreading development costs over larger chip volumes above and beyond what the company needs for internal uses.

    Selling the chips also builds a larger ecosystem for developers. Google initially plans to offer the Edge TPU through a couple of do-it-yourself (DIY) boards.

  44. Tomi Engdahl says:

    The Rebirth of the Semiconductor Industry
    http://blog.semi.org/technology-trends/the-rebirth-of-the-semiconductor-industry

    “Software is eating the world … and AI is eating software.” Amir Husain, author of The Sentient Machine, at SEMICON West 2018

    We’re living in a digital world where semiconductors have been taken for granted. But, Artificial Intelligence (AI) is changing everything – and bringing semiconductors back into the deserved spotlight. AI’s potential market of hundreds of zettabytes and trillions of dollars relies on new semiconductor architectures and compute platforms. Making these AI semiconductor engines will require a wildly innovative range of new materials, equipment, and design methodologies.

    Moore’s Law carried us the past 50-plus years and as we’re now stepping into the dawn of AI’s potential, we can see that the coming Cognitive Era will drive its own exponential growth curve. This is great for the world – virtually every industry will be transformed, and people’s lives will get better – and it’s fantastic for our industry.

