3 AI misconceptions IT leaders must dispel

https://enterprisersproject.com/article/2017/12/3-ai-misconceptions-it-leaders-must-dispel?sc_cid=7016000000127ECAAY

 Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.

AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.” 

IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”

5,256 Comments

  1. Tomi Engdahl says:

    Deloitte TMT Predictions: Machine Learning Deployments Will Continue to Drive Growth
    http://www.dataversity.net/deloitte-tmt-predictions-machine-learning-deployments-will-continue-drive-growth/?lipi=urn%3Ali%3Apage%3Ad_flagship3_feed%3BKq7r37wJQw2qlC9HRvsmHA%3D%3D

    A recent press release states, “Deloitte forecasts double-digit growth in machine learning deployments for the enterprise, an increasing worldwide appetite for digital subscriptions among consumers, and ongoing smartphone dominance — along with eight additional predictions — as part of the 17th edition of the ‘Technology, Media & Telecommunications Predictions.’”

    Reply
  2. Tomi Engdahl says:

    The Seven Deadly Sins of AI Predictions
    https://www.technologyreview.com/s/609048/the-seven-deadly-sins-of-ai-predictions/

    Mistaken extrapolations, limited imagination, and other common mistakes that distract us from thinking more productively about the future.

    Reply
  3. Tomi Engdahl says:

    Skynet it ain’t: Deep learning will not evolve into true AI, says boffin
    Neural networks sort stuff – they can’t reason or infer
    https://www.theregister.co.uk/2018/01/04/our_ai_overlords_will_not_magically_arise_from_deep_learning_says_expert/

    Deep learning and neural networks may have benefited from the huge quantities of data and computing power, but they won’t take us all the way to artificial general intelligence, according to a recent academic assessment.

    Reply
  4. Tomi Engdahl says:

    Gartner Says Artificial Intelligence Is a Game Changer for Personal Devices
    AI Will Drive the Most Compelling User Experiences at Home and in the Workplace
    https://www.gartner.com/newsroom/id/3843263

    Emotion artificial intelligence (AI) systems are becoming so sophisticated that Gartner, Inc. predicts that by 2022, personal devices will know more about an individual’s emotional state than his or her own family. AI is generating multiple disruptive forces that are reshaping the way we interact with personal technologies.

    “Emotion AI systems and affective computing are allowing everyday objects to detect, analyze, process and respond to people’s emotional states and moods to provide better context and a more personalized experience,” said Roberta Cozza, research director at Gartner. “To remain relevant, technology vendors must integrate AI into every aspect of their devices, or face marginalization.”

    The current wave of emotion AI systems is being driven by the proliferation of virtual personal assistants (VPAs) and other AI-based technology for conversational systems. As a second wave emerges, AI technology will add value to more and more customer experience scenarios, including educational software, video games, diagnostic software, athletic and health performance, and the autonomous car.

    By 2021, 10 percent of wearables users will have changed their lifestyles, thereby extending their life spans by an average of six months.

    By 2020, 60 percent of personal technology device vendors will use third-party AI cloud services to enhance functionality and services.

    Through 2022, security technology combining machine learning, biometrics and user behavior will reduce passwords to account for less than 10 percent of all digital authentications.

    Reply
  5. Tomi Engdahl says:

    Artificial Intelligence Investing Gets Ready For Prime Time
    https://www.google.fi/amp/s/www.forbes.com/sites/greatspeculations/2017/10/25/getting-ready-for-prime-time-of-artificial-intelligence-investing/amp/

    Artificial intelligence is a branch of computer science that aims to create intelligent machines that teach themselves. Much of AI’s growth has occurred in the last decade. The upcoming decade, according to billionaire investor Mark Cuban, will be the greatest technological revolution in man’s history.

    More progress has been achieved on artificial intelligence in the past five years than in the past five decades. Rapid machine-learning improvements have allowed computers to surpass humans at certain feats of ingenuity, doing things that at one time would have been unfathomable. IBM calls the autonomous machine learning field ‘cognitive computing.’ The ‘cognitive computing’ space is bursting with innovations, a result of billions of research and investment dollars spent by large companies such as Microsoft, Google and Facebook. IBM alone has spent $15 billion on Watson, its cognitive system, as well as on related data analytics technology.

    Reply
  6. Tomi Engdahl says:

    Machine Learning’s Growing Divide
    https://semiengineering.com/machine-learnings-growing-divide/

    Is the industry heading toward another hardware/software divide in machine learning? Both sides have different objectives.

    Machine learning is one of the hottest areas of development, but most of the attention so far has focused on the cloud, algorithms and GPUs. For the semiconductor industry, the real opportunity is in optimizing and packaging solutions into usable forms, such as within the automotive industry or for battery-operated consumer or IoT products.

    Inefficiencies often arise because of what is readily available, and that is most certainly the case with machine learning. For example, GPUs have been shown to be the highest-performance solution for training. Because these devices are based on floating point, machine learning algorithms are developed that rely on floating point.

    Inferencing in edge devices cannot afford to use floating point, and it is necessary to transform the coefficients into fixed point. But could training be done using fixed point? Quite possibly, although special-purpose hardware is only starting to be considered.
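
    As a concrete illustration of that transformation, here is a minimal sketch, assuming a Q1.15 fixed-point target (not any particular vendor's toolchain), of converting trained floating-point coefficients to 16-bit fixed point:

    import numpy as np

    # Minimal sketch: scale trained float coefficients to Q1.15-style
    # 16-bit integers for an edge inference engine (illustrative only).
    def quantize_q15(coeffs):
        scale = 2 ** 15                        # 15 fractional bits
        q = np.round(np.asarray(coeffs) * scale)
        return np.clip(q, -scale, scale - 1).astype(np.int16)

    def dequantize_q15(q):
        return q.astype(np.float32) / 2 ** 15

    w = np.array([0.731, -0.052, 0.999])       # hypothetical trained weights
    wq = quantize_q15(w)
    print(wq, dequantize_q15(wq))              # small rounding error vs. original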

    The big question is whether the industry is heading toward another hardware/software divide. It may manifest itself as a cloud/embedded divide this time, unless something can be done to bring the two sides together.

    Reply
  7. Tomi Engdahl says:

    AI and machine learning bias has dangerous implications
    https://opensource.com/article/18/1/how-open-source-can-fight-algorithmic-bias?sc_cid=7016000000127ECAAY

    Here’s how open source technology can help address the problem.

    Algorithms are everywhere in our world, and so is bias. From social media news feeds to streaming service recommendations to online shopping, computer algorithms—specifically, machine learning algorithms—have permeated our day-to-day world.

    Less often discussed is the bias in computer algorithms themselves.

    Contrary to what many of us might think, technology is not objective. AI algorithms and their decision-making processes are directly shaped by those who build them—what code they write, what data they use to “train” the machine learning models, and how they stress-test the models after they’re finished. This means that the programmers’ values, biases, and human flaws are reflected in the software. If I fed an image-recognition algorithm the faces of only white researchers in my lab, for instance, it wouldn’t recognize non-white faces as human. Such a conclusion isn’t the result of a “stupid” or “unsophisticated” AI, but of bias in the training data.
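
    One standard way to surface that kind of training-data bias is a disaggregated audit. The sketch below (placeholder function and array names, no particular library) reports a model's accuracy separately per demographic group:

    import numpy as np

    # Sketch of a per-group accuracy audit; 'predict', 'X', 'y', and 'groups'
    # are hypothetical placeholders, not a specific library's API.
    def audit_by_group(predict, X, y, groups):
        for g in np.unique(groups):
            mask = groups == g
            acc = np.mean(predict(X[mask]) == y[mask])
            print(f"group={g}: accuracy={acc:.1%} on {mask.sum()} samples")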

    Reply
  8. Tomi Engdahl says:

    AI That Can Predict Death Has Been Given FDA Approval
    http://www.iflscience.com/health-and-medicine/ai-that-can-predict-death-has-been-given-fda-approval/

    Using the power of artificial intelligence (AI), doctors could predict when their patients might be knocking on death’s door. This isn’t merely some grim dystopian prop; the researchers hope it could be used to slash the surprisingly high number of unexpected deaths in the US.

    Excel Medical, a medical tech company in Florida, has recently been boasting about its new WAVE Clinical Platform, an algorithm that can accurately predict whether medical patients could be at risk of a sudden, unexpected death.

    Reply
  9. Tomi Engdahl says:

    New Theory Cracks Open the Black Box of Deep Learning
    By
    NATALIE WOLCHOVER
    September 21, 2017
    https://www.quantamagazine.org/new-theory-cracks-open-the-black-box-of-deep-learning-20170921/

    A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.
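
    For reference, the bottleneck idea has a compact standard formulation (stated here from general knowledge of Tishby's work, not quoted from the article): choose a compressed representation T of the input X that stays predictive of the output Y by minimizing

        \min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)

    where I(.;.) denotes mutual information and the multiplier \beta trades compression of X against prediction of Y.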

    Reply
  10. Tomi Engdahl says:

    Neural Networking: Robots Learning From Video
    https://hackaday.com/2018/01/18/neural-networking-robots-learning-from-video/

    Humans are very good at watching others and imitating what they do. Show someone a video of flipping a switch to turn on a CNC machine and after a single viewing they’ll be able to do it themselves. But can a robot do the same?

    Bear in mind that we want the demonstration video to be of a human arm and hand flipping the switch. When the robot does it, the camera that is its eye will be seeing its robot arm and gripper.

    Researchers from Google Brain and the University of Southern California have done it. In their paper describing how, they talk about a few different experiments but we’ll focus on just one, getting a robot to imitate pouring a liquid from a container into a cup.

    Time-Contrastive Networks: Self-Supervised Learning from Video
    https://sermanet.github.io/imitate/
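
    A rough sketch of the time-contrastive training signal, inferred from the summary above rather than the authors' code: embeddings of frames from the same moment are pulled together, and temporally distant frames are pushed apart, with a triplet loss.

    import numpy as np

    # Illustrative triplet loss over frame embeddings (assumed formulation):
    # 'anchor' and 'positive' come from the same moment (e.g. two viewpoints),
    # 'negative' from a temporally distant frame.
    def time_contrastive_loss(anchor, positive, negative, margin=0.2):
        d_pos = np.sum((anchor - positive) ** 2, axis=-1)   # co-occurring frames
        d_neg = np.sum((anchor - negative) ** 2, axis=-1)   # distant frames
        return np.maximum(0.0, d_pos - d_neg + margin).mean()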

    Reply
  11. Tomi Engdahl says:

    Google’s AI Makes Its Own AI Children – And They’re Awesome
    http://www.iflscience.com/technology/googles-ai-makes-its-own-ai-children-and-theyre-awesome/?utm_source=Editorial&utm_medium=Static&utm_campaign=RA

    Google is betting big on artificial intelligence (AI), and it’s clearly paying off. Apart from offering up collections of code that best the world’s board game champions, they’ve also managed to create an AI that, in effect, designs its own AI – and its creations have gone from analyzing words to disseminating complex imagery in a matter of months.

    In a company blog post from May of this year, engineers explain how their AutoML (Automated Machine Learning) system uses a controller AI – which we can perhaps call the “parent” in a colloquial sense – that proposes designs for what the team calls a “child” AI architecture.
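
    Schematically, that parent/child search loop looks like the sketch below; the helper functions are hypothetical stand-ins, not Google's AutoML API.

    # Sketch of an architecture-search loop: the "parent" proposes a child
    # design, the child is trained, and its score feeds back to the parent.
    def search(propose, train_and_score, rounds=100):
        history, best, best_score = [], None, float("-inf")
        for _ in range(rounds):
            child = propose(history)          # parent proposes a child design
            score = train_and_score(child)    # child's validation accuracy
            history.append((child, score))    # feedback improves later proposals
            if score > best_score:
                best, best_score = child, score
        return best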

    Reply
  12. Tomi Engdahl says:

    Michigan’s MiDAS Unemployment System: Algorithm Alchemy Created Lead, Not Gold
    https://spectrum.ieee.org/riskfactor/computing/software/michigans-midas-unemployment-system-algorithm-alchemy-that-created-lead-not-gold

    Perhaps next month, those 34,000 plus individuals wrongfully accused of unemployment fraud in Michigan from October 2013 to September 2015 will finally hear that they will receive some well-deserved remuneration for the harsh treatment meted out by Michigan Integrated Data Automated System (MiDAS). Michigan legislators have promised to seek at least $20 million in compensation for those falsely accused.

    This is miserly, given how many people experienced punishing personal trauma, hired lawyers to defend themselves, saw their credit and reputations ruined, filed for bankruptcy, had their houses foreclosed on, or were made homeless.

    Reply
  13. Tomi Engdahl says:

    Artificial Intelligence-Based Calculator App
    https://www.eeweb.com/profile/max-maxfield/articles/artificial-intelligence-based-calculator-app

    The Interactive Ink powering MyScript Calculator involves multiple neural networks that work together to interpret and understand your handwriting.

    Reply
  14. Tomi Engdahl says:

    The World’s First Graphical AI Interface
    https://slashdot.org/story/18/01/25/1736236/the-worlds-first-graphical-ai-interface?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Slashdot%2Fslashdot%2Fto+%28%28Title%29Slashdot+%28rdf%29%29

    Machine learning and artificial intelligence are so difficult to understand that only a few very smart computer scientists know how to build them. But the designers of a new tool have a big ambition: to create the JavaScript for AI. The tool, called Cortex, uses a graphical user interface so that building an AI model doesn’t require a PhD.

    This Is The World’s First Graphical AI Interface
    https://www.fastcodesign.com/90157777/this-is-the-worlds-first-graphical-ai-interface

    Designed by Argodesign and CognitiveScale, Cortex offers a glimpse at the future of accessible AI design tools.

    Reply
  15. Tomi Engdahl says:

    Computer systems predict objects’ responses to physical forces
    https://www.controleng.com/single-article/computer-systems-predict-objects-responses-to-physical-forces/d949c5c0224bfd0691acbc1be197dd51.html

    MIT researchers believe they can help answer questions about what information-processing resources human beings use at what stages of development by building computer systems that approximate these capacities, which might generate some insights useful for robotic vision systems.

    Reply
  16. Tomi Engdahl says:

    AI Silicon Preps for 2018 Debuts
    A dozen startups chase deep learning
    https://www.eetimes.com/document.asp?doc_id=1332877

    Deep neural networks are like a tsunami on the distant horizon.

    Given their still-evolving algorithms and applications, it’s unclear what changes deep neural nets (DNNs) ultimately will bring. But their successes thus far in translating text and recognizing images and speech make it clear they will reshape computer design, and the changes are coming at a time of equally profound disruptions in how semiconductors are designed and manufactured.

    The first merchant chips tailored for training DNNs will ship this year. As it can take weeks or months to train a new neural-net model, the chips likely will be some of the largest, and thus most expensive, chunks of commercial silicon made to date.

    Reply
  17. Tomi Engdahl says:

    Artificial intelligence is trusted, but not invested in

    Accenture Strategy released a report at the World Economic Forum meeting showing that both corporate management and employees believe artificial intelligence improves business results and the employee experience. Even so, businesses are not yet investing heavily in developing employees’ skills in intelligent technologies or in redefining their work.

    According to the survey, the growth potential of artificial intelligence cannot be realized unless the pace of investment improves; there is a mismatch between belief in artificial intelligence and actual investment. Only three percent of companies report that their organization plans significant additional investments to develop employees’ skills over the next three years.

    Source: https://www.uusiteknologia.fi/2018/01/25/tekoalyyn-uskotaan-mutta-ei-investoida/

    Reply
  18. Tomi Engdahl says:

    James Vincent / The Verge:
    AI advances are now making automated analysis of live surveillance video possible, presaging useful applications while raising serious questions about privacy

    Artificial intelligence is going to supercharge surveillance
    What happens when digital eyes get the brains to match?
    https://www.theverge.com/2018/1/23/16907238/artificial-intelligence-surveillance-cameras-security

    We usually think of surveillance cameras as digital eyes, watching over us or watching out for us, depending on your view. But really, they’re more like portholes: useful only when someone is looking through them. Sometimes that means a human watching live footage, usually from multiple video feeds. Most surveillance cameras are passive, however. They’re there as a deterrent, or to provide evidence if something goes wrong. Your car got stolen? Check the CCTV.

    But this is changing — and fast. Artificial intelligence is giving surveillance cameras digital brains to match their eyes, letting them analyze live video with no humans necessary. This could be good news for public safety, helping police and first responders more easily spot crimes and accidents and have a range of scientific and industrial applications. But it also raises serious questions about the future of privacy and poses novel risks to social justice.

    What happens when governments can track huge numbers of people using CCTV? When police can digitally tail you around a city just by uploading your mugshot into a database? Or when a biased algorithm is running on the cameras in your local mall, pinging the cops because it doesn’t like the look of a particular group of teens?

    AI surveillance starts with searchable video
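
    To make the “searchable video” idea concrete, here is a hypothetical sketch (the detector is a stand-in, not any vendor's API) of indexing per-frame detections by timestamp so footage can be queried by label:

    from collections import defaultdict

    # Sketch: run an object detector on each frame and build an index from
    # labels to timestamps. 'detect' is a placeholder for any real detector.
    def index_video(frames, detect, fps=30.0):
        index = defaultdict(list)
        for i, frame in enumerate(frames):
            for label in detect(frame):          # e.g. {"person", "car"}
                index[label].append(i / fps)     # seconds into the footage
        return index                             # index["person"] -> timestamps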

    Reply
  19. Tomi Engdahl says:

    5 artificial intelligence trends that will dominate 2018
    https://www.cio.com/article/3250839/artificial-intelligence/5-artificial-intelligence-trends-that-will-dominate-2018.html

    2017 saw an explosion of machine learning in production use, with even deep learning and artificial intelligence (AI) being leveraged for practical applications.

    “Basic analytics are out; machine learning (and beyond) are in,” says Kenneth Sanford, U.S. lead analytics architect for collaborative data science platform Dataiku, as he looks back on 2017.

    Sanford says practical applications of machine learning, deep learning, and AI are “everywhere and out in the open these days,” pointing to the “super billboards” in London’s Piccadilly Circus that leverage hidden cameras gathering data on foot and road traffic (including the make and model of passing cars) to deliver targeted advertisements.

    So where will these frameworks and tools take us in 2018? We spoke with a number of IT leaders and industry experts about what to expect in the coming year.
    Enterprises will operationalize AI

    AI is already here, whether we recognize it or not.

    “Many organizations are using AI already, but they may not refer to it as ‘AI,’” says Scott Gnau, CTO of Hortonworks. “For example, any organization using a chatbot feature to engage with customers is using artificial intelligence.”

    But many of the deployments leveraging AI technologies and tools have been small-scale. Expect organizations to ramp up in a big way in 2018.

    AI reality will lag the hype once again

    Chen says there have been repeated predictions for several years that tout potential breakthroughs in the use of AI and machine learning, but the reality is that most enterprises have yet to see quantifiable benefits from their investments in these areas.

    “In fact, while the headlines will be mostly about AI, most enterprises will need to first focus on IA (information augmentation): getting their data organized in a manner that ensures it can be reconciled, refined, and related, to uncover relevant insights that support efficient business execution across all departments, while addressing the burden of regulatory compliance,”

    Chad Meley, vice president of marketing at Teradata, agrees that 2018 will see a backlash against AI hype, but believes a more balanced approach of deep learning and shallow learning application to business opportunities will emerge as a result.

    Bias in training data sets will continue to trouble AI

    Reltio’s Chen isn’t alone in his conviction that enterprises need to get their data in order. Tomer Shiran, CEO and co-founder of analytics startup Dremio, a driving force behind the open source Apache Arrow project, believes a debate about data sets will take center stage in 2018.

    “Everywhere you turn, companies are adding AI to their products to make them smarter, more efficient, and even autonomous,”

    It turns out, Shiran says, that models are only as good as the training data they use, and developing a representative, effective training data set is very challenging.

    AI must solve the ‘black box’ problem with audit trails

    One of the big barriers to the adoption of AI, particularly in regulated industries, is the difficulty in showing exactly how an AI reached a decision.

    “AI is increasingly getting used for applications like drug discovery or the connected car, and these applications can have a detrimental impact on human life if an incorrect decision is made,” Negahban says. “Detecting exactly what caused the final incorrect decision leading to a serious problem is something enterprises will start to look at in 2018. Auditing and tracking every input and every score that a framework produces will help with detecting the human-written code that ultimately caused the problem.”

    Cloud adoption will accelerate to support AI innovation

    Organizations in 2018 will seek to improve their infrastructure and processes for supporting their machine learning and AI efforts.

    “As companies look to innovate and improve with machine learning and artificial intelligence, more specialized tooling and infrastructure will be adopted in the cloud to support specific use cases, like solutions for merging multi-modal sensory inputs for human interaction (think sound, touch, and vision) or solutions for merging satellite imagery with financial data to catapult algorithmic trading capabilities,”

    Everyone is talking about machine learning, deep learning, and AI. Here are five AI trends to watch in 2018.

    Reply
  20. Tomi Engdahl says:

    How AI will impact your IT career
    https://www.cio.com/article/3247792/careers-staffing/how-ai-will-impact-your-it-career.html

    AI is fast becoming a go-to technology for business transformation, shaking up roles across the enterprise. Here’s how to make the most of this inevitable evolution.

    Artificial intelligence and machine learning are eating up workloads at IT help desks, in cybersecurity, and other IT tasks, stirring significant concern over the long-term impact AI will have on jobs — even in the IT industry.

    And the concern isn’t unfounded. According to a recent report from Tata Consulting Services, in 12 out of the 13 major industry verticals, IT is the most frequent user of AI, with more than 46 percent of IT organizations at large corporations incorporating AI into their work portfolios.

    But that doesn’t mean IT jobs are about to go the way of telephone switchboard operators. Instead, the day-to-day activities of enterprise technology professionals will evolve alongside AI and new skills may be required.

    Future-proofing your IT career for the AI era

    Experts recommend that IT pros looking to future-proof their careers broaden their skill sets to include business or communication skills, move into growth areas like cybersecurity, or develop deeper industry expertise.

    In cybersecurity, for example, there are about two million jobs that are going to go unfilled in the next few years, says Ian Doyle, executive security advisor for the U.S. federal government at IBM.

    But AI is going to be a big component of cybersecurity, since the technology can help make security professionals more effective.

    IBM recently conducted a survey to find out what the federal government is thinking about AI, and about half want to be leaders, and half want to be followers — but there was almost unanimous interest in using AI for cybersecurity.

    Reply
  21. Tomi Engdahl says:

    The future of ERP is AI
    https://www.cio.com/article/3243574/enterprise-resource-planning/the-future-of-erp-is-ai.html

    Despite cultural barriers and legacy tech, AI is poised to take over ERP functions, with ERP vendors adding new machine learning features and enterprises keen to investigate.

    Artificial intelligence and machine learning have been shaking up many areas of business, from cybersecurity to market analytics, bots and self-driving cars.

    But when it comes to core corporate functions, especially those where the risks of making bad decisions are substantial, the use of artificial intelligence is still in its early stages.

    Alexander Kugler, the company’s VP of pricing, is well aware of the potential of artificial intelligence to help the company make better decisions when it sets prices for its products. Set prices too high, and customers will go elsewhere. Set prices too low, and the company will lose money.

    Previously, the company used spreadsheets to pull in data from various systems to determine production costs, and used past history and their own general knowledge to try to figure out how sensitive customers were to price changes and what the competition was doing.

    “It was an archaic pricing methodology that hasn’t kept up with industry trends and dynamics,”

    So, 15 months ago, AmerisourceBergen began the move to an integrated system that automatically calculates production costs, analyzes historical transaction data, and pulls in outside data such as weather forecasts to create a foundational layer for future deployment of artificial intelligence.

    This anticipation of the need for AI to improve ERP functionality as part of business transformations is growing among first movers, and ERP vendors are weaving machine learning functionality into their offerings to meet the coming demand.

    Reply
  22. Tomi Engdahl says:

    AI-defined infrastructure: be prepared for a new generation of business models and applications
    https://www.cio.com/article/3234656/artificial-intelligence/ai-defined-infrastructure-be-prepared-for-a-new-generation-of-business-models-and-applications.html

    An AI-enabled Infrastructure is an essential part of today’s enterprise stack and builds the foundation for the AI-enabled enterprise.

    In 2016, artificial intelligence (AI) reached a new peak of attention. Research and advisory firm Tractica predicted that annual worldwide AI revenue will grow from $643.7 million in 2016 to $38.8 billion by 2025. Revenue for enterprise AI applications will increase from $358 million in 2016 to $31.2 billion by 2025, representing a compound annual growth rate (CAGR) of 64.3%. Thus, IT and business decision makers must confront the potential of AI today. For every kind of organization, this raises the question of which technologies and infrastructure they can leverage to operate an AI-ready enterprise stack.
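
    Those figures are internally consistent; a quick check of the implied growth rate:

    # Checking the quoted enterprise-AI figures: $358 million (2016) growing
    # to $31.2 billion (2025) over nine years.
    start, end, years = 358e6, 31.2e9, 9
    cagr = (end / start) ** (1 / years) - 1
    print(f"CAGR = {cagr:.1%}")   # -> 64.3%, matching the Tractica forecast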

    What is Artificial Intelligence (AI)?

    In 1955, Prof. John McCarthy described the aim this way: “The goal of AI is to develop machines that behave as though they were intelligent.”

    Research distinguishes three types of AI:

    Strong AI: A strong AI (or superintelligence) is a self-aware machine with ideal thoughts, feelings, consciousness and all the necessary links. All those who are already looking forward to a reality à la “Her” or “Ex Machina” will still need to wait. Large neural networks have millions of neurons; brains have billions. Neural networks only simulate the electrical system in a brain, while the brain also has a chemical, and potentially a quantum-mechanical, system. The layer-based modeling of deep learning networks exists to simplify training; the brain has no such restriction. Neural networks are about as far away from a brain that thinks as a snail is from a supersonic jet. Thus, a strong AI doesn’t exist yet and is very far away.
    Narrow AI: Most business cases in AI focus on solving very pointed challenges. These narrow AIs are great at optimizing specific tasks like recommending songs on Pandora or managing analyses to improve tomato growth in a greenhouse.
    General AI: A general AI can handle tasks from different areas and origins, with the ability to shorten training time by applying experience gathered in one area to another. This knowledge transfer is only possible if there is a semantic connection between the areas; the stronger and denser this connection, the faster and easier the knowledge transfer. In comparison to a narrow AI, a general AI has all the necessary knowledge and abilities to improve not only tomato growth in a greenhouse but cucumber, eggplant, pepper, radish and kohlrabi growth as well. Thus, a general AI is a system that can handle more than just one specific task.

    What requirements concerning infrastructure environments does an AI have?

    Right now, AI is the technology that has the potential not only to improve existing infrastructure like cloud environments but expedite a new generation of infrastructure technologies as well. As an important technology trend, AI has influenced a new generation of development frameworks as well as a new generation of hardware technologies to run scalable AI applications.

    Reply
  23. Tomi Engdahl says:

    Deep Learning Spreads
    https://semiengineering.com/deep-learning-spreads/

    Better tools, more compute power, and more efficient algorithms are pushing this technology into the mainstream.

    Deep learning is gaining traction across a broad swath of applications, providing more nuanced and complex behavior than machine learning offers today.

    Those attributes are particularly important for safety-critical devices, such as assisted or autonomous vehicles, as well as for natural language processing where a machine can recognize the intent of words based upon the context of a conversation.

    Like AI and machine learning, deep learning has been kicking around in research for decades. What’s changing is that it is being added into many types of chips, from data centers to simple microcontrollers. And as algorithms become more efficient for both training and inferencing, this part of the machine learning/AI continuum is beginning to show up across a wide spectrum of use models, some for very narrow applications and some for much broader contextual decisions.

    Reply
  24. Tomi Engdahl says:

    AI’s Requirements Call For eFPGAs
    Moving machine learning off the cloud requires a change in hardware design.
    https://semiengineering.com/ais-requirements-call-for-efpgas/

    Reply
  25. Tomi Engdahl says:

    Soon you won’t be able to tell an AI bot from a human on the phone

    Calling a customer service line today can mean listening to annoying recorded prompts. A company backed by the Massachusetts Institute of Technology is developing an artificial intelligence that can talk like a human being, so soon you may not be able to tell a bot from a real customer service agent.

    Giant Otter’s platform uses machine learning algorithms and dialogue contributed by the public. From these it has built a “bottom-up” English-language database that lets a chat engine better understand people’s speech and questions.

    According to Jeff Orkin, the platform allows an artificial intelligence to understand the intent of what the customer says. With the help of the database, the bot can also answer questions that are often delicate and even ambiguous.

    Source: http://www.etn.fi/index.php/13-news/7485-pian-et-erota-tekoalybottia-puhelimessa

    Reply
  26. Tomi Engdahl says:

    Solution for Controversy over Chip Prices?
    Samsung Electronics, China to Jointly Develop Next-gen Technologies like AI
    http://www.businesskorea.co.kr/english/news/ict/20442-solution-controversy-over-chip-prices-samsung-electronics-china-jointly-develop-next

    Samsung Electronics Co. has decided to cooperate with the Chinese government to develop next-generation technologies including artificial intelligence (AI). With China putting pressure on Samsung Electronics over the rising price of memory chips supplied to Chinese smartphone makers, some market watchers say that the partnership could settle a series of conflicts between the two.

    Reply
  27. Tomi Engdahl says:

    Top 3 machine learning libraries for Python
    https://opensource.com/article/17/2/3-top-machine-learning-libraries-python?sc_cid=70160000001273HAAQ

    Learn about three of the most popular machine learning libraries for Python.

    Reply
  28. Tomi Engdahl says:

    It’s Time For Machine Learning to Prove Its Own Hype
    http://www.securityweek.com/its-time-machine-learning-prove-its-own-hype

    Machine Learning is a Black Box that is Poorly Understood

    2017 was the year in which ‘machine learning’ became the new buzzword — almost to the extent that no new product could be deemed new if it didn’t include machine learning.

    Although the technology has been used in cybersecurity for a decade or more, machine learning is now touted as the solution rather than part of the solution.

    But doubts have emerged. Machine learning is a black box that is poorly understood; and security practitioners like to know exactly what it is they are buying and using.

    The problem, according to Hyrum Anderson, technical director of data science at Endgame (a vendor that employs machine learning in its own endpoint protection product), is that users don’t know how it works and therefore cannot properly evaluate it. To make matters worse, machine learning vendors do not really understand what their own products do — or at least, how they come to the conclusions they reach — and therefore cannot explain the product to the satisfaction of many security professionals.

    The result, Anderson suggests in a blog post this week, is “growing veiled skepticism, caveated celebration, and muted enthusiasm.”

    It’s not that machine learning doesn’t work — it clearly does. But nobody really understands how it reaches its decisions.

    Prove it!: A 2018 Wave in Information Security Machine Learning
    https://www.endgame.com/blog/technical-blog/prove-it-2018-wave-information-security-machine-learning

    Reply
  29. Tomi Engdahl says:

    Amazon Go checkout-free convenience store opens to public
    http://www.vision-systems.com/articles/2018/01/amazon-go-checkout-free-convenience-store-opens-to-public.html?cmpid=enl_vsd_vsd_newsletter_2018-02-05&pwhid=6b9badc08db25d04d04ee00b499089ffc280910702f8ef99951bdbdad3175f54dcae8b7ad9fa2c1f5697ffa19d05535df56b8dc1e6f75b7b6f6f8c7461ce0b24&eid=289644432&bid=1993997

    Based on computer vision technologies and deep learning algorithms that enable shoppers to purchase goods without the need for lines or checkout, the Amazon Go convenience store is now open to the public.

    Located in Seattle, WA, USA at the company’s headquarters, Amazon Go was previously only open to Amazon employees. The shopping experience, according to Amazon, is made possible by the same types of technologies used in self-driving cars. That is, computer vision, sensor fusion, and deep learning technologies. With “Just Walk Out” technology, users can enter the store with the Amazon Go app, shop for products, and walk out of the store without lines or checkout. The technology automatically detects when products are taken or returned to shelves and keeps track of them in a virtual cart. When the shopping is finished, users leave the store and their Amazon account is charged shortly thereafter.
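
    As a speculative illustration of the bookkeeping behind such a system (event names and structure are assumptions for illustration, not Amazon’s actual design), a virtual cart can be maintained from take/return events and settled on exit:

    from collections import Counter

    # Sketch: vision events ("take"/"return") update a per-shopper cart,
    # which is charged to the shopper's account when they leave the store.
    def process_events(events):
        carts = {}
        for shopper, action, product in events:
            cart = carts.setdefault(shopper, Counter())
            if action == "take":
                cart[product] += 1
            elif action == "return" and cart[product] > 0:
                cart[product] -= 1
        return carts   # on exit, charge the shopper for cart contents

    carts = process_events([("a1", "take", "soda"), ("a1", "take", "chips"),
                            ("a1", "return", "soda")])
    print(carts["a1"])   # Counter({'chips': 1, 'soda': 0})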

    Reply
  30. Tomi Engdahl says:

    Bridging Machine Learning’s Divide
    Why new approaches are needed to tie together training and inferencing.
    https://semiengineering.com/bridging-machine-learnings-divide/

    There is a growing divide between those researching Machine Learning (ML) in the cloud and those trying to perform inferencing using limited resources and power budgets.

    Researchers are using the most cost-effective hardware available to them, which happens to be GPUs filled with floating point arithmetic units. But this is an untenable solution for embedded inferencing, where issues such as power are a lot more important. The semiconductor industry is bridging this divide using more tailored hardware structures and mapping technology that can convert between cloud-based learning structures and those that can be deployed in autonomous vehicles, IoT devices and consumer products.

    While the industry is quite successfully using mapping technology, there is a need to bring more inferencing implications into algorithm development to ensure that the gap does not widen. “Deeper networks, with more, fatter layers will take more cycles of computation, more memory bandwidth and more memory capacity to train and to run inference,”

    Reply
  31. Tomi Engdahl says:

    Neural Network Electrocutes You to Take Better Photographs
    https://hackaday.com/2018/02/09/neural-network-electrocutes-you-to-take-better-photographs/

    Taking your brain out of the photography loop is the goal of [Peter Buczkowski]’s “prosthetic photographer.” The idea is to use a neural network to constantly analyze a scene until maximal aesthetic value is achieved, at which point the user unconsciously takes the photograph.

    Prosthetic Photographer
    https://hackaday.io/project/47538-prosthetic-photographer

    The Prosthetic Photographer uses electrical impulses to force its users to unwillingly take beautiful pictures.

    Reply
  32. Tomi Engdahl says:

    AI Silicon Gets Mixed Report Card
    Chips may shift software toward CNNs
    https://www.eetimes.com/document.asp?doc_id=1332799&_mc=RSS_EET_EDT

    A leading researcher in deep learning praised some of the latest accelerator chips. He also indicated some shortcomings both of the silicon and the software they are supposed to speed up.

    The results came, in part, from tests using DeepBench, an open source benchmark for training neural networks using 32-bit floating point math. Baidu, the Google of China, released DeepBench in September 2016 and updated it in June to cover inference jobs and use of 16-bit math.
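
    In the same spirit as DeepBench (this is an illustration, not the benchmark’s actual code), one can time the dense matrix multiplies at the heart of such measurements at different precisions:

    import time
    import numpy as np

    # Illustrative microbenchmark: time a dense matrix multiply, the core
    # operation of neural-net training, at 32-bit and 16-bit precision.
    def time_gemm(dtype, n=1024, reps=10):
        a = np.random.rand(n, n).astype(dtype)
        b = np.random.rand(n, n).astype(dtype)
        start = time.perf_counter()
        for _ in range(reps):
            a @ b
        return (time.perf_counter() - start) / reps

    for dtype in (np.float32, np.float16):
        print(dtype.__name__, f"{time_gemm(dtype):.4f} s per multiply")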

    Reply
  33. Tomi Engdahl says:

    AI Formats May Ease Neural Jitters
    Chip vendors’ NNEF faces web giants’ ONNX
    https://www.eetimes.com/document.asp?doc_id=1332758&_mc=RSS_EET_EDT

    A group of mainly chip vendors released a draft standard that aims to act as an interface between software frameworks for creating neural network models and the hardware accelerators that run them. It shares goals with a separate effort started as an open-source project earlier this year by Facebook and Microsoft.

    The Khronos Group is seeking industry feedback on a preliminary version of its Neural Network Exchange Format. NNEF initially aims to be a single file format to describe any trained neural network model to any chip performing inference tasks with it.

    https://www.khronos.org/nnef

    Reply
  34. Tomi Engdahl says:

    Google releases Cloud TPU beta, GPU support for Kubernetes
    Google said a limited quantity of TPUs are available today.
    http://www.zdnet.com/article/google-releases-cloud-tpu-beta-gpu-support-for-kubernetes/

    Google Cloud announced Monday that Cloud TPUs are available in beta on Google Cloud Platform.

    Short for Tensor Processing Unit, TPUs are designed for machine learning and tailored for Google’s open-source machine learning framework, TensorFlow. The specialized chips can provide 180 teraflops of processing to support training machine learning algorithms, and have been powering Google datacenters since 2015.

    “We designed Cloud TPUs to deliver differentiated performance per dollar for targeted TensorFlow workloads and to enable ML engineers and researchers to iterate more quickly,” Google wrote in a Cloud Platform blog.

    “Over time, we’ll open-source additional model implementations. Adventurous ML experts may be able to optimize other TensorFlow models for Cloud TPUs on their own using the documentation and tools we provide.”

    Google said a limited quantity of TPUs are available today with per-second billing at the rate of $6.50 per Cloud TPU per hour.
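
    At that rate, per-second billing makes run costs easy to estimate; for example (hypothetical run length):

    # Quick arithmetic on the quoted pricing: $6.50 per Cloud TPU per hour,
    # billed per second.
    rate_per_hour = 6.50
    run_seconds = 4.5 * 3600            # a hypothetical 4.5-hour training run
    print(f"${rate_per_hour * run_seconds / 3600:.2f}")   # -> $29.25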

    Reply
  35. Tomi Engdahl says:

    Survey finds business leaders adapting as enterprise AI moves beyond experimentation
    http://www.controleng.com/single-article/survey-finds-business-leaders-adapting-as-enterprise-ai-moves-beyond-experimentation/de0115a237f0d69d20da8ea9bc139083.html

    An Infosys report finds artificial intelligence (AI) is in widespread use across enterprises, driving tangible benefits while creating new opportunities and challenges for business leaders and the workforce.

    Key findings include:

    Enterprise AI moves beyond experimentation: AI deployments are becoming pervasive as 86% of organizations surveyed have middle- or late-stage AI deployments and view AI as a major facilitator of future business operations. Eighty percent of respondents who said they’ve seen at least some measurable benefits from AI agreed or strongly agreed their organization had a defined strategy for deployment. Fifty-three percent of all respondents said their industry has already experienced disruption due to artificial intelligence technologies.
    The benefits of AI span the business value chain: While a majority of organizations (66%) start off using AI to automate routine or inefficient processes, businesses in later stages of AI deployment are leveraging the technology to innovate and differentiate. For example, 80% of IT decision makers at organizations in later stages of AI deployment reported that they are using AI to augment existing solutions, or build new business-critical solutions and services to optimize insights and the consumer experience. Forty-two percent of these organizations also expect significant impact in research and development in the next five years.
    Investing in people is key to AI success: Seventy-seven percent of respondents surveyed were confident that employees in their organization can be trained for the new job roles AI technologies will create. Respondents showed commitment to this belief by ranking training and recruitment as the top areas of investment (46 and 44% respectively) in order for AI technologies to make an impact. C-level executives likewise called out training the leadership team on AI as a top priority-47% of business leaders put leadership training in their top three priorities compared to 40% who put employee training in their top three priorities.
    AI leadership essentials include strategy and training: Four out of five C-level executives said that their future business strategy will be informed through opportunities made available with AI technology. Business leaders were confident that their executive teams have the ability to adapt their leadership skills as AI technologies are adopted, with 80% of C-level executives in agreement. However, training on the executive level is still critical as three-fourths of IT decision makers felt that their executives would benefit from formal training on the implications of AI technologies.
    Data management is a persistent obstacle: Nearly half of IT decision makers (49%) reported that their organization is unable to deploy the AI technologies they want because their data is not ready to support the requirements of AI technologies. As such, 77% of IT decision makers said that their organization is investing in data management. Furthermore, C-level executives reported that their leadership team is concerned with the implications of industry regulations on their ability to use AI technologies within their business (70%) and the potential advantages AI technologies could lend to competition (66%).

    Reply
  36. Tomi Engdahl says:

    AI, cloud, and IoT will drive 2018 growth, say chip makers
    https://venturebeat.com/2018/02/13/ai-cloud-and-iot-will-drive-2018-growth-say-chip-makers/

    Artificial intelligence, cloud computing, and the internet of things (IoT) will have bigger impacts on the revenues of chip makers in 2018, according to accounting firm KPMG‘s survey of 150 semiconductor industry leaders.

    Two-thirds of the leaders cited IoT as one of the top revenue drivers, up from 56 percent in last year’s survey. Cloud computing and AI were each cited by 43 percent of leaders, compared to 27 percent last year for cloud and 18 percent for AI. Wireless communications was at the top of the list, but it was cited by fewer respondents this year.

    “The increasing demand for IoT, AI, and cloud applications is driven by their individual value and their value to each other. Cloud infrastructure is critical to enabling AI and capturing IoT-produced data. AI will enable better analysis and use of the data,”

    Reply
  37. Tomi Engdahl says:

    Your next phone may have an ARM machine learning processor
    https://techcrunch.com/2018/02/13/your-next-phone-may-have-an-arm-machine-learning-processor/

    ARM doesn’t build any chips itself, but its designs are at the core of virtually every CPU in modern smartphones, cameras and IoT devices. So far, the company’s partners have shipped more than 125 billion ARM-based chips. After moving into GPUs in recent years, the company today announced that it will now offer its partners machine learning and dedicated object detection processors. Project Trillium, as the overall project is called, is meant to make ARM’s machine learning (ML) chips the de facto standard for the machine learning platform for mobile and IoT.

    For this first launch, ARM is launching both an ML processor for general AI workloads and a next-generation object detection chip that specializes in detecting faces, people and their gestures, etc. in videos that can be as high-res as full HD and running at 60 frames per second. This is actually ARM’s second-generation object detection chip. The first generation ran in Hive’s smart security camera.

    Reply
  38. Tomi Engdahl says:

    Arm Extends AI to the Masses
    https://www.eetimes.com/author.asp?section_id=36&doc_id=1332970

    Arm’s recently announced Project Trillium is likely to be quickly adopted by its partners and develop consistency between solutions that can be leveraged by the software community.

    Tirias Research believes that by 2025, 95 percent of all new devices or platforms will leverage artificial intelligence in the cloud or with some form of native machine learning. Arm is not the first IP or semiconductor vendor to offer an AI solution, but as the center of the industry’s largest processor architecture ecosystem, it may someday enable hundreds of billions of intelligent devices.

    Today, cloud-based solutions are leveraging GPUs, FPGAs and custom chips for large deployments while most of the device-level solutions are using DSPs, dedicated IP blocks or custom accelerator chips. New solutions and companies are being announced almost weekly. Now Arm is stepping into the arena.

    Project Trillium includes new processor cores designed specifically for the challenges of on-device machine learning.

    Reply
  39. Tomi Engdahl says:

    MIT’s new chip could bring neural nets to battery-powered gadgets
    https://techcrunch.com/2018/02/14/mits-new-chip-could-bring-neural-nets-to-battery-powered-gadgets/?utm_source=tcfbpage&sr_share=facebook

    MIT researchers have developed a chip designed to speed up the hard work of running neural networks while dramatically reducing the power consumed in doing so.

    Computing ‘at the edge,’ as it’s called, or at the site of the sensors actually gathering the data, is increasingly something companies are pursuing and implementing.

    Reply
  40. Tomi Engdahl says:

    Amazon Is Becoming an AI Chip Maker, Speeding Alexa Responses
    https://www.theinformation.com/amazon-is-becoming-an-ai-chip-maker-speeding-alexa-responses?shared=922dfb3ba4e3984e

    Amazon.com is developing a chip designed for artificial intelligence to work on the Echo and other hardware powered by Amazon’s Alexa virtual assistant, says a person familiar with Amazon’s plans. The chip should allow Alexa-powered devices to respond more quickly to commands,….

    Reply
  41. Tomi Engdahl says:

    Cloud TPU machine learning accelerators now available in beta
    https://cloudplatform.googleblog.com/2018/02/Cloud-TPU-machine-learning-accelerators-now-available-in-beta.html?m=1

    Cloud TPUs are available in beta on Google Cloud Platform (GCP) to help machine learning (ML) experts train and run their ML models more quickly.

    Reply
  42. Tomi Engdahl says:

    What’s the Word on AI in Manufacturing?
    https://www.eetimes.com/document.asp?doc_id=1332976

    It’s not what you know, but who you know…or so the saying goes. We decided to ask the experts that we know about the status of artificial intelligence in manufacturing. The answer was clear: 2018 is going to be the year of artificial intelligence (AI). It’s becoming a reality in many corners, including in consumer electronics, smart cities, and autonomous vehicles, but perhaps the biggest strides are being made as we pursue the fourth industrial revolution.

    The market is growing, and will continue to grow, at a substantial pace. The artificial intelligence market was valued at $1.36 billion in 2016 and is expected to grow at a compound annual growth rate (CAGR) of 52% during the forecast period from 2017 to 2025, according to a recent report from Research and Markets.

    Reply
  43. Tomi Engdahl says:

    Walk A Mile In Their Shoes
    https://semiengineering.com/walk-a-mile-in-their-shoes/

    Prosthetic technology is seeing rapid improvement with the integration of AI.

    Amazingly, AI has been used to give mobility to fully immobilized people.

    Reply
  44. Tomi Engdahl says:

    Google Clips camera uses machine learning to capture spontaneous moments in everyday life
    https://www.vision-systems.com/articles/2017/10/google-clips-camera-uses-machine-learning-to-capture-spontaneous-moments-in-everyday-life.html?cmpid=enl_vsd_vsd_newsletter_2018-02-12&pwhid=6b9badc08db25d04d04ee00b499089ffc280910702f8ef99951bdbdad3175f54dcae8b7ad9fa2c1f5697ffa19d05535df56b8dc1e6f75b7b6f6f8c7461ce0b24&eid=289644432&bid=1999156

    Google has announced the release of Google Clips, a small, hands-free camera that uses a machine learning algorithm to look for good moments to capture in everyday life.

    An image sensor size or model is not named—though The Verge is reporting that a 12 MPixel sensor is being used—but the camera features a 1.55 µm pixel size, auto focus adjustment, a 130° field of view, a frame rate of 15 fps, auto low lux and night mode, 16 GB storage, as well as motion photos (JPEGS with embedded MP4s), MP4, GIF, and JPEG, with no audio. Additionally, the camera has Gorilla Glass 3 for durability, as well as USB-C, Wi-Fi Direct, and Bluetooth LE for connectivity.

    Google Clips is a tiny camera that uses AI to automatically photograph family moments
    https://www.theverge.com/2017/10/4/16402682/google-clips-camera-announced-price-release-date-wireless

    Reply
