Artificial intelligence is rapidly changing many aspects of how we work and live. (How many stories did you read last week about self-driving cars and job-stealing robots? Perhaps your holiday shopping involved some AI algorithms, as well.) But despite the constant flow of news, many misconceptions about AI remain.
AI doesn’t think in our sense of the word at all, Scriffignano explains. “In many ways, it’s not really intelligence. It’s regressive.”
IT leaders should make deliberate choices about what AI can and can’t do on its own. “You have to pay attention to giving AI autonomy intentionally and not by accident.”
Tomi Engdahl says:
Learn how to classify images with TensorFlow
https://opensource.com/article/17/12/tensorflow-image-classification-part-1?sc_cid=7016000000127ECAAY
Create a simple, yet powerful neural network to classify images using the open source TensorFlow software library.
Tomi Engdahl says:
Artificial intelligence processors enable deep learning at the edge
https://www.vision-systems.com/articles/2018/04/artificial-intelligence-processors-enable-deep-learning-at-the-edge.html?cmpid=enl_vsd_vsd_newsletter_2018-04-10&pwhid=6b9badc08db25d04d04ee00b499089ffc280910702f8ef99951bdbdad3175f54dcae8b7ad9fa2c1f5697ffa19d05535df56b8dc1e6f75b7b6f6f8c7461ce0b24&eid=289644432&bid=2061449
CEVA, Inc.’s NeuPro line of artificial intelligence (AI) processors for deep learning inference at the edge are designed for “smart and connected edge device vendors looking for a streamlined way to quickly take advantage of the significant possibilities that deep neural network technologies offer.”
The new self-contained AI processors are designed to handle deep neural networks on-device and range from 2 Tera Ops Per Second (TOPS) for the entry-level processor to 12.5 TOPS for the most advanced configuration, according to CEVA.
“It’s abundantly clear that AI applications are trending toward processing at the edge, rather than relying on services from the cloud,” said Ilan Yona, vice president and general manager of the Vision Business Unit at CEVA. “The computational power required along with the low power constraints for edge processing, calls for specialized processors rather than using CPUs, GPUs or DSPs. We designed the NeuPro processors to reduce the high barriers-to-entry into the AI space in terms of both architecture and software. Our customers now have an optimized and cost-effective standard AI platform that can be utilized for a multitude of AI-based workloads and applications.”
Tomi Engdahl says:
Have Wearables Found Their True Killer App?
https://www.eetimes.com/author.asp?section_id=36&doc_id=1333171
So far, wearable technology has consisted almost exclusively of fitness trackers and smart watches. There’s cooler stuff coming, right?
Putting future development in the context of the fourth industrial revolution, he said the next generation beyond smartphones will involve a convergence of hardware focused nanotech and biotech, with software-based infotech and cognotech (see slide below).
Describing the SHAs, Wood said that these devices will observe what we are doing by listening to us and what we are listening to, seeing us and what we’re seeing, and feeling what we’re feeling more accurately than our own senses. These will utilize speech and sound recognition, computer vision, information from sensors embedded in the environment, communications within IoT systems, contextual knowledge and computer general common sense.
The big thing though is power consumption, according to Bennett. “There is a bunch of technology that consumes battery, so until you get that right, you won’t get newer form factors. And with AI, you will need processor farms to address that requirement,” he said.
Healthcare, AI and power management were also major themes at last month’s Wearable Technology Show. Oticon, the hearing aid provider, highlighted how hearing aids are now powerful processors providing information on overall brain health and not just hearing.
Finnish wearable tech company Oura Health’s Chief Scientific Officer Hannu Kinnunen also emphasized its power management design as being vital for its infrared PPG sensor (PPG stands for photoplethysmography, a simple optical technique used to detect volumetric changes in blood in peripheral circulation, a technique providing valuable information related to the cardiovascular system).
The company was due to start shipping its most recent version of its Oura Ring, which measures some of the physiological signals within a body (such as ECG-level resting heart rate (RHR), interbeat interval (IBI), heart rate variability (HRV), respiratory rate and breathing variance) and sleep tracking to inform lifestyle choice. The ring incorporates a dual-core Arm Cortex based microcontroller, with proprietary pulse waveform and pulse amplitude variation detection infrared PPG sensor, body temperature sensor, and 3D accelerometer and gyroscope.
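Heart rate variability of the kind the ring reports is conventionally derived from the interbeat intervals (IBI) mentioned above; RMSSD is the standard time-domain metric. As a small illustrative sketch (the interval values are made up, not real sensor data, and Oura's actual processing is proprietary):

```python
import math

def rmssd(ibi_ms):
    """Root mean square of successive differences between interbeat
    intervals (ms) -- a standard time-domain HRV metric."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def mean_hr(ibi_ms):
    """Mean heart rate in beats per minute from interbeat intervals."""
    return 60000.0 / (sum(ibi_ms) / len(ibi_ms))

# Illustrative interbeat intervals in milliseconds
beats = [812, 845, 790, 830, 815, 800]
print(round(mean_hr(beats)))       # mean heart rate in bpm
print(round(rmssd(beats), 1))      # HRV (RMSSD) in ms
```

A wearable would compute these over sliding windows of PPG-derived beat timestamps rather than a fixed list.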
Adi Chhabra, a senior product manager for AI at Vodafone, also said the future of wearables is moving away from screen interactions to surface interactions. “Any screen or surface can be your interface, which can be voice-enabled, touch enabled, or gesture-enabled. Google Glass was the first generation, but it wasn’t the answer for wearables. However, it’s giving us a sense of where we will be in 15 years,” he said.
Wearables were touted in the early days as smart watches and fitness trackers. But as the multiple use-cases evolve, some of the killer applications are becoming clearer. It’s currently trending towards health applications, not just in the fitness tracker sense, but in more sophisticated healthcare, as we have seen above.
This is certainly backed up by recent market research. Juniper Research says that while the market is currently dominated by smartwatches and activity trackers, growth will slow, with around 190 million of these devices shipping by 2020. Its research argues that as device types broaden and purchase cycles lengthen, companies will begin to focus on software and data services to maintain their revenues, with the largest market for subscription services being healthcare.
So how far can wearable tech and AI go in healthcare? Quite far, actually. A paper published last month in Scientific Reports shows how biological age can be extracted from biomedical data via deep learning.
Tomi Engdahl says:
A.I. seen penetrating deep into data center networks
http://www.cablinginstall.com/articles/pt/2018/04/a-i-seen-penetrating-deep-into-data-center-networks.html?cmpid=enl_cim_cim_data_center_newsletter_2018-04-10&pwhid=e8db06ed14609698465f1047e5984b63cb4378bd1778b17304d68673fe5cbd2798aa8300d050a73d96d04d9ea94e73adc417b4d6e8392599eabc952675516bc0&eid=293591077&bid=2062366
According to Mind Commerce, the total market for AI-driven networking solutions is expected to hit $5.8 billion by 2023. In fact, by that time, more than half of the total AI spend will go toward the network. Much of this will be linked to the deployment of software-defined networking (SDN), as well as edge computing, the IoT and emerging 5G topologies on the mobile side. Ultimately, the rudimentary intelligence will lead to self-organizing networks (SON) and cognitive network management solutions capable of supporting autonomous decision-making across wide swaths of network infrastructure.
AI Delving Deep into Data Center Networks
http://www.enterprisenetworkingplanet.com/datacenter/datacenter-blog/ai-delving-deep-into-data-center-networks.html
Intelligent networking is already making its way into the enterprise, forever changing the ways in which both traffic and resources are managed. But how is this likely to play out? What aspects of modern network management are ripe for intelligence now and what is likely to evolve over time?
One of the initial applications for AI on the network is visibility. As traffic becomes more complex and data infrastructure becomes more distributed over wide area infrastructure, the need to gain deep packet-level visibility and real-time telemetry increases. Barefoot Networks and Netronome recently teamed up to bring intelligent insight into end-to-end network infrastructure as a means to detect and prevent root-cause problems that impede application performance.
Chip-level solutions are incorporating higher degrees of intelligence as well. Cavium recently introduced the Packet Trakker system on the XPliant series of programmable Ethernet switches.
But even as intelligence is changing the network, it is also altering the way in which data resources are provisioned and consumed, says Market Realist’s Paige Tanner. This is most pronounced in an increasingly intelligent Internet of Things, which is upping the reliance on the cloud and causing many providers to increase the speed and agility of their internal infrastructure.
Tomi Engdahl says:
Tiny Neural Network Library in 200 Lines of Code
https://hackaday.com/2018/04/08/tiny-neural-network-library-in-200-lines-of-code/
Neural networks have gone mainstream with a lot of heavy-duty — and heavy-weight — tools and libraries. What if you want to fit a network into a little computer? There’s tinn — the tiny neural network. If you can compile 200 lines of standard C code with a C or C++ compiler, you are in business. There are no dependencies on other code.
https://github.com/glouw/tinn
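tinn itself is plain C, exposing little more than build/train/predict on a single hidden layer. Purely as a rough sketch of what such a minimal library does internally (the layer sizes, seed, and learning rate here are my arbitrary choices, not tinn's API), the same idea fits comfortably in a few dozen lines of Python:

```python
import math, random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(ws, xs):
    return sum(w * x for w, x in zip(ws, xs))

class TinyNet:
    """One hidden layer, sigmoid activations, plain SGD backprop."""
    def __init__(self, n_in, n_hid, n_out):
        # +1 weight per row acts as a bias fed by a constant 1.0 input
        self.w1 = [[random.uniform(-1, 1) for _ in range(n_in + 1)]
                   for _ in range(n_hid)]
        self.w2 = [[random.uniform(-1, 1) for _ in range(n_hid + 1)]
                   for _ in range(n_out)]

    def forward(self, x):
        self.xb = list(x) + [1.0]
        self.hb = [sigmoid(dot(row, self.xb)) for row in self.w1] + [1.0]
        return [sigmoid(dot(row, self.hb)) for row in self.w2]

    def train(self, x, target, lr=0.5):
        out = self.forward(x)
        # delta = dE/dz for squared error with a sigmoid output
        d_out = [(o - t) * o * (1 - o) for o, t in zip(out, target)]
        # hidden deltas use the pre-update output weights
        d_hid = [h * (1 - h) * sum(d * row[j] for d, row in zip(d_out, self.w2))
                 for j, h in enumerate(self.hb[:-1])]
        for d, row in zip(d_out, self.w2):
            for j, h in enumerate(self.hb):
                row[j] -= lr * d * h
        for d, row in zip(d_hid, self.w1):
            for i, xi in enumerate(self.xb):
                row[i] -= lr * d * xi

xor = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
net = TinyNet(2, 4, 1)

def total_error():
    return sum((net.forward(x)[0] - t[0]) ** 2 for x, t in xor)

before = total_error()
for _ in range(10000):
    for x, t in xor:
        net.train(x, t)
after = total_error()
print(before, "->", after)  # squared error should drop as XOR is learned
```

The point, as with tinn, is that nothing about a small feed-forward network requires a heavyweight framework.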
Tomi Engdahl says:
How artificial intelligence will take over the supermarket produce aisles
https://techcrunch.com/2018/04/11/how-artificial-intelligence-will-take-over-the-supermarket-produce-aisles/?utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook
Artificial intelligence is about more than asking Alexa or Siri to turn on the lights at home and add a reminder to the calendar about getting some milk at the store later in the afternoon.
The true power of AI and machine learning is how it can democratize expertise, lowering the barriers to entry for tasks that once could only be performed by a small group of specialists. The result, one day, will be that your self-driving car drops you off at the supermarket, where you will find higher-quality foods available at prices lower than they’ve ever been.
Tomi Engdahl says:
1.8 DIGITOPIA – IOT
http://www.spiritua.life/2018/04/05/1-8-digitopia-iot/
Internet awareness
When we discuss the question of whether the Internet already has a consciousness or could ever achieve it, we need to understand the structure of the system
consciousness is not something esoteric, mystical, or disembodied, but always connected to matter. Such matter basically requires three components, which are: sensors, networks and computers. In addition, an environment is always required.
one can create a somewhat larger concept of consciousness. This then includes:
Reality niche
Actuators and sensors
Communication networks
Data processing
Distinction of events (information)
Awareness
Self-awareness (self-induced events)
Tomi Engdahl says:
Angela Chen / The Verge:
FDA approves first AI diagnostic device that doesn’t need a doctor to interpret the results, detecting diabetic retinopathy by looking at photos of a retina
AI software that helps doctors diagnose like specialists is approved by FDA
“It makes the clinical decision on its own”
https://www.theverge.com/2018/4/11/17224984/artificial-intelligence-idxdr-fda-eye-disease-diabetic-rethinopathy
For the first time, the US Food and Drug Administration has approved an artificial intelligence diagnostic device that doesn’t need a specialized doctor to interpret the results. The software program, called IDx-DR, can detect a form of eye disease by looking at photos of the retina.
It works like this: A nurse or doctor uploads photos of the patient’s retina taken with a special retinal camera. The IDx-DR software algorithm first indicates whether the uploaded image is of high enough quality to get a result. Then, it analyzes the images to determine whether the patient does or does not have diabetic retinopathy, a form of eye disease where too much blood sugar damages the blood vessels in the back of the eye. Diabetic retinopathy is the most common vision complication for people with diabetes, but is still fairly rare: there are about 200,000 cases per year.
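The two-stage flow described here (a quality gate first, then an autonomous referral decision) is a useful pattern in its own right. A schematic sketch, using stand-in heuristics and field names of my own invention rather than anything from IDx-DR:

```python
from dataclasses import dataclass

@dataclass
class Result:
    status: str   # "retake", "referable", or "negative"
    detail: str

def quality_ok(image):
    # Stand-in check; a real system scores focus, exposure, field of view
    return image.get("sharpness", 0.0) >= 0.6

def disease_score(image):
    # Stand-in for the diagnostic model; returns a score in [0, 1]
    return image.get("lesion_score", 0.0)

def screen(image, threshold=0.5):
    """Mirror the described flow: reject ungradable images first,
    then return a referral decision without specialist review."""
    if not quality_ok(image):
        return Result("retake", "image quality too low to grade")
    if disease_score(image) >= threshold:
        return Result("referable", "refer to an eye-care professional")
    return Result("negative", "rescreen in 12 months")

print(screen({"sharpness": 0.9, "lesion_score": 0.8}).status)
print(screen({"sharpness": 0.2}).status)
```

Gating on image quality before classifying matters clinically: it keeps the model from issuing confident decisions on inputs it cannot grade.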
Tomi Engdahl says:
Kevin Wong / Engadget:
A look at the challenges faced by developers while humanizing the voices of their virtual private assistants
In pursuit of the perfect AI voice
How developers are humanizing their virtual personal assistants.
https://www.engadget.com/2018/04/09/in-pursuit-of-the-perfect-ai-voice/
The virtual personal assistant is romanticized in utopian portrayals of the future from The Jetsons to Star Trek. It’s the cultured, disembodied voice at humanity’s beck and call, eager and willing to do any number of menial tasks.
Amazon’s Alexa and Microsoft’s Cortana debuted in 2014; Google Assistant followed in 2016. IT research firm Gartner predicts that many touch-required tasks on mobile apps will become voice activated within the next several years. The voices of Siri, Alexa and other virtual assistants have become globally ubiquitous. Siri can speak 21 different languages and includes male and female settings. Cortana speaks eight languages, Google Assistant speaks four, Alexa speaks two.
But until fairly recently, voice — and the ability to form words, sentences and complete thoughts — was a uniquely human attribute. It’s a complex mechanical task, and yet nearly every human is an expert at it. Human response to voice is deeply ingrained, beginning when children hear their mother’s voice in the womb.
Tomi Engdahl says:
Optimizing Machine Learning Workloads On Power-Efficient Devices
https://semiengineering.com/optimizing-machine-learning-workloads-on-power-efficient-devices/
How to target different SoC architectures with different neural network software frameworks.
Software frameworks for neural networks, such as TensorFlow, PyTorch, and Caffe, have made it easier to use machine learning as an everyday feature, but it can be difficult to run these frameworks in an embedded environment. Limited budgets for power, memory, and computation can all make this more difficult. At Arm, we’ve developed Arm NN, an inference engine that makes it easier to target different SoC architectures, for faster, higher-performance deployment of machine learning in embedded.
https://pages.arm.com/machine-learning-in-embedded-whitepaper.html
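Arm NN's actual API is C++; purely to illustrate the "one model, many backends" idea it implements, here is a hypothetical dispatch sketch (all names are mine, not Arm NN's):

```python
# Minimal backend-dispatch pattern: try the preferred accelerator,
# fall through to less specialized hardware. Hypothetical names only.
BACKENDS = {}

def register(name, priority):
    def wrap(fn):
        BACKENDS[name] = (priority, fn)
        return fn
    return wrap

@register("npu", priority=0)       # prefer a dedicated accelerator
def run_on_npu(graph):
    raise RuntimeError("no NPU present on this SoC")

@register("gpu", priority=1)
def run_on_gpu(graph):
    return f"ran {graph} on GPU"

@register("cpu", priority=2)       # reference fallback, always available
def run_on_cpu(graph):
    return f"ran {graph} on CPU"

def run(graph, available=("gpu", "cpu")):
    """Try backends in priority order, skipping ones the SoC lacks --
    the same idea as an inference engine picking the best target."""
    for name, (_, fn) in sorted(BACKENDS.items(), key=lambda kv: kv[1][0]):
        if name not in available:
            continue
        try:
            return fn(graph)
        except RuntimeError:
            continue
    raise RuntimeError("no usable backend")

print(run("mobilenet"))
```

The design point is that the network description stays fixed while the executor varies per SoC, which is what makes a single trained model portable across embedded targets.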
Tomi Engdahl says:
High-Performance Memory Challenges
https://semiengineering.com/high-performance-memory-challenges/
Capacity, speed, power and cost become critical factors in memory for AI/ML applications.
Tomi Engdahl says:
Cadence: Last Holdout for Vision + AI Programmability
https://www.eetimes.com/document.asp?doc_id=1333173
Cadence Design Systems, Inc. might have found the secret recipe for success in an increasingly hot AI processing-core market by promoting a suite of DSP cores that accelerate both embedded vision and artificial intelligence.
The San Jose-based company is rolling out on Wednesday (April 11) the Cadence Tensilica Vision Q6 DSP. Built on a new architecture, the Vision Q6 offers faster embedded vision and AI processing than its predecessor, Vision P6 DSP, while occupying the same floorplan area as that of P6.
The Vision Q6 DSP is expected to go into SoCs that will drive such edge devices as smartphones, surveillance cameras, vehicles, AR/VR, drones, and robots.
The new Vision Q6 DSP is built on Cadence’s success with Vision P6 DSP. High-profile mobile application processors such as HiSilicon’s Kirin 970 and MediaTek’s P60 both use the Vision P6 DSP core.
Tomi Engdahl says:
ARM Under Attack in AI
A dozen rivals emerge, some with big wins
https://www.eetimes.com/document.asp?doc_id=1333167
Nearly a dozen processor cores for accelerating machine-learning jobs on clients are racing for spots in SoCs, with some already designed into smartphones. They aim to get a time-to-market advantage over processor-IP giant Arm, which is expected to announce its own core soon.
The new players getting traction include:
Apple’s Bionic neural engine in the A11 SoC in its iPhone
The DeePhi block in Samsung’s Exynos 9810 in the Galaxy S9
The neural engine from China’s Cambricon in Huawei’s Kirin 970 handset
The Cadence P5 for vision and AI acceleration in MediaTek’s P30 SoC
Possible use of the Movidius accelerator in Intel’s future PC chip sets
The existing design wins have locked up many of the sockets in premium smartphones that represent about a third of the overall handset market. Gwennap expects that AI acceleration will filter down to the rest of the handset market over the next two to three years.
Beyond smartphones, cars are an increasingly large market for AI chips. PCs, tablets, and IoT devices will round out the market.
Tomi Engdahl says:
Design Houses Bank on AI, Bitcoin
https://www.eetimes.com/document.asp?doc_id=1333176
Global Unichip (GUC) and a host of other Taiwan chip designers are seeing demand for ASICs take off, driven by systems houses that want to differentiate their products for cryptocurrency mining and AI to deliver greater efficiency.
While AI shows long-term potential, GUC’s main ASIC demand so far is for Bitcoin-mining equipment, according to the company. The players in this business are trying to develop ICs rather than using off-the-shelf GPUs, according to the company. Customers are finding that the efficiency of GPUs is not good enough, GUC says.
The Bitcoin-mining business has quickly popped up for foundry Taiwan Semiconductor Manufacturing Co. (TSMC) and other companies in the TSMC ecosystem. GUC customer Bitmain, a privately held Chinese firm that makes Bitcoin-mining hardware and runs its own mining operations, made $3 billion to $4 billion in profits in 2017, according to estimates by Bernstein Research.
New mining equipment vendors have rushed in to capitalize on the boom. The focus now is on developing more customized machines that offer greater efficiency.
“Nowadays, GPUs are used as ASSPs for the AI market,” said GUC Senior Director Lewis Chu, in an interview with EE Times. “But AI is big data, algorithms. If everyone continues to use ASSPs, it means their algorithms are similar. It’s hard for them to do differentiation.”
Tomi Engdahl says:
Three Concepts for Managing AI
https://www.eetimes.com/author.asp?section_id=36&doc_id=1333151
Three key ideas should drive how AI is rolled out in the electronics community to reap the full benefits of the new technology.
AI impacts everything, from leadership to business outcomes, across industries and countries, according to a recent Infosys report. The study, Leadership in the Age of AI, surveyed more than 1,000 business and IT leaders at enterprises in seven countries.
According to the research, 87 percent of organizations in late or final stages of their AI deployments saw significant and measurable benefits from AI technologies. Of those in the later stages of AI deployments, 80 percent of IT decision makers said that they are using AI to augment existing solutions or build new business-critical solutions and services to optimize insights and consumer experience.
Tomi Engdahl says:
When AI meets digital transformation: 4 areas where AI fits now
https://enterprisersproject.com/article/2018/4/when-ai-meets-digital-transformation-4-areas-where-ai-fits-now?sc_cid=7016000000127ECAAY
How can AI help your organization meet digital transformation goals? Let’s examine four current use cases
Tomi Engdahl says:
What AI teams need to succeed
https://enterprisersproject.com/article/2017/5/what-ai-teams-need-succeed
What do teams working with artificial intelligence need to succeed? At Seal Software, which makes contract discovery and analytics software, AI and blockchain experts thrive on trust and empowerment, says CTO Kevin Gidney. Gidney shares his thoughts on AI, innovation, and the value of failing and learning fast.
Tomi Engdahl says:
TensorFlow brings machine learning to the masses
https://opensource.com/article/17/9/tensorflow?sc_cid=7016000000127ECAAY
Google’s open source machine learning library makes deep learning available to everyone.
Tomi Engdahl says:
Introducing TensorFlow.js: Machine Learning in Javascript
https://medium.com/tensorflow/introducing-tensorflow-js-machine-learning-in-javascript-bf3eab376db
We’re excited to introduce TensorFlow.js, an open-source library you can use to define, train, and run machine learning models entirely in the browser, using Javascript and a high-level layers API. If you’re a Javascript developer who’s new to ML, TensorFlow.js is a great way to begin learning. Or, if you’re a ML developer who’s new to Javascript, read on to learn more about new opportunities for in-browser ML. In this post, we’ll give you a quick overview of TensorFlow.js, and getting started resources you can use to try it out.
Tomi Engdahl says:
https://elementsofai.com/fi
Tomi Engdahl says:
TensorFlow in your Browser
https://hackaday.com/2018/04/16/tensorflow-in-your-browser/
If you want to explore machine learning, you can now write applications that train and deploy TensorFlow in your browser using JavaScript. We know what you are thinking. That has to be slow. Surprisingly, it isn’t, since the libraries use Graphics Processing Unit (GPU) acceleration. Of course, that assumes your browser can use your GPU. There are several demos available, including one where you train a Pac Man game to respond to gestures in your webcam to control the game. If you try it and then disable accelerated graphics in your browser options, you’ll see just what a speed-up you gain from the GPU.
https://js.tensorflow.org/
Tomi Engdahl says:
Artificial Intelligence: Long Way From Off-the-Shelf Solution
https://it.toolbox.com/blogs/crmdesk/artificial-intelligence-long-way-from-off-the-shelf-solution-040918
Artificial intelligence is the talk of the town in CRM development circles these days, and while it is already making an impact with CRM teams across the country, its benefits will also be harder to customize. To date CRM applications incorporating AI are doing so to help sales teams sift data quickly, automate previously highly manual processes, and potentially also serve up leads.
Nobody can refute that AI will change the CRM landscape, but there are also some practical difficulties related to the demands being made by some companies for what is best described as a turnkey AI package – AI functionality that a business may want to purchase and then customize to serve its internal CRM or ERP requirements, as it might with other software.
Currently, that is extremely difficult to achieve because machine learning requires huge amounts of data to “teach” algorithms. Without vast quantities of data, any AI system will be sub-par. A CRM developer has access to huge amounts of data with which to teach its own machines. The more data AI has access to, the more effective it can be.
Key Takeaways:
· Machine learning within CRM is still the province of platform developers who are implementing it within existing software.
· The quest for turnkey and customizable AI solutions that can be deployed at the enterprise level is still a long way from being achieved.
· Market expectations are intense for widespread adoption.
Tomi Engdahl says:
Qualcomm brings smartphone computing to IoT devices
San Diego-based Qualcomm dominates the market for mobile application processors with its Snapdragon chips, but IoT devices may become an even larger market in the future. The company has now introduced two new IoT chipsets on its new Vision Intelligence platform.
The QCS605 and QCS603 are built on a 10-nanometer FinFET process. They are intended to bring machine vision and artificial intelligence computing power to devices that require small size, low heat output, and low power consumption.
The Vision Intelligence chips use the same artificial intelligence engine as Qualcomm’s Snapdragon mobile phone chips and support algorithms developed in the TensorFlow, Caffe, and Caffe2 frameworks. According to Qualcomm, the chips deliver up to 2.1 trillion operations per second of neural network compute, more than twice that of competing IoT processors.
Qualcomm says the chips’ performance also allows them to play 4K video at 60 frames per second.
Of the two, the QCS605 combines eight Kryo 360 CPU cores (two Arm Cortex-A75 and six Arm Cortex-A55), an Adreno 615 GPU, and a Hexagon 685 DSP. This corresponds in many respects to the Snapdragon 845 chipset, but according to Qualcomm the platform has been optimized for the requirements of IoT devices.
Source: http://www.etn.fi/index.php/13-news/7859-qualcomm-vie-alypuhelinlaskennan-iot-laitteisiin
Tomi Engdahl says:
Qualcomm Brings AI, Vision Processing to IoT
https://www.eetimes.com/document.asp?doc_id=1333185
After surpassing $1 billion in IoT revenue in FY2017, Qualcomm is announcing new product families purpose-built for IoT applications. The company began by announcing a new family of IoT chipsets, the QCS603 and QCS605, along with software and reference designs, all dubbed the Qualcomm Vision Intelligence Platform. The platform brings the image and artificial intelligence (AI) processing capabilities found on its Snapdragon chipsets for premium smartphones to a wide range of consumer and industrial applications.
The new QCS chipsets and other elements of the platform will leverage the latest technology from Qualcomm. The chipsets will leverage the same kind of AI processing solution in the latest Snapdragon 845 smartphone processors called the Artificial Intelligence Engine (AIE). The AIE takes advantage of the heterogeneous chip architecture that combines the Kryo 300 CPU cores, Hexagon 685 Vector Processor, and Adreno 615 GPU into a single system-on-chip. In addition, the Vision Intelligence Platform includes an integrated Spectra 270 ISP that supports dual 16-MP image sensors. The platform also includes other image technologies to improve the overall image performance, including staggered HDR, advanced electronic image stabilization, dewarp, de-noise, chromatic aberration correction, and motion compensated temporal filters.
Tomi Engdahl says:
One-Pixel Attack Fools Neural Networks
https://hackaday.com/2018/04/15/one-pixel-attack-fools-neural-networks/
Deep Neural Networks can be pretty good at identifying images — almost as good as they are at attracting Silicon Valley venture capital. But they can also be fairly brittle, and a slew of research projects over the last few years have been working on making the networks’ image classification less likely to be deliberately fooled.
One Pixel Attack Defeats Neural Networks | Two Minute Papers #240
https://www.youtube.com/watch?v=SA4YEAWVpbk
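The paper attacks deep networks using differential evolution, which isn't reproduced here; but the core idea, that changing one input value can flip a classifier's decision, shows up even against a toy linear scorer. A brute-force sketch with made-up weights:

```python
# Toy illustration of a one-pixel attack. The weights and "image" are
# invented; real attacks target deep networks with far larger inputs.
W = [0.1, -0.2, 0.9, -1.5]           # linear scorer over a 4-pixel image

def predict(img):
    score = sum(w * p for w, p in zip(W, img))
    return 1 if score > 0 else 0

def one_pixel_attack(img, levels=(0.0, 0.5, 1.0)):
    """Search for a single pixel and new value that flip the class."""
    original = predict(img)
    for i in range(len(img)):
        for v in levels:
            candidate = img[:i] + [v] + img[i + 1:]
            if predict(candidate) != original:
                return i, v, candidate
    return None

img = [1.0, 0.0, 0.5, 0.0]           # score 0.1 + 0.45 = 0.55 -> class 1
attack = one_pixel_attack(img)
print(attack)                        # (pixel index, new value, adversarial image)
```

Against deep networks the search space is too large for brute force, which is why the paper uses an evolutionary search, but the brittleness being exploited is the same.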
Tomi Engdahl says:
Abner Li / 9to5Google:
Google releases updated DIY AI Vision and Voice Kits for $89.99 and $49.99, with a Raspberry Pi Zero and a companion app for Android, available at Target
Google launches updated DIY kits for AI voice & vision w/ edu focus, available at Target
https://9to5google.com/2018/04/16/google-aiy-projects-target/
Tomi Engdahl says:
IBM Releases Open Source AI Security Tool
https://www.securityweek.com/ibm-releases-open-source-ai-security-tool
IBM today announced the release of an open source software library designed to help developers and researchers protect artificial intelligence (AI) systems against adversarial attacks.
The software, named Adversarial Robustness Toolbox (ART), helps experts create and test novel defense techniques, and deploy them on real-world AI systems.
There have been significant developments in the field of artificial intelligence in the past years, up to the point where some of the world’s tech leaders issued a warning about how technological advances could lead to the creation of lethal autonomous weapons.
IBM/adversarial-robustness-toolbox
https://github.com/IBM/adversarial-robustness-toolbox
This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attacks and defense methods for machine learning models. The Adversarial Robustness Toolbox provides an implementation for many state-of-the-art methods for attacking and defending classifiers.
Tomi Engdahl says:
Building Blocks of AI Interpretability | Two Minute Papers #234
https://www.youtube.com/watch?v=pVgC-7QTr40
Tomi Engdahl says:
Two Facebook and Google geniuses are combining search and AI to transform HR
https://techcrunch.com/2018/04/17/two-facebook-and-google-geniuses-are-combining-search-and-ai-to-transform-hr/?utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&utm_content=FaceBook&sr_share=facebook
Two former product wizards from Facebook and Google are combining Silicon Valley’s buzziest buzzwords (search, artificial intelligence, and big data) into a new technology service aimed at solving nothing less than the problem of how to provide professional meaning in the modern world.
Tomi Engdahl says:
Bloomberg:
Facebook is building a team to design its own AI chips, according to job listings and sources, joining a trend among tech giants to lower reliance on chipmakers
Facebook Is Forming a Team to Design Its Own Chips
https://www.bloomberg.com/news/articles/2018-04-18/facebook-is-forming-a-team-to-design-its-own-chips
Social network could use semiconductors for consumer devices
Move follows Apple’s chip efforts, early work by Google
Tomi Engdahl says:
China Startup Packs AI in Camera
Horizon Robotics gets strong team, funding
https://www.eetimes.com/document.asp?doc_id=1333196
An ambitious startup in Beijing has started shipping systems using its own designs for machine-learning SoCs. Horizon Robotics ultimately aims to power millions of cars and smart cameras with its AI chips.
The startup adds fuel to China’s claims it will take a leading role in machine learning. Horizon’s chief executive sits on the country’s committee driving a national initiative in AI.
http://en.horizon.ai/
Tomi Engdahl says:
We Have the AI Technology, But is it Ethical?
https://www.eetimes.com/author.asp?section_id=36&doc_id=1333191
Artificial intelligence has sparked more ethics-related debate than any technology before it.
Around five years ago, everyone was talking about IoT and how it was going to change everything as we connect billions of devices. We seem to go through cycles in which a technology gets heavily hyped, and that is what seems to be happening right now with AI (artificial intelligence).
Indeed, we appear now to be on the ascendant of the Gartner hype cycle for AI, but unlike the periods when the previous technologies were being introduced, this time is different: I’ve never seen so much debate on the ethics of a technology. Well now, AI will most likely change lots of things. Autonomous vehicles, military and industrial drones, robots and many other applications in areas like healthcare, government and city functions are all potentially going to be impacted.
Tomi Engdahl says:
Data is not yet reliable enough for artificial intelligence
Finnish organizations use artificial intelligence largely for experimentation and learning. Companies see great potential in AI, but many challenges remain before it can be exploited. The biggest is that the reliability of the data is not yet high enough.
This emerges from a report by Microsoft and PwC, based on interviews with AI development leaders and experts from 20 private- and public-sector organizations at the forefront of AI in Finland.
The organizations surveyed named insufficient data reliability as the biggest obstacle to using AI (80% of respondents). Further challenges are that the technologies are not yet mature enough (55%) and that AI skills are insufficient (55%).
Responsibility for artificial intelligence projects most often lies on the shoulders of individual experts, and only a few organizations have a clear model for developing successful solutions and deploying them more widely.
Source: http://www.etn.fi/index.php/13-news/7880-data-ei-ole-viela-tarpeeksi-luotettavaa-tekoalyyn
Tomi Engdahl says:
https://info.microsoft.com/WE-AzureDS-CNTNT-FY18-04Apr-17-UncoveringAIinFinland-MGC0002305_01Registration-ForminBody.html
Tomi Engdahl says:
Finland is still in the early stages of artificial intelligence
News – 04/20/2018
Finnish organizations have excellent opportunities to take advantage of artificial intelligence, according to a survey by consultancy PwC. The key is the ability to manage and utilize data and to build up staff skills.
Finnish organizations use artificial intelligence largely for experimentation and learning only. Responsibility for AI projects most often lies with individual experts, and only a few organizations have a clear model for developing successful solutions and deploying them more widely.
However, the most successful projects give organizations real business value. This emerges from a recent report by Microsoft and PwC, based on interviews with AI development leaders and experts from 20 private- and public-sector organizations at the forefront of AI in Finland.
“Among the crowd are individual successful flagship projects that show artificial intelligence is already producing real value today,” says Petri Salo, leader of PwC’s Digital Services team.
The organizations surveyed identified the biggest challenges in the use of AI: the reliability of data is not yet high enough (80% of respondents), the technologies are not mature enough (55%), and the skills to apply AI are insufficient (55%).
“Artificial intelligence offers enormous potential for Finnish organizations. Now, at the latest, is a good time to start working with AI to achieve competitive advantages and benefits,” says Pekka Horo, CEO of Microsoft Finland.
Source: https://www.uusiteknologia.fi/2018/04/20/selvitys-suomi-vasta-tekoalyn-alkutaipaleella/
More:
Uncovering artificial intelligence in Finland
http://kampanja.pwc.fi/julkaisu/artificial-intelligence-in-finland
Tomi Engdahl says:
4 Experiments Where the AI Outsmarted Its Creators | Two Minute Papers #242
https://www.youtube.com/watch?v=GdTBqBnqhaQ
The paper “The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities” is available here: https://arxiv.org/abs/1803.03453
Tomi Engdahl says:
Building AI systems that work is still hard
https://techcrunch.com/2018/01/01/building-ai-systems-that-work-is-still-hard/?utm_source=tcfbpage&sr_share=facebook
Martin Welker
Jan 2, 2018
Martin Welker is the chief executive of Axonic.
Even with the support of AI frameworks like TensorFlow or OpenAI, building artificial intelligence still requires far deeper knowledge and understanding than mainstream web development. If you have built a working prototype, you are probably the smartest person in the room. Congratulations, you are a member of a very exclusive club.
With Kaggle, you can even earn decent money by solving real-world projects. All in all, it is an excellent position to be in, but is it enough to build a business? You cannot change market mechanics, after all. From a business perspective, AI is just another implementation for existing problems. Customers do not care about implementations; they care about results. That means using AI alone does not set you apart. When the honeymoon is over, you have to deliver value. Long-term, only customers count.
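To give a sense of what even the smallest "working prototype" involves, here is a purely illustrative sketch (not from the article): a two-layer neural network learning XOR in plain NumPy, with the forward pass and backpropagation written out by hand.

```python
import numpy as np

# Toy "working prototype": a two-layer network learning XOR in plain NumPy.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer: 8 units
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    p = sigmoid(h @ W2 + b2)              # output probabilities
    losses.append(float(-(y * np.log(p + 1e-9)
                          + (1 - y) * np.log(1 - p + 1e-9)).mean()))
    dp = p - y                            # gradient of BCE w.r.t. output logits
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)       # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad               # gradient-descent step

print((p > 0.5).astype(int).ravel())      # should approach [0 1 1 0]
```

Even this toy already demands calculus, numerical care (the `1e-9` inside the logs), and tuning choices that a mainstream web stack never asks for, which is the author's point.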
Tomi Engdahl says:
https://www.tivi.fi/Kaikki_uutiset/suomalaiset-kehittavat-tekoalya-joka-koodaa-uusia-sovelluksia-hyvasti-pikkubugit-6721273
Tomi Engdahl says:
https://techcrunch.com/2018/04/18/benevolentai-which-uses-ai-to-develop-drugs-and-energy-solutions-nabs-115m-at-2b-valuation/?utm_source=tcfbpage&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&sr_share=facebook
Tomi Engdahl says:
Mapping the Mind with Artificial Intelligence
https://iq.intel.com/mapping-mind-artificial-intelligence/?sf183744588=1
Princeton University neuroscientists joined forces with Intel computer scientists to map the human mind in real time, developing the next generation in brain imaging analysis.
Tomi Engdahl says:
AI creates ‘Flintstones’ cartoons from text descriptions
https://www.engadget.com/2018/04/15/ai-creates-flintstones-cartoons/
Who needs to draw when a computer can do the work?
Researchers have produced an AI system, Craft, that automatically produces The Flintstones scenes based on text descriptions. The team trained Craft to recognize elements from the classic cartoon by feeding it more than 25,000 three-second clips, each of which included descriptions of who was in the scene and what was happening. From there, the AI only needed a line or two referencing a scene to stitch together characters, backgrounds and props.
Tomi Engdahl says:
Michael Jordan:
A look at the current state of AI, what “AI” refers to, and how focusing on the “human-imitative” aspects of AI ignores opportunities in other research areas
Artificial Intelligence — The Revolution Hasn’t Happened Yet
https://medium.com/@mijordan3/artificial-intelligence-the-revolution-hasnt-happened-yet-5e1d5812e1e7
Artificial Intelligence (AI) is the mantra of the current era. The phrase is intoned by technologists, academicians, journalists and venture capitalists alike. As with many phrases that cross over from technical academic fields into general circulation, there is significant misunderstanding accompanying the use of the phrase. But this is not the classical case of the public not understanding the scientists — here the scientists are often as befuddled as the public. The idea that our era is somehow seeing the emergence of an intelligence in silicon that rivals our own entertains all of us — enthralling us and frightening us in equal measure. And, unfortunately, it distracts us.
Tomi Engdahl says:
AI Photo Translation | Two Minute Papers #243
https://www.youtube.com/watch?v=XcxzKLrCpyk
The paper “Toward Multimodal Image-to-Image Translation” and its source code is available here: https://junyanz.github.io/BicycleGAN/
Tomi Engdahl says:
Lost in Space shows a long-running problem with stories about AI
https://www.theverge.com/2018/4/24/17275856/lost-in-space-netflix-ai-artificial-intelligence-iron-giant
Artificial intelligence is a useful metaphor for humanity, but the way the show simplifies the symbolism does it no favors
Lost in Space is one of many properties that use robots as a way of supporting and mirroring stories about human growth. The way characters choose to treat artificial intelligences is often a leading indicator of how the audience is meant to perceive them, and how their characters will develop. Will, for instance, is clearly a central protagonist, as he immediately refers to the robot as “him” instead of “it,” a person rather than an object. Everyone else takes some time to adjust.
Lost in Space’s AI storyline should feel familiar to anyone even remotely interested in science fiction. The Iron Giant is likely the most straightforward parallel, as it also follows something of a “boy and his dog” structure.
The exploration of AI is a rich narrative field, because so much about it is still a mystery. The kind of AI that populates movies and TV is still far from being developed, and humanity is only beginning to reckon with the ethics and implications of created intelligence.
Blade Runner is likely the best-known example of digging deep into the field, as well as one of the best-executed.
Katsuhiro Otomo’s 2001 animated film Metropolis treads similar territory in terms of using AI to explore how people treat each other — and those they perceive to be “other” — in the pursuit of what they want.
Using AI to parallel and reflect stories about human growth is a common gambit, and it’s easy to see why: AI characters are literally and metaphorically human surrogates, offering a lens through which we assess how we treat anyone different from ourselves. But the plot thread needs sustained attention and commitment to work, precisely because these AIs serve as mirrors for their human counterparts. People are more complicated than simply good or evil.
Tomi Engdahl says:
Machine Learning Invades Embedded Applications
http://www.electronicdesign.com/industrial-automation/machine-learning-invades-embedded-applications?NL=ED-003&Issue=ED-003_20180425_ED-003_186&sfvc4enews=42&cl=article_1_b&utm_rid=CPG05000002750211&utm_campaign=16860&utm_medium=email&elq2=258f6266130a48b386dd45762e9a9ddb
Machine-learning applications on the edge are becoming more common and taking advantage of existing hardware.
Two things have moved deep-neural-network-based (DNN) machine learning (ML) from research to mainstream. The first is improved computing power, especially general-purpose GPU (GPGPU) improvements. The second is wider distribution of ML software, especially open-source software.
Tomi Engdahl says:
Chipmakers seek new edge
http://www.ecns.cn/business/2018/04-24/300159.shtml
Chinese companies well placed to offer solutions tailored to AI, cloud-based IoT
Artificial intelligence and the cloud-based internet of things are two major areas where China’s homegrown chips have a good chance of competing with global players, industry experts said.
“In these two areas, we are roughly at the same position compared with the United States,” said Zhang Jianfeng, chief technology officer of Alibaba Group Holding Ltd.
The remarks came after the internet giant announced on Friday its decision to buy out local chipmaker Hangzhou C-Sky Microsystems to help boost the nation’s self-sufficiency in the sector.
“In light of the ongoing intelligence wave, companies who own enough data and run crucial AI-backed applications would have a competitive edge in producing smart chips,” Zhang said.
Founded in 2001, C-Sky claims to be the only embedded CPU volume provider in China with its own instruction set architecture. The company has thus far shipped 700 million chips globally, said Li Chunqiang, the firm’s vice-general manager.
“Thanks to Alibaba’s rich experience in application scenarios, we are in a good position to deeply integrate technology with real-life industrial needs to take chip design to the next level,” said C-Sky’s Li.
This is because many real-world AI applications, from recognizing objects in images to understanding human speech, require a combination of different kinds of neural networks with different numbers of layers.
According to a national action plan on AI from 2018 to 2020, China has set a target to be able to mass produce neural network processing chips, robots that will make accomplishing daily tasks easier for disabled people, and machine learning that will help radiologists read X-ray scans.
Tomi Engdahl says:
Google Researchers Have a New Alternative to Traditional Neural Networks
Say hello to the capsule network.
https://www.technologyreview.com/the-download/609297/google-researchers-have-a-new-alternative-to-traditional-neural-networks/?utm_campaign=technology_review&utm_source=facebook.com&utm_medium=social
The Download, MIT Technology Review
November 1, 2017
Is there a new way to give machines brains?
AI has enjoyed huge growth in the past few years, and much of that success is owed to deep neural networks, which provide the smarts behind impressive tricks like image recognition. But there is growing concern that some of the fundamental principles that have made those systems so successful may not be able to overcome the major problems facing AI—perhaps the biggest of which is a need for huge quantities of data from which to learn (for a deep dive on this, check out our feature “Is AI Riding a One-Trick Pony?”).
Google’s Geoff Hinton appears to be among those fretting about AI’s future. As Wired reports, Hinton has unveiled a new take on traditional neural networks that he calls capsule networks. In a pair of new papers, one posted on the arXiv and the other on OpenReview, Hinton and a handful of colleagues explain how they work.
Their approach uses small groups of neurons, collectively known as capsules, which are organized into layers to identify things in video or images. When several capsules in one layer agree on having detected something, they activate a capsule at a higher level, and so on, until the network is able to make a judgment about what it sees. Each of those capsules is designed to detect a specific feature in an image in such a way that it can recognize that feature in different scenarios, such as from varying angles.
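To make the "agreement" mechanism concrete, here is a minimal NumPy sketch of the dynamic routing-by-agreement step described in the capsule papers. The shapes, iteration count, and random inputs are illustrative assumptions, not the papers' exact configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(v, axis=-1):
    # Capsule non-linearity: keeps the vector's direction,
    # scales its length into [0, 1).
    norm_sq = (v ** 2).sum(axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * v / np.sqrt(norm_sq + 1e-9)

def route(u_hat, iterations=3):
    """Dynamic routing-by-agreement.
    u_hat: lower capsules' predictions for each upper capsule,
           shape (n_lower, n_upper, dim)."""
    n_lower, n_upper, _ = u_hat.shape
    b = np.zeros((n_lower, n_upper))                # routing logits
    for _ in range(iterations):
        c = softmax(b, axis=1)                      # coupling coefficients
        s = (c[:, :, None] * u_hat).sum(axis=0)     # weighted sum per upper capsule
        v = squash(s)                               # upper-capsule output vectors
        b += (u_hat * v[None, :, :]).sum(axis=-1)   # reward agreement (dot product)
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 3, 4))   # 8 lower capsules, 3 upper capsules, 4-dim poses
v = route(u_hat)
print(v.shape)                        # one pose vector per upper capsule: (3, 4)
```

The dot-product update is the "agreement" step: lower capsules whose predictions align with an upper capsule's output get routed to it more strongly on the next iteration.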
Tomi Engdahl says:
IBM launches open-source library for securing AI systems
https://www.zdnet.com/article/ibm-launches-open-source-library-for-securing-ai-systems/
The framework-agnostic software library contains attacks, defenses, and benchmarks for securing artificial intelligence systems.
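To illustrate the kind of attack such a library packages, here is a minimal NumPy sketch of the Fast Gradient Sign Method (FGSM) against a toy logistic-regression model. The model, weights, and epsilon are illustrative assumptions; this is not the IBM library's API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method: nudge the input x by eps in the
    direction that increases the model's loss on label y."""
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w           # d(binary cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w = rng.normal(size=5)             # toy model weights
b = 0.0
x = rng.normal(size=5)             # a "clean" input
y = 1.0 if sigmoid(x @ w + b) > 0.5 else 0.0   # model's own prediction as the label

x_adv = fgsm(x, y, w, b, eps=0.5)
p_clean = sigmoid(x @ w + b)
p_adv = sigmoid(x_adv @ w + b)
print(p_clean, p_adv)              # the adversarial probability moves away from the label
```

Defenses in such libraries typically counter exactly this kind of perturbation, for example by training on adversarial inputs or by detecting them at inference time.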
Tomi Engdahl says:
Automation, Machine Learning, and AI: Is the Sysadmin Becoming Obsolete?
https://blog.paessler.com/automation-machine-learning-and-ai-is-the-sysadmin-becoming-obsolete?utm_source=facebook&utm_medium=cpc&utm_campaign=Burda-Blog-Global&utm_content=machinelearning