New approaches for embedded development

The idea for this posting started when I read the New approaches to dominate in embedded development article. Then I found some other related articles, and here is the result: a long article.

Embedded devices, or embedded systems, are specialized computer systems that constitute components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in cloud services – and at the same time you need to worry about communication issues like broken connections, latency, and security.

Those issues are especially central to the development of popular Internet of Things devices and to adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already making more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript, etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.

Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as FSMs. However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example, computer networks, modern industrial control systems, the electronics in modern cars, and Internet of Things systems all fall into this category. This implies that a different approach is necessary.

Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards that woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a processor architecture for prototyping and experimentation have already become common. Hardware is becoming powerful and cheap enough that the inefficiencies of platform-based products become moot. Leaders with embedded systems development lifecycle management solutions speak out on new approaches available today in developing advanced products and systems.

Traditional approaches

C/C++

Traditionally, embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you're going to need to learn C. Very many embedded operating systems, including the Linux kernel, are written in C. C can be translated very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C. C is still chosen today over C++ when maximum efficiency matters.

C++ is very much like C, with more features and lots of good stuff, while not having many drawbacks except for its complexity. For years there was a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, which was a real concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and more familiarity with object-oriented languages.

And if you use a suitable C++ subset for coding, you can make applications that work even on quite tiny processors; the Arduino system is an example of that: you're writing in C/C++, using a library of functions with a fairly consistent API. There is no “Arduino language” and your “.ino” files are three lines away from being standard C++.

Today C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example, an embedded Linux system that runs a C++ application. When you write a C or C++ program for modern embedded Linux, you typically use the GCC compiler toolchain for compilation and a makefile to manage the build process.

Most organizations put considerable focus on software quality, but software security is different. While security is a much-discussed topic in today's embedded systems, the security of programs written in C/C++ sometimes becomes a debated subject. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today's Internet-connected systems have their networking functionality written in C, even if the actual application layer is written using some other methods.

Java

Java is a general-purpose computer programming language that is concurrent, class-based, and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers “write once, run anywhere” (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition, it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Some common examples include SIM cards, VoIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.

Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.

Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, …). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers; it is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).

My feeling about Java is that if your embedded systems platform supports Java and you know how to code in Java, then it could be a good tool. If your platform does not have ready Java support, adding it could be quite a bit of work.

 

Increasing trends

Databases

Embedded databases are appearing in more and more embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) which is tightly integrated with application software that requires access to stored data, such that the database system is “hidden” from the application's end user and requires little or no ongoing maintenance.

There are many possible databases. The first choice is what kind of database you need. The main options are SQL databases and simpler key/value stores (also called NoSQL).

SQLite is the database chosen by virtually all mobile operating systems; for example, Android and iOS ship with SQLite. It is also built into the Firefox web browser, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that needs to support SQL commands and does not need to store huge amounts of data (no need to modify a database with millions of rows).
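As a sketch of how simple using an embedded SQL database can be, here is a minimal example using Python's built-in sqlite3 module, which wraps SQLite. The table and column names are made up for illustration:

```python
import sqlite3

# An in-memory database; on a real device you would pass a file path
# like "sensors.db" instead of ":memory:"
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [("temp", 21.5), ("temp", 22.0), ("humidity", 40.0)])
conn.commit()

# Ordinary SQL queries work against the embedded database
avg_temp = conn.execute(
    "SELECT AVG(value) FROM readings WHERE sensor = 'temp'").fetchone()[0]
print(avg_temp)  # 21.75
```

The application links the whole database engine in; there is no separate server process to install or administer, which is exactly what makes it attractive for embedded use.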

If you do not need a relational database and you need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
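The key/value model is much simpler than SQL: the store behaves like a persistent dictionary of byte strings. Python's standard dbm module (whose interface follows the classic Unix/Berkeley-DB-style key/value libraries) gives a feel for the API; the file name and keys here are only illustrative:

```python
import dbm
import os
import tempfile

# Open (and create, "c" flag) a key/value store; keys and values are bytes
path = os.path.join(tempfile.mkdtemp(), "settings.db")
with dbm.open(path, "c") as db:
    db[b"hostname"] = b"gateway-01"
    db[b"interval"] = b"60"

# Reopen read-only and fetch by key - no SQL, no schema
with dbm.open(path, "r") as db:
    hostname = db[b"hostname"].decode()
print(hostname)  # gateway-01
```

With no query language or schema to maintain, lookups are just hash or B-tree accesses, which is where the performance advantage over a relational engine comes from.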

RTA (Run Time Access) gives easy runtime access to your program's internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (the PostgreSQL bindings for C and PHP work, as does the command-line tool psql), but instead of a normal database file you are actually accessing the internals of your software.

Software quality

Building quality into embedded software doesn't happen by accident; quality must be built in from the beginning. The article Software startup checklist gives quality a head start is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.

Safety

Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built using software, including safety-critical functions. Software-intensive embedded systems require new approaches to safety. The article Embedded Software Can Kill But Are We Designing Safely? asks exactly that question.

IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group's recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, 22 percent stated that they are not, and 11 percent did not even know whether they were designing to a standard or not.

If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in the design of the device, how worried would you be? I know I would be anxious, and quite frankly, this is quite disturbing.

Security

The advent of low-cost wireless connectivity is altering many things in embedded development – it has added communication issues like broken connections, latency, and security to your list of worries. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security.

The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:

MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY

Many organizations are only now becoming aware of the need to incorporate security into their software development lifecycle.

Some techniques for building security into embedded systems:

Use secure communications protocols and use VPN to secure communications
The use of Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust”
Process separation to partition critical code and memory spaces
Leveraging safety-certified code
Hardware enforced system partitioning with a trusted execution environment
Plan the system so that it can be easily and safely upgraded when needed
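To make the code-authentication idea in the list above concrete, here is a deliberately simplified sketch: a boot loader refuses to run an image whose cryptographic tag does not verify. Real chains of trust use public-key signatures (PKI) and hardware roots of trust rather than the shared-secret HMAC used here, and the device key and firmware bytes are made-up placeholders:

```python
import hashlib
import hmac

# Illustrative only: a secret key provisioned into the device at
# manufacturing time (real systems would use asymmetric keys / PKI)
DEVICE_KEY = b"example-device-key"

def sign_image(image: bytes) -> bytes:
    """Compute the tag shipped alongside a firmware image."""
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def verify_image(image: bytes, tag: bytes) -> bool:
    """Boot-time check: only accept an image with a valid tag."""
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(sign_image(image), tag)

firmware = b"\x7fELF...application code..."
tag = sign_image(firmware)

print(verify_image(firmware, tag))                # True
print(verify_image(firmware + b"tampered", tag))  # False
```

The point is the pattern, not the primitives: nothing executes until its integrity and origin have been checked against something the device already trusts.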

Flood of new languages

Rather than living and breathing C/C++, the new generation prefers more high-level, abstract languages (like Java, Python, JavaScript, etc.). So there is a huge push to use interpreted and scripting languages in embedded systems as well. Increased hardware performance on embedded devices combined with embedded Linux has made many scripting languages good tools for implementing different parts of embedded applications (for example, a web user interface). Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. There are also many other relevant languages.

One workable solution, especially for embedded Linux systems, is to implement part of the functionality as a C program and part in scripting languages. This makes it possible to change the system's behavior simply by editing script files, without having to rebuild the whole system software. Scripting languages are also tools with which, for example, a web user interface can be implemented more easily than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.

Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example industrial systems, telecommunications equipment, IoT gateways, etc. Some scripting languages work well even in quite small embedded environments.

I have successfully used, among others, the Bash, AWK, PHP, Python, and Lua scripting languages with embedded systems. They work really well and make it really easy to produce custom code quickly. They don't require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used.

High-level, dynamically typed languages such as Python, Ruby, and JavaScript are easy – and even fun – to use. They lend themselves to code that can easily be reused and maintained.

There are some things that need to be considered when using scripting languages. Sometimes the lack of static checking, compared with a regular compiler, causes problems to surface only at run time, but you are better off practicing “strong testing” than relying on strong typing. Another downside of these languages is that they tend to execute more slowly than compiled languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn't.
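The productivity claim is easy to believe once you see how compactly a high-level language handles the kind of string-manipulation-plus-dictionary-search task the study describes. A word-frequency count, which takes dozens of lines of manual memory and string handling in C, is a few lines of Python:

```python
# Count word frequencies in a piece of text - the kind of string
# manipulation and dictionary search task the cited study refers to
text = "the quick brown fox jumps over the lazy dog the end"

counts = {}
for word in text.split():
    counts[word] = counts.get(word, 0) + 1

print(counts["the"])  # 3
```

The language's built-in string splitting and hash-table dictionaries do all the bookkeeping that a C programmer would have to write and debug by hand.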

Bash and other shell scripting

Shell commands are the native language of any Linux system. With the thousands of commands available to the command-line user, how can you remember them all? The answer is: you don't. The real power of the computer is its ability to do the work for you, and the power of the shell script is the way it lets you easily automate things by writing scripts. Shell scripts are collections of Linux command-line commands that are stored in a file. The shell can read this file and act on the commands as if they were typed at the keyboard. In addition, the shell provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regex, etc.), so your scripts can be truly powerful. Creating a script is extremely straightforward: it can be done in a graphical editor or in a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example, starting and stopping different Linux services in the right way).

One of the most useful tools when developing within a Linux environment is shell scripting. Scripting can help in setting up environment variables, performing repetitive and complex tasks, and ensuring that errors are kept to a minimum. Since scripts are run from within the terminal, any command or function that can be performed manually from a terminal can also be automated!

The most common type of shell script is a Bash script, Bash being the most commonly used shell for scripting. In Bash scripts users can use more than just Bash to write the script: there are commands that allow users to embed other scripting languages into a Bash script.

There are also other shells. For example, many small embedded systems use BusyBox. BusyBox is software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android, and FreeBSD. BusyBox has become the de facto standard core userspace toolset for embedded Linux devices and Linux distribution installers.

Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.

Lua

Lua is a lightweight, cross-platform, multi-paradigm programming language designed primarily for embedded systems and clients. Lua was originally designed in 1993 as a language for extending software applications, to meet the increasing demand for customization at the time. It provides the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications and provides a C API for this purpose.

Lua has found uses in many fields. For example, in video game development Lua is widely used as a scripting language by game programmers. The Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.

There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open-source hardware platform which can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it to be a very nice system.

PHP

PHP is a server-side, HTML-embedded scripting language. It provides web developers with a full suite of tools for building dynamic websites, but it can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. In such an environment it is a good idea to take advantage of those built-in features for what they are good at – building a web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems, and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.

Python

Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.

Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming. Many other paradigms are supported using extensions, including design by contract and logic programming.
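A quick sketch of what multi-paradigm means in practice: the same small summing problem solved in an object-oriented style and then in a functional style, side by side:

```python
from functools import reduce

# Object-oriented style: state and behavior bundled in a class
class Counter:
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value
        return self.total

c = Counter()
for v in (1, 2, 3):
    c.add(v)
print(c.total)  # 6

# Functional style: the same result with no mutable state
total = reduce(lambda acc, v: acc + v, (1, 2, 3), 0)
print(total)  # 6
```

Neither style is privileged by the language; you can pick whichever fits the part of the system you are writing.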

Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN, and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.

Python was designed to be highly extensible. Libraries like NumPy, SciPy, and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications and has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.
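A Python web application behind mod_wsgi boils down to a callable with the standard WSGI signature. Here is a minimal sketch; because the interface is just a function, it can even be exercised without any web server by calling it directly with a fake environ, as done below (the response text is of course made up):

```python
def application(environ, start_response):
    """Minimal WSGI application, e.g. for mod_wsgi under Apache."""
    status = "200 OK"
    body = b"Hello from an embedded device"
    headers = [("Content-Type", "text/plain"),
               ("Content-Length", str(len(body)))]
    start_response(status, headers)
    return [body]

# Exercise the app directly - no server needed for a smoke test
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

result = b"".join(application({}, fake_start_response))
print(captured["status"], result)
```

The same callable runs unchanged under mod_wsgi, a standalone WSGI server, or a test harness, which is the portability the WSGI standard was designed for.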

Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.

For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python, and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. MicroPython and tinypy have also ported Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level, open-sourced language: Python.

Python on embedded platforms? It is quick to develop apps and quick to debug – it is really easy to make custom code quickly. Sometimes the lack of static checking, compared with a regular compiler, can cause problems to be thrown at run time; to avoid those, try to have 100% test coverage. pychecker is also a very useful tool which will catch quite a lot of common errors. The only downsides for embedded work are that Python can sometimes be slow and it sometimes uses a lot of memory (relatively speaking). An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary. Memory consumption was often “better than Java and not much worse than C or C++”.

JavaScript and node.js

JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it's so incredibly easy to learn. JavaScript's reputation for providing users with beautiful, interactive websites isn't where its usefulness ends. Nowadays it's also used to create mobile applications and cross-platform desktop software, and thanks to Node.js it's even capable of creating and running servers and databases! There is a huge community of developers.

Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.

Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.

It doesn’t require a complicated IDE; all you really need is a terminal.

JavaScript is a high-level language. While this usually means that it's more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower, which means it may not be suitable for situations where timing and speed are critical.

JavaScript is already on embedded boards. You can run JavaScript on the Raspberry Pi and the BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript; the Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager; and the Kinoma Create is dubbed the “JavaScript powered Internet of Things construction kit.” The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!

JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. We see, for example, a surprising number of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, using a simplified model of event-driven programming with callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google's V8 JavaScript engine. Node.js allows the creation of web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionality. Node.js' package ecosystem, npm, is the largest ecosystem of open-source libraries in the world. Modern desktop IDEs provide editing and debugging features specifically for Node.js applications.

JXcore is a fork of Node.js targeting mobile devices and IoT devices. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!

Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it's standardized, event-driven, and has very high productivity; it's dynamically typed, which makes it faster to write – perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. IoT projects typically require “things” to communicate with other “things” or applications. The huge number of modules available for Node.js makes it easier to build interfaces – for example, the HTTP module lets you easily create an HTTP server that maps GET requests for specific URLs to your software's function calls. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.

Future trends

According to New approaches to dominate in embedded development article there will be several camps of embedded development in the future:

One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require fine tuning. These are most likely to be high-performance, low-volume systems, or else fixed-function, high-volume systems where cost is everything.

Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.

A third camp is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.

The transition will take time. The environment is different from the computer and mobile worlds: there are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.

But the shift will happen as hardware becomes powerful and cheap enough that the inefficiencies of platform-based products become moot.

 

Sources

Most important information sources:

New approaches to dominate in embedded development

A New Approach for Distributed Computing in Embedded Systems

New Approaches to Systems Engineering and Embedded Software Development

Lua (programming language)

Embracing Java for the Internet of Things

Node.js

Wikipedia Node.js

Writing Shell Scripts

Embedded Linux – Shell Scripting 101

Embedded Linux – Shell Scripting 102

Embedding Other Languages in BASH Scripts

PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog

PHP

Python (programming language)

JavaScript: The Perfect Language for the Internet of Things (IoT)

Node.js for Embedded Systems

Embedded Python

MicroPython – Embedded Python

Anyone using Python for embedded projects?

Telit Programming Python

MICROCONTROLLERS AND NODE.JS, NATURALLY

Why node.js?

Node.JS Appliances on Embedded Linux Devices

The smartest way to program smart things: Node.js

Embedded Software Can Kill But Are We Designing Safely?

DEVELOPING SECURE EMBEDDED SOFTWARE

 

 

 

1,458 Comments

  1. Tomi Engdahl says:

    This is why Linux is so good in embedded

    Linux dominates in embedded systems, but the different platforms threaten to fragment it. Fortunately, the Yocto architecture has been developed to help: it allows applications to be moved from one hardware platform to another with minimal changes.

    As the complexity of embedded systems has increased, the key factors in accelerating a development project have changed. The importance of low-cost, full-featured development boards, a complete software ecosystem, and high-quality technical support has grown substantially. Ten years ago, manufacturers were able to provide customers with simple development packages for microcontrollers and embedded processors with very little software support. In general, customer applications were such that all of the software was developed in-house.

    Now, modern telecommunications, user interface, and remote control requirements make it very difficult to develop embedded system software entirely on your own. A lot of code must be integrated from other sources. This means extra work, as developers need to learn how the various software components fit together, and some components may have to be replaced because of incompatibilities. As a result, there is increased demand for platforms that provide adequate infrastructure in a ready-to-use form.

    Open source technology is one solution to this problem. It brings a lot of opportunities for developers of embedded systems.

    The key to this integration is the Linux operating system, around which the open-source platforms have been developed.

    Linux is highly scalable and its software infrastructure is growing rapidly, so OEM companies can develop, debug and tune their designs quickly and get them to market fast. There are open source tools that can be used to maintain the code and handle version control, as well as graphical development environments for embedded systems that now require advanced user interfaces, for example ones built on capacitive touch screens.

    However, the Linux environment is complex.

    The Yocto Project was founded by embedded hardware and software companies in 2010 in response to the growing fragmentation of Linux in embedded applications. Instead of developing one monolithic version, which would be difficult to tune to the requirements of the many popular embedded platforms, Yocto supports modular customization through a layered architecture designed to minimize incompatibilities between different configurations.

    Yocto’s configurability is based on the BitBake tool. It is a build tool that uses metadata files to define the configuration of the kernel, the system and its related applications, and the application software that make up the final system image.

    A transition to Yocto-based software development can succeed with the insertion of a single line into the build configuration, which selects the right BSP (board support package).
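    As a rough illustration of that one-line BSP selection (the machine name below is just an example, not tied to any particular project):

```conf
# conf/local.conf (excerpt) – pick the target board; the matching
# BSP layer then supplies the kernel, bootloader and machine settings.
MACHINE = "beaglebone-yocto"
```

    Changing only this variable and rebuilding retargets the same application image at different hardware.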

    Using an application framework such as Qt, this portability can be taken even further. For example, the major part of application development can be done on a desktop PC and then transferred directly to Qt running on the embedded system with minimal code changes. This allows the user interface to be prototyped extensively before the final installation is complete.

    However, Yocto is not a complete solution for developing embedded systems, because it lacks an integrated development environment (IDE). This is where a second open source solution, Eclipse, comes to the rescue.

    Source: http://www.etn.fi/index.php/tekniset-artikkelit-2/5865-taman-takia-linux-on-niin-hyva-sulautetuissa

    Reply
  2. Tomi Engdahl says:

    Reports of the death of the RTOS have been exaggerated
    https://www.mentor.com/embedded-software/blog/post/reports-of-the-death-of-the-rtos-have-been-exaggerated-cb12f58e-3baf-438b-a1f8-6ddc875d7df0?contactid=1&PC=L&c=2017_02_28_esd_newsletter_update_v2

    Crystal ball gazing is, I feel, commonly a foolhardy activity. So often, I have heard so-called experts making complete idiots of themselves with their perspectives on a future that seemed unlikely at the time and turns out to be completely wrong in every detail. The world of embedded software is no different. Every few years a new fashionable technology is talked about everywhere, with predictions of the world changing completely, but it never quite happens.

    There are, it seems to me, three “alternative”, or at least contentious, “facts” to consider:

    1. the idea of an RTOS is anachronous, as we now have very fast, powerful processors that just get the job done
    2. the provision of OS products by chip vendors is definitely a good thing for developers
    3. using open source OSes puts all the work/risk on the developers

    Taking these in turn:

    The market for commercial RTOS products is now around 30 years old, but similar technology has been in use somewhat longer.

    Fast forward to 2017 and things have not necessarily changed. Many systems still have limited resources. They may be considerably more powerful than anything in the 1980s, but embedded systems still often have just enough memory and CPU power to get the job done. Mostly these constraints are driven by the need to minimise power consumption and production cost. An RTOS still gives a means to optimise resource usage.

    There are a number of mainstream, widely-used RTOS products on the market, which are supplied and supported by independent software companies, who are not tied to any specific silicon. Mentor Embedded’s Nucleus is a very good example.

    Reuse of existing IP and skills is very wise, where that is optimal. If you have a choice between reuse of hardware design or software, which should you choose? The answer is easy: the software is by far the bigger job, so this IP/expertise is most valuable. Reusing software that ties you to specific silicon is not an acceptable constraint.

    Many people favor the use of open source operating system products. Although they should never be considered to be “free” [as I wrote about recently], they may be a good choice for many applications. Although a “roll your own” approach may be taken – i.e. download the code and perform all the integration etc. yourself – this is rarely a wise move, as many vendors can supply all the support required to successfully deploy the chosen OS and mitigate risks.

    Reply
  3. Tomi Engdahl says:

    The deep blue, I mean, the deep Azure sky before me
    https://www.mentor.com/embedded-software/blog/post/the-deep-blue-i-mean-the-deep-azure-sky-before-me-6add59a5-e276-4408-8a4b-30b67cde748e?contactid=1&PC=L&c=2017_02_28_esd_newsletter_update_v2

    Businesses are indeed implementing various IoT systems and collecting data from the devices in those systems. Of course, they’ve been doing this for some time now, but today’s technology enablement and business pressures are pushing them to collect more data, and to use that data in advanced analytics for functions like predictive or prescriptive maintenance – and eventually for machine learning. At the basis of these systems are smart devices. One keynote presenter at ARC made a specific point that the intelligent factories of the future would not be possible without these smart devices, which provide the data and information that enable the advanced analytics. So, it all starts with smart devices.

    Regarding the commercial cloud, it’s very clear that Microsoft Azure is the choice for both plant operators and the manufacturers of the equipment. Azure was mentioned many times in keynotes and sessions. In talking to one veteran editor about our Azure strategy, he simply said “everybody here is using Azure.” While talking about Azure to one of Mentor’s current customers, he told me that “Azure is the only cloud for industrial businesses.”

    One element of our investment is to integrate the Microsoft Azure software development kits (SDKs) with our Mentor Embedded Linux and Nucleus real-time operating system (RTOS) platforms. This integration provides device manufacturers and their downstream customers with integrated and intrinsic connectivity to the Azure cloud.

    Once connectivity is established, data can be pushed seamlessly from the smart edge device to the Azure IoT hub. This connectivity makes the data available to the massive breadth of Microsoft’s cloud services, which can then be leveraged by customer-specific advanced analytics and cloud applications. Why do device manufacturers care? Well, it gets back to the competitive situation: they need to focus their scarce resources on differentiated functionality, reduced risk, and how to get to market quickly.

    In summary, Mentor’s platforms integrated with Microsoft Azure SDKs combined with our ability to provide deeply embedded device information to customers makes the smart device even smarter. Because Microsoft Azure is the cloud of choice for the industrial automation market, we are strategically aligned to help our customers and their downstream customers be more successful in the realization phase of their digitalized systems and business models.

    Reply
  4. Tomi Engdahl says:

    LibManuvr
    A multi-platform asynchronous operating system wrapper.
    https://hackaday.io/project/4795-libmanuvr

    LibManuvr was intended to make it easier to write asynchronous, distributed, cross-platform IoT applications.

    It was meant to wrap various RTOS’s, and itself function as the hardware abstraction layer if no operating system is involved.

    The latest capability to be added to ManuvrOS is a uniform cryptographic layer with test-coverage. I have plans to make this asynchronous and pluggable-at-run-time to facilitate non-blocking hardware access for TPM modules and secure storage.

    Mainline repo:

    https://github.com/Manuvr/ManuvrOS

    Reply
  5. Tomi Engdahl says:

    ASIL D Requires Precision
    While the highest level of automotive safety requires precision in many ways, the path there is still fuzzy.
    http://semiengineering.com/asil-d-requires-precision/

    It seems the entire world is abuzz with the excitement surrounding autonomous driving, and while more driver assist features are added to new vehicles all the time, this is tempered by the fact that there is still much work to be done when it comes to safety.

    For developers across the automotive ecosystem, safety comes down to the Automotive Safety Integrity Level (ASIL) risk classification scheme as defined by the ISO 26262 – Functional Safety for Road Vehicles standard.

    ASIL D — the abbreviation of Automotive Safety Integrity Level D, which refers to the highest classification of initial hazard (injury risk) defined within ISO 26262 — is required for Level 5 full autonomous driving. And this level of stringency demands precision in design of hardware and software.

    Reply
  6. Tomi Engdahl says:

    Debug Is About To Get Really Interesting Again
    Running trace and in-field debug through I/O is long overdue.
    http://semiengineering.com/debug-is-about-to-get-really-interesting-again/

    One of the great unheralded chapters in the history of electronics design is debug. After all, where there have been designs, there have been bugs. And there was debug, engaged in an epic wrestling match with faults, bugs and errors to determine which would prevail.

    The typical development flow was to write your code in ASM or C and get it compiled, linked and located so that you ended up with a HEX file for the ROM image.

    You plugged the EPROMs back into the target and powered it up to see if your program worked. If your program didn’t function as expected, you had some options to debug your code:

    Code inspection. You would walk through your code staring long and hard at it looking for errors. Some people still use this approach, especially those who view the use of any debugging tool as a failure of programming skill.
    LEDs (Stare at them. Really). You could determine the path through your code by modifying the code to signal a state at significant places in the code. You could then just look at the LEDs to see the progress (or often lack of progress) through your code.
    On-target monitor: Step through your code at the assembly level. Remember assembly code?
    In-circuit emulator: The ultimate debug tool, if you could afford it.

    Over the decades, debug circuitry migrated on chip, making things a little easier, a little faster. But external trace started to strain under design complexity and increasing CPU clock speeds. Through the 1990s, various techniques were used to maintain the effectiveness of trace-based systems, such as compressing the trace data so that it could be transported more easily off chip.

    In the early 2000s, ARM introduced the Embedded Trace Buffer (ETB), accessible via JTAG and configurable in size to hold the trace data. This solved the issue of having to provide a relatively high speed off-chip trace port (though still nowhere near core clock speed) at the expense of using silicon area in the SoC.

    Then came the ARM Embedded Logic Analyser (ELA), which enabled access to complex on-chip breakpoints, triggers and trace with access to internal SoC signals.

    Today, we’re on the cusp of a new era in debug, one in which engineers can do debug and trace over functional I/O.

    Tipping point
    Anders Lundgren, Product Manager at IAR Systems, put it best: “We are about to see new ways to access debug capabilities, from JTAG or no debug connectivity, to debug and trace protocols running over existing interfaces,” he said in an interview.

    ARM has announced CoreSight SoC-600, which implements the latest ARM debug and trace architecture to provide developers with high-throughput trace and in-field debug accessibility through existing functional I/O.

    Robert Redfield, director of business development at Green Hills Software, noted that it’s not uncommon for customers to have a successful development campaign, but later run into system performance issues. Developers often mistakenly think they have large-scale performance analysis capabilities with ETM (Embedded Trace Macrocell) and ETB (Embedded Trace Buffer). This is not the case. ARM has now resolved this, with higher trace bandwidth capabilities from the new CoreSight SoC-600 IP.

    ARM CoreSight SoC-600
    https://developer.arm.com/products/system-ip/coresight-debug-and-trace/coresight-components/coresight-soc-600

    The new Debug Access Port (DAP) architecture introduces standard APB connectivity between Debug Port (DP) and Access Port (AP), making it possible to have multiple DPs connected to multiple APs.

    Reply
  7. Tomi Engdahl says:

    Why TSN is so Important for Embedded This Year
    http://forums.ni.com/t5/NI-Blog/Why-TSN-is-so-Important-for-Embedded-This-Year/ba-p/3585075?cid=Facebook-70131000001RoznAAC-Northern_Region-SF_TSNImp_EW

    A roadblock for the IIoT: non-converged networks

    The solve: time-sensitive networking (TSN)

    Didier: We worked on a way to converge the two successfully, using TSN – which allowed IT and OT data flows to flow together in a converged network undisturbed.

    Three primary capabilities of TSN

    Precise time synchronization: accomplished via the IEEE 802.1AS standard, based on IEEE 1588, the precision time protocol that has been around for about 10 years.

    Traffic scheduling: Combining time synchronization with traffic management, to deliver latency guarantees. Each switch identifies the time-critical data and places it in a special queue. The switch then forwards each of these packets at very specific times.

    System configuration: Taking a page out of software-defined networking (SDN), this abstracts the network configuration from the end application configuration.
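    The traffic-scheduling idea above can be sketched as a toy two-queue egress port (this is only an illustration of the queuing policy, not real TSN, which is implemented in Ethernet hardware with time-gated queues):

```python
from collections import deque

# Illustrative egress port: "express" models the time-critical TSN
# traffic class, "best_effort" models everything else.
express = deque()
best_effort = deque()

def enqueue(frame: dict) -> None:
    # The switch classifies each frame into its queue on arrival.
    (express if frame.get("critical") else best_effort).append(frame)

def forward_cycle():
    """Forward one frame per scheduling cycle; express traffic always
    goes first, which is what bounds its latency."""
    if express:
        return express.popleft()
    if best_effort:
        return best_effort.popleft()
    return None

enqueue({"id": 1, "critical": False})
enqueue({"id": 2, "critical": True})
first = forward_cycle()   # the critical frame is forwarded first
```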

    Reply
  8. Tomi Engdahl says:

    Can this help embedded engineers keep pace with IoT challenges?
    http://www.electropages.com/2017/03/can-this-help-embedded-engineers-keep-pace-with-iot-challenges/?utm_campaign=&utm_source=newsletter&utm_medium=email&utm_term=article&utm_content=Can+this+help+embedded+engineers+keep+pace+with+IoT+challenges%3F

    We’re all subjected to the relentless hyperbole expounding the theory that the Internet of Things (IoT) will revolutionise connectivity and make the world a better, more intelligent and efficient place to be in.

    Whether or not you believe all or some of that, there is no denying that the embedded hardware and software technologies that can make this apparent epidemic of connectivityitus a reality are seeing massive market growth, with predictions that the market will exceed US$260 billion by 2023. Prime drivers of that growth are the IoT, medical, automotive and domestic applications.

    However, it’s not all sunshine and rainbows for the design engineers who have to make the technology work in those applications. The fundamental problem is that engineering organisations are struggling to keep pace with the escalating rate of IoT design demands.

    Engineers are constantly faced with questions when it comes to choosing components. Are they compatible with the same OS and how should they be configured? Will they interoperate and will combining certain embedded products meet validation and certification requirements?

    Getting these answers right is essential if embedded projects involving IDEs, compilers, debuggers, trace tools, test tools, debug and flash programming hardware, and target operating systems are going to be successfully concluded.

    So can the news that a bunch of companies in the embedded tools industry have got together to form the Embedded Tools Alliance (ETA) bring much needed design solace to pressurised embedded system engineers?

    Well according to new kid on the embedded block, ETA, the embedded marketplace is fragmented with a huge number of suppliers. Some large companies try to offer every component required. This approach stagnates innovation says ETA and provides limited choice and doesn’t allow customers to choose best-in-class solutions to address their project’s specific needs. Strong words.

    Reply
  9. Tomi Engdahl says:

    Security Teams Need to Understand How Developers Tools Work
    http://www.securityweek.com/security-teams-need-understand-how-developers-tools-work

    Understanding Development Work Practices Allows Security Teams to Speak to Developers Using Terms They Understand

    Buckminster Fuller famously said that giving people a tool will shape the way they think. Similarly, when it comes to development teams, understanding how development tools work can provide a valuable window into the developer’s thought process. Security teams can use these insights to better advance their agendas and get vulnerabilities detected and fixed faster.

    Security teams understand the risk associated with fielding vulnerable applications, but they need the support of the DevOps team to build secure applications and address identified security issues.

    How Do Developers Track Their Workload?

    Developers typically track their work load in defect tracking or change management systems such as Atlassian JIRA, Bugzilla, HPE Application Lifecycle Management (ALM) and IBM ClearCase.

    A key difference between security and development teams is that security professionals care about vulnerabilities and developers care about bugs. The critical point for security teams to understand is that developers will likely not care about vulnerabilities until those vulnerabilities are being tracked in their bug tracking system.

    For security teams to make progress in addressing vulnerabilities, their first priority should involve getting the vulnerabilities they care about translated into defects or changing requests that the development team will track. This typically requires a conversation between security teams and a representative from the development team, but crossing this boundary helps to remove friction from the remediation process.

    Where Do Developers Spend Most of Their Time?

    Developers spend a tremendous amount of their time in their Integrated Development Environments (IDEs). This is where they write and test code and save code updates back to version tracking systems. Common IDEs include Microsoft Visual Studio, Jetbrains IntelliJ, and Eclipse. The objective for security teams is to get information about application security integrated into these environments to further reduce friction from the remediation process. If developers can track down the location of vulnerabilities in code and receive guidance on addressing these vulnerabilities without leaving their IDE, they can fix vulnerabilities faster.

    How Do Developers Test Their Code?

    A critical shift that occurs as development teams move to embrace Agile methodologies and DevOps practices is the move toward automated testing.

    A common unit testing toolset is the xUnit framework and a common tool for building functional and regression test suites for web applications is Selenium. Security representatives would do well to look at the sort of automated verification approaches that development teams are using and look for opportunities to extend those checks to involve security.
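    As a sketch of how security checks can ride along in an ordinary xUnit-style suite (using Python’s built-in unittest here; the validate_username function and its rules are invented for illustration):

```python
import re
import unittest

def validate_username(name: str) -> bool:
    """Hypothetical input validator: letters, digits, underscore, 3-16 chars."""
    return re.fullmatch(r"[A-Za-z0-9_]{3,16}", name) is not None

class UsernameTests(unittest.TestCase):
    def test_accepts_normal_name(self):
        # Ordinary functional check
        self.assertTrue(validate_username("alice_01"))

    def test_rejects_script_injection(self):
        # Security-flavored check living in the same suite
        self.assertFalse(validate_username("<script>alert(1)</script>"))

    def test_rejects_sql_metacharacters(self):
        # Another security check the same automation runs on every build
        self.assertFalse(validate_username("bob'; DROP TABLE users;--"))

# Run the suite programmatically, as a CI step would
suite = unittest.defaultTestLoader.loadTestsFromTestCase(UsernameTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```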

    How Do Developers Automate and Orchestrate Common Processes?

    There are several steps typically involved in a development team taking the latest results from their development, turning that into a new software build, and then determining if that build is of an acceptable level of quality to consider releasing. Development teams use automation servers to coordinate the continuous integration, deployment, and delivery processes with common examples such as Jenkins and Atlassian Bamboo.

    Integrating application security testing into CI/CD pipelines can be a huge win for security teams looking to decrease the number of vulnerabilities that get deployed to production.

    What Metrics Do Developers Track and How Do They Track Them?

    Many development team metrics – such as the time required to fix bugs – can be reported by the defect tracking systems. Some teams use additional tracking systems such as SonarQube to track code-level measurements like technical debt and defect density.

    Reply
  10. Tomi Engdahl says:

    Linux dominates the embedded world

    Embedded World, Nürnberg – inside most embedded devices today you will find the operating system of Finnish origin. Linux’s position is clearly visible in the Nuremberg exhibition halls.

    Even a few years ago Linux was not considered the best choice in many applications that require real-time behaviour. Now real-time Linux is offered by dozens of companies at the fair.

    Linux has the advantage of platform versatility: on top of the kernel you can build general-purpose control solutions as well as solutions tailored very closely to a specific application.

    Another advantage of Linux is security. Linux bends well to applications where security features – for example, storage and protection of public keys – have been implemented at the device level.

    The strength of the Linux kernel is precisely its openness and flexibility. Linus Torvalds has sometimes said that the kernel will probably have to gain more security features, but that this will not be done in a way that slows down or compromises kernel development.

    Currently, implementing security is the responsibility of Linux vendors and suppliers. This is not an obstacle but above all an opportunity. Titanium Control, the new industrial control platform from Wind River (now part of Intel), is a good example of this.

    Titanium allows businesses to upgrade even 20-year-old control systems to meet current needs. Device/node control runs on an OpenStack stack, computation takes place on Wind River’s Industrial Grade Linux, and data is stored in the cloud. In practice this means Linux at every level.

    According to Wind River’s strategy director Gareth Noyes, this is the only way to bring old control systems into the IoT world.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=6014&via=n&datum=2017-03-16_15:15:15&mottagare=30929

    Reply
  11. Tomi Engdahl says:

    IoT applications require some development expertise that is significantly different from those required in a traditional embedded application. For example, the UI will typically reside on a mobile device rather than on the device itself. Most obviously, the device will need to connect to an IoT platform, which in turn will collect and analyze data.
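    As a toy sketch of the device side of that connection – packaging sensor readings into a message for an IoT platform (the JSON field names below are made up, not any particular platform’s schema):

```python
import json
import time

def make_telemetry(device_id: str, readings: dict) -> str:
    """Package sensor readings as a JSON telemetry message.
    Field names are illustrative; a real platform dictates its own schema."""
    msg = {
        "deviceId": device_id,
        "timestamp": int(time.time()),  # seconds since epoch
        "readings": readings,
    }
    return json.dumps(msg)

# A device would hand this string to its transport (MQTT, HTTPS, ...)
payload = make_telemetry("sensor-42", {"tempC": 21.5, "humidity": 40})
```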

    Reply
  12. Tomi Engdahl says:

    Sequential Programming Considered Harmful?
    Russ Miller wants computer science students to think in parallel from the get-go
    http://spectrum.ieee.org/at-work/education/sequential-programming-considered-harmful

    Reply
  13. Tomi Engdahl says:

    IoT Edge Design Demands A New Approach
    http://semiengineering.com/iot-edge-design-demands-a-new-approach/

    A low-cost proof-of-concept is necessary in designing IoT edge devices.

    A new breed of designers has arrived that is leveraging the advances in sensing technology to build the intelligent systems at the edge of the IoT.

    These systems play in every space: on your body, at home, in the car or bus that you take to work, and in the cities, factories, office buildings, or farms where you work. The energy that you consume and how you travel, by air, land, or sea, all have IoT edge solutions being developed. And space probes, telescopes, and satellites explore the far edges of the universe.

    The widely-dispersed edge of the IoT and the thousands of small, innovative design teams working there are enabling the rapid development of the IoT.

    Who are the new breed of designers? They work in small teams, collaborate online, and they require affordable design tools that are easy to use and quickly produce results. Their goal is to deliver a functioning device to their stakeholders while spending as little money as possible to get there. Many work for companies that don’t have millions of dollars for traditional design tools, don’t have the time or desire to deal with the overhead of a central CAD department, work in a small company with very limited resources, or are one of the many new startups in this space. These teams all have one thing in common: they require the capability to develop a proof-of-concept for system validation in order to capitalize on this enormous opportunity. Even with the huge potential, the edge is very cost-sensitive, requiring a very low-cost proof-of-concept.

    ARM offers the DesignStart portal that allows designers fast and easy access to a trial selection of ARM products without charge. In addition, Mentor Graphics provides the Tanner EDA design tools for free evaluation and ARM offers approved design partners for SoC development help.

    For your project, the portal offers the ARM Cortex-M0 processor that you can download and use for design and simulation without charge. This is the ideal solution to your rapid proof-of-concept project. The ARM Cortex-M0 is a low-power 32-bit processor with a small footprint.

    ARM DESIGNSTART
    https://www.arm.com/develop/designstart?utm_campaign=201703-designstartutm_source=mentorutm_medium=SemiEngArticleutm_term=designstart

    Reply
  14. Tomi Engdahl says:

    Powerful State Machine Design Tool
    https://www.eeweb.com/news/powerful-state-machine-design-tool

    IAR Systems® unveiled a new version of the state machine design tool IAR Visual State™. IAR Visual State is a set of tools for designing, testing and implementing embedded applications based on state machines. Developers use IAR Visual State to build their design from a high level, structure complex applications, step by step add functions in detail, and automatically generate code that is 100 percent consistent with the design.
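    The state-machine style itself can be sketched in a few lines. A minimal table-driven FSM (states and events invented for illustration) is far simpler than what a design tool like Visual State generates, but it is the same underlying idea:

```python
# Transition table: (current state, event) -> next state.
# States and events here are illustrative only.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
    ("paused", "stop"): "idle",
}

class StateMachine:
    def __init__(self, initial: str = "idle"):
        self.state = initial

    def dispatch(self, event: str) -> str:
        # Unknown (state, event) pairs leave the state unchanged,
        # a common policy for embedded FSMs.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

sm = StateMachine()
sm.dispatch("start")   # idle -> running
sm.dispatch("pause")   # running -> paused
```

    Because behaviour lives in a data table rather than scattered if/else chains, the design stays reviewable and can be generated from (or checked against) a graphical model.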

    “The new version of IAR Visual State and the added variant handling feature is a perfect match for customers looking for simplicity and order in their large design projects, especially companies in the automotive industry focused on user interface designs such as car navigation systems and display audio solutions,” says Kiyofumi Uemura, Global Automotive Director, IAR Systems.

    Reply
  15. Tomi Engdahl says:

    CERT publishes deep-dive ‘don’t be stupid’ list for C++ coders
    Your hefty guide to avoiding the mistakes everyone makes
    https://www.theregister.co.uk/2017/03/23/cert_c_plus_plus_coding_standard/

    CERT has followed last year’s release of its secure C coding standard with a similar set of rules for C++.

    Carnegie-Mellon University’s announcement says the Software Engineering Institute (SEI) has put ten years into researching secure coding. The resulting SEI CERT C++ Coding Standard has 83 rules specific to features of C++ that aren’t in C.

    “This newly released C++ standard adds to our previously released C standard secure coding guidance for features that are unique to the C++ language. For example, this standard has guidance for object oriented programming and containers,” said CERT’s Robert Schiela, technical manager, Secure Coding, in the canned release. “It also contains guidance for features that were added to C++14, like lambda objects.”

    While specific to C++14, the guidelines in the standard can be applied to older versions, back to C++11.

    SEI CERT C++ Coding Standard
    https://www.securecoding.cert.org/confluence/pages/viewpage.action?pageId=637

    The C++ rules and recommendations in this wiki are a work in progress and reflect the current thinking of the secure coding community. Because this is a development website, many pages are incomplete or contain errors. As rules and recommendations mature, they are published in report or book form as official releases. These releases are issued as dictated by the needs and interests of the secure software development community.

    The CERT C++ Coding Standard does not currently expose any recommendations

    Reply
  16. Tomi Engdahl says:

    Welcome to the Age of Continuous Innovation
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1331529&

    Modern applications now are comprised of pre-fab code snippets representing atomized functions delivered as microservices packaged within containers.

    Technology stories regularly focus on macro trends including the Internet of Things, Big Data, Mobile, Cloud Computing, and Artificial Intelligence. On their own, each of these trends leads to massive changes in how fast information flows and brings about new applications and services. Taken together, they constitute radical changes in information technology — we are witnessing a new information age.

    These trends and their underlying technologies are all a part of the Third Wave of Computing and the Fourth Industrial Revolution, both of which will fundamentally alter how we experience technology in our work and play, governance, and even our relationships. Information change has never been faster as exemplified by the practice of Continuous Integration / Continuous Delivery for new or updated applications that are developed and introduced at high velocity. Basically, this means that applications are constantly updated to meet new requirements. The days of Waterfall Development have long vanished, and even Agile Development techniques are dramatically morphing to meet new age needs. The emphasis is all about speed, agility, and digital transformation to improve a company’s performance through harnessing digital technologies.

    Modern applications now are comprised of pre-fab code snippets representing atomized functions delivered as microservices packaged within containers. These containers are deployed across “server-less” clusters using orchestration platforms that automate the capacity, scaling, patching, and administration of the infrastructure.

    Reply
  17. Tomi Engdahl says:

    Security vs. Quality: What’s the Difference?
    http://www.securityweek.com/security-vs-quality-what%E2%80%99s-difference

    Quality and security. Two words that share an interesting relationship and no small amount of confusion.

    What is certain is that both words are meaningful in the context of software. Quality essentially means that the software will execute according to its design and purpose. Security means that the software will not put data or computing systems at risk of unauthorized access. While quality seems to be easier to measure, both are somewhat subjective in their assessment.

    The real confusion comes when you consider the relationship between quality and security. Are they the same thing or is one a subset of the other? If I have quality, does that mean I’m also secure? Are quality problems also security problems or vice versa?

    Defining quality and security defects

    For those who take a holistic view of software design and development, quality and security issues both fall into the broad category of defects.

    Applying the definition of “defect”, the software malfunctioned or failed in its purpose. This would be a defect and would fall into the category of quality.

    Determining if the defect has a security component will take more digging. If I can demonstrate that this issue can be exploited in some way to gain unauthorized access to data or the network, then it would also fall under the category of security. It is entirely possible that the defect may simply be a logic issue and, while potentially annoying, does not create an exploitable vulnerability.

    Based on my scenario above, we can then draw the conclusion that security is a subset of quality.

    Or maybe not.

    Clarifying the misunderstandings between quality and security

    A simple coding bug such as cross-site scripting (XSS) may counter our argument. The developer can code the software in adherence to the requirements and still leave the code vulnerable to an XSS attack. The associated defect would be security related, but would not reflect a defect from a quality point of view. Many would argue that a security vulnerability is a quality problem. I could easily get behind that line of reasoning, but others would invoke the stricter interpretation. This disabuses the notion that security is a subset of quality.

    Part of the misunderstanding between quality and security was that the two were functionally separated in traditional development shops, and the groups that owned them rarely interacted.

    Combining quality and security to enable the developer

    As development practices have evolved and agile methods continue to take root, the traditional quality and security silos have had to come down by necessity. Security is being integrated into the development process with the notion of enabling developers to build good security practices into their code. Similarly, the responsibility for quality is now shared by the developers. There is also higher awareness of the architecture, design, and requirements process and how that process affects quality and security.

    Broadening the quality and security perspective

    In the end, quality and security are critical components to a broader notion: software integrity.

    Ultimately, developers strive to develop the best software possible. This implies that defects, both quality and security, should be minimized or, better yet, eliminated. If we agree that quality and security problems are both a form of defect, then we must sufficiently address both to produce software of the highest integrity.

    Reply
  18. Tomi Engdahl says:

    Powerful State Machine Design Tool
    https://www.eeweb.com/news/powerful-state-machine-design-tool

    IAR Systems® unveiled a new version of the state machine design tool IAR Visual State™. IAR Visual State is a set of tools for designing, testing and implementing embedded applications based on state machines.

    Developers use IAR Visual State to build their design from a high level, structure complex applications, step by step add functions in detail, and automatically generate code that is 100 percent consistent with the design. This methodology can be extremely helpful when realizing large design projects for embedded applications, for example in the automotive industry. The tools also provide advanced formal verification, analysis and validation to ensure that the applications behave as intended.
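    The generated code itself is not shown here, but the underlying idea of a tool like this is a plain state-transition function. A minimal hand-written sketch (hypothetical states and events, not IAR Visual State output) might look like this in C++:

```cpp
#include <cassert>

// Hypothetical sketch of a tiny event-driven state machine of the kind
// such tools formalize, verify, and generate code for.
enum class State { Idle, Running, Error };
enum class Event { Start, Stop, Fault, Reset };

// Pure transition function: current state + event -> next state.
// Keeping it side-effect free is what makes formal analysis tractable.
State step(State s, Event e) {
    switch (s) {
        case State::Idle:
            return (e == Event::Start) ? State::Running : s;
        case State::Running:
            if (e == Event::Stop)  return State::Idle;
            if (e == Event::Fault) return State::Error;
            return s;
        case State::Error:
            return (e == Event::Reset) ? State::Idle : s;
    }
    return s; // unreachable, keeps compilers happy
}
```

    Because every (state, event) pair maps to exactly one next state, a design tool can exhaustively check properties such as unreachable states or dead transitions before any code runs.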

    https://www.iar.com/

    Reply
  19. Tomi Engdahl says:

    https://www.ingenu.com/technology/

    RPMA, or Random Phase Multiple Access, technology is a combination of state of the art technologies designed specifically and exclusively for wireless machine-to-machine communication.

    https://www.ingenu.com/technology/rpma/how-rpma-works/

    LED Roadway Lighting NXT series luminaires are LED street lights that are remotely monitored, controlled, and automated over RPMA® (Random Phase Multiple Access) wireless connectivity. The street lights provide energy savings of more than 75% over traditional lighting solutions due to the combination of LED lights and RPMA-enabled automatic control, dimming, and alerting.

    RPMA’s two-way connectivity makes street light management easy. When maintenance issues arise, the luminaires send alerts through the RPMA network to the back office. Cities like Aruba can quickly control and automate lights from the main office on the fly or using automated plans making event planning, emergency response, and energy savings plans simple to implement.

    Because RPMA access points (AP) receive and understand over a thousand messages simultaneously, adjusting street lighting solutions during large events is seamless.

    Reply
  20. Tomi Engdahl says:

    Welcome to the Age of Continuous Innovation
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1331529&

    Modern applications now are composed of pre-fab code snippets representing atomized functions delivered as microservices packaged within containers.

    Technology stories regularly focus on macro trends including the Internet of Things, Big Data, Mobile, Cloud Computing, and Artificial Intelligence. On their own, each of these trends leads to massive changes in how fast information flows and brings about new applications and services. Taken together, they constitute radical changes in information technology — we are witnessing a new information age.

    These containers are deployed across “server-less” clusters using orchestration platforms that automate the capacity, scaling, patching, and administration of the infrastructure. Combined with a distributed computing architecture, this represents a massive paradigm shift in IT practice and infrastructure that has occurred in only the last several years, in both application development and operations.

    Reply
  21. Tomi Engdahl says:

    Lint for Shell Scripters
    http://hackaday.com/2017/03/29/lint-for-shell-scripters/

    It used to be that one of the joys of writing embedded software was never having to deploy shell scripts. But now, with platforms like the Raspberry Pi becoming very common, Linux shell scripts can be a big part of a system, even the whole system in some cases. How do you know your shell script is error-free before you deploy it? Of course, nothing can catch all errors, but you might try ShellCheck.

    When you compile a C program, the compiler usually catches most of your gross errors. The shell, though, doesn’t look at anything until it runs, which means an error might be lying in wait for the right path of an if statement or the wrong file name to occur. ShellCheck can help you identify those issues before deployment.
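    As an illustration of the class of bug involved, here is a small sketch of the classic unquoted-variable mistake, which ShellCheck flags as SC2086 while the shell itself stays silent:

```shell
#!/bin/sh
# A helper that just reports how many arguments it received.
count_args() { echo $#; }

f="my file.txt"

# Unquoted expansion: the value is word-split into two arguments.
broken=$(count_args $f)

# Quoted, as ShellCheck suggests: one argument, spaces preserved.
fixed=$(count_args "$f")
```

    Running the unquoted version against a filename with a space silently does the wrong thing at runtime; ShellCheck catches it at edit time.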

    https://www.shellcheck.net/#

    Reply
  22. Tomi Engdahl says:

    7 Tips for Developing Great APIs
    https://www.designnews.com/electronics-test/7-tips-developing-great-apis/58820449056543?cid=nl.x.dn14.edt.aud.dn.20170407

    The embedded software industry is changing, and that change is requiring developers to work at higher levels of abstraction, which means designing and creating application programming interfaces (APIs) that allow software to be reused.

    1. Make it an Iterative Process
    2. Examine More than One Microcontroller Datasheet
    3. Use No More than 10 Interfaces Per Module
    4. Test Pre-Conditions and Post-Conditions
    5. Logical Naming Conventions
    6. Provide a Method for Extending the Interface
    7. Build Interrupt Handling into the API
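    Tip 4 is easy to sketch in code. The function below is hypothetical (the name, channel limit, and voltage numbers are invented for illustration), but it shows an API entry point that asserts its pre-conditions on entry and its post-condition before returning:

```cpp
#include <cassert>
#include <cstdint>

constexpr uint32_t kMaxChannel = 7;  // illustrative hardware limit

// Scales a raw 10-bit ADC reading on a given channel to millivolts,
// assuming a 3300 mV reference. Numbers are illustrative only.
uint32_t adc_to_millivolts(uint32_t channel, uint32_t raw) {
    // Pre-conditions: channel in range, raw value fits in 10 bits.
    assert(channel <= kMaxChannel);
    assert(raw < 1024u);

    uint32_t mv = (raw * 3300u) / 1023u;

    // Post-condition: the result can never exceed the reference voltage.
    assert(mv <= 3300u);
    return mv;
}
```

    In a release build the asserts compile away, so the checks document and enforce the API contract during development at no production cost.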

    Reply
  23. Tomi Engdahl says:

    Embedded Systems Designers Are Creating the Internet of Dangerous Things
    A study from the Barr Group gives low marks to safety and security practices used by embedded systems engineers.
    https://www.designnews.com/electronics-test/embedded-systems-designers-are-creating-internet-dangerous-things/119133372156572?cid=nl.x.dn14.edt.aud.dn.20170407

    Reply
  24. Tomi Engdahl says:

    FreeRTOS Gets Class
    http://hackaday.com/2017/04/08/freertos-gets-class/

    [Michael Becker] has been using FreeRTOS for about seven years. He decided to start adding some features and has a very interesting C++ class wrapper for the OS available.

    Real Time Operating Systems (RTOS) add functionality for single-thread microcontrollers to run multiple programs at the same time without threatening the firmware developer’s sanity. This project adds C++ to the rest of the FreeRTOS benefits. We know that people have strong feelings one way or the other about using C++ in embedded systems. However, as the 24 demo projects illustrate, it is possible.

    One nice thing about the library is that it is carefully documented. A large number of examples don’t hurt either.
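    The wrapper idea itself can be sketched without FreeRTOS. The following is not [Michael Becker]’s API; it is only an illustration of the pattern, using std::thread as a desktop stand-in for xTaskCreate(): a base class owns the thread of execution, and derived classes override Run() and carry their state as members instead of passing a void* to a C callback.

```cpp
#include <atomic>
#include <thread>

// Illustrative base class: owns the thread and dispatches to Run().
class Task {
public:
    virtual ~Task() { Join(); }
    void Start() { thread_ = std::thread([this] { Run(); }); }
    // Join before the derived object is destroyed.
    void Join() { if (thread_.joinable()) thread_.join(); }
protected:
    virtual void Run() = 0;  // the task body, like a FreeRTOS task function
private:
    std::thread thread_;
};

// A concrete task with its own typed state.
class BlinkTask : public Task {
public:
    std::atomic<int> toggles{0};
protected:
    void Run() override {
        for (int i = 0; i < 10; ++i) toggles++;  // stand-in for toggling an LED
    }
};
```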

    https://michaelbecker.github.io/freertos-addons/

    Reply
  25. Tomi Engdahl says:

    PlatformIO and Visual Studio Take over the World
    http://hackaday.com/2017/04/07/platformio-and-visual-studio-take-over-the-world/

    In a recent post, I talked about using the “Blue Pill” STM32 module with the Arduino IDE. I’m not a big fan of the Arduino IDE, but I will admit it is simple to use which makes it good for simple things.

    It turns out, the Arduino IDE does a lot more than providing a bare-bones editor and launching a few command line tools. It also manages a very convoluted build process. The build process joins a lot of your files together, adds headers based on what it thinks you are doing, and generally compiles one big file, unless you’ve expressly included .cpp or .c files in your build.

    That means just copying your normal Arduino code (I hate to say sketch) doesn’t give you anything you can build with a normal compiler. While there are plenty of makefile-based solutions, there’s also a tool called PlatformIO that purports to be a general-purpose solution for building on lots of embedded platforms, including Arduino.

    Although PlatformIO claims to be an IDE, it really is a plugin for the open source Atom editor. However, it also has plugins for a lot of other IDEs. Interestingly enough, it even supports emacs. I know not everyone appreciates emacs, so I decided to investigate some of the other options. I’m not talking about VIM, either.

    I wound up experimenting with two IDEs: Atom and Microsoft Visual Studio Code.

    PlatformIO supports a staggering number of boards ranging from Arduino to ESP8266 to mBed boards to Raspberry Pi. It also supports different frameworks and IDEs. If you are like me and just like to be at the command line, you can use PlatformIO Core, which is command-line driven.
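    For reference, a PlatformIO project is driven by a small platformio.ini file. A minimal sketch for a “Blue Pill”-style STM32 board with the Arduino framework might look like the following (the board ID is an assumption; check `pio boards` for your exact module):

```ini
; Hypothetical minimal platformio.ini for a Blue Pill STM32 module
[env:bluepill_f103c8]
platform = ststm32
board = bluepill_f103c8
framework = arduino
```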

    PlatformIO does too much. In theory, that’s the strength of it. I can write my code and not care how the mBed libraries are written or the Arduino tools munge my source code. I don’t even have to set up a tool chain because PlatformIO downloads everything I need the first time I use it.

    When that works it is really great. The problem is when it doesn’t.

    http://platformio.org/

    Reply
  26. Tomi Engdahl says:

    What’s The Difference Between De Jure And De Facto Standards?
    http://electronicdesign.com/embedded/what-s-difference-between-de-jure-and-de-facto-standards?code=UM_Classics04117&utm_rid=CPG05000002750211&utm_campaign=10545&utm_medium=email&elq2=f67511956ff04faba5913471ea0105d8

    De Jure Versus De Facto

    De jure standards, or standards according to law, are endorsed by a formal standards organization. The organization ratifies each standard through its official procedures and gives the standard its stamp of approval.

    De facto standards, or standards in actuality, are adopted widely by an industry and its customers. They are also known as market-driven standards. These standards arise when a critical mass simply likes them well enough to collectively use them. Market-driven standards can become de jure standards if they are approved through a formal standards organization.

    Formal standards organizations that create de jure standards have well-documented processes that must be followed. The processes can seem complex or even rigid. But they are necessary to ensure things like repeatability, quality, and safety. The standards organizations themselves may undergo periodic audits.

    Organizations that develop de jure standards are open for all interested parties to participate. Anyone with a material interest can become a member of a standards committee within these organizations. Consensus is a necessary ingredient. Different organizations have different membership rules and definitions of consensus.

    Because of the processes involved, de jure standards can be slow to produce.

    De facto standards are brought about in a variety of ways. They can be closed or open, controlled or uncontrolled, owned by a few or by many, available to everyone or only to approved users. De facto standards can include proprietary and open standards alike.

    Closed proprietary standards are owned by a single company. Only that company’s customers and partners are allowed to use them. Competitors are banned from implementing products that use closed proprietary standards. As a result, they greatly reduce interoperability.

    Open proprietary standards also are owned by a single company, yet the company allows anyone to use them. Interoperability is enabled with open proprietary standards. There is usually some kind of license involved and possibly a fee that must be reasonable and non-discriminatory (RAND).

    Open Source/Environment

    Open-source standards benefit from a general desire to make the standard successful. If individuals purposely try to damage the standard, their input will not be included in future versions. Because open-source standards are readily available with few restrictions, there is a risk of “forking.” The standard could diverge if people modify it into different forks to suit their separate products. The managing person or entity oversees the open-source standard’s evolution to maintain its integrity.

    There are hybrids of these models out there.

    Reply
  27. Tomi Engdahl says:

    How embedded Linux accelerates IoT development
    https://opensource.com/article/17/3/embedded-linux-iot-ecosystem?sc_cid=7016000000127ECAAY

    You’ll find that the quickest way to build components of an IoT ecosystem is to use embedded Linux, whether you’re augmenting existing devices or designing a new device or system from the beginning. Embedded Linux shares the same source code base as desktop Linux, but it is coupled with different user interface tools and other high-level components. The base of the system is essentially the same.

    Reply
  28. Tomi Engdahl says:

    Supporting CPU Plus FPGA
    http://semiengineering.com/supporting-cpu-plus-fpga/

    Experts at the table, part 3: Partitioning, security issues, verification and field upgradeability.

    Reply
  29. Tomi Engdahl says:

    Forth on tiny microcontrollers:

    Forth: The Hacker’s Language
    http://hackaday.com/2017/01/27/forth-the-hackers-language/

    Forth is what you’d get if Python slept with Assembly Language: interactive, expressive, and without syntactical baggage, but still very close to the metal. Is it a high-level language or a low-level language? Yes! Or rather, it’s the shortest path from one to the other. You can, and must, peek and poke directly into memory in Forth, but you can also build up a body of higher-level code fast enough that you won’t mind. In my opinion, this combination of live coding and proximity to the hardware makes Forth great for exploring new microcontrollers or working them into your projects. It’s a fun language to write a hardware abstraction layer in.

    But Forth is also like a high-wire act; if C gives you enough rope to hang yourself, Forth is a flamethrower crawling with cobras. There is no type checking, no scope, and no separation of data and code.

    If you want a compiler to worry about code safety for you, go see Rust, Ada, or Java. You will not find it here. Forth is about simplicity and flexibility.

    Being simple and flexible also means being extensible. Almost nothing is included with most Forth systems by default. If you like object-oriented style programming, for instance, Gforth comes with no fewer than three different object frameworks, and you get to choose whichever suits your problem or your style best.

    The Sweet Spot

    But enough philosophical crap about Forth. If you want that, you can read it elsewhere in copious quantities. (See the glossary below.) There are three reasons that Forth is more interesting to the hardware hacker right now than ever before. The first reason is that Forth was developed for the computers of the late 1970s and early 1980s, and this level of power and sophistication is just about what you find in every $3 microcontroller on the market right now. The other two reasons are intertwined, but revolve around one particular Forth implementation.

    Mecrisp-Stellaris

    There are a million Forths, and each one is a special snowflake. We’ve covered Forth for the AVRs, Forth on ARMs, and most recently Forth on an ESP8266.

    Moving Forth with Mecrisp-Stellaris and Embello
    http://hackaday.com/2017/04/19/moving-forth-with-mecrisp-stellaris-and-embello/

    Reply
  30. Tomi Engdahl says:

    Browsing Forth
    http://hackaday.com/2017/01/04/browsing-forth/

    Forth has a strong following among embedded developers. There are a couple of reasons for that. Almost any computer can run Forth, even very small CPUs that would be a poor candidate for running programs written in C, much less host a full-blown development environment. At its core, Forth is very simple. Parse a word, look the word up in a dictionary. The dictionary either points to some machine language code or some more Forth words. Arguments and other things are generally carried on a stack. A lot of higher-level Forth constructs can be expressed in Forth, so if your Forth system reaches a certain level of maturity, it can suddenly become very powerful if you have enough memory to absorb those definitions.
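    That core loop (parse a word, look it up in the dictionary, keep arguments on a stack) is small enough to sketch. The toy below is not a real Forth: there is no compiling of new words and no error handling, just an illustration of the dictionary-plus-stack heart of the language, written here in C++:

```cpp
#include <functional>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Toy interpreter: split input into words, look each one up in a
// dictionary of built-ins, and run it against a shared data stack.
struct ToyForth {
    std::vector<long> stack;
    std::map<std::string, std::function<void()>> dict;

    ToyForth() {
        dict["+"]   = [this] { long b = pop(), a = pop(); stack.push_back(a + b); };
        dict["*"]   = [this] { long b = pop(), a = pop(); stack.push_back(a * b); };
        dict["dup"] = [this] { stack.push_back(stack.back()); };
    }
    long pop() { long v = stack.back(); stack.pop_back(); return v; }

    void eval(const std::string& line) {
        std::istringstream in(line);
        std::string word;
        while (in >> word) {                        // parse a word
            auto it = dict.find(word);              // look it up
            if (it != dict.end()) it->second();     // execute it
            else stack.push_back(std::stol(word));  // otherwise: an integer literal
        }
    }
};
```

    Evaluating a line like "2 3 + dup *" leaves the squared sum on the stack, which is exactly the postfix, stack-carried argument style the article describes.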

    If you want to experiment with Forth, you probably want to start learning it on a PC. There are several you can install, including gForth (the GNU offering).

    js forth fun!
    https://brendanator.github.io/jsForth/

    Reply
  31. Tomi Engdahl says:

    Interactive ESP8266 Development with PunyForth
    http://hackaday.com/2016/12/23/interactive-esp8266-development-with-punyforth/

    Forth is one of those interesting languages that has a cult-like following. If you’ve never looked into it, its strength is that it is dead simple to put on most CPUs, yet it is very powerful and productive. There are two main principles that make this possible. First, parsing is easy because any sequence of non-space characters makes up a legitimate Forth word. So while words like “double” and “solve” are legal Forth words, so is “#$#” if that’s what you want to define.

    The other thing that makes Forth both simple and powerful is that it is stack-based.

    [Zeroflag] created PunyForth–a Forth-like language for the ESP8266. You can also run PunyForth for cross development purposes on Linux (including the Raspberry Pi). The system isn’t quite proper Forth, but it is close enough that if you know Forth, you’ll have no trouble.

    Forth inspired programming language for the ESP8266
    https://github.com/zeroflag/punyforth

    Reply
  32. Tomi Engdahl says:

    Embedded system development has been around for decades. Recently the Internet of Things (IoT), spanning most application domains, has contributed to a renaissance of development flows and techniques for bringing embedded systems to market faster. In the past, besides time to market, the key development drivers were performance, power, and cost; in the age of IoT they are now joined by other priorities like security, connectivity, and in-field upgradeability.

    Integrated verification, smartly combining formal verification, simulation at the transaction and classic register transfer level (RTL), emulation, and FPGA-based prototyping, has become a key requirement. Given the varying development needs for edge nodes, hubs, networks, and servers, trying to meet varying priorities across multiple application domains, the vehicles for verification and software development need to be extremely flexible and require close interaction.

    Emulation and FPGA-based prototyping have long been available to accelerate speeds, extending the range of verification into hardware/software verification, software development, and system validation. In the era of IoT, embedded software development has clearly become the long pole in the tent, gating time to product delivery and with that time to revenue. While emulation is, because of its availability, often used for lower-level software development, it has always been somewhat speed limited. In contrast, FPGA-based prototyping generally achieves the performance that satisfies the needs of software developers. However, time to prototype has traditionally been long, often taking months depending on the size of the hardware portion of the embedded system, due to the largely manual optimizations required and the need to re-write the RTL to make it FPGA friendly.

    Cadence Protium S1, together with the Cadence Palladium Z1 Enterprise Emulation Platform, directly addresses these challenges, reducing bring-up time by an average of 80%, from months to weeks or even days.

    One key challenge for FPGA-based prototyping of embedded systems is the modeling of memories. ASIC memories are usually very different from FPGA-internal memories, as they typically have more than two ports and different WRITE_ENABLE characteristics. Therefore, to make this work in the FPGA environment, users need to add multiplexers to implement multi-port behavior.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=6224&via=n&datum=2017-04-26_16:08:29&mottagare=30929

    Reply
  33. Tomi Engdahl says:

    Don’t Keep Making the Same Embedded Development Mistakes
    https://www.designnews.com/electronics-test/don-t-keep-making-same-embedded-development-mistakes/61798276956654?cid=nl.x.dn14.edt.aud.dn.20170427.tst004t

    Because the study of software programming seldom includes case studies, many engineers are doomed to keep repeating the same mistakes.

    Because the study of software programming seldom includes case studies, many engineers are doomed to keep repeating the same mistakes, an embedded development expert will tell attendees at the upcoming Embedded Systems Conference (ESC) in Boston.

    David Nadler, founder of Nadler & Associates, says that in his role as a consultant, he has seen those repetitive mistakes end up costing money. “If you don’t plan correctly, it can cause considerable excitement,” he recently told Design News. “You have missed deadlines, lots of yelling, and a very unpleasant time for all.”

    Still, most engineers are never exposed to the benefits of case studies, which is why the mistakes keep happening. “If you go to the Harvard Business School, you see case studies of companies that went off the rails,” he told Design News recently. “If you go to medical school, you look at case studies of diagnostic problems. But in software development, we don’t study cases of software development gone bad, and that’s to our detriment. So we end up making the same mistakes over and over again.”

    “We’ll walk through the techniques used to get the projects back on track,” he told us. “Then we’ll talk about how developers might want to use these techniques at the beginning of a project – before it goes off the rails.” Mostly, he said he will examine the problem of failing to plan for product test and verification, which may be the most common of all.

    Nadler contends that all embedded developers benefit from an examination of case studies, especially those who’ve “picked up” software during their professional careers. “In the embedded world, a lot of people weren’t trained in software at all,” he said. “They pick it up after getting an electrical engineering degree, or a mechanical engineering degree, or a physics degree. But case studies also help people who are trained in software, because most curricula don’t teach that.”

    Reply
  34. Tomi Engdahl says:

    Android in Safety Critical Designs
    https://www.mentor.com/embedded-software/events/android-in-safety-critical-designs?contactid=1&PC=L&c=2017_04_27_esd_newsletter_update_v4

    The complexity and pace of heterogeneous System-on-Chip (SoC) architectures are accelerating at blinding speed.

    Running a single operating system on a single core or running SMP-capable operating systems on homogeneous multicore processors is no longer a challenge for today’s embedded designer. Many safety critical designs today require rich user interfaces. You might want to consolidate a UI with a safety-certified device component. Android is often chosen for its UI capabilities, ease of development, and the Google Play store features for adding and updating applications.

    Reply
  35. Tomi Engdahl says:

    Rusty ARM
    http://hackaday.com/2017/05/03/rusty-arm/

    You’ve probably heard that Rust is a systems programming language that has quite the following growing. It purports to be fast like C, but has features like guaranteed memory and thread safety, generics, and it prevents segmentation faults. Sounds like just the thing for an embedded system, right? [Jorge Aparicio] was frustrated because his CPU of choice, an STM32 ARM Cortex-M didn’t have native support for Rust.

    Apparently, you can easily bind C functions into a Rust program but that wasn’t what he was after. So he set out to build pure Rust programs that could access the device’s hardware and he documented the effort.

    Not only does the post show you the tools you need and the software versions, but using OpenOCD, [Jorge] even managed to do some debugging. The technique seems to be pretty generally applicable, too, as he says he’s done the same trick on six different controllers from three different vendors with no problem.

    Rust your ARM microcontroller!
    http://blog.japaric.io/quickstart/

    Reply
  36. Tomi Engdahl says:

    Embedded FPGA — A New System-Level Programming Paradigm
    Why eFPGAs are both essential and inevitable.
    http://semiengineering.com/embedded-fpga-a-new-system-level-programming-paradigm/

    The current public debate on the future of the semiconductor industry has turned to discussions about a growing selection of technologies that, rather than obsessing on further process geometry shrinks, focuses instead on new system architectures and better use of available silicon through new concepts in circuit, device, and packaging design. Embedded FPGA is the latest offering that promises to be far more than simply a ‘better mousetrap.’ The emergence of embedded FPGA is, in fact, not only essential at this juncture of microelectronics history, but also inevitable.

    Reply
  37. Tomi Engdahl says:

    Don’t Keep Making the Same Embedded Development Mistakes
    https://www.designnews.com/electronics-test/don-t-keep-making-same-embedded-development-mistakes/61798276956654?cid=nl.x.dn14.edt.aud.dn.20170503.tst004t

    Because the study of software programming seldom includes case studies, many engineers are doomed to keep repeating the same mistakes.

    Because the study of software programming seldom includes case studies, many engineers are doomed to keep repeating the same mistakes, an embedded development expert will tell attendees at the upcoming Embedded Systems Conference (ESC) in Boston.

    Reply
  38. Tomi Engdahl says:

    The Rise Of Parallelism
    http://semiengineering.com/the-rise-of-parallelism/

    After decades of failing to live up to expectations, massively parallel systems are getting serious attention.

    Parallel computing is an idea whose time has finally come, but not for the obvious reasons.

    Parallelism is a computer science concept that is older than Moore’s Law. In fact, it first appeared in print in a 1958 IBM research memo in which John Cocke, a mathematician, and Daniel Slotnick, a computer scientist, discussed parallelism in numerical calculations.

    Since then, parallelism has become widely used inside of corporate data centers, primarily because of the initial success of databases that were built to parse the computing across different processors. In the 1990s, entire business processes were layered across these underpinnings as ERP—enterprise resource planning applications. Parallelism also found a ready market in image and video processing, and in scientific calculations.

    But for the most part, it bypassed the rest of the computing world. As the mainframe evolved into the PC and ultimately into the smartphone and tablet/phablet, it made only limited gains. Multithreading was added into applications as more cores were added. Still, those extra cores saw only limited use, and not for lack of trying. Parallel programming languages came and went, extra cores sat idle, and software compilers and programmers still approached problems serially rather than in parallel.

    The tide is turning, though, for several reasons. First, performance gains due to Moore’s Law are slowing. It’s becoming much harder to turn up the clock frequency on smaller transistors

    Second, and perhaps more important, the “next big things” are new markets with heavy mathematical underpinnings: virtual/augmented reality, cloud computing, embedded vision, neural networks/artificial intelligence, and some IoT applications. Unlike personal computing applications, all of these are built using algorithms that can be parsed across multiple cores or processing elements, and all of them require the kind of performance that used to be considered the realm of supercomputing.

    Third, the infrastructure for using heterogeneous components to work in unison is developed enough for sharing processing across multiple compute elements. That includes on-chip and off-chip networks; real, virtual and proxy caching; and new memory types and configurations that can handle more data per millisecond.
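    The split/compute/combine shape behind those workloads can be sketched in a few lines of standard C++. The chunking below is illustrative rather than tuned: the input is divided into ranges, each range is summed on its own thread via std::async, and the partial results are combined.

```cpp
#include <algorithm>
#include <future>
#include <numeric>
#include <vector>

// Minimal data parallelism: split the work, compute chunks concurrently,
// then combine the partial sums.
long parallel_sum(const std::vector<long>& v, unsigned chunks = 4) {
    if (v.empty()) return 0;
    std::vector<std::future<long>> parts;
    size_t step = (v.size() + chunks - 1) / chunks;  // ceiling division
    for (size_t begin = 0; begin < v.size(); begin += step) {
        size_t end = std::min(begin + step, v.size());
        parts.push_back(std::async(std::launch::async, [&v, begin, end] {
            return std::accumulate(v.begin() + begin, v.begin() + end, 0L);
        }));
    }
    long total = 0;
    for (auto& p : parts) total += p.get();  // combine partial results
    return total;
}
```

    Real workloads such as vision or neural network inference follow this same shape at much larger scale, which is why extra cores finally pay off for them.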

    Reply
  39. Tomi Engdahl says:

    New languages threaten Java and C

    Java and C continue to top the list of programming-language popularity, based on search engine results, in TIOBE’s May index. However, both have lost popularity over the past year.

    In May, Java is still a clear number one. It is more than twice as popular as C, which in turn is almost twice as popular as C++. Python has now passed C# to become the fourth most popular language.

    New in the rankings are Visual Basic .NET in sixth place and another newcomer in eighth place on the top 10 list. Google’s Go language rose during the year from 42nd place to 16th.

    Source: http://www.etn.fi/index.php/13-news/6271-uudet-kielet-uhkaavat-javaa-ja-c-ta

    More:
    TIOBE Index for May 2017
    May Headline: the pack is closing in on Java and C
    https://www.tiobe.com/tiobe-index/

    Reply
  40. Tomi Engdahl says:

    Using Modern C++ Techniques with Arduino
    http://hackaday.com/2017/05/05/using-modern-c-techniques-with-arduino/

    C++ has been quickly modernizing itself over the last few years. Starting with the introduction of C++11, the language has made a huge step forward and things have changed under the hood. To the average Arduino user, some of this is irrelevant, maybe most of it, but the language still gives us some nice features that we can take advantage of as we program our microcontrollers.

    Modern C++ allows us to write cleaner, more concise code, and make the code we write more reusable. The following are some techniques using new features of C++ that don’t add memory overhead, reduce speed, or increase size because they’re all handled by the compiler. Using these features of the language you no longer have to worry about specifying a 16-bit variable, calling the wrong function with NULL, or peppering your constructors with initializations. The old ways are still available and you can still use them, but at the very least, after reading this you’ll be more aware of the newer features as we start to see them roll out in Arduino code.
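    A few of the C++11 features alluded to here can be shown in one small sketch (the class and the numbers are invented for illustration). Everything below is resolved by the compiler, so none of it adds runtime overhead on a microcontroller:

```cpp
#include <cassert>
#include <cstdint>

class Sensor {
public:
    Sensor() = default;  // no constructor peppered with initializations
    uint16_t threshold() const { return threshold_; }
    bool bound() const { return pin_ != nullptr; }  // nullptr, not NULL

private:
    // In-class member initializers replace init lists in every constructor.
    uint16_t threshold_ = 512;
    volatile uint32_t* pin_ = nullptr;  // typed null, can't match an int overload
};

// constexpr moves arithmetic to compile time...
constexpr uint32_t kClockHz = 16000000;
constexpr uint32_t ticks_per_ms() { return kClockHz / 1000; }

// ...and auto deduces the right type instead of a hand-picked int size.
inline uint32_t elapsed_ticks(uint32_t ms) {
    auto per_ms = ticks_per_ms();  // deduced as uint32_t
    return ms * per_ms;
}
```

    Exact-width types from <cstdint>, nullptr instead of NULL, default member initializers, constexpr, and auto are all C++11 features available in current Arduino toolchains.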

    Reply
  41. Tomi Engdahl says:

    OWASP Proposes New Vulnerabilities for 2017 Top 10
    http://www.securityweek.com/owasp-proposes-new-vulnerabilities-2017-top-10

    The Open Web Application Security Project (OWASP) announced on Monday the first release candidate for the 2017 OWASP Top 10, which proposes two new vulnerability categories.

    The new categories proposed for OWASP Top 10 – 2017 are “insufficient attack detection and prevention” and “unprotected APIs.”

    Reply
  42. Tomi Engdahl says:

    Where’s the LoJack for Embedded Systems Security?
    https://www.designnews.com/cyber-security/wheres-lojack-embedded-systems-security/54750155156765?cid=nl.x.dn14.edt.aud.dn.20170510.tst004t

    Michael Barr, CTO of the Barr Group, told an audience at ESC Boston 2017 that embedded systems have become a battlefield of cyberattacks and that someone needs to do for embedded systems security what LoJack did for automobiles.

    You might remember the commercials for LoJack – the aftermarket tracking system for catching car thieves. By tracking your vehicle and sending the information directly to the police, LoJack not only enabled law enforcement to recover stolen vehicles but also to discover the locations of chop shops and other sites of illegal activity. It worked so well that the company, started in the late 1980s, is still around today, in the age of GPS.

    Now Michael Barr, CTO of the Barr Group, is calling for the same thing in embedded systems security.

    The concept, which Barr semi-seriously referred to as “LoHack” to a keynote audience at the Embedded Systems Conference (ESC) 2017 in Boston, would externalize embedded security, moving it away from developers’ internal systems and onto a cloud-based service. “Remote hacks require [network] packet exchange,” he said. “If we could have cloud-based data traffic analysis and learning algorithms in place we could have device makers watching out and learning about attacks on their devices to a system that updates the security of all our networks.”

    Barr Group estimates 60% of new products being developed will have Internet connectivity (meaning some sort of Internet of Things functionality). However, 22% of those surveyed said they have zero requirements related to security; 37% said they have no coding standards or do not enforce their coding standard; 36% use no type of static analysis tool; and 48% do not even bother to encrypt their communications over the Internet.

    Barr pointed to the recent Mirai malware attacks as an example.

    It all adds up to an IoT peppered with security holes – leading Barr to suggest we ought to be calling it the IoDT – the Internet of Dangerous Things. “We’re living in a scary world, a dangerous world, and an interesting time to be an embedded systems developer,” Barr said. From retail to healthcare applications and beyond, developers are told that adding intelligence to devices is a surefire way to add value both for the consumer and the company (not to mention increase profits). But in the process of adding connectivity, designers have opened the door to all manner of cyberattacks – some of them even life threatening.

    According to the Barr Group’s survey, 25% of respondents reported that their embedded systems project could cause injury or be life threatening. “What’s going wrong?” Barr asked. “We’re living in a world where attacks are increasing, but we should be living in a world where these systems are benefitting us.”

    For Barr the IoT is already growing too unruly for cybersecurity to remain in silos. But the challenge remains finding solutions to secure all of the types of processors and connection protocols available to IoT devices – not to mention staying one step ahead of malicious hackers. “Security is an arms race, attackers are always getting stronger,” he said.

    Barr said that developers – “designers of dangerous things” – must be ever mindful of their ethical duty to pay attention to cybersecurity. “The number one thing we’re going to do is not ignore security anymore, especially when we’re designing dangerous things,” he said, while also asking developers to adopt bug-reducing software best practices and to use cryptography where appropriate.

    Barr also advocated that engineers adopt an approach of practicing “defense in depth.” The idea is that security should be layered so that if one system fails, another system picks up on the breach or error. “You have to think like this at each layer. What can I do at each layer to add additional layers of security so there is no one weak link?”

    “You have to think like that at every layer. You have to think about who would attack, why they would attack, and what motivations they would have.”

    Reply
  43. Tomi Engdahl says:

    Industry Needs to Rethink its Approach to MCU Technology
    http://intelligentsystemssource.com/industry-needs-to-rethink-its-approach-to-mcu-technology/

    The objective of this article is to look at the gap that has opened up between what embedded engineers are getting from modern microcontrollers and what their designs now require. It will show that the generic platform approach taken by semiconductor manufacturers is, in a growing number of cases, not up to the job. As a result, many engineers are at serious risk of being marginalized.

    Microcontroller units (MCUs) effectively form the foundation upon which the vast majority of modern embedded system designs are constructed – providing engineers with a combination of flexibility, cost effectiveness and reasonably strong performance. Thanks to such attributes they have been able to achieve staggering unit sales (with nearly 23 billion devices being shipped last year, according to IC Insights). As the MCU market has matured, it has become increasingly dependent on a small number of widely used generic architectures. However, this is in almost complete opposition to the demands that are now emerging within certain parts of the embedded market, where markedly increased data throughputs are being witnessed. There is a current overabundance of multi-functional ‘one size fits all’ solutions, when what is actually needed instead is more application-optimized devices. Something therefore needs to be done rapidly to redress the balance before the situation gets worse.

    A case in point is system designs where large amounts of multimedia data need to be dealt with. Modern general-purpose MCUs are poorly equipped for such tasks and can often struggle to cope.

    Providing sufficient I/O on MCUs is another area that is often not adequately attended to, despite the wide variety of interface technologies now present in embedded designs. For example, USB implementation on MCUs is not normally that easy to carry out, as the software development aspect tends to be insufficiently covered. Consequently, embedded engineers often don’t have the support they need. Also, MCUs will usually only have USB device (not USB host) functionality incorporated.

    It should be recognized that, in general, the I/O capabilities of most MCUs currently on the market do not extend far enough. Offering a greater breadth of connectivity will become increasingly important in the future. In particular, there is likely to be a need for groupings of I/Os that make an MCU better suited to specific application types.

    MCUs are not the only route available to engineers, of course – there is the system-on-chip (SoC) option to consider. In contrast to MCUs, SoCs offer a more optimized solution, with elevated performance parameters, plus a smaller footprint and long-term cost advantages. There are, however, a multitude of other issues that substantially impact their appeal. The upfront financial investment, the engineering effort, and the time involved in SoC development all need to be factored in. To justify taking this course of action, there either needs to be absolute confidence about demand for high unit volumes, or certainty that no change to the design will be required for an extended period of time. Even then, there are associated risks. If a bug is found, it could take time to rectify, leading to hold-ups in the release of the end product. For such reasons, going for an off-the-shelf device is still likely to be favorable.

    Reply
  44. Tomi Engdahl says:

    The Next Level of Embedded Software Development
    http://electronics-know-how.com/article/2486/the-next-level-of-embedded-software-development

    With the rapid expansion of complex technology into everyday life, the importance of software is growing exponentially. This complimentary webinar presented by Siemens PLM Software will show how embedded software development can evolve successfully through unified Application Lifecycle Management (ALM), making it unlike your development experience before.

    It will show how real-time collaboration and comprehensive end-to-end traceability from requirements to code and testing help your team manage and resolve the most complex systems across multiple product variants. Moreover, it will demonstrate how modern development methodologies like Agile can be applied, and how compliance with industry standards and regulations (for instance ISO, IEC, FDA, FAA and many more) can be met effectively.

    Reply
  45. Tomi Engdahl says:

    Embedded FPGA, The Ultimate Accelerator
    How embedded FPGAs compare to discrete FPGAs.
    http://semiengineering.com/embedded-fpga-the-ultimate-accelerator/

    An embedded FPGA (eFPGA) is an IP core that you integrate into your ASIC or SoC to get the benefits of programmable logic without the cost, but with better latency, throughput, and power characteristics. With an eFPGA, you define the quantity of look-up tables (LUTs), registers, embedded memory, and DSP blocks. You can also control the aspect ratio and the number of I/O ports, making tradeoffs between power and performance.

    Reply
  46. Tomi Engdahl says:

    The Risks and Rewards of Open Platform Firmware
    Can you build a product using open platform hardware? Yes, if you understand the risks.
    https://www.designnews.com/electronics-test/risks-and-rewards-open-platform-firmware/44266721456766?cid=nl.x.dn14.edt.aud.dn.20170512.tst004t

    Open-source hardware is great for a lot of things. It gives students and educators a great learning platform, and it’s the perfect solution for all sorts of DIY projects. But can you design a commercial product around open source?

    You can if you understand the risks and take the proper security precautions, particularly when it comes to your firmware.

    Speaking at the 2017 Embedded Systems Conference (ESC) in Boston, Brian Richardson, a technical evangelist for Intel, praised open hardware platforms for many reasons: they offer publicly available designs; they’re based on open-source concepts; and they encourage experimentation, new features, and new designs. The DIY and maker community has already heavily embraced hobbyist boards like the Raspberry Pi and Arduino, and there are other products on the market as well, such as the MinnowBoard and Intel’s own Galileo Board.

    “On an open hardware platform the firmware is made available primarily for debugging and hacking,” Richardson told the audience. “It ships with unsigned binary firmware images because as a maker if we signed binary it doesn’t do you any good. It also assumes updates are run by a developer – and hopefully not a hacker.”

    The trouble comes, Richardson said, because the platform identifiers are not unique. If a developer uses GitHub or some other open-source repository to get a GUID for a platform, then everyone else can get and use the same one as well, including people with bad intentions.

    There are also problems inherent in the way firmware itself operates. “Firmware initializes hardware, establishes the root of trust, then hands things off to the OS … which creates an opportunity for someone else,” Richardson said.

    So how do you deploy products based on open designs without creating a BlackHat presentation waiting to happen?

    The first step, Richardson said, is to build for release – that is, make a product look like it is proprietary, and keep people from knowing you used open source. “At the very least, don’t advertise it so someone can’t find it on GitHub,” Richardson said, also strongly suggesting that designers remove the debug features and change the default identifiers on their open-source hardware.

    The other big key is in UEFI itself and providing secure field updates to firmware. “You really want to have firmware update in the field,” Richardson said. “The risk is someone can drop the wrong thing on the platform, such as hacked firmware or a slight variation that could brick a product by accident. The reward is if there’s a bug or security hole on the platform you can patch it.”

    “If I trust the firmware then we can let the firmware be the root of trust,” Richardson said. “If you can’t trust version 1 of your firmware not to be exploited you have a bigger problem than anyone can help you with.”

    Ultimately it will be up to developers to decide if using open source is the right move. With the open-source hardware space growing and companies even beginning to offer open-source SoCs, it’s likely that a lot more designers, particularly at the DIY and startup level, will be opting to leverage some sort of open source hardware and software to help bring their product to market.

    Reply
  47. Tomi Engdahl says:

    Microcontroller Load Meter Tells You How Hard It’s Currently Working
    http://hackaday.com/2017/05/13/microcontroller-load-meter-tells-you-how-hard-its-currently-working/

    Writing code for embedded applications can be difficult. There are all sorts of problems you can run into – race conditions, conflicting peripherals, unexpected program flow – any of these can cause havoc with your project. One thing that can really mess things up is if your microcontroller is getting stuck on a routine – without the right debugging hardware and software, this can be a tricky one to spot. [Terry] developed a microcontroller load meter just for this purpose.

    It’s a simple setup – a routine named loadmeter-task on the microcontroller sends a train of pulses to a mechanical ammeter. The ammeter is then adjusted with a trimpot to read “0” when the chip is unloaded. As other tasks steal CPU time, there’s less time for loadmeter-task to send its pulses, so the meter falls to the left.

    Overall it’s a quick and easy bit of code you could add to any project with a spare GPIO pin.

    Loadmeter
    http://128.199.141.78/instrument-mcu.html

    Reply
  48. Tomi Engdahl says:

    Behold, auto-completing Android bug reports – because you’re not very thorough
    People can’t be bothered to recount crashes, so machines are here to help
    https://www.theregister.co.uk/2017/05/15/autocompleting_bug_reports_because_youre_not_very_thorough/

    Auto-completion systems that attempt to finish your sentences when typing text messages or search queries can be a mixed blessing. Often, they save time. But they can also get in the way when they make incorrect guesses about intended input.

    In the context of software bug reporting, however, auto-completion – adding additional information to bug report filings – doesn’t have much of a downside.

    Augmenting bug reports with additional data gleaned through static analysis, dynamic analysis, and auto-completion prompts can make bugs in Android apps easier to reproduce and fix.

    Software bugs cost the US economy somewhere between $22.2 billion and $59.5 billion annually, according to a 2002 study conducted by the Department of Commerce’s National Institute of Standards and Technology. And since then, software has become far more widespread. So there’s ample incentive to produce higher-quality code.

    The paper, “Auto-completing Bug Reports for Android Applications,” explains that bug tracking systems such as Bugzilla, Mantis, the Google Code Issue Tracker, the GitHub Issue Tracker, and products like JIRA depend mainly on bug descriptions written by people – unstructured natural language.

    Auto-completing Bug Reports for Android Applications
    https://arxiv.org/pdf/1705.04016.pdf

    Reply
  49. Tomi Engdahl says:

    Use hardware for protection

    “Everything starts with trust” was once a German bank’s advertising slogan. The more valuable the data or the more complicated the system, the harder that trust is to secure. The same goes for embedded systems.

    The number of embedded devices is increasing exponentially, and with it the risks these devices bring, in particular through the integration of network connections. With each new integration layer, the number of threats increases, as with Industry 4.0 devices. More threats emerge as soon as core activities are moved to the cloud, because cloud devices have no physical security.

    So far, mobile operators have not had scalable security tools to verify that data and EDP (electronic data processing) systems have been kept confidential and intact. As a result, cloud computing components must be protected. To ensure compatibility between different devices, the computing platform’s security is based on defined standards.

    Three security levels

    Security can be created on three different levels.
    - Software only: anchoring security methods in the operating system is practically free, but provides only limited protection.
    - Software and hardware: a trusted execution environment, consisting of software and hardware, provides a medium level of security at low cost.
    - Software plus physically tamper-proof hardware: a permanently installed security element, implemented in hardware and equipped with encryption algorithms, requires an investment but guarantees the highest level of protection.

    There are several technologies available for the latter:
    - The Trusted Platform Module (TPM) is a trusted hardware module with a stored key.
    - The Trusted Network Connect (TNC) interface defines controlled access to the network, protecting it against compromised terminal equipment.
    - A Self-Encrypting Drive (SED) provides encryption at the drive (block) level.
    - PC client, mobile, and vehicle applications are based on profiles conforming to the specifications of the TPM 2.0 library.

    The solution also requires reliable hardware with connections to different platforms. These include embedded systems as well as smartphones, cars, clouds, virtual machines, servers, desktops, laptops, tablets, and many other devices.

    Security processes

    First, the quality of the security depends on the corresponding processes. Ideally, such a process works iteratively: it starts with an analysis of attacks and threats that defines the security objectives and methods. These serve as the basis on which to design and develop secure environments, as well as a security lab where the company can develop a range of security-certified tests to ensure the safe production and personalization of end products.

    Security anchors

    System security is defined by the key used to encrypt and decrypt sensitive data. If this key is hacked or cloned, all the security it provides is lost. This means that key handling throughout the product life cycle – including production – is critical. Three security anchors ensure key security: safe key storage, encryption protection, and key handling – in other words, who has access to the key, when, and by what means. These anchors can be implemented in different parts of the system, either as software components in the operating system or at the hardware level as separate modules in which the security functions are extended.

    Although isolating and/or encrypting all data and systems would provide a very high level of security, this approach cannot generally be implemented, nor is it recommended. For example, a server cannot be encrypted where business partners require access to open data. In addition, sufficient openness is required to update the system software and firmware.

    A typical secure industrial control system could look like this: a wireless sensor with a built-in encryption module is connected to the control unit using one-way authentication. The control unit is equipped with an authentication circuit, a hardware module with a stored key. With this key, the control unit can verify the sensor’s authenticity, and with another one-way authentication it can do the same for another connected device that has the same encryption module as the sensor.

    Source: http://www.etn.fi/index.php/tekniset-artikkelit/6315-suojaus-kannattaa-aina-tehda-raudalla

    Reply
