New approaches for embedded development

The idea for this posting started when I read the New approaches to dominate in embedded development article. Then I found some other related articles, and this long article is the result.

Embedded devices, or embedded systems, are specialized computer systems that constitute components of larger electromechanical systems with which they interface. The advent of low-cost wireless connectivity is altering many things in embedded development: with a connection to the Internet, an embedded device can gain access to essentially unlimited processing power and memory in cloud services – and at the same time you need to worry about communication issues such as broken connections, latency, and security.

Those issues are especially central to the development of popular Internet of Things devices and to adding connectivity to existing embedded systems. All this means that the whole nature of the embedded development effort is going to change. A new generation of programmers is already building more and more embedded systems. Rather than living and breathing C/C++, the new generation prefers higher-level, more abstract languages (like Java, Python, JavaScript etc.). Instead of trying to craft each design to optimize for cost, code size, and performance, the new generation wants to create application code that is separate from an underlying platform that handles all the routine details. Memory is cheap, so code size is only a minor issue in many applications.

Historically, a typical embedded system has been designed as a control-dominated system using only a state-oriented model, such as FSMs. However, the trend in embedded systems design in recent years has been towards highly distributed architectures with support for concurrency, data and control flow, and scalable distributed computations. For example, computer networks, modern industrial control systems, the electronics in a modern car, and Internet of Things systems fall into this category. This implies that a different approach is necessary.

Companies are also marketing to embedded developers in new ways. Ultra-low-cost development boards that woo makers, hobbyists, students, and entrepreneurs on a shoestring budget to a processor architecture for prototyping and experimentation have already become common. Hardware is becoming powerful and cheap enough that the inefficiencies of platform-based products are becoming moot. Leaders in embedded systems development lifecycle management solutions speak out on new approaches available today for developing advanced products and systems.

Traditional approaches

C/C++

Traditionally embedded developers have been living and breathing C/C++. For a variety of reasons, the vast majority of embedded toolchains are designed to support C as the primary language. If you want to write embedded software for more than just a few hobbyist platforms, you're going to need to learn C. Very many embedded operating systems, including the Linux kernel, are written in C. C can be translated very easily and literally to assembly, which allows programmers to do low-level things without the restrictions of assembly. When you need to optimize for cost, code size, and performance, the typical choice of language is C. Even today, C is often chosen over C++ for maximum efficiency.
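As a small illustration of that low-level control, below is a sketch of the kind of memory-mapped register access that C makes natural. The register address and bit position are invented for this example; in real code they come from the MCU datasheet:

    #include <stdint.h>

    /* Hypothetical GPIO output register: the address and bit layout are
       made up for this sketch; a real driver takes them from the datasheet. */
    #define GPIO_OUT  (*(volatile uint32_t *)0x40020014u)
    #define LED_BIT   (1u << 5)

    void led_on(void)  { GPIO_OUT |= LED_BIT;  }  /* drive the LED pin high */
    void led_off(void) { GPIO_OUT &= ~LED_BIT; }  /* drive the LED pin low  */

Each of those functions compiles to just a handful of load/store instructions, which is exactly the close-to-assembly quality described above.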

C++ is very much like C, with more features and lots of good stuff, while not having many drawbacks except for its complexity. For years there was a suspicion that C++ is somehow unsuitable for use in small embedded systems. At one time many 8- and 16-bit processors lacked a C++ compiler, which was a real concern, but there are now 32-bit microcontrollers available for under a dollar supported by mature C++ compilers. Today C++ is used a lot more in embedded systems. There are many factors that may contribute to this, including more powerful processors, more challenging applications, and greater familiarity with object-oriented languages.

And if you use a suitable C++ subset for coding, you can make applications that work even on quite tiny processors; let the Arduino system be an example of that: you're writing in C/C++, using a library of functions with a fairly consistent API. There is no “Arduino language”, and your “.ino” files are three lines away from being standard C++.
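For example, the classic blink sketch is just plain C-style code calling the Arduino library API; the IDE quietly adds the #include <Arduino.h> and function prototypes that make it standard C++ (LED_BUILTIN is defined on most official boards):

    void setup(void)
    {
        pinMode(LED_BUILTIN, OUTPUT);     /* configure the on-board LED pin */
    }

    void loop(void)                       /* called over and over by the core */
    {
        digitalWrite(LED_BUILTIN, HIGH);  /* LED on */
        delay(500);                       /* wait 500 ms */
        digitalWrite(LED_BUILTIN, LOW);   /* LED off */
        delay(500);
    }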

Today C++ has not displaced C. Both languages are widely used, sometimes even within one system – for example, an embedded Linux system that runs a C++ application. When you write C or C++ programs for modern embedded Linux, you typically use the GCC compiler toolchain for compilation and a makefile to manage the build process.

Most organizations put considerable focus on software quality, but software security is different. While security is a much-talked-about topic in today's embedded systems, the security of programs written in C/C++ is sometimes a debated subject. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security. The truth is that the majority of today's Internet-connected systems have their networking functionality written in C, even if the actual application layer is written using some other methods.

Java

Java is a general-purpose computer programming language that is concurrent, class-based, and object-oriented. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them. Java is intended to let application developers “write once, run anywhere” (WORA), meaning that compiled Java code can run on all platforms that support Java without the need for recompilation. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is one of the most popular programming languages in use, particularly for client-server web applications. In addition, it is widely used in mobile phones (Java apps in feature phones) and some embedded applications. Some common examples include SIM cards, VoIP phones, Blu-ray Disc players, televisions, utility meters, healthcare gateways, industrial controls, and countless other devices.

Some experts point out that Java is still a viable option for IoT programming. Think of the industrial Internet as the merger of embedded software development and the enterprise. In that area, Java has a number of key advantages: first is skills – there are lots of Java developers out there, and that is an important factor when selecting technology. Second is maturity and stability – when you have devices which are going to be remotely managed and provisioned for a decade, Java’s stability and care about backwards compatibility become very important. Third is the scale of the Java ecosystem – thousands of companies already base their business on Java, ranging from Gemalto using JavaCard on their SIM cards to the largest of the enterprise software vendors.

Although in the past some differences existed between embedded Java and traditional PC-based Java solutions, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly, with no recompilation at all, on design-to-cost mass-production devices (consumer, industrial, white goods, healthcare, metering, smart markets in general, …). Java for embedded devices (Java Embedded) is generally integrated by the device manufacturers; it is NOT available for download or installation by consumers. Originally Java was tightly controlled by Sun (now Oracle), but in 2007 Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets).

My feeling about Java is that if your embedded platform supports Java and you know how to code in Java, then it can be a good tool. If your platform does not have ready Java support, adding it could be quite a bit of work.


Increasing trends

Databases

Embedded databases are appearing in more and more embedded devices. If you look under the hood of any connected embedded consumer or mobile device, in addition to the OS you will find a variety of middleware applications. One of the most important and most ubiquitous of these is the embedded database. An embedded database system is a database management system (DBMS) that is tightly integrated with an application that requires access to stored data, such that the database system is “hidden” from the application's end user and requires little or no ongoing maintenance.

There are many possible databases. The first choice is what kind of database you need: the main choices are SQL databases and simpler key-value stores (often called NoSQL databases).

SQLite is the database chosen by virtually all mobile operating systems; for example, Android and iOS ship with SQLite. It is also built into the Firefox web browser, and it is often used with PHP. So SQLite is probably a pretty safe bet if you need a relational database for an embedded system that needs to support SQL commands and does not need to store huge amounts of data (no need to modify a database with millions of rows).
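To give an idea of how small the C-side usage can be, here is a minimal sketch of the classic SQLite C API pattern (open, exec, close). The database file, table, and values are invented for the example; link with -lsqlite3:

    #include <stdio.h>
    #include <sqlite3.h>

    /* Callback invoked by sqlite3_exec() once per result row. */
    static int print_row(void *arg, int ncols, char **values, char **names)
    {
        for (int i = 0; i < ncols; i++)
            printf("%s=%s ", names[i], values[i] ? values[i] : "NULL");
        printf("\n");
        return 0;                       /* 0 = keep iterating */
    }

    int main(void)
    {
        sqlite3 *db;
        char *err = NULL;

        if (sqlite3_open("settings.db", &db) != SQLITE_OK)
            return 1;
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS settings"
                         "(key TEXT PRIMARY KEY, value TEXT)", NULL, NULL, &err);
        sqlite3_exec(db, "INSERT OR REPLACE INTO settings VALUES('baud','115200')",
                     NULL, NULL, &err);
        sqlite3_exec(db, "SELECT * FROM settings", print_row, NULL, &err);
        sqlite3_close(db);
        return 0;
    }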

If you do not need a relational database and you need very high performance, you probably need to look somewhere else. Berkeley DB (BDB) is a software library intended to provide a high-performance embedded database for key/value data. Berkeley DB is written in C with API bindings for many languages. BDB stores arbitrary key/data pairs as byte arrays. There are also many other key/value database systems.
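For comparison, a minimal sketch of the classic Berkeley DB C API pattern, storing one key/value pair. The file name and data are invented for the example, and error handling is omitted for brevity:

    #include <string.h>
    #include <db.h>

    int main(void)
    {
        DB *dbp;
        DBT key, data;

        db_create(&dbp, NULL, 0);                 /* allocate a DB handle  */
        dbp->open(dbp, NULL, "store.db", NULL,    /* file on flash/disk    */
                  DB_BTREE, DB_CREATE, 0664);

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data  = "sensor1";  key.size  = sizeof("sensor1");
        data.data = "21.5";     data.size = sizeof("21.5");

        dbp->put(dbp, NULL, &key, &data, 0);      /* store key/value pair  */
        dbp->close(dbp, 0);
        return 0;
    }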

RTA (Run Time Access) gives easy runtime access to your program's internal structures, arrays, and linked lists as tables in a database. When using RTA, your UI programs think they are talking to a PostgreSQL database (the PostgreSQL bindings for C and PHP work, as does the command-line tool psql), but instead of a normal database file you are actually accessing the internals of your software.

Software quality

Building quality into embedded software doesn't happen by accident. Quality must be built in from the beginning. The Software startup checklist gives quality a head start article is a checklist for embedded software developers to make sure they kick off their embedded software implementation phase the right way, with quality in mind.

Safety

Traditional methods for achieving safety properties mostly originate from hardware-dominated systems. Nowadays more and more functionality is built using software – including safety-critical functions. Software-intensive embedded systems require new approaches to safety; the Embedded Software Can Kill But Are We Designing Safely? article asks exactly that question.

IEC, FDA, FAA, NHTSA, SAE, IEEE, MISRA, and other professional agencies and societies work to create safety standards for engineering design. But are we following them? A survey of embedded design practices leads to some disturbing inferences about safety. Barr Group's recent annual Embedded Systems Safety & Security Survey indicates that we all need to be concerned: only 67 percent are designing to relevant safety standards, while 22 percent stated that they are not – and 11 percent did not even know whether they were designing to a standard or not.

If you were the user of a safety-critical embedded device and learned that the designers had not followed best practices and safety standards in the design of the device, how worried would you be? I know I would be anxious – quite frankly, this is disturbing.

Security

The advent of low-cost wireless connectivity is altering many things in embedded development – it has added communication issues such as broken connections, latency, and security to your list of worries. Understanding security is one thing; applying that understanding in a complete and consistent fashion to meet security goals is quite another. Embedded development presents the challenge of coding in a language that's inherently insecure, and quality assurance does little to ensure security.

The Developing Secure Embedded Software white paper explains why some commonly used approaches to security typically fail:

MISCONCEPTION 1: SECURITY BY OBSCURITY IS A VALID STRATEGY
MISCONCEPTION 2: SECURITY FEATURES EQUAL SECURE SOFTWARE
MISCONCEPTION 3: RELIABILITY AND SAFETY EQUAL SECURITY
MISCONCEPTION 4: DEFENSIVE PROGRAMMING GUARANTEES SECURITY

Many organizations are only now becoming aware of the need to incorporate security into their software development lifecycle.

Some techniques for building security into embedded systems:

Use secure communications protocols and use VPN to secure communications
The use of Public Key Infrastructure (PKI) for boot-time and code authentication
Establishing a “chain of trust” (a simplified boot-time authentication sketch follows this list)
Process separation to partition critical code and memory spaces
Leveraging safety-certified code
Hardware enforced system partitioning with a trusted execution environment
Plan the system so that it can be easily and safely upgraded when needed
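To make the “chain of trust” idea concrete, here is a sketch of the boot-time image authentication control flow only. A real chain of trust verifies a cryptographic signature (for example RSA or ECDSA through a vetted library) anchored in ROM or in fused keys; this self-contained example substitutes a toy FNV-1a checksum so it stays runnable:

    #include <stdint.h>
    #include <stddef.h>

    /* Toy digest (FNV-1a). NOT cryptographic – it stands in for a real
       signature check so the sketch stays self-contained. */
    static uint32_t toy_digest(const uint8_t *img, size_t len)
    {
        uint32_t d = 0x811c9dc5u;
        for (size_t i = 0; i < len; i++) {
            d ^= img[i];
            d *= 16777619u;
        }
        return d;
    }

    /* Boot stage N verifies stage N+1 before jumping to it;
       on a mismatch the loader refuses to boot the image. */
    int image_is_trusted(const uint8_t *img, size_t len, uint32_t expected)
    {
        return toy_digest(img, len) == expected;
    }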

Flood of new languages

Rather than living and breathing C/C++, the new generation prefers higher-level, more abstract languages (like Java, Python, JavaScript etc.). So there is a huge push to use interpreted and scripting languages in embedded systems as well. Increased hardware performance on embedded devices, combined with embedded Linux, has made many scripting languages good tools for implementing different parts of embedded applications (for example, a web user interface). Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. There are also many other relevant languages.

One workable solution, especially for embedded Linux systems, is to implement part of the functionality as a C program and the rest with scripting languages. This makes it possible to change the system's behavior simply by editing the script files, without having to recompile the whole system software. Scripting languages are also tools with which, for example, a web user interface can be implemented more easily than with C/C++. An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary.
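A minimal sketch of that split on an embedded Linux target: the C core keeps the time-critical work and delegates policy to a script that can be edited on the device without rebuilding anything. The hook path and event name are invented for the example:

    #include <stdio.h>
    #include <stdlib.h>

    /* Run an on-device, field-editable script for the named event. */
    static int run_event_hook(const char *event)
    {
        char cmd[128];
        snprintf(cmd, sizeof(cmd), "/etc/myapp/hooks/%s.sh", event);
        return system(cmd);          /* the script's exit code is the policy */
    }

    int main(void)
    {
        /* ... time-critical C code: drivers, control loop, protocols ... */
        if (run_event_hook("overtemp") != 0)
            fprintf(stderr, "overtemp hook failed or not present\n");
        return 0;
    }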

Scripting languages have been standard tools in the Linux and Unix server world for a couple of decades. The proliferation of embedded Linux and the growth of system resources (memory, processor power) have made them a very viable tool for many embedded systems – for example, industrial systems, telecommunications equipment, IoT gateways, etc. Some scripting languages work well even in quite small embedded environments.
I have successfully used, among others, the Bash, AWK, PHP, Python, and Lua scripting languages with embedded systems. They work really well, and it is really easy to write custom code quickly. They don't require a complicated IDE; all you really need is a terminal – but if you want, there are many IDEs that can be used.
High-level, dynamically typed languages such as Python, Ruby, and JavaScript are easy – and even fun – to use. They lend themselves to code that can easily be reused and maintained.

There are some things that need to be considered when using scripting languages. Sometimes the lack of static checking, compared to a regular compiler, can cause problems to show up only at run time; but you are better off practicing “strong testing” than relying on strong typing. Another downside of these languages is that they tend to execute more slowly than static languages like C/C++, but for very many applications they are more than adequate. Once you know your way around dynamic languages, as well as the frameworks built in them, you get a sense of what runs quickly and what doesn't.

Bash and other shell scripting

Shell commands are the native language of any Linux system. With the thousands of commands available to the command-line user, how can you remember them all? The answer is: you don't. The real power of the computer is its ability to do the work for you – and the power of the shell script is the way it lets you easily automate things by writing scripts. Shell scripts are collections of Linux command-line commands stored in a file; the shell can read this file and act on the commands as if they were typed at the keyboard. In addition, the shell provides a variety of useful programming features that you are familiar with from other programming languages (if, for, regex, etc.), so your scripts can be truly powerful. Creating a script is extremely straightforward: you can create it in a separate editor, or through a terminal editor such as vi (or preferably some other, more user-friendly terminal editor). Many things on modern Linux systems rely on scripts (for example, starting and stopping different Linux services in the right way).

One of the most useful tools when developing within a Linux environment is shell scripting. Scripting can help in setting up environment variables, performing repetitive and complex tasks, and ensuring that errors are kept to a minimum. Since scripts are run from within the terminal, any command or function that can be performed manually from a terminal can also be automated!

The most common type of shell script is a Bash script. Bash is a commonly used scripting language for shell scripts. In Bash scripts users can use more than just Bash to write the script: there are commands that allow users to embed other scripting languages into a Bash script.

There are also other shells. For example, many small embedded systems use BusyBox. BusyBox is software that provides several stripped-down Unix tools in a single executable file (more than 300 common commands). It runs in a variety of POSIX environments such as Linux, Android, and FreeBSD. BusyBox has become the de facto standard core user-space toolset for embedded Linux devices and Linux distribution installers.

Shell scripting is a very powerful tool that I have used a lot in Linux systems, both embedded systems and servers.

Lua

Lua is a lightweight, cross-platform, multi-paradigm programming language designed primarily for embedded systems and clients. Lua was originally designed in 1993 as a language for extending software applications, to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages. Lua is intended to be embedded into other applications, and provides a C API for this purpose.
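A minimal sketch of that C API in use, running a script and reading back a value it sets. The script name and the device_name global are invented for the example; the program links against the Lua library (for example gcc app.c -llua -lm on many systems):

    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    int main(void)
    {
        lua_State *L = luaL_newstate();      /* create an interpreter state   */
        luaL_openlibs(L);                    /* load Lua's standard libraries */

        if (luaL_dofile(L, "config.lua")) {  /* run an extension/config script */
            fprintf(stderr, "lua: %s\n", lua_tostring(L, -1));
            lua_close(L);
            return 1;
        }

        lua_getglobal(L, "device_name");     /* read a value the script set */
        if (lua_isstring(L, -1))
            printf("device_name = %s\n", lua_tostring(L, -1));

        lua_close(L);
        return 0;
    }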

Lua has found uses in many fields. For example, in video game development Lua is widely used as a scripting language by game programmers. The Wireshark network packet analyzer allows protocol dissectors and post-dissector taps to be written in Lua – this is a good way to analyze your custom protocols.

There are also many embedded applications. LuCI, the default web interface for OpenWrt, is written primarily in Lua. NodeMCU is an open-source hardware platform that can run Lua directly on the ESP8266 Wi-Fi SoC. I have tested NodeMCU and found it a very nice system.

PHP

PHP is a server-side, HTML-embedded scripting language. It provides web developers with a full suite of tools for building dynamic websites, but it can also be used as a general-purpose programming language. Nowadays it is common to find embedded hardware devices, based on the Raspberry Pi for instance, that are accessible via a network, run Linux, and come with Apache and PHP installed on the device. In such an environment it is a good idea to take advantage of those built-in features for what they are good at – for example, building a web user interface. PHP is often embedded into HTML code, or it can be used in combination with various web template systems, web content management systems, and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable.

Python

Python is a widely used high-level, general-purpose, interpreted, dynamic programming language. Its design philosophy emphasizes code readability. Python interpreters are available for installation on many operating systems, allowing Python code execution on a wide variety of systems. Many operating systems include Python as a standard component; the language ships for example with most Linux distributions.

Python is a multi-paradigm programming language: object-oriented programming and structured programming are fully supported, and there are a number of language features which support functional programming and aspect-oriented programming. Many other paradigms are supported using extensions, including design by contract and logic programming.

Python is a remarkably powerful dynamic programming language that is used in a wide variety of application domains. Since 2003, Python has consistently ranked in the top ten most popular programming languages as measured by the TIOBE Programming Community Index. Large organizations that make use of Python include Google, Yahoo!, CERN, and NASA. Python is used successfully in thousands of real-world business applications around the globe, including many large and mission-critical systems such as YouTube.com and Google.com.

Python was designed to be highly extensible. Libraries like NumPy, SciPy, and Matplotlib allow the effective use of Python in scientific computing. Python is intended to be a highly readable language. Python can also be embedded in existing applications, and it has been successfully embedded in a number of software products as a scripting language. Python can serve as a scripting language for web applications, e.g., via mod_wsgi for the Apache web server.
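As a minimal sketch of that embedding, the CPython C API lets a C host start an interpreter and hand it work. The script content here is invented for the example, and the program is built with the compiler and linker flags reported by python-config:

    #include <Python.h>

    int main(void)
    {
        Py_Initialize();                    /* start the embedded interpreter */

        /* Hand a task to Python code; in a real product this would be an
           application script loaded from storage. */
        PyRun_SimpleString("import sys\n"
                           "print('embedded Python', sys.version.split()[0])");

        Py_Finalize();                      /* shut the interpreter down */
        return 0;
    }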

Python can be used in embedded, small or minimal hardware devices. Some modern embedded devices have enough memory and a fast enough CPU to run a typical Linux-based environment, for example, and running CPython on such devices is mostly a matter of compilation (or cross-compilation) and tuning. Various efforts have been made to make CPython more usable for embedded applications.

For more limited embedded devices, a re-engineered or adapted version of CPython might be appropriate. Examples of such implementations include PyMite, Tiny Python, and Viper. Sometimes the embedded environment is just too restrictive to support a Python virtual machine. In such cases, various Python tools can be employed for prototyping, with the eventual application or system code being generated and deployed on the device. MicroPython and tinypy have also ported Python to various small microcontrollers and architectures. Real-world applications include Telit GSM/GPRS modules that allow writing the controlling application directly in a high-level open-source language: Python.

Python on embedded platforms? It is quick to develop apps and quick to debug – really easy to make custom code quickly. Sometimes the lack of static checking, compared to a regular compiler, can cause problems to show up at run time. To avoid those, try to have 100% test coverage. pychecker is also a very useful tool that will catch quite a lot of common errors. The only downsides for embedded work are that sometimes Python can be slow and sometimes it uses a lot of memory (relatively speaking). An empirical study found scripting languages (such as Python) more productive than conventional languages (such as C and Java) for a programming problem involving string manipulation and search in a dictionary. Memory consumption was often “better than Java and not much worse than C or C++”.

JavaScript and Node.js

JavaScript is a very popular high-level language. Love it or hate it, JavaScript is a popular programming language for many, mainly because it's so incredibly easy to learn. JavaScript's reputation for providing users with beautiful, interactive websites isn't where its usefulness ends. Nowadays it's also used to create mobile applications and cross-platform desktop software, and thanks to Node.js it's even capable of creating and running servers and databases! There is a huge community of developers.

Its event-driven architecture fits perfectly with how the world operates – we live in an event-driven world. This event-driven modality is also efficient when it comes to sensors.

Regardless of the obvious benefits, there is still, understandably, some debate as to whether JavaScript is really up to the task of replacing traditional C/C++ software in Internet-connected embedded systems.

It doesn’t require a complicated IDE; all you really need is a terminal.

JavaScript is a high-level language. While this usually means that it’s more human-readable and therefore more user-friendly, the downside is that this can also make it somewhat slower. Being slower definitely means that it may not be suitable for situations where timing and speed are critical.

JavaScript is already on embedded boards. You can run JavaScript on the Raspberry Pi and BeagleBone. There are also several other popular JavaScript-enabled development boards to help get you started: the Espruino is a small microcontroller that runs JavaScript; the Tessel 2 is a development board that comes with integrated Wi-Fi, an Ethernet port, two USB ports, and a companion source library downloadable via the Node Package Manager; and the Kinoma Create has been dubbed the “JavaScript powered Internet of Things construction kit.” The best part is that, depending on the needs of your device, you can even compile your JavaScript code into C!

JavaScript for embedded systems is still in its infancy, but we suspect that some major advancements are on the horizon. We see, for example, a surprising number of projects using Node.js. Node.js is an open-source, cross-platform runtime environment for developing server-side web applications. Node.js has an event-driven architecture capable of asynchronous I/O that allows highly scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. The runtime environment interprets JavaScript using Google's V8 JavaScript engine. Node.js allows the creation of web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionality. Node.js' package ecosystem, npm, is the largest ecosystem of open-source libraries in the world. Modern desktop IDEs provide editing and debugging features specifically for Node.js applications.

JXcore is a fork of Node.js targeting mobile devices and IoT devices. JXcore is a framework for developing applications for mobile and embedded devices using JavaScript and leveraging the Node ecosystem (110,000 modules and counting)!

Why is it worth exploring Node.js development in an embedded environment? JavaScript is a widely known language that was designed to deal with user interaction in a browser. The reasons to use Node.js for hardware are simple: it's standardized, event-driven, and has very high productivity; it's dynamically typed, which makes it faster to write – perfectly suited for getting a hardware prototype out the door. For building a complete end-to-end IoT system, JavaScript is a very portable programming system. Typically, IoT projects require “things” to communicate with other “things” or applications. The huge number of modules available for Node.js makes it easier to build interfaces – for example, the HTTP module allows you to easily create an HTTP server that maps GET requests for specific URLs to your software's function calls. If your embedded platform has ready-made Node.js support available, you should definitely consider using it.

Future trends

According to the New approaches to dominate in embedded development article, there will be several camps of embedded development in the future:

One camp will be the traditional embedded developer, working as always to craft designs for specific applications that require fine tuning. These are most likely to be high-performance, low-volume systems, or else fixed-function, high-volume systems where cost is everything.

Another camp might be the embedded developer who is creating a platform on which other developers will build applications. These platforms might be general-purpose designs like the Arduino, or specialty designs such as a virtual PLC system.

A third camp is likely to become huge: traditional embedded development cannot produce new designs in the quantities and at the rate needed to deliver the 50 billion IoT devices predicted by 2020.

The transition will take time. The environment is different from the computer and mobile worlds. There are too many application areas with too widely varying requirements for a one-size-fits-all platform to arise.

But the shift will happen as hardware becomes powerful and cheap enough that the inefficiencies of platform-based products become moot.


Sources

Most important information sources:

New approaches to dominate in embedded development

A New Approach for Distributed Computing in Embedded Systems

New Approaches to Systems Engineering and Embedded Software Development

Lua (programming language)

Embracing Java for the Internet of Things

Node.js

Wikipedia Node.js

Writing Shell Scripts

Embedded Linux – Shell Scripting 101

Embedded Linux – Shell Scripting 102

Embedding Other Languages in BASH Scripts

PHP Integration with Embedded Hardware Device Sensors – PHP Classes blog

PHP

Python (programming language)

JavaScript: The Perfect Language for the Internet of Things (IoT)

Node.js for Embedded Systems

Embedded Python

MicroPython – Embedded Python

Anyone using Python for embedded projects?

Telit Programming Python

MICROCONTROLLERS AND NODE.JS, NATURALLY

Why node.js?

Node.JS Appliances on Embedded Linux Devices

The smartest way to program smart things: Node.js

Embedded Software Can Kill But Are We Designing Safely?

DEVELOPING SECURE EMBEDDED SOFTWARE


1,418 Comments

  1. Tomi Engdahl says:

    Over 95% of embedded-system code today is written in C or C++. But the world is changing, says Altera’s Ron Wilson, and language preferences will change with it.

    Source: http://semiengineering.com/blog-review-march-16/

    Is Tomorrow’s Embedded-Systems Programming Language Still C?
    http://systemdesign.altera.com/tomorrows-embedded-systems-programming-language-still-c/

    What is the best language in which to code your next project? If you are an embedded-system designer, that question has always been a bit silly. You will use C—or, if you are trying to impress management, C disguised as C++. Perhaps a few critical code fragments will be written in assembly language. But according to a recent study by the Barr Group, over 95 percent of embedded-system code today is written in C or C++.

    And yet, the world is changing. New coders, new challenges, and new architectures are loosening C’s hold—some would say C’s cold, dead grip—on embedded software. According to one recent study the fastest-growing language for embedded computing is Python, and there are many more candidates in the race as well. These languages still make up a tiny minority of code. But increasingly, the programmer who clings to C/C++ risks sounding like the assembly-code expert of 20 years ago: their way generates faster, more compact, and more reliable code. So why change?

    What would drive a skilled programmer to change? What languages could credibly become important in embedded systems? And most important, what issues would a new, multilingual world create?

    One major driving force is the flow of programmers into the embedded world from other pursuits. The most obvious of these is the entry of recent graduates. Not long ago, a recent grad would have been introduced to programming in a C course, and would have done most of her projects in C or C++. Not any more. “Now, the majority of computer science curricula use Python as their introductory language,” observes Intel software engineering manager David Stewart.

    Other influences are growing as well. Use of Android as a platform for connected or user-friendly embedded designs opened the door to Android’s native language, Java. At the other extreme on the complexity scale, hobby developers migrating in through robotics, drones, or similar small projects often come from an Arduino or Raspberry-Pi background. Their experience may be in highly compact, simple program-generator environments or small-footprint languages like B#.

    The pervasiveness of talk about the Internet of Things (IoT) is also having an influence, bringing Web developers into the conversation. If the external interface of an embedded system is a RESTful Web presence, they ask, shouldn’t the programming language be JavaScript, or its server-side relative Node.js?

    The momentum for a choice like Node.js is partly cultural, but also architectural.

    Strong Motivations

    So why don’t these new folks come in, sit down, and learn C? “The real motivation is developer productivity,” Stewart says. Critics of C have long argued that the language is slow to write, error-prone, subject to unexpected hardware dependencies, and often indecipherable to anyone but the original coder. None of these attributes is a productivity feature, and all militate against the greatest source of productivity gains, design reuse.

    In contrast, many more recent languages take steps to promote both rapid learning and effective code reuse. While nearly all languages today owe at least something to C’s highly-compressed syntax, now the emphasis has swung back to readability rather than minimum character count.

    Two other important attributes work for ease of reuse in modern languages. One, the more controversial, is dynamic typing. When you use a variable, the interpreter—virtually all these server-side languages are interpreted rather than compiled—determines the current data type of the value you pass to the expression. Then the interpreter selects the appropriate operation for evaluating the expression with that type of data. This relieves the programmer of worry over whether the function he wants to call expects integer or real arguments. But embedded programmers and code reliability experts are quick to point out that dynamic typing is inherently inefficient at run-time and can lead to genuinely weird consequences—intentional or otherwise.

    The other attribute is a prejudice toward modularity. It is sometimes said that Python programming is actually not programming at all, but scripting: stringing together calls to functions written in C by someone else.

    These attributes—readability, in-line documentation, dynamic typing, and heavy reuse of functions—have catalyzed an explosion of ecosystems in the open-source world. Programmers instinctively look in giant open-source libraries such as npm (for Node.js), PyPI (for Python), or Rubygems.org (for Ruby) to find functions they can use.

    The Downside

    With so many benefits, there have to be issues. And the new languages contending for space in embedded computing offer many. There are lots of good reasons why change hasn’t swept the industry yet.

    The most obvious problem with most of these languages is that they are interpreted, not compiled. That means a substantial run-time package, including the interpreter itself, its working storage, overhead for dynamic typing, run-time libraries, and so on, has to fit in the embedded system. In principle all this can be quite compact: some Java virtual machines fit into tens of kilobytes. But Node.js, Python, and similar languages from the server side need their space. A Python virtual machine not stripped down below the level of real compatibility is likely to consume several megabytes, before you add your code.

    Then there is the matter of performance.

    Run-time efficiency is not an impossible obstacle, though. One way to improve it is to use a just-in-time (JiT) compiler. As the name implies, a JiT compiler works in parallel with the interpreter

    In addition, many functions called by the programs were originally written in C, and are called through a foreign function interface. Heavily-used functions may run at compiled-C speed for the simple reason that they are compiled C code.

    Another difficulty with languages from the server side is the absence of constructs for dealing with the real world. There is no provision for real-time deadlines, or for I/O beyond network and storage in server environments. This shortcoming gets addressed in several ways.

    Most obviously, the Android environment encapsulates Java code in an almost hardware-independent abstraction: a virtual machine with graphics, touch screens, audio and video, multiple networks, and physical sensors.

    Languages like Python require a different approach. Since the CPython interpreter runs on Linux, it can in principle be run on any embedded Linux system with sufficient speed and physical memory.

    Security presents further problems. Many safety and reliability standards discourage or ban use of open-source code that has not been formally proven or exhaustively tested. Such restrictions can make module reuse impossible, or so complex as to be only marginally productive.

    Another possibility might be a set of language-specific interpreters producing common intermediate code for a JiT compiler

    If these things are coming, what is a skilled embedded programmer to do? You could explore languages from the Web-programming, server, or even hobbyist worlds. You could try developing a module in both C++ and an interpreted language on your next project. There would be learning-curve time, but the extra effort might count as independent parallel development—a best practice for reliability.

  2. Tomi Engdahl says:

    The Zerynth Framework: programming IoT with Python
    http://www.open-electronics.org/the-zerynth-framework-programming-iot-with-python/

    ZERYNTH, formerly known as VIPER, is a software suite used for programming interactive items that are ready for the cloud and the Internet of Things. ZERYNTH enables development in Python on the most widespread prototyping platforms, using paradigms and features that are typical of high-level programming.

    A part of the difficulties hindering the development of the IoT lies in the “linguistic” barriers between man (who wants to prototype his ideas in a simple and flexible way) and hardware (which requires specific instructions). The real problem is that it would require “advanced” information technology competences, far too advanced for the market these electronic boards are aimed at. On the other hand, if we want interactive items, we have to try hard to create devices that are capable of executing many activities at the same time. This requires learning paradigms such as real-time, interrupts, and callbacks, which are not very simple to learn and manage.

    ZERYNTH, available for download as open source, is a multiplatform (Linux, Windows and Mac) work suite that enables the programming of most of the 32-bit boards currently available on the market: from the professional boards used in the industrial field to the best-known prototyping boards for hobbyists, such as the Arduino DUE, UDOO, Particle and ST Nucleo.

    In detail, ZERYNTH is composed of:

    • ZERYNTH STUDIO: a multiplatform and browser-based development environment, with cloud synchronization and storage of the projects;

    • ZERYNTH VM: a Virtual Real-Time Machine for 32-bit microcontrollers, written in Python 3, with multi-threading support.

    It is compatible with all the boards based on 32-bit ARM chips, such as the Arduino Due, UDOO, Particle, ST Nucleo.

    • the ZERYNTH Library: a set of modules including the CC3000 of Spark Core’s Wi-Fi and Adafruit’s Wi-Fi shield, the Adafruit/Sparkfun thermal printer, the NeoPixel LED ring, the RTTL smart melody player, a signals library of the Streams kind, as well as the TCP and UDP protocols.

    • the ZERYNTH APP: an app, available for both iOS and Android, that acts as an interface to drive the boards that have been programmed by means of ZERYNTH, without having to use switches or potentiometers. The command interface can be customized for each single project, since technically the app is a HTML client that displays the templates defined by the Python scripts inside the ZERYNTH Objects’ memory.

    Zerynth Studio is a powerful IDE for embedded programming in Python that enables the IoT
    Free to download, Free to use
    http://www.zerynth.com/zerynth-studio/

  3. Tomi Engdahl says:

    Programming with Rust
    http://hackaday.com/2015/12/18/programming-with-rust/

    Do hardware hackers need a new programming language? Your first answer might be no, but hold off a bit until you hear about a new language called Rust before you decide for sure.

    Although some people use more abstract languages in some embedded systems, it is no secret that for real-time systems, device driver development, and other similar tasks, you want a language that doesn’t obscure underlying details or generate code that’s difficult to reason about (like, for example, garbage collection). It is possible to use special techniques (like the Real-Time Java Specification) to help languages, but in the general case a lean language is still what most programmers reach for when you have to program bare metal.

    Even C++, which is very popular, obscures some details if you use things like virtual functions (a controversial subject) although it is workable. It is attractive to get the benefit of modern programming tools even if it does conceal some of the underlying code more than straight C.

    That’s where Rust comes in. I could describe what Rust attempts to achieve, but it is probably easier to just quote the first part of the Rust documentation:

    Rust is a systems programming language focused on three goals: safety, speed, and concurrency. It maintains these goals without having a garbage collector, making it a useful language for a number of use cases other languages aren’t good at: embedding in other languages, programs with specific space and time requirements, and writing low-level code, like device drivers and operating systems. It improves on current languages targeting this space by having a number of compile-time safety checks that produce no runtime overhead, while eliminating all data races. Rust also aims to achieve ‘zero-cost abstractions’ even though some of these abstractions feel like those of a high-level language. Even then, Rust still allows precise control like a low-level language would.

    Rusty Hardware

    But, wait. I mentioned hardware hacker language, right? Since Rust targets Linux, it is usable with the many single board computers that run Linux. There is an unofficial repository that handles several ARM-based boards to make it easy to put Rust on those computers.

    If you want to see an example of Rust on embedded hardware, [Andy Grove] (not the one from Intel) recently posted a hello world LED blinking example using Rust and a Beaglebone Black (see the video below). He also found a crate (a Rust library, more or less) to do digital I/O.

    Redox OS
    http://www.redox-os.org/

    Redox is a Unix-like Operating System written in Rust, aiming to bring the innovations of Rust to a modern microkernel and full set of applications.

  4. Tomi Engdahl says:

    Ownership is Theft: Experiences Building an Embedded OS in Rust
    http://iot.stanford.edu/pubs/levy-tock-plos15.pdf

  5. Tomi Engdahl says:

    How to automate measurements with Python
    http://www.edn.com/design/test-and-measurement/4441692/How-to-automate-measurements-with-Python?_mc=NL_EDN_EDT_EDN_today_20160329&cid=NL_EDN_EDT_EDN_today_20160329&elqTrackId=14e99bf8763b48a79a5e2bbb95938ecd&elq=1b2811b958064289a8358457e5615069&elqaid=31530&elqat=1&elqCampaignId=27556

    As a system and application engineer, I’ve saved countless hours by automating measurements with software such as LabVIEW. Although I’ve used it to build measurement applications, I’ve started to replace LabVIEW with Python for basic lab measurements where I don’t need to develop an easy-to-use GUI for others to use.

    Lines 1 to 3 import libraries that contain methods used later in the code:

    Numpy is a package used for scientific computing. In this example, Numpy is used to generate the array of output-current values.
    Pandas (a library for data manipulation and analysis) creates a very powerful data structure to store the results of our measurements.
    Visa is the PyVISA library that we use to control our instruments.
    Time is a handy library that we need to generate some time delays.

    Data analysis and plotting with Pyplot

    Pyplot is a module of the Python’s Matplotlib library that contains plenty of methods to plot graphs. Better still, the methods have been designed to be almost identical to MATLAB’s.

    Python is an excellent choice to automate your laboratory setup and avoid tedious hours of measurements because it is simple to use, easy to understand, and extremely flexible and powerful. LabVIEW is, however, still the king of the GUI.

    How to automate measurements with Python
    http://www.edn.com/design/test-and-measurement/4441692/4/How-to-automate-measurements-with-Python

    What is Python and why use it?

    Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Since its first release in 1991, Python has grown in popularity and it is now used in a wide range of applications; it is one of the most commonly taught programming languages in major universities and online courses. What makes Python such a great “first” programming language are its simplicity, easy-to-learn syntax, and readability (some say “it is written in plain English”), all combined with great versatility and capabilities.

    Don’t think that Python is “just” a good teaching or academic language with little or no professional applications. On the contrary, Python is used heavily in web applications and data analysis by many top organizations such as Google, Yahoo and NASA. It is a very attractive language for rapid application development and it can be used to automate complex electronic instruments and make data collection more efficient.

    Python’s advantages are not limited to its ease of use. Python scripts can be run cross-platform on any major operative system, as long as the free Python interpreter is installed. Python is also extremely powerful and is extensively used for data analysis and complex mathematical calculations.

    Why consider Python for laboratory automation? Most of the test setups I implement are quite simple: 95% of the time the task involves measuring one or more signals (such as voltage, current, or temperature) at different times, or over a set of values of another independent variable. Implementing this requires little more than looping through your independent variable, acquiring the signals, and finally saving the data for further analysis. It is really simple to do this in Python, thanks to its straightforward, no-nonsense grammar and its useful, handy libraries.

    In addition, a Python script is very easy to modify

  6. Tomi Engdahl says:

    Eclipse tackles Java API for IoT
    http://www.edn.com/electronics-blogs/eye-on-iot-/4441640/Eclipse-tackles-Java-API-for-IoT?_mc=NL_EDN_EDT_EDN_weekly_20160324&cid=NL_EDN_EDT_EDN_weekly_20160324&elqTrackId=edc95b01744f4c47a2a49095d90ffae0&elq=b7f356127460453c9e14b347d510fe95&elqaid=31479&elqat=1&elqCampaignId=27508

    The Eclipse Edje Open Source IoT Project, announced at EclipseCon last week, will define a set of application programming interfaces (APIs) for resource-constrained devices that provide the basic services essential to IoT applications. It aims to deliver a standard library that forms a hardware abstraction layer (HAL) for key microcontroller functions such as GPIO, PWMs, LCDs, UARTs, and the like. The project will initially utilize code contributions from MicroEJ but welcomes and encourages new contributors to work through the Eclipse Foundation.

    Edje
    https://projects.eclipse.org/projects/iot.edje

    The edge devices connected to the Cloud that constitute the Internet of Things (IoT) require support for building blocks, standards and frameworks like those provided by the Eclipse Foundation projects: Californium, Paho, Leshan, Kura, Mihini, etc.
    Because of the large deployment of Java technology in the Cloud, on the PC, mobile and server sides, most projects above are implemented in Java technology.

    Deploying these technologies on embedded devices requires a scalable IoT software platform that can support the hardware foundations of the IoT: microcontrollers (MCUs). MCUs delivered by companies like STMicroelectronics, NXP+Freescale, Renesas, Atmel, Microchip, etc. are small low-cost, low-power 32-bit processors designed for running software in resource-constrained environments: low memory (typically KB), flash (typically MB) and frequency (typically MHz).

    The goal of the Edje project is to define a standard high-level Java API called Hardware Abstraction Layer (HAL) for accessing hardware features delivered by microcontrollers such as GPIO, DAC, ADC, PWM, MEMS, UART, CAN, Network, LCD, etc. that can directly connect to native libraries, drivers and board support packages provided by silicon vendors with their evaluation kits.

  7. Tomi Engdahl says:

    How to automate measurements with Python
    http://www.edn.com/design/test-and-measurement/4441692/How-to-automate-measurements-with-Python?_mc=NL_EDN_EDT_EDN_weekly_20160331&cid=NL_EDN_EDT_EDN_weekly_20160331&elqTrackId=8f98be3f632d4b66a27bb760a097c3e6&elq=52a83f9c6d8141269d1a81ef262a6cf2&elqaid=31615&elqat=1&elqCampaignId=27591

    As a system and application engineer, I’ve saved countless hours by automating measurements with software such as LabVIEW. Although I’ve used it to build measurement applications, I’ve started to replace LabVIEW with Python for basic lab measurements where I don’t need to develop an easy-to-use GUI for others to use. When I just need to quickly take some measurements, Python lets me save them in an easy-to-read format and plot them.

  8. Tomi Engdahl says:

    Intel Ups The Dev Board Ante With The Quark D2000
    http://hackaday.com/2016/03/31/intel-ups-the-dev-board-ante-with-the-quark-d2000/

    Intel have a developer board that is new to the market, based on their Quark (formerly “Mint Valley”) D2000 low-power x86 microcontroller. This is a micropower 32-bit processor running at 32MHz, and with 32kB of Flash and 8kB of RAM. It’s roughly equivalent to a Pentium-class processor without the x87 FPU, and it has the usual impressive array of built-in microcontroller peripherals and I/O choices.

    The board has an Arduino-compatible shield footprint, an FTDI chip for USB connectivity, a compass, acceleration, and temperature sensor chip, and a coin cell holder with micropower switching regulator. Intel provide their own System Studio For Microcontrollers dev environment, based around the familiar Eclipse IDE.

    Best of all is the price, under $15 from an assortment of the usual large electronics wholesalers.

    This board joins a throng of others in the low-cost microcontroller development board space, each of which will have attributes that its manufacturers will hope make it stand out.

    Intel® Quark™ Microcontroller D2000
    http://www.intel.com/content/www/us/en/embedded/products/quark/mcu/d2000/overview.html

    Formerly Mint Valley

    The Intel® Quark™ microcontroller D2000, is a low power, battery-operated, 32-bit microcontroller with a more robust instruction set than other entry-level microcontrollers. The first x86-based Intel® Quark™ microcontroller, Intel® Quark™ microcontroller D2000 also increases input/output options over other entry-level microcontrollers. Within its small footprint, the Intel® Quark™ microcontroller D2000 includes an Intel® Quark™ ultra-low-power core running at 32 MHz, with 32 KB integrated flash and 8 KB SRAM.

    Intel® System Studio for Microcontrollers
    https://software.intel.com/en-us/intel-system-studio-microcontrollers

    Development Environment for Intel® Quark™ Microcontroller Software Developers

    Intel® System Studio for Microcontrollers, an Eclipse*-integrated software suite, is designed specifically to empower Intel® Quark™ microcontroller developers to create fast, intelligent things.

    The Internet of Things (IoT) is the big growth wave in tech—from smart cities, homes, and classrooms to energy management, wearable devices, and much more. The Intel Quark microcontroller family extends intelligent computing to a new spectrum of devices requiring low power consumption for sensor input and data actuation applications.

  9. Tomi Engdahl says:

    Tcl is frequently used for embedded systems.

    Where is Tcl embedded?

    Cisco IOS – the user interface is a Tcl interpreter
    F5 Networks – iRules scripts are Tcl scripts
    TiVo Inc. – Tcl is embedded in TiVo’s digital video recorder system (see also TivoWeb)
    VMD – Visual Molecular Dynamics

    Source: http://wiki.tcl.tk/3123

  10. Tomi Engdahl says:

    Fixing C

    http://www.eetimes.com/author.asp?section_id=182&doc_id=1329425&

    What would you change if you could fix one thing about the C language?

    If you’re an old-timer you’ve probably written code in a large number of languages that have many different underlying philosophies.

    C is basically assembly with massive productivity improvements. C++ is, in my opinion, rather full of complexity and dark spaces where one is wise not to tread. Java is an interesting approach to OO but garbage collection and a lack of pointers make it inappropriate for most embedded work.

    C has been the lingua franca of this business for many years. Survey data shows it has most of the embedded space’s market share, and that the numbers haven’t changed much over the years.

    More modern aspirants like Python just haven’t caught on. C is going to be a major force for a very long time.

    Any language can be abused. But compare typical C programs to, say, Ada code, and one does wonder why we seem to be so terse. And the language itself is terse.

    Fixing C

    http://www.embedded.com/electronics-blogs/break-points/4441819/Fixing-C

    Suppose you could wave a magic wand and change just one aspect of C. What would that be and why?

    My vote is to get rid of the curly braces. All of us have suffered from bugs and compiler complaints from deeply-nested blocks of code where we get mixed up about how many closing braces are needed, and where they should be placed.

    I often put an indication of which brace is closing which block in the comments. But if the language were as I’ve indicated, the compiler could point out missing and mixed-up “end” statements.

  11. Tomi Engdahl says:

    AdaCore’s SPARK Pro
    http://www.linuxjournal.com/content/adacores-spark-pro?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+linuxjournalcom+%28Linux+Journal+-+The+Original+Magazine+of+the+Linux+Community%29

    With this new version of the SPARK Pro toolset, AdaCore comes one step closer to its goal of making the writing of proven software both efficient and pleasant. As part of its new SPARK Pro 16 integrated development and verification environment, AdaCore further simplifies software engineers’ transition to greater reliance on static verification and formal proofs sans need for expertise in mathematical logic.

    SPARK Pro 16 also provides enhanced coverage of the SPARK 2014 language features and now supports the Ravenscar tasking profile, thus extending the benefits of formal verification methods to a safe subset of Ada 2012 concurrent programming features. This new SPARK Pro can generate counter-examples to verification conditions that cannot be proved,

    http://www.adacore.com/

    Reply
  12. Tomi Engdahl says:

    Speed development of IoT devices
    http://www.edn.com/electronics-blogs/embedded-basics/4441832/Speed-development-of-IoT-devices?_mc=NL_EDN_EDT_EDN_today_20160421&cid=NL_EDN_EDT_EDN_today_20160421&elqTrackId=9df5db51247f4bd9b4bbca5866c8d48f&elq=88c33febbb214fe39923a4668269f06d&elqaid=31950&elqat=1&elqCampaignId=27866

    For decades embedded systems have been built in nearly the exact same way, but the demands of market conditions, budgets, and technological advancements are rapidly transforming the way embedded systems are being built. The complexities and challenges of building an internet connected device, a potentially huge market that developers can no longer ignore, are quite staggering if a developer follows traditional design techniques. Below are a few ideas on how developers can rapidly develop internet connected devices.

    Idea #1 – Select an embedded platform
    Idea #2 – Adopt an alternative programming language
    Idea #3 – Leverage development kits
    Idea #4 – Use modules and frameworks
    Idea #5 – Don’t be afraid to push the envelope

    Final thoughts

    The onset of the IoT era is proving to be exciting not only because of the creation of new products but also because of the new techniques becoming available to build those systems. The very way in which embedded systems are being built is beginning to change. Before long, the idea of writing a low-level driver or middleware will be as foreign to an embedded developer as it is to a .NET developer today.

    Reply
  13. Tomi Engdahl says:

    Ganssle’s presentation “Mars Ate My Spacecraft” recounted a number of embedded disasters, some with a common theme: “Tired engineers make mistakes.”

    Jacob Beningo’s session “Real-time software using Micro Python” explored the recent history of programming languages and how to get started with Micro Python.

    Source: http://www.edn.com/electronics-blogs/now-hear-this/4441893/2/ESC-Boston-2016-in-photos

    Reply
  14. Tomi Engdahl says:

    A Finnish developer created an open source “duct tape”

    Duktape may be the next Finnish open source success in the field. The JavaScript engine developed by the Finn Sami Vaarala makes it easier, among other things, to develop applications for IoT devices.

    Duktape allows, for example, C-language programs to be extended with JavaScript.

    Duktape is an open source JavaScript engine that is used in IoT as well as in the gaming industry and in consumer electronics devices such as set-top boxes.

    “For example, in embedded IoT applications the programming environments are tough, and coding for them requires quite a lot of know-how. Programs are therefore often split into two parts: the core is typically written in C, and the majority of the actual functionality is done in JavaScript, for which talent is easier to find.”

    The IoT field is still in a tentative period and talent is in short supply. Old-school C programmers, especially ones who also have device-side expertise, seem to be increasingly scarce on the market.

    Duktape’s advantage is that it can be embedded in a really small amount of memory. Google’s V8 JavaScript engine, for example, is for many reasons often too big to fit.

    “Samsung has its own JerryScript. Samsung did at one point evaluate Duktape, but they ended up making their own engine. So with them there is a little competition.”

    Beyond that, Vaarala has not yet decided what direction he wants to take Duktape in terms of trends and future growth.

    Source: http://www.tivi.fi/Kaikki_uutiset/suomalainen-kehitti-avoimen-koodin-jeesusteipin-6552693

    Duktape
    http://duktape.org/

    Duktape is an embeddable Javascript engine, with a focus on portability and compact footprint.

    Duktape is easy to integrate into a C/C++ project: add duktape.c, duktape.h, and duk_config.h to your build, and use the Duktape API to call Ecmascript functions from C code and vice versa.

    Main features

    Embeddable, portable, compact: can run on platforms with 256kB flash and 64kB system RAM
    Ecmascript E5/E5.1 compliant, some features borrowed from Ecmascript E6
    Khronos/ES6 TypedArray and Node.js Buffer bindings
    Built-in debugger
    Built-in regular expression engine
    Built-in Unicode support
    Minimal platform dependencies
    Combined reference counting and mark-and-sweep garbage collection with finalization
    Custom features like coroutines, built-in logging framework, and built-in CommonJS-based module loading framework
    Property virtualization using a subset of Ecmascript E6 Proxy object
    Bytecode dump/load for caching compiled functions
    Liberal license (MIT)

    Reply
  15. Tomi Engdahl says:

    There are multiple Javascript engines targeting similar use cases as Duktape, at least:

    Espruino (MPL v2.0)
    JerryScript (Apache License v2.0)
    MuJS (Affero GPL)
    quad-wheel (MIT License)
    tiny-js (MIT license)
    v7 (GPL v2.0)

    Source: http://duktape.org/

    Reply
  16. Tomi Engdahl says:

    Don’t Pick a Programming Language Because It’s the ‘Most Profitable’
    http://motherboard.vice.com/read/dont-pick-a-programming-language-because-its-the-most-profitable-java-javascript-python?trk_source=recommended

    I’m not trying to nuke the general ideas of in-demand or popular programming languages—because those are things that exist. But they exist within a much larger ecosystem of talent, ability, and, ultimately, employability. Here are a few things that need to be kept in mind as you chew through another “most profitable” programming language list.
    0) Popular languages and in-demand languages != the future

    You’ll see Java at the top of many “most profitable” lists and it is for sure an in-demand skill.
    Java will be around for a while, but not so much by choice.

    0.5) People make money, not languages

    Java can be expected to be a part of a much larger skill set than what we might normally think of “coding,” e.g. software engineering, systems programming, etc. A professional engineer whose work involves Java is going to have a boatload more qualifications than “knowing Java,”

    1) Programming languages are tools

    Programming languages are not programming. They are tools used to program. Programming is a set of skills that are mostly language-independent. “Knowing Java” implies to precisely no one that you are a competent programmer.

    2) Programming languages depend on problems

    Even very general languages like Java fit some domains better than others. Programming is always just solving problems and programming languages all exist to solve some subset of all problems. This is why there are so many languages—some are better at solving certain types of problems than others.

    So, if not the “most profitable,” what language should you actually learn? Probably the one best suited to helping you learn other languages, e.g. the one that will teach you to actually program. That might be Python, or it could even be something with a very limited domain (or problem space), like Processing.

    Reply
  17. Tomi Engdahl says:

    Duktape is an embeddable Javascript engine, with a focus on portability and compact footprint.
    http://www.epanorama.net/newepa/2016/05/24/duktape/
    http://duktape.org/

    Reply
  18. Tomi Engdahl says:

    Learn Functional Reactive Programming on Your Arduino
    http://hackaday.com/2016/05/25/learn-functional-reactive-programming-on-your-arduino/

    Juniper is a functional reactive programming language for the Arduino platform. What that means is that you’ll be writing your code using anonymous functions, map/fold operations, recursion, and signals. It’s like taking the event-driven style that you should be programming in one step further; you write a=b+3 and when b changes, the compiler takes care of changing a automatically for you. (That’s the “reactive” part.)

    If you’re used to the first-do-this-then-do-that style of Arduino (and most C/C++) programming, this is going to be mind expanding. But we do notice that a lot of microcontroller code looks for changes in the environment, and then acts (more or less asynchronously) on that data.

    http://www.juniper-lang.org/tutorial.html
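    Juniper itself is its own language, but the reactive idea is easy to sketch in ordinary Python. A minimal illustration with a hypothetical Signal class (this demonstrates the concept only and is not Juniper code):

    class Signal:
        # A tiny reactive value: registered observers recompute when it changes
        def __init__(self, value):
            self._value = value
            self._observers = []

        def get(self):
            return self._value

        def set(self, value):
            self._value = value
            for update in self._observers:
                update()

        def map(self, fn):
            out = Signal(fn(self._value))
            self._observers.append(lambda: out.set(fn(self._value)))
            return out

    b = Signal(4)
    a = b.map(lambda v: v + 3)   # declare a = b + 3 once
    b.set(10)                    # changing b updates a automatically
    print(a.get())               # prints 13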

    Reply
  19. Tomi Engdahl says:

    Node.js on a satellite means anyone can be a space programmer
    https://reaktor.com/blog/node-js-satellite-means-anyone-can-space-programmer/

    Why is running Node.js on a satellite a small step for Reaktor but a giant leap for the satellite industry? The key is the ability to use JavaScript, one of the most popular programming languages in the world. Read on about how our little “Hello World” cubesat is about to make a big splash in the space industry.

    Reply
  20. Tomi Engdahl says:

    Getting Started with C++ in Embedded Systems
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1329799

    C++ is available for most embedded targets, yet the adoption rate remains low. This webinar will provide practical information you can put to immediate use.

    Sketches and libraries for Arduino microcontroller development systems, which I’m using in my hobby projects, tend to be written as a mixture of C and C++, and I often find myself scratching my head over a piece of C++ syntax.

    According to the event information: “C++ is available for most embedded targets, yet the adoption rate remains low. This webinar seeks to remedy that by providing super practical information you can put into practice immediately. We will move quickly from dispelling common C++ myths and identifying some key C++ benefits to a set of practical tips and tricks to help you put C++ to the most effective use in your ‘first month’ and ‘first year.’”

    Comments from http://www.eetimes.com/messages.asp?piddl_msgthreadid=49314&piddl_msgid=359627#msg_359627

    I think C is a terrific language for embedded systems. You can think of C as a “portable assembly language”

    I would agree with you on this 100%.

    I think the biggest danger to all programming is not commenting.

    If the system you are writing for has the resources to handle the overhead of using C++, then it is a valid option, as the compilers have become much more efficient (and reliable) over the last 5 years. But for a large number of engineers, the systems they are writing for do not have the resources needed to handle the C++ environment, so C is one of the best choices.

    I hear a lot of FUD in that response about the pitfalls of C++. Sounds like someone just does not want to part with their assembly code.

    One of the design tenets of C++ is that you should not have to pay for what you do not use. The two areas of concern for embedded systems are exceptions and the Standard Template Library. Exceptions incur a runtime overhead to track exception handlers; don’t catch any exceptions and that overhead is minimal. The danger of the STL is that there are a lot of neat, complex objects wrapped up in very simple interfaces. There are some nice additions to the STL, like std::array. You can completely ignore the more dangerous stuff without missing it.

    Templates can be very useful.

    Overall, I would much rather design with objects.

    There is simply nothing that can be done in C that cannot be done just as efficiently in C++ with better encapsulation and a cleaner design.

    My favorite C++ quote:

    C gives you enough rope to hang yourself. C++ gives you enough rope to tie up everyone in your neighborhood, rig a small ship, and still have enough left over to hang yourself from the yardarm.

    C++ has various dangers. If you know what you’re doing you can write good embedded code in C++, but you really need to know how a C++ compiler works to avoid those dangers, and I think most embedded programmers are better off with the relative simplicity of C.

    Reply
  21. Tomi Engdahl says:

    Python’s role in developing real time embedded systems
    http://goo.gl/3J8t1b

    Python has become quite the trending programming language over the last few years. Named after the famous Monty Python comedy group, the language is object oriented and interpreted (not compiled). This attribute has resulted in Python being adopted on platforms such as Linux and Windows, and on single board computers such as the Raspberry Pi. With such a wide and growing adoption, one might wonder if there is a place for Python in real-time embedded systems. There is. Below are five roles that developers may find Python playing in real-time embedded systems.

    Role #1 – Device control and debugging
    Role #2 – Automating testing
    Role #3 – Data analysis
    Role #4 – Real-time Software
    Role #5 – Learning object oriented programming
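    As a minimal illustration of roles #1 and #2, here is a sketch that drives a device under test over a serial port using the third-party pyserial package; the port name and the “LED ON” command protocol are invented for illustration:

    import serial  # third-party pyserial package

    def test_led_command(port="/dev/ttyUSB0"):
        # Open the target's debug serial port (9600 8N1, 1 s timeout assumed)
        with serial.Serial(port, 9600, timeout=1) as dut:
            dut.write(b"LED ON\n")           # hypothetical device command
            reply = dut.readline().strip()
            assert reply == b"OK", "device did not acknowledge LED ON"

    test_led_command()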

    Conclusions

    Students and engineers are becoming very familiar with the Python programming language. One might consider the Maker movement and the Raspberry Pi to be a few reasons it has moved up the list in popularity. But also, the language itself is flexible, easy to learn, and can be adapted to work within a microcontroller based environment. Developers thus shouldn’t be surprised when they see Python cropping up and beginning to play a role in embedded system development.

    Reply
  22. Tomi Engdahl says:

    Prototype to production: Running Python with Arduino
    http://www.edn.com/electronics-blogs/embedded-basics/4442231/Prototype-to-production–Running-Python-with-Arduino

    The label “Arduino” used to signify that the board being used contained an Atmel processor and used the Arduino footprints and software libraries. That is no longer the case. Nearly every microcontroller vendor has created a development kit based on the Arduino hardware footprint using their own microcontroller and software stack.

    The trick to running Python with Arduino is to find an Arduino compatible development kit that can run Python. This is really a question of finding out what Arduino boards have a Python port that can be easily used with them. No developer should want to port Python themselves for use on a microcontroller. As fun as porting Python would be, the endeavor would be quite time consuming. Surely there are other developers or projects where Python has been ported and open sourced.

    A look through the magic Google window reveals that there are not very many options. One of the few is a five-year-old open source development known as Pymite, which hasn’t been updated in over two years.

    A second option is Micro Python. Micro Python is an open source port of Python 3 that is optimized to run efficiently on a microcontroller.
    Micro Python has gathered some steam recently and is now supported on a number of different hardware platforms, including the CC3200, ESP8266, PIC16, and the STM32.

    But, do any of these microcontroller development kits support Micro Python out of the box? A review of the boards currently supporting Micro Python (including non-STM boards) reveals that only the NETDUINO_PLUS_2 and the OLIMEX_E407 have Arduino-compatible outputs.

    If neither of these first two options appeals to a developer, a third option for getting Python running with Arduino hardware footprint would be to create an adapter board between the PyBoard header footprints and the standard Arduino pin-out.

    Reply
  23. Tomi Engdahl says:

    Hacklet 114 – Python Powered Projects
    http://hackaday.com/2016/07/02/hacklet-114-python-powered-projects/

    Python is one of today’s most popular programming languages. It quite literally put the “Pi” in Raspberry Pi. Python’s history stretches back to the late 1980’s, when it was first written by Guido van Rossum. [Rossum] created Python as a hobby project over the 1989 Christmas holiday. He wanted a language that would appeal to Unix/C hackers. I’d say he was pretty successful in that endeavor. Hackers embraced Python, making it a top choice in their projects. This week’s Hacklet focuses on some of the best Python-powered projects on Hackaday.io.

    Next is [i.abdalkader] with OpenMV, his entry in the 2014 Hackaday Prize. [i.abdalkader’s] goal was to create “the Arduino of machine vision”.

    OpenMV
    Python-powered machine vision modules
    https://hackaday.io/project/1313-openmv

    Reply
  24. Tomi Engdahl says:

    Prototype to production: MicroPython under the hood
    http://www.edn.com/electronics-blogs/embedded-basics/4442357/Prototype-to-production—MicroPython-under-the-hood?_mc=NL_EDN_EDT_EDN_weekly_20160714&cid=NL_EDN_EDT_EDN_weekly_20160714&elqTrackId=e304088f3fd54c539849ce1bbdb13354&elq=ea27b4a0129f49909ac95a0a475279d0&elqaid=33083&elqat=1&elqCampaignId=28916

    In the last installment of this series, Running Python with Arduino, we examined off-the-shelf possibilities for running Python with Arduino hardware. The best opportunities among the alternatives involved using either a NETDUINO_PLUS_2, which has just gone end-of-life, or the OLIMEX_E407. Developers by no means are stuck with these two boards, but to really customize whatever hardware Micro Python will run on, developers need to understand a little bit about what is happening under the hood.

    The easiest way to understand Micro Python is to view the online open source github repository. A developer can browse the source and architecture without downloading the source just to become familiar with the code base.

    The main github Micro Python repository, shown in Figure 1, is basically broken up into three main categories:

    MCU Architectures (bare-arm, cc3200, esp8266, etc.)
    Common code (drivers, lib, py, etc.)
    Documentation (docs, README.md, LICENSE, etc.)

    Reply
  25. Tomi Engdahl says:

    Tcl: Still the King of Rapid Prototyping

    Tool Command Language (Tcl) is a powerful tool that’s been stress-tested for many years. The power of the graphical user interface (GUI) and its separation from the compute structures makes Tcl an amazing choice for prototyping. With a port of Tcl/Tk to Android, rapid prototyping can also be done on mobile. This paper explains seven reasons why the future of Tcl is bright.

    Tcl: Still the King of Rapid Prototyping
    https://assets.emediausa.com/research/7-reasons-the-future-of-tcl-is-bright-34332?lgid=3441165&mailing_id=1986792&engine_id=1&lsid=1&mailingContentID=36592&tfso=139365

    Reply
  26. Tomi Engdahl says:

    Understand firmware’s total cost

    http://www.edn.com/electronics-blogs/embedded-basics/4442394/Understand-firmwares-total-cost?_mc=NL_EDN_EDT_EDN_today_20160726&cid=NL_EDN_EDT_EDN_today_20160726&elqTrackId=43f6f35cac2a4eccb4d33fde7f9bc75c&elq=94c022fd1f6143ffb4733b15876f4a76&elqaid=33182&elqat=1&elqCampaignId=29011

    Innovation can be an exciting endeavor, but on occasion management and developer decisions will optimistically estimate the cost implications for a project. That optimism can sometimes come from short-sightedness or a knowledge gap in understanding what is involved in the total cost of ownership for developing embedded software. Let’s look at the five major cost contributors that affect the total cost of ownership.

    Contributor #1 – Software licensing

    Contributor #2 – Software development

    Contributor #3 – Software maintenance

    Contributor #4 – Certifications

    Contributor #5 – Sales and marketing

    The total cost to own firmware is far larger than just the development costs. In order to truly understand the full investment necessary to be successful, companies and teams need to expand their considerations and understand how software licensing, certifications, and even the maintenance cycle will affect their return on investment. Without all these pieces the story is incomplete and the chances for a product’s financial success may be drastically reduced.

    Reply
  27. Tomi Engdahl says:

    Companies prefer the old programming languages

    Recent research reveals that companies continue to seek experts in ‘old’ languages: topping the list are Java, Python and C.

    The results showed that practically every company recruiting coders wants Java experts. Nearly 90 per cent of enterprises seek Python coders, and around 70 per cent prefer C skills.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=4800:yritykset-suosivat-vanhoja-ohjelmointikielia&catid=13&Itemid=101

    Reply
  28. Tomi Engdahl says:

    Prototype to Production: Hello World
    http://www.edn.com/electronics-blogs/embedded-basics/4442579/Prototype-to-Production–Hello-World?_mc=NL_EDN_EDT_EDN_today_20160824&cid=NL_EDN_EDT_EDN_today_20160824&elqTrackId=1395332c02084ce7839a619a01624a9b&elq=275b0d76d8b54f69ae6fd1597b7ee574&elqaid=33565&elqat=1&elqCampaignId=29341

    So far in this series we have examined Arduino hardware and MicroPython software but haven’t actually written any code. In this installment, we are going to write a basic Python script that will serve as our “Hello World” and our “blinky” starting apps and explore what these typically mean to software developers and electrical engineers.

    First, “Hello World” has been the typical first application written by software developers when they are starting a new project or when they are learning a new language. The whole point is that the “Hello World” is the simplest application that can be written and lets the developer know that:

    The basic program syntax is correct
    They can successfully compile the program
    The program runs and outputs the desired message

    Creating a “Hello World” application using MicroPython on the Netduino Plus 2 is straightforward. First, a developer needs to create a local copy of the main.py script. Next, since Micro Python has already handled the low-level details, the following code can be used to print our message over the USB terminal:

    print("Hello World!")

    It’s that simple in Python!

    The “Hello World” program works great for most high-level programming languages being used on a general computing device.
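    The companion “blinky” app is nearly as short. A minimal sketch, assuming the pyb module of the board’s MicroPython port (on-board LED numbering varies by board):

    import pyb

    # Toggle the first on-board LED twice a second, forever
    led = pyb.LED(1)
    while True:
        led.toggle()
        pyb.delay(500)   # milliseconds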

    Reply
  29. Tomi Engdahl says:

    Embedded Evolution
    http://semiengineering.com/embedded-evolution/

    When a car is hacked or data stolen, who gets blamed? Normally the software, but it is hardware that really created the problem.

    Fast forward through a couple of decades (OK, three) and things are very different. Processing power is almost unlimited, if you are willing to accept multi-core architectures. On-chip memory exists by the gigabyte, and even off chip memory takes almost no space at all. So much can be integrated into a single die that the need for off-chip components is a fraction of what it used to be. But within those changes we have transferred a large part of the problem from hardware to software.

    Sure, today’s software engineers have almost unlimited horsepower and memory and huge libraries of software lying around for them to pick up and integrate, but they are probably less equipped to deal with some of the challenges they face today than they were 30 years ago. Ask a software engineer how long a task will take and it is unlikely that he can tell you the maximum time. He probably cannot even tell you an average time with a reasonable level of confidence.

    What we have created is a mess. We could have provided most of those gains in a much more controlled manner if we had imposed some changes or restrictions on software.

    Message-passing systems could have been much faster and a small amount of shared memory space would have eliminated the need for highly complex and costly cache coherence systems.

    The solution chosen was easy to implement at that time and appeared to have minimal impact. Both of those decisions turned out to be anything but easy or cheap. They have brought us to where we are today, with processing systems that are very difficult to utilize well, with insecure systems that allow one task to spy on another, with high overheads in silicon and a lack of tools for software engineers that would enable quality software to be produced.

    We are again trying to fix those flaws with yet more bandages: a little bit of hardware and some more software to try and recreate what could have been done in the first place. Hypervisors are placing restrictions on software so that tasks cannot share memory. And while today that is at a coarse level of granularity, how long will it take before the industry agrees this was good and should be propagated further through the system? When will hardware do what it should have done from the beginning and create secure memory architectures and multi-processor cores implemented with high-speed message-passing systems?

    Sure, it would take time before the software engineering community was ready to fully take advantage of these new capabilities. But let’s face it, they still cannot use the “features” we gave them 20 years ago. It’s time for the hardware industry to accept they are the ones responsible for data breaches and hacked cars and stop blaming the software. It is time to start designing complete systems, rather than relying on the wall between hardware and software to make half the job easier at the expense of the other half.

    Reply
  30. Tomi Engdahl says:

    Prototype to production: An industrial HVAC monitor using shields
    http://www.edn.com/electronics-blogs/embedded-basics/4442634/-Prototype-to-Production–an-industrial-HVAC-monitor-using-shields?_mc=NL_EDN_EDT_EDN_today_20160906&cid=NL_EDN_EDT_EDN_today_20160906&elqTrackId=ada8a255f14e4b00bc9e5c204ed2c8d3&elq=e2e89628158e403eb4edaea993dc260b&elqaid=33706&elqat=1&elqCampaignId=29472

    The quickest and easiest way to prove out a design idea is to utilize Arduino shields. Before we select the shields that we need to cobble together, though, it makes sense to scribble out a few basic design requirements. These basic requirements should be a subset of the general requirements recommended in Prototype to Production – High level project requirements.

    First, there are a few different potential maintenance and failure modes for a furnace that we should consider monitoring.

    Second, the device should transmit the status for these measurements via Ethernet to a server that can then perform cloud-based analytics and notifications. Third, a short range communication interface such as Bluetooth or USB should be used to set up and calibrate the device. Finally, the device will need to interact with the thermostat control lines to determine what state the furnace is being placed into, so that the device can properly determine whether the blower is in low or high mode or if it is in heat or cool mode.

    Given these general requirements for the monitor, we can now develop a high level hardware block diagram that identifies the major components requiring consideration when building a rapid prototype.

    Now for the fun part: selecting hardware shields to play with! Because a goal of this rapid prototyping project is to use Arduino and Micro Python, the Netduino Plus 2 makes the most sense for the microcontroller board. The specs for the Netduino include 5-volt-tolerant pins and power, and the board comes with USB and Ethernet, which saves a few additional shields.

    When creating a rapid prototype, the idea is to find as much ready-made hardware as possible so that we can prove out that the concept (for monitoring a furnace, in this case) is actually viable.

    Keep in mind that the block diagram has been left generic with regard to the actual communication interfaces required. It lists just about every common IoT radio that one might be interested in using, but for the proof-of-concept it’s not necessary to purchase hardware for each one. The best approach at this stage is to implement only the major core features as cheaply and quickly as possible. If the idea fails, then we’ve successfully failed quickly with minimal investment and can move on to more hopeful and fruitful ideas.

    Reply
  31. Tomi Engdahl says:

    Prototype to Production: Arduino for the Professional
    http://www.eetimes.com/author.asp?section_id=36&doc_id=1329708&_mc=RSS_EET_EDT&hootPostID=54f9891e55fea2b65f2eb0e5360baa2c

    Jacob Beningo looks at how the humble Arduino can benefit a professional development effort.

    Despite its popularity among hobbyists and electronics enthusiasts, the Arduino has become infamous among professional embedded systems developers. I must admit that for the longest time I also viewed the Arduino as so simple it was nearly useless for professional developers. But I have changed my mind.

    The Arduino hardware platform
    One of the most powerful aspects of the Arduino for professional developers is the hardware ecosystem that supports it. Every Arduino board and derivative has a standard hardware interface that allows custom designed electronics to be stacked on top of the processor board to flesh out the prototype of an embedded system. The custom electronic boards, known as shields (as most developers are probably aware), can have literally any type of electronics onboard, such as motor drivers, sensors, actuators, LEDs, or whatever the application may need. The popularity of Arduino among hobbyists has greatly benefited embedded system professionals because the result has been a wide variety of off-the-shelf Arduino shields for nearly every application imaginable.

    Professional developers can also leverage the Arduino hardware platform to interface with commercial devices of interest. Using available shields for CAN, SPI, RS-485, Ethernet, and other equipment interfaces it’s possible to perform rapid prototyping activities for proof-of-concepts or one-off customer demos.

    Prototype to production: Arduino for the professional
    http://www.edn.com/electronics-blogs/embedded-basics/4442018/Prototype-to-production–Arduino-for-the-professional

    The Arduino shield interface is designed for low cost, low pin count microcontrollers, which can potentially be an issue for professional embedded systems developers needing more. Microcontroller companies have tried to resolve this issue by creating development boards for their more powerful processors while following the footprint for an Arduino shield. They then expanded the headers for additional functionality. By expanding their headers in the same way, developers can build their own custom shields for these enhanced development boards that utilize the extra functionality. Yet they can still also purchase off-the-shelf Arduino shields that remain compatible with the development board.

    The Arduino software platform
    The Arduino is more than hardware; it’s a complete hardware and software prototyping system. Its software development environment and libraries leave much to be desired from a professional developer’s point of view, but it is still useful to get a basic understanding of how Arduino handles software development.
    First, a developer examining the Arduino website (arduino.cc) will discover that there is some really strange language in use when it talks about software. Arduino has invented a concept for the general public known as sketching, which to a professional developer is simply “writing code”. A sketch is really nothing more than a software project, but the term “sketch” comes from the fact that Arduino was originally developed as a rapid prototyping tool for individuals who knew little to nothing about software or electronics, artists for example.

    Never heard of the Arduino programming language? That is because the Arduino programming language is actually nothing more than C/C++. The “Arduino language” as they refer to it is actually just a collection of libraries that provide a consistent set of APIs for controlling microcontroller peripherals.

    Conclusions
    Professional developers can leverage the Arduino ecosystem to rapidly prototype and prove out an embedded system concept. Existing Arduino libraries can be used for quick and dirty development, but many developers will find the software development environment wanting and will likely choose to use their own development tools and environments. Despite the professional deficiencies in the software platform, though, the Arduino shields and hardware environments offer a great opportunity to help accelerate development through the use of readily available shields. Just don’t forget that Arduino is meant for rapid prototyping rather than developing production-intent systems.

    Reply
  32. Tomi Engdahl says:

    Reflections on a Career in Engineering
    http://www.eetimes.com/author.asp?section_id=182&doc_id=1330487&

    From a world without embedded to it being everywhere. What a career!

    A 17th century farmer’s life was likely no different from that of an agrarian several thousand years earlier. For most of human history one’s life was pretty much like that of one’s great-grandparents.

    Until now.

    The industrial revolution took place yesterday, considering humankind’s long presence on this planet. Starting in the mid-18th century it moved people from the land into factories and created all sorts of consumer goodies. But people remained largely poor; I read (somewhere) that in 1810 94% of the world’s population lived in extreme poverty compared with 10% today. (Interestingly, in absolute numbers, roughly the same number of people remain in that unenviable lot today as two centuries ago.)

    In my family, in the course of three generations, this technology has gone from unattainable to routine. Except that today’s devices don’t resemble those of 1910, 1950, or even 1980 at all; they are battery-powered computers that just a few years ago would have been impossible to imagine.

    http://www.embedded.com/electronics-blogs/break-points/4442717/Reflections-on-a-career

    One of our early 8008 products required 4KB of program space (in 16 EPROM chips!), yet it did floating-point linear regressions and took data in real time at rates of tens of microseconds.

    That 8008 was about $650 each (just for the chip) in today’s dollars.

    Now, 40+ years later, programs are often megabytes in size. Microcontrollers offer complete computers, memory and all, on a single chip for less than a buck. The $5000 five MB hard disk (with a removable 14” platter) we used in the early 70s has been replaced by a $50 terabyte drive.

    How things have changed!

    Yet many professions have not. One of my brothers sells jewelry to stores. He claims that the business is just like it was four decades ago, except that the number of stores has declined due to on-line shopping. Another is a philosopher who uses modern tools to expound on ancient ideas.

    Electronic engineering is a field where change is the only constant. Some wags claim the field is reinvented every two years, a silly notion considering just how much of our knowledge base remains unchanged. Maxwell’s equations, Kirchhoff, De Morgan, transistor theory (at least most of it) and so many other subjects foundational to our work are pretty much the same as in our college years. But the technology itself evolves at a dizzying pace. A mainframe of that era could now exist on a single fleck of silicon no bigger than a fingernail. Instead of costing millions, today they are so cheap they’re used as giveaways.

    It’s hard to point to any bit of our tech that is unchanged. The lowly resistor is now an 0302 thin-film device. Supercaps offer farads of capacitance. Where a four-layer PCB was once unimaginable, today it’s not rare to see layers stacked tens deep. Buried vias? Who would have dreamed of such a structure 40 years ago?

    Embedded software has changed as well. In the 1970s it was all done in assembly language. C and C++ are now (by far) the dominant languages. One could argue that C took over around 1990 and has stagnated since, but the firmware ecosystem is nothing like it was years ago. Today one can get software components like GUIs, filesystems and much more as robust and reasonably-priced packages. Static analyzers find bugs automatically while other tools will generate unit tests. Where we used to use paper tape for mass storage when developing code, patching binaries rather than reassembling to save time, now fantastic IDEs can graphically show what tasks execute when, or capture trace data from a processor furiously executing 100 million instructions per second.

    Embedded systems have always suffered from being the neglected kid in town; all the tech glory goes to iPads and PCs.

    We’re at a singular point in history, at least embedded history. Today cheap yet very powerful 32-bit MCUs that include vast amounts of memory and astonishing arrays of peripherals, coupled with their extremely low power needs and widely available communications I/O and infrastructure, are redefining the nature of our business. And don’t forget the huge number of sensors now available – a gyro used to be big, power-hungry and expensive. Now you can get one for a couple of bucks.

    Reply
  33. Tomi Engdahl says:

    Prototype to production: Inputs, outputs and analog measurements
    http://www.edn.com/electronics-blogs/embedded-basics/4442708/Prototype-to-Production–Inputs–outputs-and-analog-measurements?_mc=NL_EDN_EDT_EDN_today_20160920&cid=NL_EDN_EDT_EDN_today_20160920&elqTrackId=37b550af7dfb4a328575db1705a78fab&elq=a5a404063f664f878e0ff392386ea81c&elqaid=33936&elqat=1&elqCampaignId=29668

    No matter the control system that we may be designing, whether it’s an HVAC monitor, motor controller or simple IoT sensor node, the system must control IO. This includes reading and writing digital IO as well as being able to take analog measurements. As it turns out, getting our Micro Python board to read and write IO and read analog values is quick and easy.

    In Micro Python you can individually set up GPIO pins by creating an object to which you assign the board pin designation.

    The pin designations are located within pyb.Pin.board. For an Arduino based board, the board pin designation would be D0 to D13. A developer could easily modify the board numbers, however, by modifying the Micro Python source code for, say, a custom board ranging from D0 to D60, or to provide completely different designations if they wanted to.

    Once you’ve created the object, call the init method and provide the desired configuration information for the pin. For example, in order to initialize pin D8 as an output with the internal pull up resistors disabled, a developer would code the following:

    import pyb
    # Create and Configure D8 as an output
    Pin_D8 = pyb.Pin.board.D8
    Pin_D8.init(pyb.Pin.OUT_PP, pyb.Pin.PULL_NONE, -1)
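    Once initialized, the pin can be driven directly. A minimal usage sketch, assuming the same pyb.Pin API quoted above:

    Pin_D8.high()   # drive the pin to logic 1
    Pin_D8.low()    # drive the pin back to logic 0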

    Configuring inputs

    There is more than one way to configure a pin to perform its function within Micro Python. A developer can use pyb.Pin to specify the pin designation and direction for a pin object. For example, setting D0 as an input can be done using the following code:

    import pyb
    # Configure Pin_D0 as an input
    Pin_D0 = pyb.Pin('D0', pyb.Pin.IN)
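    Reading the input afterwards is a single method call; for example:

    # Print a message whenever D0 reads high
    if Pin_D0.value():
        print("D0 is high")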

    A developer simply passes the desired analog pin into the pyb.ADC method as shown below:

    # Create an adc object for pin A0
    A0 = pyb.ADC(pyb.Pin.board.A0)
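    Reading a conversion is then one more call. On the STM32-based boards the ADC is 12-bit, so values range from 0 to 4095 (the 3.3 V reference below is an assumption, not a value from the article):

    raw = A0.read()            # one 12-bit conversion, 0..4095
    volts = raw * 3.3 / 4095   # scale assuming a 3.3 V reference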

    Production considerations

    We have looked at some very simple cases for controlling GPIO and reading analog values, and in many circumstances these functions are the basic building blocks of a system. However, the robustness of the system depends on whether these functions are able to do their job properly. So what are a few things that a developer may do to help make this code production-ready? A few examples that come immediately to mind include:

    Include testing to verify that the desired output state is achieved and maintained (yes bit flips do happen!)
    Apply a digital filter to the input pins to reduce noise and false triggers
    Validate that the change rate for an analog signal makes sense (can the value really go from full scale to zero in a single cycle?)
    Monitor for stuck inputs and signals (should something be changing but it is not?)
    Use try/except to catch any unexpected errors and handle them gracefully (much more on this later when we examine I2C)
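    As a sketch of the second item above, a crude majority-vote debounce for a digital input might look like this; the sample count and spacing are arbitrary choices, not values from the article:

    import pyb

    def debounced_read(pin, samples=5, interval_ms=2):
        # Majority vote over several spaced samples to reject glitches
        highs = 0
        for _ in range(samples):
            highs += pin.value()
            pyb.delay(interval_ms)
        return 1 if highs > samples // 2 else 0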

    Reply
  34. Tomi Engdahl says:

    Most embedded to become IoT
    http://www.edn.com/electronics-blogs/about-embedded/4442702/Most-embedded-to-become-IoT?_mc=NL_EDN_EDT_EDN_today_20160922&cid=NL_EDN_EDT_EDN_today_20160922&elqTrackId=59e5b799653344ff899d8589e199c352&elq=2fafbca6168c428da1ed96e55806b383&elqaid=33985&elqat=1&elqCampaignId=29708

    Many IoT applications aim at battery-powered operation, but the development that goes into reducing processor power benefits nearly every embedded application.

    But the connectivity inherent in IoT designs brings with it a host of requirements not present (or at least not as significant) in traditional embedded. Adding network connectivity to an embedded system involves much more than simply attaching a radio chip to the embedded processor. With a connection to the network the embedded system becomes exposed to the entire world, with all the associated risks. So, development of IoT-targeted processors must spend considerable energy to add and improve security.

    These processors have the ability to encrypt and decrypt data on the fly so that the contents of external memory are protected. Most IoT designs will need such features, but probably not most traditional embedded designs. The resources spent on developing these features, though, are not available to refine traditional microcontrollers. So the traditional developer is beginning to lose ground to a related, but diverging, set of market needs.

    At the same time, many traditional embedded applications are beginning to take note of the advantages connectivity brings. An industrial control system becomes much more valuable to the end user if the information it collects while doing its job can also be subjected to in-depth analysis or massively archived. Providing these capabilities requires a massive design upgrade in a stand-alone design, but the addition of connectivity provides virtually unlimited resources for analysis and storage at a much lower cost. Applications that were not practical with traditional embedded design become almost simple with connectivity – and the cloud resources it brings to bear – added into the mix.

    So what does all this mean to the traditional embedded developer? Two things, in my opinion. One is that stand-alone embedded system designs are going to decline both in market and performance, while connected embedded systems (i.e., IoT designs) are going to become dominant. The applications where including connectivity isn’t cost-effective are going to become fewer and further between. The other implication is a consequence of the first. Embedded developers need to embrace connectivity and learn to deal with its challenges, or find themselves increasingly marginalized.

    Reply
  35. Tomi Engdahl says:

    Kevin Hartnett / Quanta Magazine:
    DARPA prevented hackers from taking control of an unmanned drone using “formal methods”, a technique that can verify whether programs are error-free — In the summer of 2015 a team of hackers attempted to take control of an unmanned military helicopter known as Little Bird.

    Hacker-Proof Code Confirmed
    https://www.quantamagazine.org/20160920-formal-verification-creates-hacker-proof-code/

    Computer scientists can prove certain programs to be error-free with the same certainty that mathematicians prove theorems. The advances are being used to secure everything from unmanned drones to the internet.

    In the summer of 2015 a team of hackers attempted to take control of an unmanned military helicopter known as Little Bird. The helicopter, which is similar to the piloted version long-favored for U.S. special operations missions, was stationed at a Boeing facility in Arizona. The hackers had a head start: At the time they began the operation, they already had access to one part of the drone’s computer system.

    When the project started, a “Red Team” of hackers could have taken over the helicopter almost as easily as it could break into your home Wi-Fi. But in the intervening months, engineers from the Defense Advanced Research Projects Agency (DARPA) had implemented a new kind of security mechanism — a software system that couldn’t be commandeered. Key parts of Little Bird’s computer system were unhackable with existing technology, its code as trustworthy as a mathematical proof. Even though the Red Team was given six weeks with the drone and more access to its computing network than genuine bad actors could ever expect to attain, they failed to crack Little Bird’s defenses.

    “They were not able to break out and disrupt the operation in any way,” said Kathleen Fisher, a professor of computer science at Tufts University and the founding program manager of the High-Assurance Cyber Military Systems (HACMS) project. “That result made all of DARPA stand up and say, oh my goodness, we can actually use this technology in systems we care about.”

    The technology that repelled the hackers was a style of software programming known as formal verification.

    “You’re writing down a mathematical formula that describes the program’s behavior and using some sort of proof checker that’s going to check the correctness of that statement,” said Bryan Parno, who does research on formal verification and security at Microsoft Research.

    The aspiration to create formally verified software has existed nearly as long as the field of computer science. For a long time it seemed hopelessly out of reach, but advances over the past decade in so-called “formal methods” have inched the approach closer to mainstream practice. Today formal software verification is being explored in well-funded academic collaborations, the U.S. military and technology companies such as Microsoft and Amazon.

    Block-Based Security

    Between the lines it takes to write both the specification and the extra annotations needed to help the programming software reason about the code, a program that includes its formal verification information can be five times as long as a traditional program that was written to achieve the same end.

    This burden can be alleviated somewhat with the right tools — programming languages and proof-assistant programs designed to help software engineers construct bombproof code.
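    Real proof assistants such as Coq establish such properties statically, but the flavor of pairing code with its specification can be loosely illustrated with ordinary runtime contracts. A Python sketch by way of analogy only, not actual formal verification:

    def isqrt(n):
        # Precondition (the specification's assumption): n is a non-negative integer
        assert isinstance(n, int) and n >= 0
        r = int(n ** 0.5)
        while (r + 1) * (r + 1) <= n:   # nudge up if float rounding went low
            r += 1
        while r * r > n:                # nudge down if it went high
            r -= 1
        # Postcondition (what a prover would establish for all n, not just this run):
        assert r * r <= n < (r + 1) * (r + 1)
        return r

    print(isqrt(17))   # prints 4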

    Then came the internet, which did for coding errors what air travel did for the spread of infectious diseases: When every computer is connected to every other one, inconvenient but tolerable software bugs can lead to a cascade of security failures.

    “Here’s the thing we didn’t quite fully understand,” Appel said. “It’s that there are certain kinds of software that are outward-facing to all hackers in the internet, so that if there is a bug in that software, it might well be a security vulnerability.”

    By the time researchers began to understand the critical threats to computer security posed by the internet, program verification was ready for a comeback. To start, researchers had made big advances in the technology that undergirds formal methods: improvements in proof-assistant programs like Coq and Isabelle that support formal methods; the development of new logical systems (called dependent-type theories) that provide a framework for computers to reason about code; and improvements in what’s called “operational semantics” — in essence, a language that has the right words to express what a program is supposed to do.

    “If you start with an English-language specification, you’re inherently starting with an ambiguous specification,” said Jeannette Wing, corporate vice president at Microsoft Research. “Any natural language is inherently ambiguous. In a formal specification you’re writing down a precise specification based on mathematics to explain what it is you want the program to do.”

    The HACMS project illustrates how it’s possible to generate big security guarantees by specifying one small part of a computer system.

    The team also rewrote the software architecture, using what Fisher, the HACMS founding project manager, calls “high-assurance building blocks” — tools that allow programmers to prove the fidelity of their code. One of those verified building blocks comes with a proof guaranteeing that someone with access inside one partition won’t be able to escalate their privileges and get inside other partitions.

    Later the HACMS programmers installed this partitioned software on Little Bird.

    Verifying the Internet

    Security and reliability are the two main goals that motivate formal methods. And with each passing day the need for improvements in both is more apparent. In 2014 a small coding error that would have been caught by formal specification opened the way for the Heartbleed bug, which threatened to bring down the internet. A year later a pair of white-hat hackers confirmed perhaps the biggest fears we have about internet-connected cars when they successfully took control of someone else’s Jeep Cherokee.

    As the stakes rise, researchers in formal methods are pushing into more ambitious places.

    Over at Microsoft Research, software engineers have two ambitious formal verification projects underway. The first, named Everest, is to create a verified version of HTTPS, the protocol that secures web browsers and that Wing refers to as the “Achilles heel of the internet.”

    The second is to create verified specifications for complex cyber-physical systems such as drones. Here the challenge is considerable.

    Reply
  36. Tomi Engdahl says:

    Real-time OS now understands Java

    Java is arguably one of the most popular programming languages, despite its poor security reputation. Wind River, nowadays owned by Intel, has made a big outreach to Java developers: its very popular VxWorks real-time operating system now also supports Java-based application development.

    Wind River’s Micro Runtime environment previously supported C/C++ programming; now Java is included in the selection as well. With it, IoT developers can use the Micro Runtime environment to build applications written in Java. A clear advantage is that there are millions of Java developers around the world. Development is done with Eclipse-based tools.

    Source: http://etn.fi/index.php?option=com_content&view=article&id=5127:reaaliaikakayttojarjestelma-ymmartaa-nyt-javaa&catid=13&Itemid=101

    Reply
  37. Tomi Engdahl says:

    How High-Level Synthesis Was Used To Develop An Image-Processing IP Design From C++ Source Code
    http://semiengineering.com/how-high-level-synthesis-was-used-to-develop-an-image-processing-ip-design-from-c-source-code/

    Methodology helped accommodate late spec changes on aggressive delivery schedule.

    Imagine working long and hard on a design, only to learn that you need to add new (and more complex) functionality a few months before your targeted tapeout. How can you deliver the performance and capabilities expected in the same timeframe? For Bosch, high-level synthesis (HLS) provided the solution. In this paper, we will discuss how HLS technology enabled the team to meet an aggressive schedule on its image-processing IP without any changes to its existing floorplan.

    https://www.cadence.com/content/dam/cadence-www/global/en_US/documents/tools/digital-design-signoff/bosch-high-level-synthesis-wp.pdf

    Reply
  38. Tomi Engdahl says:

    Coding as a spectator sport
    http://www.edn.com/electronics-blogs/about-embedded/4442743/Coding-as-a-spectator-sport?_mc=NL_EDN_EDT_EDN_today_20160927&cid=NL_EDN_EDT_EDN_today_20160927&elqTrackId=0a6047e153714401b85d7ce8fa5adb59&elq=5bf020b22b2040de82eb81bcb16b2321&elqaid=34035&elqat=1&elqCampaignId=29755

    With the rise of low-cost platforms like the Arduino and growing interest in the Internet of Things (IoT), a new category of embedded developers is arising. Many of this new cadre are not coming from electronics or computer science backgrounds, but are forging their own approaches to development unbound by tradition or academic methods. Some are even learning to code not by study, but by watching others write code.

    One example of this online learning approach is Twitch TV. Although this site looks on the surface to only be a means of sharing video game excursions, there is more available if you dig a bit. In the Creative channels, search for Programming and you’ll come up with a list of videos on the topic. Some are recordings of presentations, while others are a kind of tutorial. The tutorials take the form of “looking over the shoulder” as the video creator narrates their activity. When originally created, these are streamed live, and have a chat line open for real-time question-and-answer. The recorded version then gets archived for latecomers’ use.

    Another learning resource that uses the same “over the shoulder” video format as Twitch is LiveCoding. Unlike Twitch, however, Live Coding focuses exclusively on coding. It is also more organized in its approach to offering instruction than Twitch. LiveCoding organizes its content by programming language (Java, Python, Ruby, C/C++, etc.), some with tens of thousands of videos available. Within each of those language categories, the site offers a choice of beginner, intermediate, or advanced level topics.

    There also seems to be a social aspect to this method of knowledge transfer. Alongside the streaming presentation there is a chat box, which allows real-time viewers to post comments, ask questions, and the like.

    Perhaps this approach is a natural extension of an increasingly online existence, or a way for developers who live and breathe coding to connect and interact with like-minded cohorts, but coding as a spectator sport simply doesn’t appeal to me. And I worry that participants are exchanging only random tidbits of information and failing to see things in an overall context and structure. Such fragmented knowledge transfer is fine for play and prototyping but, I fear, falls short of providing the kind of instruction needed to achieve reliable, production-ready design.

    Reply
  39. Tomi Engdahl says:

    Are Flawed Languages Creating Bad Software?
    https://developers.slashdot.org/story/16/10/01/2249256/are-flawed-languages-creating-bad-software

    “Most software, even critical system software, is insecure Swiss cheese held together with duct tape, bubble wrap, and bobby pins…” writes TechCrunch.

    Everything is terrible because the fundamental tools we use are, still, so flawed that when used they inevitably craft terrible things… Almost all software has been bug-ridden and insecure for so long that we have grown to think that this is the natural state of code. This learned helplessness is not correct. Everything does not have to be terrible…

    Learned helplessness and the languages of DAO
    https://techcrunch.com/2016/10/01/learned-helplessness-and-the-languages-of-dao/

    Everything is terrible. Most software, even critical system software, is insecure Swiss cheese held together with duct tape, bubble wrap, and bobby pins. See eg this week’s darkly funny post “How to Crash Systemd in One Tweet.” But it’s not just systemd, not just Linux, not just software; the whole industry is at fault. We have taught ourselves, wrongly, that there is no alternative.

    Systemd is an integral component of most Linux distributions, used to boot the system, among other things. Ayer found a very simple way to crash it

    Everything is terrible because the fundamental tools we use are, still, so flawed that when used they inevitably craft terrible things. This applies to software ranging from low-level components like systemd, to the cameras and other IoT devices recently press-ganged into massive DDoS attacks, to high-level science-fictional abstractions like the $150 million Ethereum DAO catastrophe. Almost all software has been bug-ridden and insecure for so long that we have grown to think that this is the natural state of code. This learned helplessness is not correct. Everything does not have to be terrible.

    In principle, code can be proved correct with formal verification. This is a very difficult, time-consuming, and not-always-realistic thing to do; but when you’re talking about critical software, built for the long term, that conducts the operation of many millions of machines, or the investment of many millions of dollars, you should probably at least consider it.

    Less painful and rigorous, and hence more promising, is the langsec initiative:

    The Language-theoretic approach (LANGSEC) regards the Internet insecurity epidemic as a consequence of ad hoc programming of input handling at all layers of network stacks, and in other kinds of software stacks. LANGSEC posits that the only path to trustworthy software that takes untrusted inputs is treating all valid or expected inputs as a formal language, and the respective input-handling routines as a recognizer for that language.

    …which is moving steadily into the real world, and none too soon, via vectors such as the French security company Prevoty.
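
    To make the recognizer idea concrete, here is a minimal C sketch of my own (not code from the langsec project): the set of valid inputs is defined as a deliberately tiny formal language, and anything the recognizer rejects never reaches the rest of the program.

        /* Langsec-style recognizer sketch: the only valid inputs are
         * words of a tiny formal language (1 to 5 ASCII digits).
         * Everything else is rejected before any further processing. */
        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>
        #include <string.h>

        static bool recognize_counter_value(const char *in, size_t len)
        {
            if (len < 1 || len > 5)
                return false;              /* wrong length: not in the language */
            for (size_t i = 0; i < len; i++) {
                if (in[i] < '0' || in[i] > '9')
                    return false;          /* non-digit: not in the language */
            }
            return true;
        }

        int main(void)
        {
            const char *samples[] = { "42", "007", "9999999", "12a4", "" };
            for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
                bool ok = recognize_counter_value(samples[i], strlen(samples[i]));
                printf("%-10s %s\n", samples[i], ok ? "accept" : "reject");
            }
            return 0;
        }

    The point is not the trivial grammar but the discipline: input handling becomes a single recognizer with a yes/no answer, instead of ad hoc parsing scattered through the code.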

    As mentioned, programming languages themselves are a huge problem. Vast experience has shown us that it is unrealistic to expect programmers to write secure code in memory-unsafe languages. (Hence my “Death to C” post last year.)

    “However, I see improvement on the horizon. Go and Rust are compelling, safe languages for writing the type of systems software that has traditionally been written in C.”

    The best is the enemy of the good.

    Let’s move towards writing system code in better languages, first of all — this should improve security and speed. Let’s move towards formal specifications and verification of mission-critical code.

    And when we’re stuck with legacy code and legacy systems, which of course is still most of the time, let’s do our best to learn how to make it incrementally better, by focusing on the basic precepts and foundations of programming.

    I write this as large swathes of the industry are moving away from traditional programming and towards the various flavors of AI. How do we formally specify a convoluted neural network? How does langsec apply to the real-world data we feed to its inputs?

  40. Tomi Engdahl says:

    How to Crash Systemd in One Tweet
    https://www.agwa.name/blog/post/how_to_crash_systemd_in_one_tweet

    A single short command, run as any user, will crash systemd

    The bug is remarkably banal. The offending systemd-notify command sends a zero-length message to the world-accessible UNIX domain socket

    The immediate question raised by this bug is what kind of quality assurance process would allow such a simple bug to exist for over two years

    Systemd’s problems run far deeper than this one bug. Systemd is defective by design. Writing bug-free software is extremely difficult.

    In particular, any code that accepts messages from untrustworthy sources like systemd-notify should run in a dedicated process as an unprivileged user. The unprivileged process parses and validates messages before passing them along to the privileged process. This is called privilege separation and has been a best practice in security-aware software for over a decade. Systemd, by contrast, does text parsing on messages from untrusted sources, in C, running as root in PID 1.
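
    As a rough illustration of that pattern, here is a C sketch of my own (placeholder uid/gid values, error handling trimmed): the untrusted message is parsed in an unprivileged child process, and the privileged parent only ever receives a small, validated result over a pipe.

        /* Privilege-separation sketch: parse untrusted input in an
         * unprivileged child; the privileged parent sees only a
         * fixed-size, already-validated struct. */
        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        struct parsed_msg {
            int ready;                     /* validated result, nothing raw */
        };

        int main(void)
        {
            int fds[2];
            if (pipe(fds) != 0)
                return 1;

            pid_t pid = fork();
            if (pid == 0) {                /* child: the parser */
                close(fds[0]);
                /* Drop privileges before touching untrusted data
                 * (65534 = "nobody" on many systems; a real service
                 * would use a dedicated account). */
                if (geteuid() == 0 &&
                    (setgid(65534) != 0 || setuid(65534) != 0))
                    _exit(1);

                char buf[256];
                ssize_t n = read(STDIN_FILENO, buf, sizeof buf - 1);
                if (n <= 0)
                    _exit(1);
                buf[n] = '\0';

                struct parsed_msg out = { 0 };
                if (strncmp(buf, "READY=1", 7) != 0)
                    _exit(1);              /* strict: reject anything else */
                out.ready = 1;
                write(fds[1], &out, sizeof out);
                _exit(0);
            }

            /* parent: privileged, never parses raw input */
            close(fds[1]);
            struct parsed_msg msg;
            if (read(fds[0], &msg, sizeof msg) == (ssize_t)sizeof msg && msg.ready)
                printf("service reported ready\n");
            waitpid(pid, NULL, 0);
            return 0;
        }

    Run it as, say, echo READY=1 | ./privsep: even if the parser were buggy, the code that runs with privileges never touches attacker-controlled bytes.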

    The Linux ecosystem has fallen behind other operating systems in writing secure and robust software. While Microsoft was hardening Windows and Apple was developing iOS, open source software became complacent. However, I see improvement on the horizon. Heartbleed and Shellshock were wake-up calls that have led to increased scrutiny of open source software. Go and Rust are compelling, safe languages for writing the type of systems software that has traditionally been written in C. Systemd is dangerous not only because it is introducing hundreds of thousands of lines of complex C code without any regard to longstanding security practices like privilege separation or fail-safe design, but because it is setting itself up to be irreplaceable. Systemd is far more than an init system: it is becoming a secondary operating system kernel, providing a log server, a device manager, a container manager, a login manager, a DHCP client, a DNS resolver, and an NTP client.

    Consider systemd’s DNS resolver. DNS is a complicated, security-sensitive protocol.

    It is not too late to stop this. Although almost every Linux distribution now uses systemd for its init system, init was a soft target for systemd because the init systems it replaced were so bad. That’s not true for the other services which systemd is trying to replace, such as network management, DNS, and NTP. Systemd offers very few compelling features over existing implementations, but does carry a large amount of risk.

  41. Tomi Engdahl says:

    The Trust Burning Debug Cycle From Hell
    Is my untested code a menace to the team? It gets personal.
    http://semiengineering.com/the-trust-burning-debug-cycle-from-hell/

    As bad as The Trust Burning Debug Cycle From Hell sounds, it’s worse than you think. Most of us don’t realize it exists. In my first 10 years as a hardware developer I wrote code like it could never exist! But then came the realization. It’s a cycle that preys on us all. It tempts me constantly.

    Most of us in hardware development are used to seeing bugs as annoyances at a minimum and, more critically, as blocks and showstoppers. They grow in databases and become part of the checklist that stands between a team and its delivery objectives. Ultimately their damage materializes as schedule overrun. On the surface this makes sense, because schedule is where the money is. But the damage goes beyond the cost of schedule overruns.

    Measuring damage in terms of trust means I start asking myself questions like “do my colleagues, managers and clients trust me to do my job well?” No one ever wants the answer to that question to be ‘no’, me included.

    Unfortunately, trust is easy to burn. When verification engineers build VIP for other teams, we tend toward a code-n-release approach: very soon after the VIP code is written, it’s released to users. We usually forego, or at least delay, rigorous testing, thinking it’ll be more effective to resolve issues in situ. Users typically hit a lot of bugs. Every bug costs them debug time. Every frustrating debug minute is a burden that gives them the opportunity to question our ability to deliver.

    Depending on who you’re working with, a few bugs probably end up being acceptable. When it comes to more than a few, though; when bugs appear as part of a cycle; when it’s a constant debug-fix-and-release, that’s when the questions start and users start losing trust.

    The good news is that early testing of VIP can help us build and keep trust with users.

    Or maybe I should say it can help us almost avoid the cycle…

    …because the bad news is that even with the right techniques it can still be oh-so-tempting to hurry right past the testing and deliver straight to users. I hate to admit it, but for all the practicing, writing and time I’ve dedicated to TDD, there are times where I defer on quality under pressure.

    Worst of all, the decent code I do deliver using TDD is overshadowed by the buggy code I produce without.

    And people’s trust in me? Gone. At least that’s the way it feels. I’m disgusted with myself every time I try to save time at someone else’s expense.

  42. Tomi Engdahl says:

    The Real Differences Between HW And SW
    http://semiengineering.com/the-real-differences-between-hw-and-sw/

    While many see processes and procedures for hardware design as being more advanced than software, there are some things that hardware design needs to learn from software.

    How many times have we heard people say that hardware and software do not speak the same language? The two often have different terms for essentially the same thing. What hardware calls constrained random test is what software people call fuzzing.

    He told me that requirements management and agile methods had defined the initial company development, but along the way they lost sight of their own requirements and decided that they needed to change. They also observed that many other companies in the industry faced the same issues.

    “Traceability is driving people to solutions beyond documents,” said Harris. “Certification and regulation are driving the need for traceability and making it important in a live sense.”

    By live, Harris is talking about the way in which agile has changed the software development process. “The one thing coming out of agile is that people want to be more iterative. That means they need processes that are more aligned and provide a seamless feedback loop. Software companies need the ability to move faster and to become more competitive, and they need to shift from a concentration on specification to one of requirements.”

    Could the term ‘requirements’ be used that differently between hardware and software?

    “…When requirements come in like that, they tend to be not very different from specification items. The difference is the context within the project.”

    This means that most hardware is designed without a product focus. But what about standards such as ISO 26262 or DO-254, which are asking for requirements traceability? They don’t want to know that the unit has a DDR interface. They want to know that the bandwidth to memory meets a certain requirement, and they want to know how you demonstrated that to be the case.

    Concurrency also complicates the question. Most of the time, software provides only a single way in which functionality is implemented, but hardware is very different in that respect.

    “When you are writing hardware requirements you are often developing architectures in parallel,”

    Requirements tend to get very complex when you start looking at test case management. Making sure you have a test for every requirement is a lot of work: making sure you didn’t miss a case, or that you took into account process variability that can upset things. “Change management is important for many of these kinds of projects,” says Rolufs. “Then they can track the impact of finding that a spec item cannot be met.”

    At the end of the day, hardware design and verification is about probability. How likely is it that a bug has escaped? How likely is it that I have accounted for worst case conditions? How likely is it that 95% of the parts produced will meet the spec?

    Semiconductor companies traditionally have not written requirements. They did not start off with a set of customer needs; rather they develop a chip that they think people will want. It is probably less likely today

  43. Tomi Engdahl says:

    Prototype to production: Connecting to the cloud
    http://www.edn.com/electronics-blogs/embedded-basics/4442782/Prototype-to-production--Connecting-to-the-cloud

    Connecting embedded systems to the internet is not by any means a new endeavor; engineers have been connecting embedded systems pretty much from the internet’s beginning. In the early days, these embedded systems were quite large and not very portable. They have become a whole lot smaller, and easier to set up, as our continuing project demonstrates.

    In Prototype to Production: An Industrial HVAC Monitor using Shields, I mentioned quite a few shields that I will be discussing in the coming months as we pull an example IoT device together. The first, and possibly the most interesting, is a SparkFun ESP8266 Wi-Fi shield. The ESP8266 has turned out to be quite the intriguing Wi-Fi module.

    The shield contains a Wi-Fi chip with an integrated microcontroller and TCP/IP stack produced by Espressif Systems. ESP8266 break-out boards and modules can cost as little as $5, so developers purchasing them in large quantities can undoubtedly get a great price and drastically simplify their Wi-Fi communication and system design. Beyond the abstractions and cost benefits the shield offers, developers can even install MicroPython on it!

    Connecting the ESP8266 Wi-Fi shield to a microcontroller is easy. All that is needed is Vcc, Ground, and a Tx/Rx pair for serial communication.

    The ESP8266 uses a specialized AT command interface to get onto a network and communicate over TCP/IP.

    Communication with the shield is a great first step, but in order to really connect to the internet we need to adjust the radio mode and set the SSID and password.
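
    For illustration, the exchange might look roughly like this in C. The uart_send_line()/uart_read_line() helpers are hypothetical stand-ins for whatever the MCU’s UART driver provides; the AT+CWMODE and AT+CWJAP commands come from Espressif’s standard AT firmware.

        /* Sketch of joining a Wi-Fi network through the ESP8266's AT
         * interface. uart_send_line()/uart_read_line() are assumed to
         * be supplied by the target's serial driver. */
        #include <stdbool.h>
        #include <stdio.h>
        #include <string.h>

        void uart_send_line(const char *line);           /* hypothetical HAL */
        const char *uart_read_line(unsigned timeout_ms); /* hypothetical HAL */

        static bool expect_ok(unsigned timeout_ms)
        {
            const char *reply = uart_read_line(timeout_ms);
            return reply != NULL && strstr(reply, "OK") != NULL;
        }

        bool esp8266_join_network(const char *ssid, const char *password)
        {
            char cmd[128];

            uart_send_line("AT");              /* is the module alive? */
            if (!expect_ok(1000))
                return false;

            uart_send_line("AT+CWMODE=1");     /* station (client) mode */
            if (!expect_ok(1000))
                return false;

            /* Join the access point; this can take several seconds. */
            snprintf(cmd, sizeof cmd, "AT+CWJAP=\"%s\",\"%s\"", ssid, password);
            uart_send_line(cmd);
            return expect_ok(15000);
        }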

    Getting a connection up and running isn’t too difficult. There is a big difference, however, between prototyping a connection and creating a production connection. Developers should be thinking through the reset procedure, for instance. What happens if the device resets? Can it detect the cause? What steps need to be taken to ensure the system is in a known and safe state? Developers also need to consider how many times to retry a connection and if a timeout is appropriate for the application. Further, a developer should think through a fault tree and how the system can recover from each type of fault. Does the device recover automatically or does it require human intervention?
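
    A minimal sketch of such a bounded-retry policy, with hypothetical connect_to_cloud(), delay_ms(), and enter_safe_state() hooks standing in for real application code:

        /* Bounded retries with exponential back-off; if the connection
         * cannot be established, fall back to a known safe state and
         * wait for intervention rather than retrying forever. */
        #include <stdbool.h>

        bool connect_to_cloud(void);   /* hypothetical application hook */
        void delay_ms(unsigned ms);    /* hypothetical application hook */
        void enter_safe_state(void);   /* hypothetical application hook */

        #define MAX_RETRIES 5

        bool bring_up_connection(void)
        {
            unsigned backoff_ms = 500;
            for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
                if (connect_to_cloud())
                    return true;           /* connected: resume normal operation */
                delay_ms(backoff_ms);
                backoff_ms *= 2;           /* exponential back-off */
            }
            enter_safe_state();            /* known safe state, flag for a human */
            return false;
        }

    Whether the right answer is automatic recovery or a deliberate halt depends on the application; the key is that the policy is decided up front rather than left to whatever the prototype happened to do.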

    The ESP8266 is just a single example of how connecting an embedded system to the internet is becoming easier and easier. A quick search reveals that there are many modules on the market designed to ease this common system challenge. With them, developers can get on track to easily communicating with the cloud.

  44. Tomi Engdahl says:

    Microsoft open-sources P language for IoT
    http://www.infoworld.com/article/3130998/application-development/microsoft-open-sources-p-language-for-iot.html

    The platform specializes in asynchronous programming for embedded systems, device drivers, and distributed services

    Microsoft’s P language, for asynchronous event-driven programming and the IoT (internet of things), has been open-sourced.

    Geared for embedded systems, device drivers, and distributed services, P is a domain-specific language that compiles to and interoperates with C, which itself has been commonly leveraged in embedded systems and the IoT. “The goal of P is to provide language primitives to succinctly and precisely capture protocols that are inherent to communication among components,” said Ethan Jackson and Shaz Qadeer of Microsoft, in a tutorial on the language.

    With P, modeling and programming are melded into a single activity. “Not only can a P program be compiled into executable code, but it can also be validated using systematic testing,” according to the language’s documentation on GitHub. “P has been used to implement and validate the USB device driver stack that ships with Microsoft Windows 8 and Windows Phone.”

    Microsoft has described P as offering “safe” event-driven programming. In their tutorial, Jackson and Qadeer say P programs have a computational model that features state machines communicating via messages, an approach commonly used in embedded, networked, and distributed systems.

    Microsoft Open-Sources P Language for Safe Async Event-Driven Programming
    https://www.infoq.com/news/2016/10/microsoft-p-language-opensourced

    Microsoft’s recently open-sourced P language aims to make it possible to write safe asynchronous event-driven programs on Linux, macOS, and Windows.

    Microsoft describes P as a domain-specific language to model communication between components of an asynchronous system, such as an embedded, networked, or distributed system. A P program is defined in terms of finite state machines that run concurrently. Each of them has an input queue, states, transitions, a machine-local store, and can send asynchronous messages to the others. A basic operation in P either updates a local store, sends a message, or creates new machines.

    According to Microsoft, P programs can be verified using model checking. This allows developers to ensure that a system is able to handle every event in a timely manner. For a P program to be responsive, its state machines must handle all events that can be dequeued in every state. Since this is not always practical, it is possible to defer the handling of some events. In such cases, the language ensures that an event cannot be deferred forever. To be able to verify the properties of a program, besides generating C code which can then be fed to a C compiler for execution, the P compiler will output a Zing model that can be used for systematic testing. Zing is an open source model checker for concurrent programs that is able to systematically explore all possible states of a model.
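
    P has its own syntax, but the underlying computational model, a state machine draining an input queue of events, can be sketched in plain C for illustration (this is not P code):

        /* Toy illustration of the event-queue state machine model:
         * events are enqueued as messages and handled according to
         * the machine's current state. */
        #include <stdio.h>

        typedef enum { EV_PING, EV_STOP } event_t;
        typedef enum { ST_IDLE, ST_DONE } state_t;

        #define QLEN 8
        static event_t queue[QLEN];
        static unsigned q_head, q_tail;

        static void send_event(event_t e)          /* enqueue a message */
        {
            queue[q_tail++ % QLEN] = e;
        }

        static void run_machine(void)
        {
            state_t state = ST_IDLE;
            while (q_head != q_tail && state != ST_DONE) {
                event_t e = queue[q_head++ % QLEN]; /* dequeue next event */
                switch (state) {
                case ST_IDLE:
                    if (e == EV_PING)
                        printf("pong\n");           /* handled in this state */
                    else
                        state = ST_DONE;            /* EV_STOP: transition */
                    break;
                default:
                    break;
                }
            }
        }

        int main(void)
        {
            send_event(EV_PING);
            send_event(EV_PING);
            send_event(EV_STOP);
            run_machine();                          /* prints "pong" twice */
            return 0;
        }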

  45. Tomi Engdahl says:

    P: Safe Asynchronous Event-Driven Programming
    https://www.microsoft.com/en-us/research/publication/p-safe-asynchronous-event-driven-programming/

    We describe the design and implementation of P, a domain-specific language to write asynchronous event driven code. P allows the programmer to specify the system as a collection of interacting state machines, which communicate with each other using events. P unifies modeling and programming into one activity for the programmer. Not only can a P program be compiled into executable code, but it can also be verified using model checking. P allows the programmer to specify the environment, used to “close” the system during model checking, as nondeterministic ghost machines. Ghost machines are erased during compilation to executable code; a type system ensures that the erasure is semantics preserving.

  46. Tomi Engdahl says:

    Attack of the one-letter programming languages
    http://www.infoworld.com/article/2850461/application-development/attack-of-the-one-letter-programming-languages.html?page=2

    From D to R, these lesser-known languages tackle specific problems in ways worthy of a cult following

    Top 10 Internet of Things (IoT) Programming Languages
    http://blog.beautheme.com/top-10-internet-of-things-iot-programming-languages/

    The Internet of Things (IoT) holds huge attraction for the technology world. Interesting experiences become possible when objects and physical devices can connect to each other, bringing huge benefits. In any IoT project, choosing the hardware is difficult, and so is choosing the software. In this article, we introduce the 10 most popular programming languages for IoT projects.

  47. Tomi Engdahl says:

    Go, Robot, Go!
    Golang Powered Robotics
    https://gobot.io/

    Next generation robotics framework with support for 18 different platforms

    Gobot is a framework for robotics, physical computing, and the Internet of Things (IoT), written in the Go programming language

    Go
    https://golang.org/

    Go is an open source programming language that makes it easy to build simple, reliable, and efficient software.

