Alexa Lambda Linux (ALL) Reference Design - IoT and ML at the edge.
Install / Use
NEW - ALL now supports Raspberry Pi 3 running Raspbian Buster + PREEMPT-RT patched kernel 4.19.59-rt23-v7+.
What is the Alexa Lambda Linux (ALL) Project?
- ALL is an end-to-end HW/SW reference design created to enable quick prototyping and realization of the control and monitoring of things using Amazon’s Alexa.
- ALL includes Amazon’s Alexa Skills Kit and Lambda running in the cloud as well as a local server based on real-time Linux running on the Raspberry Pi.
- ALL also includes machine learning capabilities at the edge.
- A voice-controlled home security system was built from ALL as an early proof of concept.
- ALL was introduced in late 2015 at https://github.com/goruck/all.
ALL System Block Diagram

ALL Overview
Using voice to interface with devices and services around the home enables a rich and intuitive experience as shown by Amazon's huge successes with FireTV and Echo. However, with the exception of a few 3rd party point solutions such as WeMo, the voice user interface is generally not available in the home. One reason for this is the difficulty and unfamiliarity of the technologies required to enable voice control.
Alexa Lambda Linux (ALL) was developed to help accelerate this learning curve. ALL is a HW/SW reference design meant to enable quick prototyping and realization of the control and monitoring of things using Amazon's Alexa. A voice-controlled home security system was first built from the reference design as proof of concept which was later extended to support machine learning.
The README below describes the system, the main components, its design, and implementation. It is expected that people will find it useful in creating voice user interfaces of their own using Alexa, Lambda, Linux, and the Raspberry Pi.
Feature | Benefit
------------ | -------------
Integrated with Lambda and ASK | Quick bring-up of new voice controls
Real-time Linux / userspace app dev model | Low effort to control fast real-world events
Raspberry Pi, open source, AWS services | Low cost and quick deployment
End-to-end SSL/TLS integration | Customer data security
Table of Contents
- Requirements and System Architecture
- Design and Implementation of the Main Components
- Machine Learning with ALL
- Development and Test environment
- Overall Hardware Design and Considerations
- Bill of Materials and Service Cost Considerations
- Licensing
- Contact Information
- Appendix
Requirements and System Architecture
The project had to meet the following high-level requirements:
- Low cost
- Extensible / reusable with low effort
- Secure
- Enable fast prototyping and development
- Include at least one real world application
Meeting these requirements would make the project useful for a wide variety of voice-interface applications. In addition, implementing a non-trivial real-world application would show that the design is robust and capable; hence the last requirement, which drove the implementation of a voice user interface for a standard home security system, the DSC Power832.
The system needs both cloud and home-side device components. Amazon's Alexa was selected as the cloud speech service, and AWS Lambda was selected to handle the cloud-side processing required to interface between Alexa and the home-side devices. Alexa is a good choice because it is already integrated with Lambda, has a variety of voice endpoints including Echo and FireTV, and costs nothing to develop voice applications for via the Alexa Skills Kit. Lambda is ideal for quickly handling bursty processing loads, which is exactly what is needed to control things with voice. It also has a free tier under a certain amount of processing, and above that it is still very inexpensive. So Alexa and Lambda are reasonable cloud choices given the requirements above.
The Raspberry Pi 2 was designated as the platform for the home-side device components (ALL also supports the Raspberry Pi 3). The platform has a powerful CPU, plenty of RAM, a wide variety of physical interfaces, support for many OSs, and is inexpensive. It is possible to use an even less expensive platform like the Arduino, but given its lower capabilities vis-a-vis the Raspberry Pi, this would limit the types of home-side applications that could be developed. For example, use of GNU/Linux is desirable in this project for extensibility and rapid development; the Arduino isn't really capable of running Linux, but the Pi is. The downside of using the Pi plus a high-level OS like vanilla Linux is that the system cannot respond to quickly changing events deterministically (i.e., in "real-time"), whereas an Arduino running bare-metal code is a very capable real-time machine. To be as extensible as possible, the system needs to support the development of real-time voice-controlled applications without resorting to complex device-side architectures such as an Arduino handling the fast events connected to a Pi handling the complex events; such an architecture would be inconsistent with the project requirements. Therefore, real-time Linux was chosen as the OS on the Pi. This comes with the downsides that the kernel is non-standard and real-time programming is less straightforward than normal application development in Linux userspace.
In the reference design, the Pi's GPIOs are the primary physical interface to the devices around the home that are enabled with a voice UI. This allows maximum interface flexibility, and combined with real-time Linux, the GPIO interface runs fast: the reference design enables GPIO reads and writes with less than 60 us latency, where vanilla Linux at best can do about 20 ms. Of course, all the other physical interfaces (SPI, I2C, etc.) on the Pi are accessible in the reference design through the normal Linux methods.
The requirements and the analysis above drove a system architecture with the following components.
- Alexa Skills developed using the Alexa Skills Kit
- An AWS Lambda function, written in Node.js, that handles the intent triggers from Alexa and returns responses from the home device
- A home device built on a Raspberry Pi running real-time Linux, with a server application written in C running in userspace
- A hardware interface unit that translates the electrical signals between the Pi and the security system
- The DSC Power832 security system, connected via its Keybus interface to the Pi's GPIOs through the interface unit
Note that although the development of the architecture described above appears very waterfall-ish, the reality is that it took many iterations of architecture / design / test to arrive at the final system solution.
Design and Implementation of the Components
Alexa Skills
Alexa Skills (which are essentially voice apps) are created using the Alexa Skills Kit (ASK). An Amazon application developer account is required to get access to the ASK; one can be created at https://developer.amazon.com/appsandservices. There is a getting-started guide on the ASK site on how to create a new skill. The skill developed to control the alarm panel, named panel, used the example skill color as a starting point. Amazon makes the creation of a skill relatively easy, but careful thinking through the voice interaction is required. The panel skill uses a mental model of attaching a voice command to every button on the alarm's keypad, plus an extra command to give the status of the system. The alarm system status is the state of the lights on the keypad (e.g., armed, bypass, etc.). Four-digit code input is also supported, which is useful for a PIN. The Amazon skill development tool takes you through the following steps in creating the skill:
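To illustrate that mental model, a minimal intent schema of the kind ASK expected at the time might look like the following. The intent names and the custom `LIST_OF_KEYS` slot type are hypothetical examples, though `AMAZON.FOUR_DIGIT_NUMBER` is a real built-in slot type suited to the PIN input mentioned above.

```json
{
  "intents": [
    { "intent": "StatusIntent" },
    {
      "intent": "KeypadIntent",
      "slots": [
        { "name": "Key", "type": "LIST_OF_KEYS" }
      ]
    },
    {
      "intent": "CodeIntent",
      "slots": [
        { "name": "Code", "type": "AMAZON.FOUR_DIGIT_NUMBER" }
      ]
    }
  ]
}
```

Here `StatusIntent` covers the extra "status" command, `KeypadIntent` carries which keypad button was spoken, and `CodeIntent` carries a four-digit code.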
- Skill Information - Invocation Name
