Our Home Control System (HCS) has been architected very differently to most consumer smart home services. It has been designed with the best interests of the home owner in mind: best in terms of privacy, security, performance and user experience, and driven by the need to improve your quality of life.
A fundamental principle of our HCS architecture is that the system should still function without an Internet connection or cloud services. Where cloud services add value we still use them (assuming our privacy concerns are met), but their absence must be handled gracefully and must not compromise the user experience.
The architecture has also been chosen to be extensible and support a wide range of existing technologies and new ones as they emerge on the market. A key advantage of this approach is that we can integrate 'best of breed' products and services from third parties. This approach also enables us to try out new technologies and services, to assess their benefits and performance.
Our smart home architecture is based around a single logical system to enable whole home context but is physically distributed across a number of processors, to achieve optimal performance and the best possible user experience.
Most current smart home systems don't scale well, both from a technical perspective and a user experience perspective. A simple example of this is the 300+ sensors in our current contextual smart home. If all of these were small, battery-powered wireless devices, our home would be overrun with little white boxes dotted about the place, which would look terrible. We would also need to change at least one or two batteries every day!
Our distributed architecture uses the concept of a single master HCS processor supported by a number of slave processors. The master effectively delegates functionality and features to a slave processor but all slave processors are obligated to report back all significant events, in the same way that a dumb sensor does. Each slave processor has a unique slave ID.
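As an illustration only, the sketch below shows the kind of event record a slave might report back to the master, tagged with its unique slave ID. The field names and types are assumptions made for the sake of the example, not the actual HCS message format.

```cpp
// Illustrative sketch of an event reported by a slave to the master.
// Every event carries the unique slave ID, so the master always knows
// which slave (and which sensor behind it) generated the event.
#include <cstdint>
#include <string>

enum class EventType { Sensor, Heartbeat, Warning, Error };

struct SlaveEvent {
    uint8_t     slaveId;   // unique ID assigned to each slave processor
    EventType   type;      // what kind of event is being reported
    std::string source;    // e.g. the sensor that triggered the event
    std::string value;     // reading or message payload
    uint32_t    sequence;  // used by Heartbeat events (see below)
};
```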
We have adopted a 'standard build' for our slave processors using our smart home building blocks approach and most of them are built around Arduino processors. This allows us to 'off load' a collection of sensors and their associated power and networking to a slave processor.
Slave processors mean our HCS can scale: each one can support as many as 40 sensors and also process and cleanse sensor data locally, to filter out unwanted or insignificant events. They can also take action locally, to minimise latency and deliver a much better user experience, and they can support pretty much any functionality while ensuring it is fully integrated with the rest of our smart home.
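The sketch below illustrates the sort of local cleansing a slave could apply before forwarding a reading, so that only significant changes ever reach the master. The class name and the threshold-based approach are assumptions for illustration; the real slave code may filter quite differently depending on the sensor type.

```cpp
// Illustrative sketch of local filtering on a slave processor: a reading
// is only treated as significant (and forwarded to the master) when it
// differs from the last reported value by more than a threshold.
#include <cmath>

class FilteredSensor {
public:
    explicit FilteredSensor(double threshold) : threshold_(threshold) {}

    // Returns true if the new reading is worth sending to the master.
    bool isSignificant(double reading) {
        if (!hasReported_ || std::fabs(reading - lastReported_) >= threshold_) {
            lastReported_ = reading;
            hasReported_ = true;
            return true;
        }
        return false;  // insignificant change, handled locally only
    }

private:
    double threshold_;
    double lastReported_ = 0.0;
    bool   hasReported_  = false;
};
```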
Slave processors also send back regular 'heartbeats' to the Home Control System, so that it can track their availability and uptime. If a heartbeat is not received or the numeric sequence is broken, this signifies an issue.
Through our research, we have adopted quite a simple approach. On start-up, each slave sends a 'Heartbeat' event with a numeric sequence value of zero. The zero value signifies a restart. From then on the numeric sequence increments, so the Home Control System can predict what should be expected next. When the sequence reaches 9999, it starts back at 1 again.
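A minimal sketch of that slave-side sequence logic, exactly as described above, might look like this (the class and method names are our own, chosen purely for illustration):

```cpp
// Sketch of the slave-side heartbeat sequence: zero on start-up to signal
// a restart, then incrementing, wrapping from 9999 back to 1 (never back
// to 0, which is reserved for restarts).
#include <cstdint>

class HeartbeatSequence {
public:
    // First call after start-up returns 0; subsequent calls return 1..9999.
    uint16_t next() {
        uint16_t current = sequence_;
        sequence_ = (sequence_ >= 9999) ? 1 : sequence_ + 1;
        return current;
    }

private:
    uint16_t sequence_ = 0;  // a fresh start always begins at zero
};
```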
Although simple, this approach allows our Home Control System to keep track of the availability of slaves, track resets, etc. The duration expected between heartbeats is held in a configuration file.
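On the master side, a simple checker could combine the predicted sequence value with the configured heartbeat interval to spot restarts, broken sequences and slaves that have gone quiet. This is a sketch under those assumptions, not the actual Home Control System code:

```cpp
// Sketch of per-slave heartbeat monitoring on the master. The expected
// interval would come from the configuration file mentioned above.
#include <cstdint>
#include <ctime>

class SlaveMonitor {
public:
    explicit SlaveMonitor(unsigned intervalSeconds)
        : intervalSeconds_(intervalSeconds) {}

    // Called when a heartbeat arrives; returns false if the sequence is broken.
    bool onHeartbeat(uint16_t sequence) {
        lastSeen_ = std::time(nullptr);
        bool ok = !seenAny_                // first heartbeat just sets the baseline
               || sequence == 0            // slave has restarted
               || sequence == expected_;   // sequence is as predicted
        seenAny_ = true;
        expected_ = (sequence >= 9999) ? 1 : sequence + 1;
        return ok;
    }

    // Called periodically; returns true if the slave has gone quiet.
    bool isOverdue() const {
        return seenAny_ &&
               std::difftime(std::time(nullptr), lastSeen_) > intervalSeconds_;
    }

private:
    unsigned    intervalSeconds_;
    uint16_t    expected_ = 0;     // next sequence value we expect to see
    std::time_t lastSeen_ = 0;     // when the last heartbeat arrived
    bool        seenAny_  = false; // no heartbeat received yet
};
```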
Slave processors can also report unusual events as errors or warnings and these have a specific event type of 'Warning' or 'Error'. Typically, the code running on our slave processors monitors sensor performance and keeps track of how many bad readings a sensor generates and how often. We have spent a lot of time optimising our code to ensure that dodgy sensors are picked up quickly and their failures handled gracefully.
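As a rough illustration of that kind of bookkeeping, a slave could track consecutive bad readings per sensor and escalate from a 'Warning' to an 'Error' event as failures accumulate. The thresholds below are illustrative values only, not the tuned values we actually use.

```cpp
// Sketch of per-sensor health tracking on a slave processor, escalating
// from Warning to Error as consecutive bad readings build up.
#include <cstdint>

enum class SensorStatus { Ok, Warning, Error };

class SensorHealth {
public:
    // Record whether the latest reading was valid and return the status.
    SensorStatus record(bool readingOk) {
        if (readingOk) {
            consecutiveBad_ = 0;  // a good reading clears the streak
            return SensorStatus::Ok;
        }
        ++consecutiveBad_;
        ++totalBad_;              // running total of bad readings
        if (consecutiveBad_ >= errorThreshold_)   return SensorStatus::Error;
        if (consecutiveBad_ >= warningThreshold_) return SensorStatus::Warning;
        return SensorStatus::Ok;
    }

private:
    uint32_t consecutiveBad_   = 0;
    uint32_t totalBad_         = 0;
    uint32_t warningThreshold_ = 3;   // assumed: warn after 3 bad readings in a row
    uint32_t errorThreshold_   = 10;  // assumed: error after 10 in a row
};
```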