Uncertainty is an important element to account for when building computer logic, and probabilities can seriously affect how conditional statements (rules) behave over time.
If you are reading this post series in chronological order, you may remember the story about the time you almost called the police on your thirsty friend. As events were unfolding, you were getting more and more certain that the mysterious person in your home was not an intruder, but your friend Tom. Choosing between calling the police and going back to sleep was guided only by your belief.
This example may sound a little forced, but in reality, more and more software applications, especially in the IoT domain, require this sort of expressive capability.
Assisted living systems are one example, where good-enough rules enable people to live independently and hold on to their sense of dignity while caretakers can still make sure they’re safe. In other words, you can either go full CCTV on your grandma or rely on a couple of smart, non-intrusive rules to achieve the same level of safety, with the rules-based approach being easier to maintain and friendlier towards the assisted person.
Uncertainty is unavoidable, and an IoT rules engine should have a mechanism for accounting for it in the way it builds logic.
Noisy or even missing sensor data is common in IoT applications: we often deal with wireless sensors that depend entirely on battery lifespan, with intermittent network connectivity, or with network outages that make API endpoints unreachable.
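One way an engine can account for this is to attach provenance to every reading instead of passing around bare values. The sketch below is just an illustration of the idea, not the API of any particular engine; the `Reading` class and the `is_fresh` helper are hypothetical names.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class Reading:
    """A sensor value plus the metadata a rule needs to judge how trustworthy it is."""
    value: Optional[float]   # None models a missing reading (dead battery, failed API call)
    observed_at: datetime    # when the measurement was actually taken
    source: str              # which sensor or API produced it

    def is_fresh(self, max_age: timedelta) -> bool:
        """A reading is usable only if it exists and is recent enough."""
        if self.value is None:
            return False
        return datetime.now(timezone.utc) - self.observed_at <= max_age

# A rule can now refuse to fire on stale or missing data:
temp = Reading(21.5, datetime.now(timezone.utc), "living-room-thermometer")
if temp.is_fresh(timedelta(minutes=10)):
    pass  # safe to use the value in a condition
```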
Modeling the utility function relies on the engine’s capability to deal with uncertainty. As we rank and define our preferences among alternative uncertain outcomes, we need rules where, for the same observation, different actions can be taken.
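To make that concrete, here is a minimal expected-utility sketch. The belief distribution, the utility table, and the action names are all invented for illustration; the point is only that the chosen action depends on the current belief, not just on the raw observation.

```python
# Belief over world states given the same observation (made-up numbers)
belief = {"intruder": 0.35, "friend": 0.65}

# utility[action][state]: desirability of each (action, state) pair, on an arbitrary scale
utility = {
    "call_police":      {"intruder": 100,  "friend": -50},
    "turn_on_lights":   {"intruder": 20,   "friend": 5},
    "go_back_to_sleep": {"intruder": -100, "friend": 10},
}

def expected_utility(action: str) -> float:
    return sum(belief[state] * utility[action][state] for state in belief)

best_action = max(utility, key=expected_utility)
print(best_action)  # "turn_on_lights" for this belief
```

With the same observation but a higher P(intruder), the ranking flips and `call_police` wins: exactly the “same observation, different action” behaviour the engine needs to support.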
For even more advanced use cases, the rules engine should enable probabilistic reasoning, supporting logic built on the likelihood of different outcomes for a given sensory output. Here are some IoT-specific examples, both sketched in code after the list:
- Avoid triggering rules and actions on data that is too old: only use weather information in a rule if the weather API call hasn’t failed in the past 10 minutes. (In Belgium, we don’t trust weather forecasts that look more than 10 minutes ahead.)
- Only send an SMS to the police if the security system believes with over 80% certainty that there is an intruder in the house. If certainty is over 50%, turn on the lights in the living room; if it is between 30% and 50%, send an SMS to the homeowner instead. That decision can further depend on the time of day or day of the week (utility).
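Below is a minimal sketch of both rules, assuming the engine exposes the belief as a plain probability and tracks the timestamp of the last successful API call. The function names, and the night-time threshold adjustment illustrating the utility remark, are assumptions for illustration rather than part of any specific engine.

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=10)

def usable_forecast(last_successful_call: datetime, forecast):
    """First example: only trust weather data if the API has succeeded recently."""
    if datetime.now(timezone.utc) - last_successful_call > STALE_AFTER:
        return None  # too old: downstream rules treat the forecast as unavailable
    return forecast

def intruder_actions(p_intruder: float, is_night: bool) -> list:
    """Second example: map the belief that there is an intruder onto graded actions.

    The 0.70 night-time police threshold is an invented example of letting
    utility (time of day) shift a decision boundary.
    """
    police_threshold = 0.70 if is_night else 0.80
    actions = []
    if p_intruder > police_threshold:
        actions.append("sms_police")
    if p_intruder > 0.50:
        actions.append("lights_on_living_room")
    elif 0.30 <= p_intruder <= 0.50:
        actions.append("sms_homeowner")
    return actions
```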
You will recognize uncertainty and probabilistic reasoning as concepts regularly dealt with under the general umbrella of AI technologies. Here we talk about them only in the context where they can help automation developers model the world in a declarative way.
Arguably, other AI technologies, such as swarm intelligence algorithms or reinforcement learning tools, may also lead to actions (and be perceived as rule generators), but they do not enable declarative modeling. Reinforcement learning trains machine learning models to make a sequence of decisions on their own, while swarm intelligence is the composition of many individual agents that coordinate through decentralized control and self-organization based on very simple rules.
Still other AI technologies, such as supervised and unsupervised machine learning algorithms, are out of the scope of automation but very useful as inputs for the decision engine.
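That division of labour is easy to sketch: a trained model estimates a probability, and the declarative rules, not the model, decide what to do with it. In the hypothetical snippet below, `motion_classifier` stands in for any real inference call, and `intruder_actions` is the rule from the earlier sketch.

```python
def motion_classifier(sensor_frame) -> float:
    """Placeholder for a trained model returning P(intruder | sensor data).

    Returns a fixed value here so the sketch runs end to end.
    """
    return 0.42

def on_new_frame(sensor_frame, is_night: bool):
    p_intruder = motion_classifier(sensor_frame)   # perception: estimate, never act
    return intruder_actions(p_intruder, is_night)  # decision: the declarative rule from above
```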
As more and more applications make use of these tools, we need a way to express the resulting uncertainties in our applications as well.
We argue in this white paper that dealing with uncertainty is one of the biggest challenges that can be solved by using the right rules engine. There are two other major ones: one brought about by the time dimension, the other by the complexity of the logic itself.
This is a small extract from one of our white papers, which you can download over here.