A benchmark for evaluating rule engines

A rules engine is a software tool that enables developers to model the world in a declarative way. Rules engines are powerful automation machines that come in various shapes and flavors. Different types of engines were built to address different problems, and some have overlapping functionality. It can be difficult to figure out which type of rules engine best suits your needs.

We Looked at Seven Rules Engine Capabilities

To assist in your evaluation, we've established a benchmark consisting of seven key rules engine capabilities. We then assessed several common types of rules engine technology, including our own Waylay rules engine, by scoring each one against this benchmark.

When evaluating a new tool, it's important to consider three key factors: its power (the depth of its functionality), its ease of use (the complexity of its interface), and its ability to support your future needs (aligned with your growth and potential feature requirements). For a rules engine, specifically, assessing its depth of functionality means examining its ability to handle complex logic, manage time-dependent scenarios, and address uncertainty effectively.

Ease of use can be evaluated by considering factors such as the clarity of the rule's intent, the availability of a visual representation for the logic being built, and the ease of simulating, testing, and debugging rules. To assess the engine's readiness to evolve with your business, consider how well it adapts to changes, how easily it can be extended and integrated with third-party systems, and its ability to scale effectively.

7 key criteria to evaluate a rules engine

Technology criteria

01

Modeling complex logic

The real world is complex: what you need is a Turing-complete rules engine.

The engine should support:
  • Combining multiple non-binary outcomes of functions (observations) in the rule, beyond Boolean true/false states.
  • Dealing with majority voting conditions in the rule.
  • Handling conditional execution of functions based on the outcomes of previous observations.
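As a concrete illustration of logic beyond Boolean true/false, here is a minimal sketch (not any specific engine's API; all names are illustrative) of a rule that combines non-binary observations through majority voting and only executes a downstream function once the vote settles:

```python
# Observations return labeled states ("HOT", "NORMAL", ...) rather than
# Booleans; the rule combines them with majority voting and conditionally
# executes the next function.
from collections import Counter

def majority(observations, quorum=2):
    """Return the most common non-binary outcome if it reaches the quorum."""
    counts = Counter(observations)
    state, votes = counts.most_common(1)[0]
    return state if votes >= quorum else None

# Three sensor "observations" with non-binary outcomes.
readings = ["HOT", "HOT", "NORMAL"]
decision = majority(readings)        # "HOT" wins 2 of 3 votes

# Conditional execution: the next function only runs on a settled vote.
if decision == "HOT":
    action = "start_cooling"         # hypothetical downstream actuation
else:
    action = "no_op"
```

A 2-out-of-3 vote like this is a common way to tolerate one misbehaving sensor without stalling the rule.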

02

Modeling time

Time adds complexity.

The engine should support:
  • Dealing with the past (handling expired or soon-to-expire information).
  • Dealing with the present (combining asynchronous and synchronous information).
  • Dealing with the future (forecasting for prediction and anomaly detection).
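"Dealing with the past" can be made concrete with a small sketch (illustrative names, not an engine API): observations carry timestamps, and the rule treats anything older than a time-to-live as expired before combining it with fresher data:

```python
# Each observation carries a timestamp; the rule filters out anything
# older than a TTL so expired information never drives a decision.
import time

TTL_SECONDS = 60.0

def is_fresh(observation, now=None):
    """True if the observation is younger than the TTL."""
    now = time.time() if now is None else now
    return (now - observation["ts"]) <= TTL_SECONDS

now = 1_000.0
fresh = {"value": 21.5, "ts": now - 10}    # 10 s old -> still valid
stale = {"value": 19.0, "ts": now - 300}   # 5 min old -> expired

usable = [o for o in (fresh, stale) if is_fresh(o, now=now)]
```

The same timestamping also supports the "present" case: asynchronous readings can be buffered and combined only while they remain within the freshness window.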

03

Modeling uncertainty

Uncertainty is unavoidable.

The engine should support:
  • Dealing with noisy sensor data and missing data.
  • Dealing with unstable wireless sensors, fully dependent on battery lifespan.
  • Dealing with intermittent network connectivity or network outages.
  • Probabilistic reasoning.

Implementation criteria

04

Explainability

The engine should be explainable, allowing users to understand why rules fire and to identify and correct errors. The engine’s internal complexity should not get in the way of users being able to easily test, simulate and debug that complexity. Users also require a high degree of transparency into decisions that carry inherent risk.
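One simple way to make a rule explainable (an illustrative sketch, not a specific engine's mechanism) is to record the outcome of every condition as the rule evaluates, so a user can inspect exactly why it fired or failed to fire:

```python
# Each condition is evaluated and its result recorded in a trace, so the
# answer to "why did this rule fire?" is always available.
def evaluate(conditions, facts):
    trace = []
    for name, predicate in conditions:
        result = predicate(facts)
        trace.append((name, result))
    fired = all(result for _, result in trace)
    return fired, trace

conditions = [
    ("temperature_high", lambda f: f["temp"] > 30),
    ("door_open",        lambda f: f["door"] == "open"),
]
fired, trace = evaluate(conditions, {"temp": 35, "door": "open"})
# trace lists every condition with its outcome, so a failing rule shows
# exactly which condition blocked it.
```

The same trace doubles as a simulation and debugging aid: replaying recorded facts through the rule reproduces the decision step by step.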

05

Adaptability

The engine should be flexible enough to support both commercial and technical changes with minimum friction, such as changing customer requirements or changes in APIs. To account for future growth, the rules engine should be easily extendable and capable of supporting integration with external systems.
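Extensibility is often achieved with a function registry: new capabilities, including third-party integrations, are registered without touching the engine core. A minimal sketch (all names hypothetical):

```python
# New functions are registered by name; the engine looks them up at rule
# evaluation time, so adding an integration never modifies engine code.
REGISTRY = {}

def register(name):
    def wrap(fn):
        REGISTRY[name] = fn
        return fn
    return wrap

@register("threshold_check")
def threshold_check(value, limit=30):
    return "ALARM" if value > limit else "OK"

@register("crm_lookup")
def crm_lookup(customer_id):
    # placeholder for a real call to an external CRM API
    return {"id": customer_id, "tier": "gold"}

result = REGISTRY["threshold_check"](42)
```

When an upstream API changes, only the registered function is updated; rules referring to it by name are unaffected.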

06

Operability

The engine should be operationally scalable. When deploying applications with many thousands or even millions of rules running in parallel, the engine should manage these large volumes effectively by supporting templating, versioning, searchability, bulk upgrades and rules analytics.
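Templating is what makes fleets of that size manageable: thousands of per-device rules are instances of one versioned template, so a bulk upgrade becomes a re-instantiation. A small sketch (structure and field names are illustrative):

```python
# One versioned template, many instances: each device gets a rule stamped
# out from the template with its own parameters.
TEMPLATE = {
    "name": "overheat_alert",
    "version": 2,
    "condition": "temp > {limit}",
    "action": "notify({contact})",
}

def instantiate(template, **params):
    rule = dict(template)
    rule["condition"] = rule["condition"].format(**params)
    rule["action"] = rule["action"].format(**params)
    return rule

fleet = [instantiate(TEMPLATE, limit=30 + i % 5, contact=f"ops-{i}")
         for i in range(1000)]
# A bulk upgrade = bump TEMPLATE["version"], fix the logic once,
# and re-instantiate the fleet.
```

Because every instance records its template version, searchability and rules analytics ("which devices still run version 1?") fall out of the same structure.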

07

Scalability

The engine should provide a good initial framework and abstractions for distributed computing to enable easy sharding. Sharding refers to components that can be horizontally partitioned, which enables linear scaling: deploying "n" instances of the same component yields "n" times the performance.
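The standard building block for such partitioning is a stable hash: every rule is assigned to a worker by hashing its resource key, so load spreads evenly and the mapping survives restarts. A minimal sketch (names are illustrative):

```python
# Rules are partitioned across workers by a stable hash of the resource
# they observe, so all rules for one device land on the same shard and
# adding shards spreads load horizontally.
import hashlib

def shard_for(resource_id, n_shards):
    digest = hashlib.sha256(resource_id.encode()).hexdigest()
    return int(digest, 16) % n_shards

n_shards = 4
assignment = {rid: shard_for(rid, n_shards)
              for rid in ("device-1", "device-2", "device-3")}
# Deterministic: the same resource always maps to the same shard.
```

In practice consistent hashing is often preferred over plain modulo so that changing "n" only remaps a fraction of the resources, but the principle is the same.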

Download the full benchmark

Complete with extensive definitions and examples for each of the seven evaluation criteria.

Download the eBook

Benchmark Results

Eight types of rules engine technology were scored against the seven criteria (Modeling Complex Logic, Modeling Time, Modeling Uncertainty, Explainability, Adaptability, Operability, Scalability):

  • Forward Chaining Engines
  • Condition Action Engines
  • Flow Processing Engines
  • Decision Trees/Tables
  • Stream Processing Engines
  • CEP Engines
  • Finite State Machines
  • Waylay

[Per-criterion scores were presented in an interactive chart not reproduced here.]