Introduction

In this article, I will explore two distinct approaches to leveraging Generative AI (GenAI) within the context of Waylay. The first approach can be considered "Classical," as it aligns with traditional perspectives on automation and control. This method lets us design the automation explicitly and then use GenAI to explain the actions taken by the algorithm, providing a clear understanding of its behavior (what we often call a "rules explainer"). The second approach, which is both exciting and challenging, involves delegating automation to the AI. Here, the large language model (LLM) takes the reins to create a complete end-to-end automation scenario based on its reasoning, introducing a new level of autonomy and complexity.

"Humans" in Control - via explicit design of the automation intents in Waylay

First, we leverage GenAI to interpret the automation flows (Waylay templates) explicitly designed by experts in Waylay. In this scenario, the LLM can be used in two different ways: to "understand" the intent behind the rules and/or to identify the appropriate remedies for the issues at hand. It is important to note that the Waylay automation engine retains control over how these rules, which encapsulate human intent, are defined.

"LLM" in Control - via delegating automation to LLM

In the latter part of this article, we will take a radically different approach by delegating reasoning and action-taking to the LLM, based on capabilities and functions made accessible to it along with user intent expressed in plain text. Waylay aids LLM automation in several ways: pre-defined, out-of-the-box plugins expose new capabilities to the LLM without requiring any coding from users; simple means are provided to add system prompts on top; and the Waylay orchestration composer allows different logical compositions to be segmented into manageable pieces. These compositions can themselves be perceived as functions: some of the functions presented to the LLM can be composite functions and workflows that the LLM executes as it goes through the reasoning process. Once the LLM has invoked all functions, its output can in turn be fed into the next set of intents, forming the subsequent part of another workflow.
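As a rough illustration of this idea, here is a minimal Python sketch of a composite workflow exposed to an LLM as a single callable function, whose captured output can feed a next step. All names here are hypothetical, invented for illustration, and not Waylay APIs:

  # Hypothetical sketch: a composite workflow exposed as one function an LLM can call.
  def restart_device_flow(device_id: str) -> str:
      # In Waylay this would itself be a template of plugs; we fake the steps here.
      steps = [f"diagnose({device_id})", f"reset({device_id})", f"verify({device_id})"]
      return "executed: " + ", ".join(steps)

  # Capability registry presented to the LLM: a callable plus the description it reasons over.
  TOOLBOX = {
      "restart_device_flow": (restart_device_flow,
                              "Diagnose, reset and verify a device given its id."),
  }

  # The captured output of one reasoning pass can feed the next workflow's intent.
  outcome = TOOLBOX["restart_device_flow"][0]("dev-42")
  print(outcome)  # executed: diagnose(dev-42), reset(dev-42), verify(dev-42)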

Throughout this paper, the terms "Waylay template" and "flow" are used in various contexts. It’s important to clarify that, in one instance, the Waylay template is employed to design automation flows or rules. In another context, Waylay serves as an orchestrator, connecting different large language models (LLMs) and retrieval-augmented generation systems (RAGs), and exposing functions on top of which the LLM can reason. Here, the definition of rules is, to some extent, delegated to an LLM “reasoner.”

The reason for this potential confusion is that the Waylay Engine is both a rule engine and an API/function orchestrator.

Use of LLM to interpret Waylay intents - Explainable auto-remedies - a fusion of causal modeling and GenAI

In this scenario, we use the LLM to understand the intent behind the rules. To achieve this, Waylay has integrated real-time data, causal graphs, and outcome insights into the LLM fine-tuning process. In the second phase, we combine historical records and manuals with the identified root cause to provide the right remedy, reducing the risk of human error and ensuring a consistency that allows human workers to focus on the more complex and strategic aspects of their jobs.
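As a loose illustration of that second phase, pairing a root cause identified by the rule engine with retrieval over manuals and historical records might look like the sketch below. The function names and the naive keyword retrieval are assumptions made for this sketch, not the actual Waylay implementation:

  # Sketch only: pair a root cause from the rule engine with a retrieval step
  # over manuals/historical records. Names and scoring are illustrative.
  def find_remedy(root_cause: str, knowledge_base: list) -> str:
      # Naive retrieval stand-in: rank documents by keyword overlap with the cause.
      def score(doc: str) -> int:
          return sum(word in doc.lower() for word in root_cause.lower().split())
      best = max(knowledge_base, key=score)
      # A real setup would pass `best` plus the root cause into an LLM prompt.
      return f"Suggested remedy (from: {best!r}) for cause: {root_cause}"

  manuals = ["pump manual: if vibration exceeds threshold, check bearing wear",
             "valve guide: pressure drop usually indicates seal failure"]
  print(find_remedy("excessive vibration on pump P-101", manuals))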

It is crucial to note that we utilize the LLM in two distinct ways: first, to interpret the troubleshooting and automation logic crafted by experts and implemented in Waylay; and second, to search for appropriate solutions through documents, manuals, or case tools based on a well-defined problem identified by Waylay and properly understood by the LLM.

Examples of this approach in two different domains:

Waylay GenAI in Industrials - the right remedy in any language
Fintech - Bots powered by GenAI: A Shift from Case Management Tools

Here, we present a template (exposed as an external API to other systems) that offers multiple ways to query the system. Depending on the user input or API request, whether a troubleshooting inquiry or an asset-related question, the template first returns an initial response and then continues to query various RAGs for remedies or additional answers. It is crucial to note that each step in this flow can represent a subflow that leads to further LLM/RAG invocations.
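A minimal sketch of that routing logic, with stand-in functions for the LLM and RAG calls; the names, classification rule, and store list are all assumptions for illustration, not the actual template:

  def ask_llm(question: str) -> str:              # stand-in for the initial LLM call
      return f"draft answer to: {question}"

  def query_rag(store: str, question: str, draft: str) -> str:  # stand-in RAG lookup
      return draft + f" [+context from {store}]"

  def handle_request(question: str) -> str:
      # Classify the request, produce an initial answer, then enrich it step by step.
      kind = "troubleshooting" if "error" in question.lower() else "asset"
      answer = ask_llm(question)
      stores = {"troubleshooting": ["manuals", "past tickets"],
                "asset": ["inventory", "datasheets"]}[kind]
      for store in stores:        # each of these steps could itself be a subflow
          answer = query_rag(store, question, answer)
      return answer

  print(handle_request("Why does asset A-7 raise an error on startup?"))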

GenAI Template that powers Waylay DigitalTwin

Please note that the template above somewhat resembles a mix of PromptFlow and LangChain modeling, but we found the Waylay automation framework far more intuitive, easier, and quicker to use to achieve similar goals. Moreover, we can easily test every flow or API, post-process APIs to parse their outcomes, and build flows that mix these APIs with different LLMs or RAGs. Each Waylay (LLM orchestration) template takes as input a text question and an array of previous messages (Q&A pairs, in case we want to support conversations), and produces text as its output. That way, the template can be directly embedded in any third-party application or bot without any coding.

Since consecutive invocations of the same template may include the history of previous conversations, the same template can in practice be used either as a "one-off" API or embedded in a conversational bot that mimics longer interactions, where past requests and answers are taken into account in further communication with the end user.
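To make that contract concrete, here is a hedged sketch of calling such a template over HTTP. The endpoint URL and payload field names are assumptions for illustration, not the documented Waylay API:

  import json
  import urllib.request

  TEMPLATE_URL = "https://example.waylay.io/api/templates/genai-qa/run"  # hypothetical

  def ask_template(question, history=None):
      # The template's contract: a text question in, optional prior Q&A, text answer out.
      payload = {"question": question, "messages": history or []}
      req = urllib.request.Request(TEMPLATE_URL, data=json.dumps(payload).encode(),
                                   headers={"Content-Type": "application/json"})
      with urllib.request.urlopen(req) as resp:
          return json.load(resp)["answer"]

  # One-off use:        ask_template("Why did pump P-101 trip?")
  # Conversational use: replay the history so past Q&A shape the next answer.
  # ask_template("And how do I fix it?",
  #              history=[{"question": "Why did pump P-101 trip?", "answer": "..."}])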

The actual output of this flow can be seen in the Waylay Digital Twin app as presented in this video:

Waylay Digital Twin app enhanced with GenAI

When it comes to the tangible benefits of Waylay Digital Twin powered by GenAI, I suggest this blog for further reading: Waylay GenAI for Field Service Operations Generates 120% in first Month in Production and 267% every month thereafter

Delegating automation to the LLM (either via conversational bots and functions or simple API invocation)

In this scenario, Waylay provides the means for the LLM to reason in two ways: functions (plugins) are provided to the bot, and a simple system prompt entry indicates to the LLM how these functions should be used. In this case, the actual Waylay template topology is slightly different (it looks very much like a star topology, with functions feeding the LLM): a set of functions, which are APIs, plugs, or even subflows themselves, is exposed towards the node that does the reasoning (here called invokeLLM). Under the hood, the Waylay framework dynamically discovers the capabilities of each plug (function) and attaches these descriptions directly to the LLM. Together with the system prompt, this is enough for the LLM to "reason and act". The outcome of the LLM is captured and either presented to the end user or used itself as a function input to another LLM if needed (more on that later).
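A condensed sketch of that "reason and act" loop, in generic OpenAI-style function calling; the plug name, schema, and the model stub are invented for illustration and are not Waylay's invokeLLM internals:

  # Function descriptions, as auto-discovered from plugs, attached to the request.
  TOOLS = [{
      "name": "get_alarms",
      "description": "List active alarms for a given asset.",
      "parameters": {"type": "object",
                     "properties": {"asset": {"type": "string"}},
                     "required": ["asset"]},
  }]
  SYSTEM_PROMPT = "You are an operations assistant. Use the provided functions."

  def dispatch(name, args):                  # execute the plug the model selected
      return '{"alarms": []}' if name == "get_alarms" else "{}"

  def call_model(messages, tools):           # placeholder LLM; a real one may pick a tool
      return {"content": "No active alarms found.", "function_call": None}

  def invoke_llm(user_text):
      # The loop: let the model call functions until it produces a final answer.
      messages = [{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": user_text}]
      while True:
          reply = call_model(messages, TOOLS)
          if reply["function_call"] is None:     # no more function calls: final answer
              return reply["content"]
          name, args = reply["function_call"]    # a real model returns (name, args)
          messages.append({"role": "function", "name": name,
                           "content": dispatch(name, args)})

  print(invoke_llm("Anything wrong with asset A-7?"))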

One simple example can be found in this video, with a few simple functions, to showcase how easy it is to add this in Waylay.

A slightly more complicated use case is shown below, where we use different Waylay plugs to provide chatbot conversational capabilities that allow the end user to query everything happening in Waylay, whether in the context of rules that have fired, reasons things have failed, alarms, values, metrics, and so on. This can be extended with any third-party system if needed (CRM, email, ...).

Here is an example of the same template used for the fintech demo: Fintech - Bots powered by #GenAI

Automation on steroids - GenAI Powered Network & Service Management - Fulfillment

During DTW24 - Ignite, Waylay developed a system capable of fully automated network service provisioning and assurance based on simple, natural-language intents (for which Waylay and its partners won the Outstanding Catalyst Award). It consists of three fully separated LLM flows composed together to provide fully automated service provisioning use cases (as covered in this video). At no point in this scenario is an explicit control decision made by an external interface.

In this example, we showcase a solution that starts from a catalog of TMF 921 intent templates that can be instantiated based on natural-language user requests. Each intent template has placeholders that are automatically populated with user-supplied information. In this video, we feature the IP/VPN SLA upgrade intent template, which takes as input the site details and the desired SLA level. This process results in a complete TMF 921 intent payload.
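As a toy illustration of that instantiation step, the sketch below fills an intent template from values an LLM extracted out of the user's request. The placeholder names and the template body are assumptions, not actual TMF 921 catalog entries:

  from string import Template

  # Hypothetical catalog entry: an intent template with placeholders.
  ipvpn_sla_upgrade = Template(
      '{"@type": "Intent", "name": "ipvpn-sla-upgrade", '
      '"expression": "Upgrade IP/VPN at site $site to SLA level $sla_level"}'
  )

  # Values an LLM extracted from the user's natural-language request.
  extracted = {"site": "Brussels-03", "sla_level": "gold"}
  tmf921_payload = ipvpn_sla_upgrade.substitute(extracted)  # complete intent payload
  print(tmf921_payload)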

Before further processing this TMF 921 intent, our system needs to understand the intent's network domain. This understanding is facilitated by a coarse natural-language ontology that helps the system comprehend the domain’s inventory structure and entity relationships. With this information, the second step in the fulfillment flow decomposes the TMF 921 intent into a TMF 640 service order. This step includes three GenAI-powered sub-steps (a minimal sketch of the resulting pipeline follows the list):

  1. Ontology Selection: The first sub-step involves selecting the appropriate ontology from the catalog.
  2. Resource Querying: The second sub-step uses the ontology’s entity information and an LLM to automatically query the inventory for the resources needed to fulfill the intent. These resources are converted into TMF 639 format to abstract the inventory’s nature for subsequent steps.
  3. Service Order Generation: The final sub-step generates the TMF 640 service order, utilizing a catalog of TMF 620 service order item templates to perform the order decomposition. The LLM selects the appropriate template based on the intent and ontology information, and populates the specifications as part of the larger TMF 640 service order using the retrieved inventory resources.
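Here is the minimal sketch of that decomposition pipeline, with each sub-step as a stub; all function names and return shapes are illustrative, not the production flow:

  def select_ontology(intent):                       # 1. ontology selection
      return {"domain": "ip-vpn"}

  def query_inventory(intent, ontology):             # 2. resources, as TMF 639 records
      return [{"@type": "Resource", "id": "r-1"}]

  def build_service_order(intent, ontology, resources):  # 3. TMF 640 order generation
      return {"@type": "ServiceOrder",
              "orderItems": [{"resource": r["id"]} for r in resources]}

  def decompose_intent(tmf921_intent):
      ontology = select_ontology(tmf921_intent)
      resources = query_inventory(tmf921_intent, ontology)
      return build_service_order(tmf921_intent, ontology, resources)

  print(decompose_intent({"name": "ipvpn-sla-upgrade"}))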

The third and final step in the fulfillment flow handles the provisioning of the TMF 640 service order. This provisioning can either be delegated to a dedicated provisioning service or performed via direct device access. In this example, we explore the latter: the flow includes a two-step process where an LLM first generates the network element commands based on the TMF 640 order details. A second LLM then uses RAG (retrieval-augmented generation) to validate and correct this configuration against a VOR (Voice of the Router) store of network element CLI guides. Finally, the configuration is pushed to the network element, with specifics such as router IP (in this case a Cisco 7500 model), port, vendor, and version populated from the retrieved inventory data.
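Sketching this provisioning tail with stubs; the function names, commands, and return values are assumptions for illustration only:

  def generate_commands(order, device):              # first LLM drafts the CLI commands
      return [f"interface {device['port']}", "service-policy output GOLD"]

  def validate_with_rag(draft, vendor, version):     # second LLM checks against CLI guides
      # A real implementation would retrieve VOR documents for vendor/version here.
      return draft                                    # assume the draft passed validation

  def push_config(ip, port, config):                 # apply to the network element
      return f"pushed {len(config)} commands to {ip}:{port}"

  device = {"ip": "10.0.0.1", "port": "Gi0/1", "vendor": "cisco", "version": "7500"}
  draft = generate_commands({"@type": "ServiceOrder"}, device)
  config = validate_with_rag(draft, device["vendor"], device["version"])
  print(push_config(device["ip"], device["port"], config))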

Fully automated network service provisioning and assurance based on natural language intents

You can see the entire end-to-end process demonstrated in this video featuring the Panamax Interactive Bot, which is built on top of Waylay GenAI. This project, a collaboration with partners, earned Waylay the Outstanding Catalyst Award for the Use of TM Forum Assets.

In Conclusion

Working with the Waylay framework is a unique opportunity to glimpse the future as it unfolds right before your eyes. This paper aims to capture our latest developments and discoveries along this journey. In doing so, we seek to identify the most effective applications of GenAI in real-world industrial settings, aligning with the current state of the art. Our insights consistently come in pairs: we explore cutting-edge technologies while simultaneously contemplating their optimal application in a business context.

More about our research can be found at https://waylay.ai