Dialfire AI Agent - Guide
A few steps to a ready-to-use AI Agent – simple, flexible, fascinating.
Quick Start: How to Launch Your AI Agent in Dialfire
Welcome to the world of Dialfire AI Agents!🎉
Don't worry – you don't need to be a tech ninja to get your own smart AI Agent up and running. In just a few clicks, your agent is ready to support your calls.
It's that easy:
- Open the Voice Automation in the Dialfire menu; this is where all AI Agents are created and managed.
- Click on New IVR in the top right and select your AI Agent. It's created with a single click.
- Voilà: Your AI Agent starts with an example template that's ready to use. This template already contains all the important building blocks to get started – perfect for building your own customizations right away.
Prompting 1x1 – Start Simple, Grow Cleverly 🎓
The most important thing about prompting: Start small and build up step by step.
How to succeed:
- Begin with a minimal version that only roughly explains to the agent what you intend to do with it. Figuratively speaking: tell the agent where to run, not how to move its feet.
- Test what the agent can already do with that – you will be surprised.
- Then create a version that is just sufficient to go live with.
- Go live with a manageable call volume that allows you to look at every single call.
- Fix the problems you identify in live operation – these are often very different from what caused you problems in the "dry run." This is the part where you should invest the most time.
☝️Less is often more:
- Set realistic expectations: Humans make mistakes, so does AI – 100% correctness doesn't exist.
- Formulate clearly and simply.
- Avoid contradictory instructions.
- When you say what the bot shouldn't do, also say what it should do instead.
- Use direct speech only where it's necessary (e.g., compliance). It's better to describe what the agent should say than to dictate how to say it.
- Avoid unnecessary extra knowledge.
- Don't fixate on individual errors and try to compensate for them.
- Additional instructions often degrade performance in other areas that previously worked well. Instead, look for instructions you can delete, which is often much more effective.
- Describe your plan to your agent so it understands the meaning behind individual instructions and can make better decisions.
- The AI Agent is not an alien👽: Before you explain the world to the agent, test whether it already knows the concept.
Example: “You are a service bot at a car rental company” instead of “Customers with car problems call you.”
Even with these simple rules, you can build useful AI Agents for practical use.
💡Attention: As you add more and more tasks, you'll eventually notice that your agent feels a bit dumber, loses spontaneity, and makes mistakes. This is completely natural and is because, like a human, the AI also has a limited attention span.
The good news: There are solutions for this, which we'll present below – so there are no limits for you. 🎉
The First Prompt – The Core of Your Agent
The prompt is the brain of your AI Agent – this is where you define who it is, how it speaks, and what it's capable of.
☝️Important: The structure is not strictly prescribed – creativity and experience make the difference. Everything you already know about dealing with AI remains valid.
☝️Basic rule: Write the prompt as you would a note for a new employee before you let them on the phone. If you follow this, you can skip many tutorials.
Approximate Prompt Structure:
Free text that contains the basic descriptions of how the bot should behave:
- Role & Purpose – Who is speaking? What is the agent's goal?
- Task – What should the agent do in the conversation?
- Guardrails – What must the agent absolutely not do in the conversation?
- FAQs / Answer Templates – Short, clear sample sentences for the LLM
Control instructions that are embedded directly into the text to add even more dynamism.
Apart from the optional control instructions, there is no specific format you must adhere to. We recommend using Markdown for formatting – it is particularly well understood by AI.
Example of a simple starting prompt:
# Your Role
You are Helena Fisher, a friendly AI phone agent from GreatProducts. You always speak politely, confidently, and in a natural tone.
# Your Task
Greet the customer and ask them to briefly explain their request.
$customer_request Customer's inquiry
Answer questions from the FAQ. For unknown requests, transfer the call using function "connect_agent". If resolved, say goodbye and use function "hangup".
@connect_agent Transfers the call to a human agent.
@hangup Ends the call.
# FAQ
Q: Wants to return product
A: You can return products within 30 days if they are unused and in the original packaging.
Q: Delivery times
A: Standard: 3–5 business days. Express: 1–2 business days.
Q: Opening hours
A: Mon–Fri: 8:00–18:00, Sat: 9:00–14:00.
In the example, you see # Your Task as a section heading in Markdown format, as well as $customer_request and @connect_agent as control instructions. We'll go into more detail about these two elements in a moment... stay tuned 😉
Testing the Prompt – Try, Experience, Optimize 🧪
Before your AI Agent shines in real use, test your prompt directly in the Voice Automation menu. Here you can easily check how your agent reacts – either via Call or in the Chat.
How it works:
- Open the menu: Navigate to Voice Automation → Test.
- Choose test mode: Decide on Call (simulated call) or Chat (text interaction).
- Start interaction: Speak or write your test queries, exactly as your customers would.
- Observe results: Check whether the agent correctly fills the variables and reliably executes the Function Calls.
💡Tip: Small changes in one place can have a big effect on the entire process. Test again immediately.
Let's go – your AI Agent is ready to launch! 🚀
🎉 Congratulations!
You did it: You configured and tested your AI Agent, and now it's ready to take the customer service experience to a new level! Before your agent can get started, one final step remains: assign it to the correct inbound line:
1️⃣ In the Voice Automation menu, go to the test area and activate your final bot version.
2️⃣ Activate on your inbound line:
- Open the menu item Phone numbers and select the desired line.
- Under IVR Settings, select the title of your bot in the IVR dropdown menu.
- Check the Enabled checkbox and save – done!
And voilà – your AI Agent is live and ready to delight your customers. Sit back, enjoy the show, and watch how your new virtual colleague rocks your customer service! 🤘
Billing 💶
The billing for your AI Agent is simple and transparent:
- All AI Agent connection minutes are billed at the Conversational AI C rate – in addition to the regular connection fees.
- Additionally, the duration of generated speech output is also billed at the Conversational AI C rate.
- 💡Smart-Saving: Speech outputs that are exactly repeated are automatically cached. No additional costs are incurred for these on the next call.
What we haven't explained yet... 🤔
In the example prompt, two things appeared that we don't want to just leave at that:
- Variable placeholders like $customer_request
- Function calls like @connect_agent
Both are central tools that make your agent even more powerful.
In the next step, we'll show you how variable placeholders and function calls work – and why they are the key to truly smart conversations.
Variable Placeholders
With inserted variable placeholders, your agent becomes a note-taking professional: It automatically saves information from the conversation directly into the data record, which you can then process further in Dialfire.
Syntax:
$Field_name Field description
→ defines an expected field
Example:
$customer_request Customer request
→ records which request the customer names
☝️Important:
- Place the field with $ in front, on its own line, near the topic the field relates to. This way, the agent knows from the surrounding context when the field needs to be filled. However, do not refer to the variable in the rest of the prompt text, i.e., not: “Don't forget to fill variable XYZ”
- Name the variable so that it makes sense to the agent – e.g., “First name” instead of “Field15” – and add a small description after the field name; this works much more reliably.
Variable with selection options
Write an additional options: line directly under the variable, listing the allowed values separated by commas. The agent can then only use these values. For example, it looks like this:
$will_newsletter Customer agrees to receive the newsletter
options: yes, no
Function Calls – Triggering Actions
Function Calls are the action buttons of your agent. They trigger direct actions when something needs to happen immediately.
The actual functions can be seen in the Script tab. These are Javascript functions that you can extend as you wish.
To use these functions, they must be made known to the agent in the prompt. This happens on a new line with @ followed by the function name and a subsequent description. Unlike with variables, the position in the prompt does not matter.
The agent basically decides on its own from the conversation context when to use the functions. But you can influence this in the prompt and have the agent announce its use. This ensures very natural conversation transitions.
Example:
- In the Script tab, there is the function:
function connect_agent(args){
...
}
- In the prompt, you make this function known with:
@connect_agent Connect the caller to a human.
- And in the text you write, for example:
If you don't know the answer to the question, say that you are now connecting the caller with an employee who can certainly help, and at the same time use the "connect_agent" function.
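What the body of such a function might actually do is up to you. As one hedged possibility (the transfer mechanics depend on your Dialfire setup), the function can simply return an instruction string for the agent – a return shape this guide describes later:

```javascript
// Hypothetical sketch of the connect_agent body in the Script tab.
// Instead of performing the transfer logic here, it returns a plain
// instruction string, which the agent can act on in its next turn.
function connect_agent(args) {
  // args would hold any parameters declared for this function in the prompt
  return 'Transfer initiated. Tell the caller a colleague will take over now.';
}
```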
Advanced – Out of the cage
Welcome to the next level! 🎮
Your first AI agents are already running – now we'll show you the secrets of how to introduce your agent to the outside world. Don't worry: Even if it's called "Advanced," you don't need a Ph.D. in computer science for it - but it certainly doesn't hurt. 🙂 See it as a well-stocked toolbox just waiting to build great things.
💡 Can you keep a secret?
What we have called a "prompt" so far is actually not a static prompt at all. It is a dynamic template that is reassembled with every turn.
For the computer scientist, this sounds immediately familiar: not a rigid piece of text, but a flexible rendering system – similar to VueJS or Angular, but for AI Agents.
And this is exactly where the advanced template functions come into play…
Advanced Template Functions – The Superpowers of Your Prompt 🦸♂️
The prompt template can do far more than simple value substitutions. It masters a complex template syntax that you can nest as you wish. This turns your prompt into a flexible modular system that cleanly maps even complex conversation logics.
Value substitutions with {{expression}}
The simplest case: {{data.fieldname}}
But beware: There's more to it than just field names – function calls are also possible. This way, you can get dynamic content directly into the template.
Conditional blocks with IF
With the IF directive, you control which content actually gets into the prompt. Either in the middle of the text:
{{#if condition}} Here is an optional text {{/if}}
or with multiple alternatives, enclosing large blocks:
{{#if condition}}
...
{{#elif alternative_condition}}
...
{{#else}}
...
{{/if}}
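To make this concrete, here is a small sketch of a conditional block; data.customer_name is an assumed field name, not one from the example prompt:

```
{{#if data.customer_name}}
The customer is {{data.customer_name}}. Address them by name.
{{#else}}
Ask the customer for their name at the start of the call.
{{/if}}
```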
Use: Display alternative/optional prompt components
Loops with EACH
For repetitive elements, use EACH:
{{#each itemVar @ array-expression}}
...
{{/each}}
Perfect when things repeat arbitrarily often – e.g., a list of products or contracts.
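For instance, a shopping-cart summary could be rendered like this – assuming a temp.cart array with name and qty fields (these names are illustrative, not part of the example prompt):

```
The shopping cart currently contains:
{{#each item @ temp.cart}}
- {{item.qty}} x {{item.name}}
{{/each}}
```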
With these advanced template functions, you bring dynamism to your agents.
💡And have you noticed yet? In the Logs, you will also find a small "next prompt" link next to the agent's answers. This way you can easily check whether the generated prompt was actually correct.
Defining Your Own Functions
We have already talked about functions, but not yet about the truly complete syntax. In the Script tab, you define a function in Javascript, something like this:
function function_name(args){
...
}
The args object then contains the parameters that the agent passed during the call.
In the actual prompt, you declare the function, something like this:
@function_name Description of the function's purpose
-argument1: Description of the parameter
options: value1, value2
optional: true
-argument2: Description of the 2nd parameter
With options, you specify the allowed values if you only want to allow certain possibilities. And with optional, you specify that this parameter does not have to be present.
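Putting both halves together, here is a hypothetical set_callback function – the name, parameters, and allowed values are made up purely for illustration:

```javascript
// Hypothetical example matching a prompt declaration such as:
//   @set_callback Schedule a callback for the customer
//   -day: Desired weekday
//   options: monday,tuesday,wednesday,thursday,friday
//   -time: Desired time
//   optional: true
function set_callback(args) {
  // args contains exactly the parameters the agent passed
  var allowed = ['monday', 'tuesday', 'wednesday', 'thursday', 'friday'];
  if (allowed.indexOf(args.day) === -1) {
    // a plain string is a valid return value: an instruction to the agent
    return 'That day is not possible. Offer the customer a weekday instead.';
  }
  return { ok: true, day: args.day, time: args.time || 'any' };
}
```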
And what does the function return?
The return value does not have to follow a specific format. Any object structures can be returned or even a simple string with an instruction on how the agent should proceed.
The only important thing is that the agent can understand the meaning.
💡Tips
- Well-chosen function and parameter names with a suitable description are crucial for reliable use by the agent.
- Use functions sparingly. If the agent has many functions to choose from, it takes up a lot of attention budget.
Variable Placeholders vs Function Calls — what, when, why? 🤔
Variable placeholders run quietly in the background and write recognized values (e.g., name, zip code) into the data object.
- Advantage: saves the LLM's attention budget.
- Disadvantage: The value cannot be checked before the bot replies.
Function Calls are the active players: The LLM calls a function that performs runtime checks or validation.
- Area of use: everywhere you need immediate results – e.g., validations or CRM queries.
💡Rule of thumb:
- Variable placeholders → for non-critical fields
- Function Calls → for everything that needs to be validated or processed immediately
The Javascript Environment 💻
With Javascript, you have a LowCode environment at your disposal, with which you can realize advanced functions.
Global Objects
In the Javascript environment, you will find some global objects that you can use from any function.
- data {} – the central object, whose fields are directly transferred to the Contact data record for further processing
- temp {} – a temporary object in which you can save serializable data between turns
- actions [] – to trigger control commands
- tts_translation {} – a map of regex expressions (in string format) to phonetic substitutions, to force the correct pronunciation of certain terms
- LOG() – a function for log outputs that you can then see in the conversation log
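A tiny sketch of how these globals interact. The stubs at the top only exist so the snippet runs outside Dialfire, where these objects are provided for you; the field names are illustrative:

```javascript
// Stubs for local testing only – inside Dialfire, data, temp,
// tts_translation and LOG already exist as globals.
var data = {}, temp = {}, tts_translation = {};
function LOG(msg) { /* appears in the conversation log in Dialfire */ }

// fields on data end up in the contact record:
data.customer_request = 'wants to return a product';
// turn-to-turn state that should NOT land in the record goes to temp:
temp.phase = 10;
// force the pronunciation of a brand name (regex string → phonetic form):
tts_translation['GreatProducts'] = 'Great Products';

LOG('phase is now ' + temp.phase);
```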
Triggering API Calls via HTTP
For API calls to external systems, global functions are available:
GET(url, options)
POST(url, payload, options)
PUT(url, payload, options)
DELETE(url, options)
The result is a Response object with the following functions: status(), text(), json()
The request works asynchronously. You can therefore start several requests in parallel and the script only waits until the request is completed when accessing one of the functions on the Response.
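A sketch of two parallel requests, assuming the GET helper and the Response accessors (status/text/json) described above. The stub at the top exists only so the snippet runs outside Dialfire, and the URLs are placeholders:

```javascript
// Stub for local testing only – inside Dialfire, GET is a built-in global.
var GET = function (url, options) {
  return {
    status: function () { return 200; },
    text:   function () { return '{"id":42}'; },
    json:   function () { return { id: 42 }; }
  };
};

var orderReq = GET('https://example.com/api/orders/42');
var stockReq = GET('https://example.com/api/stock/42');
// both requests now run in parallel; the script only waits
// when one of the Response accessors is used:
if (orderReq.status() === 200) {
  var order = orderReq.json();
}
```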
Using Script Hooks
You can implement the following hooks as a function to be able to react to certain events:
- onLoad() – when the agent starts
- onUpdate() – after each turn
- onClose() – at the end of the conversation (e.g., closing tasks, write operations)
- onFieldUpdate(fieldname, value) – if defined, the variable placeholders are not simply attached to the data object but are processed via this function
- onFunctionCall(name, arguments) – if the agent calls an undefined function... this is how dynamic functions can be implemented
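As a hedged example of the onFieldUpdate hook, here is a sketch that normalizes an e-mail field before it is stored. The data stub only exists so the snippet runs outside Dialfire, and "email" is an assumed field name:

```javascript
// Stub for local testing only – inside Dialfire, data is a provided global.
var data = {};

function onFieldUpdate(fieldname, value) {
  // trim whitespace and lowercase e-mail addresses before storing
  if (fieldname === 'email' && typeof value === 'string') {
    value = value.trim().toLowerCase();
  }
  data[fieldname] = value;
}

onFieldUpdate('email', '  Jane.Doe@Example.COM ');
// data.email now holds the normalized address
```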
Help, my agent is getting dumber - or - what a pointing finger is good for 🤔
Did it start well with your agent? And now it just doesn't do what it's supposed to?
You've already implemented all of our tips - and still - it's exasperating!
Then you're probably at the point where you could use our promised solution for really big agents.
Why static prompts reach their limits
AI agents, like humans, have a limited attention budget. But a new employee could work great with your prompt, couldn't they? Is the AI really that stupid? Why can it write endlessly long texts but fail in such a conversation?
💡 Simple answer: Your employee has a pointing finger, but today's AI doesn't!
A human can remember (or keep their finger on) where exactly they are in the conversation script; the AI has to figure this out again for every answer based on the conversation history. This costs a lot of attention, and little is left for the actual answer.
Welcome to Context-Engineering!
You are about to make the leap from prompt engineer to context engineer. The idea is to dynamically adjust the context (prompt) so that only the relevant parts are present for the next answer. And this is the great hour of prompt templates.
Everything that can be useful for the next answer in the current conversation situation must be included, everything else removed.
Information Agents
If you are working on an agent that should only provide information, it makes sense to only provide the currently relevant part of the knowledge base. So, first have the agent start a function call in which the appropriate part of the knowledge base is retrieved.
For smaller knowledge bases, you could simply select from a list of topics; for very large knowledge bases, you need a RAG system.
You then include the document for the topic area into the prompt via the template. And now the agent can fully concentrate on the actual question.
Transactional Agents
If your agent is supposed to go through a complex process, you should divide it into phases. You switch between the individual phases using function calls.
💡This solves many problems at once:
- full attention in the individual phases
- very high process fidelity - adheres exactly to the instructions
- process can be deterministically controlled - the agent cannot simply skip phases
- individual phases can be optimized and tested independently, no interference with other phases
- not only better but also faster answers due to smaller context
- your agent can grow indefinitely
Example Prompt Template
# Role
You are a phone ordering bot at GreatProducts.
# General Process
You guide the customer through the ordering process:
- Greeting
- Capturing the shopping cart
- Capturing customer data
- Capturing payment data
- Saying goodbye
Focus completely on the current task.
# Current Task
{{#if !temp.phase}}
Greet the customer.
Tell the customer that we will first take down the desired items and at the same time activate the function “Neuer_Warenkorb”
@Neuer_Warenkorb Create a new shopping cart for order entry
{{#elif temp.phase===10}}
Ask the customer for the desired items
….
When all items are captured, then say that you now need the customer data and at the same time activate the function “Daten_Erfassen”
@Daten_Erfassen Prepare for capturing customer data.
{{#elif temp.phase===20}}
…
{{#elif temp.phase===30}}
….
{{#elif temp.phase===40}}
Say goodbye to the customer and end the conversation.
{{/if}}
# FAQ
Q: Why?
A: Because!
@hangup Ends the call.
Explanation
You define a function in each phase that brings the agent to the next phase. Within this function, you set the field temp.phase to the value for the next phase.
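The phase switch itself is just an ordinary Script-tab function. Here is a minimal sketch for the Neuer_Warenkorb function from the example template; temp is stubbed so the snippet runs outside Dialfire, and the phase value 10 matches the template above:

```javascript
// Stub for local testing only – inside Dialfire, temp is a provided global.
var temp = {};

function Neuer_Warenkorb(args) {
  // advance the conversation to the item-capturing phase;
  // the next render of the prompt template then shows the phase-10 block
  temp.phase = 10;
  return 'Shopping cart created. Start capturing the desired items.';
}
```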
Doesn't this create a strange gap between the phases? No, not at all.
The agent verbally prepares the customer for the new phase. After that, however, the agent does not wait for a reaction from the customer but reacts immediately to the result of the function call with the changed prompt so that it seamlessly starts in the next phase without a conversation gap occurring.
The function declarations are also faded in and out by the template, so the agent only sees the relevant functions at a time. This makes a big difference.
Since you dynamically change the prompt in the ongoing conversation, make sure that enough context remains around the phases so that the previous conversation history still makes sense with the new prompt. In the example, only the # Current Task section is dynamically changed. However, the general process description and the FAQs remain the same.