Tomáš Repčík - 27. 7. 2025

Learnings from Using Firebase AI

Building Chat Applications with Firebase AI

Firebase AI is a simple way to integrate AI into your Android, iOS, Flutter or web application.

It provides the means to use AI models in your application without the need to manage your own AI infrastructure.

This is not a tutorial on how to use Firebase AI, but rather my learnings from using it in my Android application: how to achieve better results, how to use tool calling properly, and ideas on how to approach it before you start.

Documentation on how to start is available at firebase.google.com.

The implementation is simple and that is why it can be a double-edged sword. It is easy to get started, but it is also easy to misuse it. Here are some tips on how to use it properly.

I will be giving examples in Kotlin, but the same applies to other languages.

Chat Implementation

Before we start, I want to mention that you do not need to track the conversation state in your application or resend the whole conversation history to Firebase AI.

Firebase AI has you covered there as it provides session management for you. If you already have a chat, you can just pass the context as history to the AI model, and it will handle the conversation state for you.

val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel("gemini-2.5-flash", systemInstruction = content(role = "system") {
        text("You are a helpful assistant")
    })

val chat = model.startChat(
    history = listOf(
        content(role = "user") { text("Hello, could you help me build AI chat applications?") },
        content(role = "model") { text("Of course! I'd be happy to help you with that.") })
)

val response =
    chat.sendMessage(content(role = "user") { text("How to keep session history?") })
print(response.text)

Create a model and specify the system instruction, which is the initial context for the AI model. Then start a chat with the initial history of messages.

Afterwards, you can send messages to the chat and get responses from the AI model.

Safety of AI Responses

When using AI models, it is important to ensure that the responses are safe and appropriate for your application.

Firebase AI provides a way to filter out unsafe content from the responses.

You can use the safetySettings parameter to specify the safety level of the responses.

Safety Configuration Options

Firebase AI offers several safety mechanisms to protect your users:

Harm Categories:

- HARASSMENT
- HATE_SPEECH
- SEXUALLY_EXPLICIT
- DANGEROUS_CONTENT
- CIVIC_INTEGRITY

Blocking Thresholds:

- NONE - never block based on this category
- ONLY_HIGH - block only content with a high probability of harm
- MEDIUM_AND_ABOVE - block content with a medium or high probability of harm
- LOW_AND_ABOVE - block content with a low, medium, or high probability of harm

By default, MEDIUM_AND_ABOVE is used, which blocks content with medium harm and above. So, if you want to be more strict or less strict, you can change the safety settings.

Example:

val harassmentSafety = SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.LOW_AND_ABOVE)
val hateSpeechSafety = SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.MEDIUM_AND_ABOVE)
val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        "gemini-2.5-flash",
        systemInstruction = content(role = "system") {
            text("You are a helpful assistant")
        },
        safetySettings = listOf(harassmentSafety, hateSpeechSafety)
    )

Proper Usage of Tool Calling

Initialization

A great feature of recent AI models is tool calling. It allows you to call external APIs or functions from the AI model.

The implementation is quite simple. You just need to define the tool and its parameters, and then pass it to the AI model.

val sendMailTool = FunctionDeclaration(
    "sendMail",
    "Sends an email to a recipient with a specified subject and body.",
    mapOf(
        "recipient" to Schema.obj(
            mapOf(
                "name" to Schema.string("The name of the recipient."),
                "email" to Schema.string("The email address of the recipient.")
            ),
            description = "Details of the email recipient, including their name and email address."
        ),
        "subject" to Schema.string("The subject of the email."),
        "body" to Schema.string("The body content of the email.")
    ),
)

val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        "gemini-2.5-flash",
        tools = listOf(Tool.functionDeclarations(listOf(sendMailTool))),
        systemInstruction = content(role = "system") {
            text("You are a helpful assistant")
        }
    )

Usage of Tool Calling

When the tools are defined, the AI model can call the tool with the parameters you defined.

val response = chat.sendMessage(
    content(role = "user") { text("Send email to John Doe at john.doe@example.com with subject 'Hello' and body 'Hi John, how are you?'") }
)
val functionCalls = response.functionCalls
val sendMailParameters = functionCalls.find { it.name == "sendMail" }
if (sendMailParameters != null) {
    sendMail(sendMailParameters)
}

Here is the catch where some people get confused: the AI model calls the tool with all the parameters derived from the chat.

The intuitive approach is to take the call parameters, send the mail, show the user that it was sent, and leave it at that.

We also tend to assume that the model knows what we did with the tool call. Unfortunately, that is not the case.

You have to send a response or confirmation back to the AI model that the tool call was actually executed.

If you then asked the model to reflect on the last message, it would not know that the tool call had been executed, and it would lose the context of the last message.

The right way is to give feedback to the AI model; it will elaborate in its next message, which you should show to the user.

So after you send the email, or execute any other tool call, you should send a message to the AI model confirming that the tool call was executed.

if (sendMailParameters != null) {
    // do the actual work of sending the email
    sendMail(sendMailParameters)
    // afterwards, send the result back to the model so it keeps the context
    val toolResponse = JsonObject(mapOf("response" to JsonPrimitive("Email sent")))
    val finalResponse = chat.sendMessage(
        content(role = "function") {
            part(FunctionResponsePart(sendMailParameters.name, toolResponse))
        }
    )
    // use the final response to update the UI or inform the user
}

This way, the AI model will have the context of the last message, and it will be able to continue the conversation without losing the context.

Keep the AI Tool Call Handling Clear

There can be multiple tool calls in the response, so you should handle them properly.

You can use the functionCalls property of the response to get all the tool calls and their parameters.

val response = chat.sendMessage(
    content(role = "user") { text("Send email to John Doe at john.doe@example.com with subject 'Hello' and body 'Hi John, how are you?'") }
)
val functionCalls = response.functionCalls
functionCalls.forEach { functionCall -> 
    when (functionCall.name) {
        "sendMail" -> ...
        "anotherTool" -> ...
        "yetAnotherTool" -> ...
        else -> {
            // handle unknown tool calls
        }
    }
}

This way, you can handle each tool call separately and execute the appropriate action based on the tool call name.
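As the list of tools grows, the when block can get unwieldy. One alternative is a registry of handlers keyed by tool name. Here is a minimal, self-contained sketch; the handler names and the simplified Map<String, String> argument type are illustrative assumptions, not the Firebase AI API (real arguments arrive as JSON elements on the function call part):

```kotlin
// Illustrative sketch: a registry of tool handlers keyed by tool name.
// The String-map argument type is a simplification of the real JSON arguments.
typealias ToolHandler = (Map<String, String>) -> String

val toolHandlers: Map<String, ToolHandler> = mapOf(
    "sendMail" to { args -> "Email sent to ${args["recipient"]}" },
    "detectTopic" to { args -> "Detected topics: ${args["tags"]}" },
)

// Look up the handler by name and fall back gracefully for unknown tools
fun dispatch(name: String, args: Map<String, String>): String =
    toolHandlers[name]?.invoke(args) ?: "Unknown tool: $name"

fun main() {
    println(dispatch("sendMail", mapOf("recipient" to "john.doe@example.com")))
    println(dispatch("unknownTool", emptyMap()))
}
```

Adding a new tool then only requires a new entry in the map, which keeps the dispatch logic in one place.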

Categorization of Messages in Tool Calls

I have been struggling with the consistent categorization of the messages in tool calls.

For example, you want to know the topic of the conversation and categorize it based on the system which you are using in the app.

The Schema can provide you with an implementation of enums, but the AI can pick only one value at a time.

The Schema also provides an array type, which can contain other schemas, so combining them allows the model to return multiple values.

val detectTopicTool = FunctionDeclaration(
    "detectTopic",
    "Detect topics in the messages",
    mapOf(
        "tags" to Schema.array(
            items = Schema.enumeration(
                values = listOf("AI", "Android", "iOS"),
                description = "Topic tags"
            ),
            minItems = 1
        )
    )
)

If the AI model detects multiple topics, it will return all the applicable topics in the array. Afterwards, you can convert the strings into enum values and use them in your application.
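Since the model returns the tags as plain strings, the conversion into your own enum can be sketched like this (the Topic enum and parseTopics helper are hypothetical names, not part of the SDK):

```kotlin
// Hypothetical enum mirroring the schema values "AI", "Android", "iOS"
enum class Topic { AI, ANDROID, IOS }

// Convert the model's string tags into enum values, dropping anything
// outside the known set instead of crashing on unexpected output
fun parseTopics(tags: List<String>): List<Topic> =
    tags.mapNotNull { tag ->
        runCatching { Topic.valueOf(tag.uppercase()) }.getOrNull()
    }

fun main() {
    println(parseTopics(listOf("AI", "iOS", "Gardening"))) // [AI, IOS]
}
```

Ignoring unknown values keeps the app robust if the model ever hallucinates a tag outside the enumeration.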

Improved Prompts with Structured Data

Another great feature of Firebase AI is the ability to use structured data in prompts.

This allows you to provide more context to the AI model and improve the quality of the responses.

Instead of sending plain text to the AI in the system message, descriptions and instructions can be provided in a structured format.

{
    "role": "Email Assistant",
    "name": "MailMate",
    "task": "Assist the user in composing, sending, and managing emails efficiently and accurately.",
    "guidelines": {
        "language": {
            "primary": "Use the language preferred by the user",
            "fallback": "If unsure about the user's language, use English"
        },
        "tone": {
            "style": "Be professional, clear, and courteous",
            "restrictions": "Avoid informal language or slang"
        },
        "behavior": {
            "uncertainty": "If any email details are unclear, ask the user for clarification",
            "privacy": "Never share or suggest sharing sensitive information unless explicitly instructed by the user",
            "off_topic": "If the user asks questions unrelated to email, politely redirect them to email-related tasks",
            "multiple_requests": "If multiple email actions are requested, confirm with the user which one to prioritize"
        }
    }
}

The model will understand the structured data and use it to improve the quality of the responses.
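From the SDK's point of view, the structured prompt is still just text in the system instruction. A minimal sketch, with the JSON abbreviated from the example above:

```kotlin
// The structured prompt is passed as plain text; the model parses the
// structure itself (content abbreviated for brevity)
val structuredInstruction = """
    {
        "role": "Email Assistant",
        "name": "MailMate",
        "task": "Assist the user in composing, sending, and managing emails."
    }
""".trimIndent()

// In the app, this string would go into the system instruction, e.g.:
// systemInstruction = content(role = "system") { text(structuredInstruction) }
fun main() {
    println(structuredInstruction.lines().first())
}
```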

A similar approach can be used for tool calls, where you can define the parameters in a structured format. The tool calling will become more predictable and easier to manage.

You do not need to use JSON; feel free to experiment with other formats like YAML or XML, as long as the AI model can parse them.

This is based on Anthropic’s prompt engineering, but it is not limited to it. You can use any structured format that you find suitable for your application.

Socials

Thanks for reading this article!

For more content like this, follow me here or on X or LinkedIn.
