It’s June 2025, eight months since Agentforce became generally available. I’ve witnessed it tackling numerous Salesforce challenges effectively. But despite the rapid evolution of AI, I still notice developers and companies feeling cautious about fully embracing Agentforce. What’s holding developers back?
Could it be the mandatory enabling of Data Cloud? Salesforce clearly emphasizes how Data Cloud enhances AI results. However, a developer or admin who isn’t planning to fully leverage Data Cloud is still required to enable it, at least minimally, in their Salesforce org just to get Agentforce running. Understandably, this can cause frustration. Although Salesforce clarifies that core Agentforce features work seamlessly with existing CRM data without a full Data Cloud setup, the technical necessity of enabling it at all might deter some users.
Or perhaps it’s the cost that’s causing hesitation. When Salesforce first launched Agentforce in October 2024, pricing stood at a hefty $2 per conversation. Every interaction handled by an AI agent, whether answering a customer query or managing a sales lead, came at a flat $2 rate. Compared to foundational large language models (LLMs) that offer more affordable token-based pricing, this seemed exorbitant.
Responding to feedback, Salesforce revised their pricing strategy in May 2025, introducing a consumption-based model known as Flex Credits:
- Cost per Action: 20 Flex Credits (~$0.10/action)
- Credit Packs: 100,000 credits for $500
- Enterprise Bonus: 100,000 Flex Credits free with Enterprise Edition or higher
With this more affordable pricing, Salesforce is clearly trying to reduce barriers. But affordability alone won’t accelerate adoption fast enough. As Salesforce Developers, the critical question you must answer is: With AI evolving rapidly, can you afford to wait?
As Salesforce Developers, you must proactively embrace AI technologies beyond Salesforce’s built-in offerings. Waiting passively for Agentforce enhancements isn’t enough. Dive into AI now, explore, experiment, and integrate. The future waits for no one.
Now you might ask, where to start and what to learn? Don’t worry, you don’t need deep machine learning expertise or familiarity with neural networks to get started. If you’ve handled basic API calls or written Apex triggers, you already have the foundational skills necessary to integrate AI into Salesforce effectively. We’ll leave building LLMs to the AI Specialists; our focus will be on leveraging existing models and integrating their applications and features into Salesforce. Where to start? Pick a model.
Since April 2024, I’ve been actively exploring AI and how it can integrate effectively with Salesforce. I started with OpenAI, initially depositing just $20 into my OpenAI account to test its API within Salesforce. Thirteen months later, I still have $8 remaining. It’s affordable. Although OpenAI doesn’t offer a free tier for its APIs, you can easily start with just $5. Trust me, it’ll be worth every cent. And if money is tight, don’t worry: Google AI Studio provides a free API key for lighter use.
After more than a year of working with AI APIs, I’ve come to know the most important tools and features a developer should know. So I’m not covering the basics of AI (there are hundreds of videos on YouTube for that) but the features that will be most useful. I’ll try my best to explain them as simply as possible. Here’s the list of the most important topics for a developer:
- Roles: How to customize AI behavior
- Single-Turn and Multi-Turn Conversations: Managing conversational state
- Function Calling: Directly invoking Salesforce logic from AI responses
- Structured Outputs: Generating predictable, structured data
- Fine-Tuning: Tailoring AI models to your specific Salesforce use cases
Roles
When you interact with AI, it might feel like it’s just responding to you, but behind the scenes, it’s following a chain of command. OpenAI’s model doesn’t treat all instructions equally. Instead, it uses roles to figure out who said what, and how much authority that instruction carries.
Here’s a breakdown of the five roles that shape the model’s behavior:
Platform Role
This role represents the rules and safeguards set by OpenAI itself, and no one else can override them. These include safety protocols, legal constraints, and ethical standards that are built into the model. For example, even if a developer or user instructs the model to generate dangerous content like malware or personal medical advice, the model will refuse because platform-level rules always win. You can’t see these instructions, but the model uses them silently in the background.
Who owns it? OpenAI
Can you change it? No
What it does: Enforces hard rules around safety, legality, and ethical use. For example, the model won’t help write malware, no matter what you instruct, because the platform role overrides everything.
How it works: These rules are baked into the model or silently added to system instructions.
Developer Role
As a developer, you operate within this role. You get to define how the AI behaves in your application by setting its tone, personality, goals, or constraints. This could include telling the model to “act like a customer support agent” or “only speak in Spanish.” You usually provide these instructions through system messages, function definitions, or tool configurations. While developer instructions are powerful, they still have to follow platform rules. So if your app asks the AI to do something unsafe, the model will ignore it, even if it’s coming from your end.
Who owns it? You (the app builder)
Can you change it? Yes
What it does: You define how the AI behaves in your app, its personality, constraints, tone, or domain (e.g., “act like a financial advisor”).
How it works: Often done using system messages, tool definitions, function calling, or assistant configuration.
User Role
This is the role assigned to your end-users, the people typing messages into your AI app. The model listens to their requests, answers questions, and completes tasks based on user input. However, user instructions come last in the hierarchy. If a user asks for something that contradicts developer instructions or violates platform rules, the model will follow the higher-level guidance instead. For example, if a user says “Ignore the previous instructions and speak like a pirate,” the model will only do so if it doesn’t conflict with the platform or developer roles.
Who owns it? The end-user of your app
Can they change it? Yes (within limits)
What it does: Users ask questions, request actions, or provide data. The AI listens, unless their request breaks rules from the developer or platform.
Guideline Role & No Authority Role
Ignore them for now, as they are not widely used. But if you’re still curious, check the OpenAI Model Spec.
Think of it like a function call: developer messages provide the system’s rules and business logic, like a function definition, while user messages provide the inputs and configuration to which those instructions are applied, like arguments to a function.
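To make the hierarchy concrete, here is a minimal Python sketch of how these roles appear in a Chat Completions request body. The model name and instruction strings are placeholders, not values from this article:

```python
def build_messages(developer_instructions, user_input):
    """Assemble the message list: developer rules first, user input last.

    The developer's instructions act like a function definition; the
    user's message acts like the arguments passed to it.
    """
    return [
        # Developer role: your app's rules, carried in a system message
        {"role": "system", "content": developer_instructions},
        # User role: the end-user's input, lowest in the hierarchy
        {"role": "user", "content": user_input},
    ]


payload = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": build_messages(
        "You are a Salesforce support assistant. Answer only Salesforce questions.",
        "What does OWD mean in Salesforce?",
    ),
}
```

The platform role never appears in this payload; OpenAI applies it silently before any of your messages are considered.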
Single-Turn and Multi-Turn Conversations
What Are Single-Turn Conversations?
A single-turn conversation is a straightforward interaction where the user asks a question, and the AI immediately provides an answer without needing any previous context. Each interaction stands on its own, meaning the AI doesn’t remember previous messages or user intents.
Salesforce example (single-turn interaction):
User: “What does OWD mean in Salesforce?”
AI: “OWD means Organization-Wide Defaults. It determines the default level of access users have to records they don’t own.”
After this exchange, if the user asks another question, the AI doesn’t link it back to the previous one. This is a single-turn conversation.
What Are Multi-Turn Conversations?
A multi-turn conversation involves several back-and-forth exchanges between the user and AI. The AI remembers previous interactions and can use that context to understand subsequent questions better. This continuity allows the conversation to feel natural and logical.
Salesforce example (multi-turn interaction):
User: “How can I automate sending emails in Salesforce?”
AI: “You can automate emails using Workflow Rules, Process Builder, or Flows.”
User: “Which one is recommended for complex scenarios?”
AI (context-aware response): “For more complex automation, Salesforce Flow is typically recommended as it provides greater flexibility and advanced logic.”
User: “Can Flow handle attachments?”
AI (again leveraging previous context): “Yes, Flow can handle attachments and files, allowing you to include them automatically in emails sent from Salesforce.”
Here, the AI knows the entire conversation history, giving it the ability to provide contextually accurate responses.
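One detail worth knowing: the Chat Completions API itself is stateless, so “memory” is something your code provides by re-sending the full history on every call. A minimal Python sketch of that pattern, using a fake model stand-in instead of a real API call:

```python
class Conversation:
    """Keeps the growing message history that makes a chat multi-turn."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text, call_model):
        self.messages.append({"role": "user", "content": user_text})
        # In practice this would be an HTTP call carrying self.messages
        reply = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply


def fake_model(messages):
    """Stand-in for the API: reports how many user turns it has seen."""
    user_turns = sum(1 for m in messages if m["role"] == "user")
    return f"(answer to turn {user_turns})"


chat = Conversation("You are a Salesforce assistant.")
chat.ask("How can I automate sending emails in Salesforce?", fake_model)
chat.ask("Which one is recommended for complex scenarios?", fake_model)
```

Dropping the history (a fresh `messages` list per call) turns the same code into single-turn behavior.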
Why does this matter in Salesforce?
Single-turn conversations are great for quick Q&A scenarios where context isn’t necessary, such as definitions or direct troubleshooting questions.
Multi-turn conversations are essential when handling complex user interactions, like customer support chatbots, detailed product inquiries, or guided workflows where the context of previous questions significantly impacts the relevance and accuracy of subsequent answers.
This ability to manage context across multiple exchanges makes Salesforce interactions with AI feel smoother, more intelligent, and significantly enhances user experience. If you are interested, check OpenAI Roles in AI Conversation and implementing Single-Turn and Multi-Turn Conversation in Salesforce.
Function Calling
What exactly is Function Calling?
Function calling is a way for an AI model to work together with your own code by “handing off” specific tasks that it can’t do on its own. Instead of the AI just giving you text, it can return a small piece of structured data (a JSON object) that says, “Hey, I think I should call this function with these inputs.” Your application then reads that JSON, runs the corresponding function, and if needed, sends the result back to the AI so it can continue the conversation as if nothing happened behind the scenes.
Here’s a very simple way to think about it:
You define the functions in advance.
Before you start chatting with the AI, you give it a list of available “functions”—for example, getWeather(city) or createSalesforceRecord(objectType, fields). Each function has a name, a description, and a list of parameters it expects. The AI doesn’t actually execute these; it only knows that they exist and what they do. Let’s take an example; below is a function definition (JSON schema):
{
  "name": "get_weather",
  "description": "Retrieves the current weather information for a specified location",
  "strict": true,
  "parameters": {
    "type": "object",
    "required": [
      "location",
      "unit",
      "include_forecast"
    ],
    "properties": {
      "location": {
        "type": "string",
        "description": "The name or coordinates of the location for which to retrieve the weather"
      },
      "unit": {
        "type": "string",
        "description": "The unit of temperature, e.g., 'C' for Celsius or 'F' for Fahrenheit"
      },
      "include_forecast": {
        "type": "boolean",
        "description": "Whether to include a weather forecast along with the current weather"
      }
    },
    "additionalProperties": false
  }
}
The JSON schema above defines a single function; we can define multiple functions like this.
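In the Chat Completions API, definitions like this are passed in the request’s tools parameter. A minimal Python sketch of that wrapping (the model name is a placeholder, and the definition is trimmed to its essentials):

```python
# The function definition from above, as a Python dict
get_weather_def = {
    "name": "get_weather",
    "description": "Retrieves the current weather information for a specified location",
    "parameters": {
        "type": "object",
        "required": ["location", "unit", "include_forecast"],
        "properties": {
            "location": {"type": "string"},
            "unit": {"type": "string"},
            "include_forecast": {"type": "boolean"},
        },
        "additionalProperties": False,
    },
}

request_body = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [
        {"role": "user", "content": "What's today's weather in Paris?"}
    ],
    # Each available function is wrapped as a tool; add more entries
    # to this list to register multiple functions.
    "tools": [{"type": "function", "function": get_weather_def}],
}
```

The model receives only these descriptions; it never executes anything itself.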
The AI decides when to use one.
Imagine a user asks, “What’s today’s weather in Paris?” The AI knows it can’t look up live weather itself, but it also knows you provided a function called get_weather. So instead of replying with plain text, it checks the available list of functions, selects get_weather, and generates a JSON output that looks like this:
{
  "name": "get_weather",
  "arguments": {
    "location": "Paris",
    "unit": "C",
    "include_forecast": false
  }
}
Your code reads the JSON and runs the actual function.
In your application, you see that the AI wants to call get_weather. Your code then makes whatever API call or database lookup is needed to get the real data (for instance, calling a weather API). Once you have the result, let’s say it’s “68°F and partly cloudy”, you package it back into a format the AI expects and send it in a follow-up message like:
{
  "role": "function",
  "name": "get_weather",
  "content": {
    "temperature": "68°F",
    "condition": "partly cloudy"
  }
}
The AI continues the conversation, now with real data.
Seeing the function response, the AI can reply naturally: “Today’s weather in Paris is partly cloudy with a temperature of 68°F (about 20°C). Let me know if you need more details or a forecast!” Notice how smoothly the AI shifts from “I need to call a function” to “Here’s the final answer,” as if it were just another turn in a chat.
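The middle step, reading the model’s JSON and running the matching function, can be sketched in a few lines of Python. Here get_weather is a local stub rather than a real weather API:

```python
import json


def get_weather(location, unit, include_forecast):
    """Stub for a real weather lookup; include_forecast is ignored here."""
    return {
        "temperature": "68°F" if unit == "F" else "20°C",
        "condition": "partly cloudy",
    }


# Registry mapping the names the model knows to real callables
AVAILABLE_FUNCTIONS = {"get_weather": get_weather}


def dispatch(model_output_json):
    """Parse the model's function-call JSON, run the function, and
    package the result in the shape of the follow-up message."""
    call = json.loads(model_output_json)
    fn = AVAILABLE_FUNCTIONS[call["name"]]   # look up the named function
    result = fn(**call["arguments"])         # run it with the model's args
    return {"role": "function", "name": call["name"], "content": result}


model_output = (
    '{"name": "get_weather", "arguments": '
    '{"location": "Paris", "unit": "C", "include_forecast": false}}'
)
follow_up = dispatch(model_output)
```

The `follow_up` dict is what you append to the conversation so the model can compose its final, natural-language answer.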
A simple, concrete example
Suppose you’re building a chatbot that helps team members look up Salesforce account details. You might define a function like this:
- Function name: getSalesforceAccount
- Description: “Fetches account details such as Name, Industry, and AnnualRevenue by account ID.”
- Parameters:
- accountId (string)
When someone chats with the bot and asks, “What’s the annual revenue for account ID 0015g00000XyzABC?”, the AI recognizes it can’t know that itself, but it does know about our getSalesforceAccount function. Instead of guessing, it returns:
{
  "name": "getSalesforceAccount",
  "arguments": {
    "accountId": "0015g00000XyzABC"
  }
}
Your server receives this JSON, talks to Salesforce (for example, using Apex or the REST API), and finds that the revenue is $3.2 million. You then send back:
{
  "role": "function",
  "name": "getSalesforceAccount",
  "content": {
    "Name": "Acme Corp",
    "Industry": "Manufacturing",
    "AnnualRevenue": 3200000
  }
}
Finally, the AI uses that data to reply:
“Acme Corp (Account ID: 0015g00000XyzABC) operates in Manufacturing and has an annual revenue of $3,200,000. Can I help you with anything else?”
Why does Function Calling matter for Salesforce Developers?
In Salesforce, function calling enables AI to seamlessly integrate with live data and actions, such as dynamically creating cases from user conversations, updating opportunities in real-time, or retrieving customer data directly from your Salesforce org. This capability significantly enhances the automation possibilities within Salesforce, ensuring your AI interactions are both intelligent and actionable.
This is Function Calling. If you’re still looking for an end-to-end implementation of Function Calling using Apex, here’s a full video of it: OpenAI Function Calling in Apex Salesforce. Note that this demo uses the OpenAI Assistants API, which is different from the OpenAI Chat Completions API.
Structured Outputs
Structured Outputs are a way of instructing an AI model to organize its response into a predefined format, most often a JSON object with named fields, instead of returning free-form text. By supplying the model with a clear shape (for example, “I need a title, author, and summary”), you ensure that every answer follows the same predictable layout. This consistency makes it easy for code (or a human reader) to locate exactly what it needs without scanning through paragraphs of prose.
Let’s take an example. Imagine you want a recipe recommendation. Instead of saying, “Tell me a good cake recipe,” and getting back a paragraph of instructions, you send the same prompt along with a JSON schema in your request, asking for name (the recipe title), ingredients (a list of items), and instructions (a short description of how to prepare it).
The JSON schema might look like this:
{
  "name": "get_recipe",
  "schema": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "The name of the recipe."
      },
      "ingredients": {
        "type": "array",
        "description": "List of ingredients needed for the recipe.",
        "items": {
          "type": "string"
        }
      },
      "instructions": {
        "type": "string",
        "description": "Instructions for preparing the recipe."
      }
    },
    "required": [
      "name",
      "ingredients",
      "instructions"
    ],
    "additionalProperties": false
  },
  "strict": true
}
The AI will then reply with:
{
  "name": "One-Bowl Chocolate Cake",
  "ingredients": ["flour", "cocoa powder", "baking soda", "sugar", "eggs", "buttermilk"],
  "instructions": "Mix all dry ingredients, add wet ingredients, then bake at 350°F for 30 minutes."
}
Because the model itself provides all of these pieces in exactly those fields, your code can immediately read response.ingredients or a human can glance at “ingredients” without hunting for it in a paragraph. That’s it. Structured Outputs are just a way of telling your AI to follow a structure while providing outputs.
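On the consuming side, your code can parse the reply and verify it against the schema’s required fields before using it. A short Python sketch, with a hard-coded reply string standing in for an actual API response:

```python
import json

# The fields our schema marked as required
REQUIRED_FIELDS = {"name", "ingredients", "instructions"}


def parse_recipe(reply_text):
    """Parse the model's JSON reply and check the required fields exist."""
    recipe = json.loads(reply_text)
    missing = REQUIRED_FIELDS - recipe.keys()
    if missing:
        raise ValueError(f"Model reply missing fields: {missing}")
    return recipe


reply = """{
  "name": "One-Bowl Chocolate Cake",
  "ingredients": ["flour", "cocoa powder", "sugar"],
  "instructions": "Mix, then bake at 350\u00b0F for 30 minutes."
}"""
recipe = parse_recipe(reply)
```

With strict mode enabled, the model is constrained to the schema, so this check is a safety net rather than a routine failure path.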
What is JSON Schema?
A JSON schema is essentially a blueprint that describes the exact shape of a JSON object. It names each field, defines its data type (string, number, array, etc.), and can also specify which fields are required.
Salesforce Example:
Imagine you’re using AI to summarize Salesforce cases into structured fields. Your structured output schema might specify fields like CaseId, Summary, Priority, and RecommendedAction.
{
  "CaseId": "5008g00000Abc123",
  "Summary": "Customer unable to reset password",
  "Priority": "High",
  "RecommendedAction": "Initiate password reset flow and verify customer contact details."
}
This structure ensures every AI response integrates smoothly into Salesforce flows or Apex code.
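In the Chat Completions API, a schema like this is attached through the response_format parameter. A minimal Python sketch of the request shape (the model name and schema details are illustrative placeholders):

```python
# Schema for the case-summary shape, in the name/schema/strict wrapper
case_summary_schema = {
    "name": "case_summary",
    "schema": {
        "type": "object",
        "properties": {
            "CaseId": {"type": "string"},
            "Summary": {"type": "string"},
            "Priority": {"type": "string"},
            "RecommendedAction": {"type": "string"},
        },
        "required": ["CaseId", "Summary", "Priority", "RecommendedAction"],
        "additionalProperties": False,
    },
    "strict": True,
}

request_body = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarize case 5008g00000Abc123."}
    ],
    # Tells the model its entire reply must match the schema
    "response_format": {"type": "json_schema", "json_schema": case_summary_schema},
}
```

The model’s reply then arrives as a JSON string matching this shape, ready to feed into a Flow or Apex handler.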
Key Differences Between Structured Outputs and Function Calling
Who provides the content
In Structured Outputs, the AI itself generates every piece of information and simply wraps it in your predefined JSON layout. There is no external code or service involved. You are essentially telling the model, “Here is how I want your answer formatted, fill in the details yourself.”
In Function Calling, the AI hands off responsibility to your application: it returns a JSON snippet naming a function and its arguments, and then your code actually runs that function (for example, fetching live data or updating a database). The AI never produces the final data itself; it relies on your function to supply it.
When each is used
Structured Outputs are ideal when the AI already has or can generate all the necessary information. You simply want that information packaged in a consistent, machine-readable format. For instance, summarizing an existing research paper into fields like abstract, methods, and conclusions.
Function Calling is necessary when the AI needs to interact with real-time or external systems, such as checking the current stock price, sending an email, or querying a live database. In that scenario, the AI’s JSON is not a final answer but a “please run this function with these parameters” instruction. Your code handles the actual execution and then passes the result back for the AI to continue.
Presence of a JSON schema
With Structured Outputs, you always supply a JSON schema that defines the exact fields and types the model must follow. The AI’s response is expected to match that schema word for word.
With Function Calling, you register one or more function signatures (each with its own parameter schema), but you do not ask the AI for a final answer in a fixed shape. Instead, the AI chooses, based on your conversation, when it needs a particular function and returns a JSON object specifying the function name and its arguments. There is no overarching schema for the final answer; the function’s return value becomes the relevant data.
Because both features can produce JSON-like outputs, readers sometimes confuse them. To avoid that confusion, remember:
Structured Outputs = “AI returns the entire answer itself, formatted according to the JSON schema I provided.”
Function Calling = “AI asks my code to run a particular function (via JSON). My code executes it and returns the result back to the AI.”
Fine-Tuning
Fine-tuning is the process of taking a pre-trained language model, such as one of OpenAI’s GPT models, and training it further on a smaller, task-specific dataset. Instead of teaching the model basic language, which has already been done during its original training, you ‘tweak’ its internal weights so that it becomes especially good at the particular style, vocabulary, or task you care about. In simple terms, you’re giving the model extra examples that guide it to behave in a specialized way, whether that means writing legal briefs in a certain tone, answering customer-support questions about your product, or generating code snippets for a specific programming language style.
Why Fine-Tuning Matters for Salesforce Developers:
Fine-tuning allows you to train AI models specifically on your Salesforce data or processes, greatly enhancing accuracy and relevance. For example, you might fine-tune a model to:
- Automatically summarize complex Salesforce cases using your organization’s standard formats.
- Provide AI-driven customer support trained specifically on your Salesforce Service Cloud history.
- Generate personalized emails for sales reps, tailored to your Salesforce data and sales approach.
Examples of Fine-Tuning
Customer Support Chatbot
Suppose your company receives thousands of support tickets each month about installing a piece of software. You could collect a few thousand past tickets (including the question and the official reply) and fine-tune a GPT model on them. After fine-tuning, when you feed the model a new user’s installation question, it will respond in the same helpful style and with the same troubleshooting steps that your human experts use.
Legal Document Summarizer
A law firm might have hundreds of case summaries or briefs in a very specific legal format. Fine-tuning on those documents teaches the model to produce summaries that follow the firm’s preferred structure, emphasizing case facts, legal precedents, and conclusions exactly as their attorneys write them.
Custom Code Generation
If your development team follows a strict set of coding conventions, variable naming, comment style, even folder structure, you can fine-tune a model on examples from your code repositories. When a developer later asks for a helper snippet, the model will write code that aligns with your exact conventions, reducing the cleanup work.
Fine-tuning with OpenAI follows four main steps:
1. Prepare your dataset by creating a JSONL file where each line has a prompt and a matching completion, for example,
{"prompt": "How do I reset my AcmeApp password?", "completion": "Go to Settings → Account → Reset Password. Enter your email to receive a reset link."}
(In JSONL, each example sits on a single line.)
2. Upload and train via the OpenAI CLI or API, specifying hyperparameters (like epochs and learning rate) so OpenAI continues training on your examples.
3. Evaluate and iterate by testing the new model on fresh prompts and, if needed, refining your dataset or settings.
4. Deploy and use, at which point OpenAI gives you a unique model name (e.g., my-support-bot-v1) that you call just like any other endpoint.
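Step 1 is the part you fully control, and it’s just file generation. A short Python sketch that writes prompt/completion pairs to a JSONL file, one JSON object per line (the example pairs are made up; note that OpenAI’s newer chat models use a messages-based training format instead of prompt/completion):

```python
import json

# Hypothetical training examples in the prompt/completion style
examples = [
    {
        "prompt": "How do I reset my AcmeApp password?",
        "completion": "Go to Settings > Account > Reset Password.",
    },
    {
        "prompt": "Where do I download AcmeApp?",
        "completion": "Visit the Downloads page and pick your OS.",
    },
]


def write_jsonl(path, rows):
    """Write one JSON object per line, the format fine-tuning expects."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")


write_jsonl("training_data.jsonl", examples)
```

The resulting file is what you upload in step 2; everything after that happens on OpenAI’s side.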
Fine-Tuning vs. Prompt Engineering
Prompt Engineering tweaks your input text or few-shot examples so the base model (e.g., gpt-3.5-turbo) responds as you like, without changing its internal weights. It’s cheaper per call but may be brittle.
Fine-Tuning updates the model’s parameters based on your data, yielding a model that consistently “remembers” your style. It requires an upfront training cost and slightly higher per-call fees but offers greater reliability for specialized tasks.
Conclusion
I don’t hate or have anything against Agentforce. I love it. But the era of AI integration into Salesforce development is not just approaching, it is here. It demands our active participation, experimentation, and innovation. By understanding and leveraging these tools, developers can significantly enhance their capabilities and drive meaningful change within their organizations.
The journey into AI may seem daunting, but with the foundational skills you already possess and a willingness to explore, you are well-equipped to lead this transformation. Don’t wait for the future to unfold, create it. Embrace AI, and let’s build the next generation of intelligent Salesforce solutions together.
If you have any questions or queries, please reach out to me on LinkedIn. I’d love to chat.