Reflections from 2025

Taylor Lint
As we close out 2025, it's fun to reflect on just how far our customers, the ecosystem, and our platform have come. A year ago, most partners and customers in the Salesforce ecosystem were asking high-level questions about AI.
Things like: "How does AI work in Salesforce?" or "What does this actually do?"
Fast forward to today, and our best partners and customers are asking very different questions:
- How do we change our organization or billing model in an AI-first world?
- How do we retrain and reskill our teams to focus on more strategic work?
- How do we rethink ownership, accountability, and decision-making when AI is acting as a teammate?
- What else can my team accomplish if our backlog is suddenly automated?
This shift is the difference between experimenting with AI and actually deploying it.
Over the past year, as we’ve built an AI-native platform and worked closely with Salesforce consulting partners and internal Salesforce teams navigating this transition, one theme has stood out above all else:
The move to AI is fundamentally a trust problem.
And we’ve seen that trust shows up in a few concrete ways.
1. People Don’t Trust AI If They Don’t Trust What It Means for Their Job
One of the biggest blockers to AI adoption has nothing to do with the product itself.
Adoption fails because people don't trust the AI. That distrust often starts with fear.
AI is impacting every role in the Salesforce ecosystem, and people worry about what that means for their jobs. When that fear isn't addressed, it rarely shows up as loud resistance. Instead, it shows up as quiet non-adoption, or as objections rooted in expectations that were never aligned with what the AI actually does.
2. AI Rollout Is Change Management, Not Feature Adoption
Working with AI is not the same as working with a traditional SaaS tool.
Using AI to build or modify Salesforce requires a different mindset than manually configuring something in Setup or writing code. Teams need to understand:
- what AI is good at
- what it’s not good at
- and why its behavior may differ from deterministic systems
Without deliberate enablement and expectation-setting, even good AI feels unreliable — and unreliable systems don’t earn trust.
3. “Why Did It Do That?” Is the Most Important Question
Outcomes alone aren’t enough when you’re rolling out AI across a range of skillsets and roles. To trust AI, users need to understand:
- what context was used
- how decisions were made
- why a specific result was generated
Traceability of inputs, reasoning, and assumptions is critical, especially on Salesforce teams where AI is often replacing cross-functional collaboration or removing dependencies on technical resources. People need to understand where a result came from in order to double-check the AI's work or hand it off down the chain.
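To make that concrete, here is a minimal sketch of the kind of trace a reviewer might want attached to an AI-generated change. The field names and the example are purely illustrative, not the schema of any particular product:

```typescript
// Hypothetical shape of a trace record attached to an AI-generated change.
// Field names are illustrative, not a real product schema.
interface ChangeTrace {
  requestSummary: string;   // what the user asked for, in their own words
  contextUsed: string[];    // org metadata, docs, or records the AI read
  assumptions: string[];    // gaps the AI filled in on its own
  reasoning: string;        // plain-language explanation of why this approach
  proposedChange: string;   // the actual output (e.g. a Flow, field, or rule change)
  reviewedBy?: string;      // who checked it before it shipped
}

// Example a reviewer could read top to bottom before approving.
const example: ChangeTrace = {
  requestSummary: "Route high-value leads to the enterprise queue",
  contextUsed: ["Lead object fields", "Existing assignment rules"],
  assumptions: ["'High-value' means Annual Revenue over $1M"],
  reasoning: "Reused the existing enterprise queue rather than creating a new one.",
  proposedChange: "New lead assignment rule entry targeting the Enterprise queue",
};
```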
4. Humans-in-the-Loop Is How Trust Is Earned
Fully autonomous AI sounds appealing, and it makes for a shiny demo.
But in reality, complex AI workflows that automate dozens or hundreds of decisions need points of intervention to provide real value.
One of the clearest lessons we learned: intervention is a feature, not a failure.
Trust grows when users can:
- review AI outputs
- edit them
- approve changes
- override decisions
That participation creates ownership, and ownership creates trust.
It also reduces frustration (an important part of expectation-setting!) because it means fewer back-and-forth loops with the AI and more confidence in the outcome.
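If it helps to picture "intervention is a feature," here is a rough sketch of an approval gate in an AI workflow. The names and shape are hypothetical, just one way a human checkpoint could sit between a proposed step and an applied one:

```typescript
// Hypothetical intervention point: the AI proposes, a person decides, and
// nothing is applied without an explicit decision. Not any particular product's API.
type ReviewDecision = "approve" | "edit" | "reject";

interface ProposedStep {
  description: string;  // what the AI wants to do
  trace: string;        // link back to the reasoning for this step
}

async function runWithApproval(
  steps: ProposedStep[],
  review: (step: ProposedStep) => Promise<{ decision: ReviewDecision; revised?: ProposedStep }>,
  apply: (step: ProposedStep) => Promise<void>,
): Promise<void> {
  for (const step of steps) {
    const { decision, revised } = await review(step); // human checkpoint, not a failure path
    if (decision === "reject") continue;              // override: skip this step entirely
    const finalStep = decision === "edit" && revised ? revised : step;
    await apply(finalStep);                           // only approved or edited work is applied
  }
}
```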
The goal isn’t to replace expertise but to amplify it.
When AI works, it doesn't take work away from teams; it gives them leverage. And the companies we see succeeding aren't chasing fully autonomous systems out of the gate, but rather investing in the change required to be AI-first and ready for the future.