Salesloft Product Management SVP Frank Dale on Ethical AI

12 min read
Updated May 2, 2023
Published Mar. 20, 2023

The following interview with Frank Dale was originally published at GZ Consulting.

What experience have you had developing AI tools?

As the SVP of Product Management at Salesloft, I am working with our team to bring Rhythm, Salesloft’s AI-powered signal-to-action engine, to life.  Rhythm ingests every signal from the Salesloft platform, as well as signals from partner solutions via APIs, ranks and prioritizes those signals, and then produces a prioritized list of actions.  That list gives sellers a clear view of the actions that will be most impactful each day, along with an expected outcome prediction for each one.  In addition to simplifying a seller’s day-to-day, it helps them build their skills by providing context about why each action matters.
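
To make the shape of that pipeline concrete, here is a minimal sketch of a signal-to-action ranking loop.  Every name here (Signal, Action, score_signal) and every weight is an illustrative assumption, not Rhythm’s actual implementation or API.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str   # e.g. "salesloft", "partner_api" -- illustrative values
    kind: str     # e.g. "email_reply", "pricing_page_visit"
    buyer: str

@dataclass
class Action:
    description: str
    why_it_matters: str       # context that helps the seller build skills
    expected_outcome: float   # predicted probability of a positive result

def score_signal(signal: Signal) -> float:
    """Stand-in for a learned ranking model; these weights are made up."""
    weights = {"email_reply": 0.9, "pricing_page_visit": 0.7, "form_fill": 0.5}
    return weights.get(signal.kind, 0.3)

def build_daily_action_list(signals: list[Signal], top_n: int = 10) -> list[Action]:
    # Rank signals so the seller sees the highest-impact actions first.
    scored = sorted(signals, key=score_signal, reverse=True)[:top_n]
    return [
        Action(
            description=f"Follow up with {s.buyer} about their {s.kind}",
            why_it_matters=f"A {s.kind} signal from {s.source} suggests active interest",
            expected_outcome=score_signal(s),
        )
        for s in scored
    ]

actions = build_daily_action_list([
    Signal("salesloft", "email_reply", "Acme Corp"),
    Signal("partner_api", "pricing_page_visit", "Globex"),
])
for a in actions:
    print(f"{a.expected_outcome:.0%} - {a.description} ({a.why_it_matters})")
```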

AI is becoming increasingly important in RevTech, with many of our interactions being mediated by AI. Where do you see AI having the biggest impact on Sales reps between now and 2025?

AI will enable significant improvements in both seller efficiency and effectiveness.  The most obvious impact will continue to be automating away low-value, repetitive work.  What will surprise people is the rapid advance and adoption of AI that suggests the next best actions to take and the content to use in those interactions with buyers.  A seller’s typical workday will start with a recommended list of actions to take.  Each action will be prioritized based on where the seller sits in relation to their targets, with each action accompanied by suggested content where appropriate.  For instance, I might see a suggestion to respond to an email from a champion in an in-flight deal.  The recommendation will include suggested text for the response as well as a resource to attach to the email.  That’s a future we are actively investing in at Salesloft, and it’s at the heart of our soon-to-be-released Rhythm product.

Same question, but looking further out to 2030…

As AI becomes more commonly deployed across the sales profession, buyers will encounter a more consistent experience in each buyer-seller interaction.  As this becomes more common, it’s going to raise the bar on what buyers expect from a sales experience.  That will put more pressure on sales teams to deliver consistently in ways that may seem unreasonable today but will be possible with AI assistance.

One of the key ways to raise the seller performance bar will be high-impact, tailored coaching.  Manager time is a constrained resource, and seller coaching augmented by AI provides a path to realizing performance improvement without manager time constraints.  We should fully expect AI to help coach sellers to hit their goals based on each seller’s unique profile.  We can expect AI to evaluate the seller’s entire game (activities, conversations, and deal management) to identify the highest leverage areas each individual seller should focus on to improve.  Some of the coaching will be provided by AI at the point of execution, like on a call or when writing an email, with the rest provided throughout the workday as recommendations.

What are the most significant risks of deploying AI broadly across the Sales Function?

Two areas come to mind.  First, AI used without clear boundaries in a sales process can lead to problems.  If you employ AI and automation capabilities, it should be to arm the user to make a better decision, not to make the decision for them.  AI tools should not replace the human touch but rather augment it.  There’s a lot of pseudo-science tossed around on the topic of AI, but ultimately, humans understand the nuance of relationships better than machines.  One of the ways to address that concern is to deliver models that not only provide a recommendation but also surface the insights that led to it.  When humans can see the reasoning behind a recommendation, they trust the model more, and they also know when to ignore it.
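
As a sketch of that idea, here is a toy linear model that returns the factors behind its score alongside the recommendation, so a person can judge when to trust or override it.  The feature names and weights are invented for illustration.

```python
# Pair a recommendation score with the per-feature contributions that
# produced it, so the human sees why the model recommended the action.
def explain_recommendation(features: dict[str, float],
                           weights: dict[str, float]) -> tuple[float, list[tuple[str, float]]]:
    contributions = {name: features[name] * weights.get(name, 0.0) for name in features}
    score = sum(contributions.values())
    # Surface the factors that drove the score, largest first.
    top_factors = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, top_factors

score, factors = explain_recommendation(
    {"days_since_last_touch": 0.8, "champion_engaged": 1.0, "deal_stage": 0.6},
    {"days_since_last_touch": 0.5, "champion_engaged": 1.2, "deal_stage": 0.4},
)
print(f"Recommend follow-up (score={score:.2f}); driven by:")
for name, value in factors:
    print(f"  {name}: {value:+.2f}")
```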

Second, there’s a privacy component as well.  Companies may create AI models that share data about a particular buyer with other companies’ sales teams without said buyer’s knowledge.  The buyer may know they shared their data with one company but have no idea that multiple other customers of that company are using the same data.  Creating models that function this way puts companies and sales teams in a high-risk zone that borders on the unethical.  It isn’t even clear that building models in that way will be legal in the future.  If you plan to deploy AI in a sales org, it’s important to understand how data is collected and used.

AI Models are only as good as the underlying training data.  How concerned are you about biased models recapitulating discrimination?  For example, emphasizing sales skills that are gender or racially biased when evaluating sales rep performance?

It is a legitimate concern.  AI products are based on probabilities, not certainties.  The recommendations you receive or the workflow automations that fire happen based on the probability that the given recommendation or action is right, not the certainty that it is right.  In a good product, the model is correct more often than a human would be when faced with the same decisions.  At times, this is because the model can evaluate a larger set of factors, and in some cases, it is simply that machines can apply rulesets with a higher level of consistency than humans.
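
As a concrete illustration of acting on probabilities rather than certainties, here is a tiny sketch of gating an automation behind a confidence threshold.  The threshold value is an assumed policy choice, not a real product setting.

```python
# Treat model output as a probability, not a certainty: fire the automation
# only when confidence clears a threshold; otherwise leave the call to the human.
def dispatch(probability: float, threshold: float = 0.85) -> str:
    if probability >= threshold:
        return "auto_execute"       # confident enough to act automatically
    return "suggest_to_seller"      # surface as a recommendation instead

assert dispatch(0.92) == "auto_execute"
assert dispatch(0.60) == "suggest_to_seller"
```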

One of the key determinants of an AI model’s value is the dataset on which it was trained.  If the dataset does not properly represent the real world, the model will produce results that are biased or simply poor recommendations.  We’ve already seen several examples of that with image-editing software whose training datasets didn’t include Black people.  This led to poor outcomes or, worse, dehumanizing results when the AI product was used in the real world.  If you plan to deploy AI in your business, you should ask the provider what precautions they take to prevent bias in their models.  We are very intentional about removing factors that could lead to bias in our training datasets.  Still, it isn’t something I see most technology companies paying attention to in the revenue tech space.
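
One concrete precaution a buyer could ask a provider about is a simple representation audit of the training data.  This sketch is illustrative only, not any specific vendor’s practice.

```python
from collections import Counter

def representation_report(groups: list[str]) -> dict[str, float]:
    """Share of each group in the training data; large skews flag bias risk."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

print(representation_report(["group_a", "group_a", "group_a", "group_b"]))
# {'group_a': 0.75, 'group_b': 0.25} -- a skew worth investigating
```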

How do you curb racial and gender bias when performing sentiment analysis?

We take great care at Salesloft to remove factors that could lead to discrimination.  For example, for our Email Sentiment model, one of the ways we prevent bias is by removing all mentions of people’s names within the email, because names can provide clues to gender, race, or ethnicity.  We do that kind of preprocessing on any data we use before we build a model.
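
As an illustration of that kind of preprocessing, here is a minimal name-redaction sketch using spaCy’s named-entity recognizer.  It shows one plausible way to strip person names before training; it is not Salesloft’s actual pipeline.

```python
# Strip person names from email text before it reaches a sentiment model,
# since names can proxy for gender, race, or ethnicity.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # pretrained English pipeline with NER

def redact_person_names(text: str, placeholder: str = "[NAME]") -> str:
    doc = nlp(text)
    redacted = text
    # Replace entities from the end so earlier character offsets stay valid.
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        if ent.label_ == "PERSON":
            redacted = redacted[:ent.start_char] + placeholder + redacted[ent.end_char:]
    return redacted

print(redact_person_names("Hi Priya, thanks for the demo. Best, Daniel"))
# -> "Hi [NAME], thanks for the demo. Best, [NAME]"
```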

One of our assets is our scale.  We’re fortunate that we operate globally and are the only provider in our space with offices in the Americas, Europe, and APAC.  As a result, we work with organizations of all sizes globally, including many of the world’s largest companies.  That means when we build models, we have one of the largest datasets in the world for sales execution.  This enables us to train models based on datasets with both breadth and depth.  When we build a model, it is easier to train it in a way that fairly represents reality and includes safeguards to avoid racial or gender bias.

AI will increasingly be deployed for recommending coaching and mediating the coaching.  What concerns do you have about replicating bias when coaching?

As with any AI product that makes recommendations, the potential for biased recommendations is a concern that needs to be addressed when building models.

We take our responsibility to avoid bias in any product we release very seriously.  The revenue technology industry as a whole hasn’t yet demonstrated a similar commitment to avoiding harmful bias.  I don’t hear other companies talking about proactive steps to avoid it, but I think that will change.  We’re monitoring potential governmental action in both the US and EU that will require companies to raise their standards in this area.  It is only a matter of time before laws are passed that require companies to prevent unlawful bias in their AI products.

Sales activities are becoming increasingly digitized, a boon for revenue intelligence, training, and next best actions.  What guardrails do we need to put in place to ensure that employee monitoring does not become overly intrusive and invade privacy?

Let’s start by recognizing it is reasonable for an employer to have insight into what work is getting done and how it’s getting done.  On the other hand, getting a minute-by-minute record of how each seller spends their day is unreasonable, as is dictating every action the seller takes from morning until nightfall.

We have to start with the right first principles.  I think we can all agree that humans have inherent worth and dignity.  They don’t lose that when they go to work.  The challenge is that we have some companies in the technology industry that forget that fact when developing solutions.  When you forget that fact, I believe that you actually harm the customer that you’re trying to serve.  That harm happens in two ways.

First, you lose the opportunity to realize the true potential of AI, which is to serve as a partner that enables humans to do what they do best: engage with and relate to other humans.  AI should not be used to make final decisions for humans or to dictate how they spend every minute of their day.  Good AI solutions should be thought partners and assistants to humans.  It’s Jarvis to Tony Stark’s Iron Man.

The second way overly intrusive technology harms companies that employ it is via employee turnover.  It’s no secret that industries that offer low autonomy to employees suffer from high turnover.  Most humans fundamentally desire a base level of autonomy; if that’s threatened, they leave whenever a good option opens up.

In short, if the seller is working for the technology instead of the technology working for the seller, we’re on the wrong path.

In 2018, Salesforce CEO Marc Benioff argued that the best idea is no longer the most important value in technology.  Instead, trust must be the top value at tech companies.  How does trust play into ethical applications of AI?

We get to build the future we want to realize.  We can either build a future that perpetuates the things we don’t like about today’s world, or we can build a future that elevates human potential.  AI can be used to take us in either direction.  That means what we choose to build with AI and how we build it should be a very value-driven decision.

We can absolutely build highly effective AI-powered solutions that elevate the people who use them and deliver tremendous business value.  The people who believe otherwise simply lack the imagination and skill to do it.

What I love about our team at Salesloft is that we exist to elevate the ability of the people we serve and to help them earn honest respect from the buyers they serve.  In sales and life, the way you win matters.  It matters to the people you serve on your revenue team, and it matters to your customers.

An emerging category of AI called Generative AI constructs content (e.g., images, presentations, emails, videos).  It was just named a disruptive sales technology by Gartner.  They stated that “By 2025, 30% of outbound messages from large organizations will be synthetically generated.”  What risks do you see from this technology?

There are two immediate risks that come to mind.  First, messages can go out without human review.  The technology has made extraordinary leaps forward.  I’ve spent a fair amount of time playing around with some of the tools released by OpenAI and others.  The output is impressive and also, at times, very wrong.  This goes back to the fact that the output is based on a probability that the answer provided is correct.  You can get a very professional, persuasive email, or you can get something that approximates a professional email but won’t land well with your intended customer.  Messages need to be reviewed by a human before they are sent.
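
To show what a human-in-the-loop guardrail could look like, here is a hedged sketch in which an AI-drafted message cannot be sent until a person signs off.  The Draft type and send function are hypothetical, not any real email API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    to: str
    body: str                          # AI-generated text, freely editable
    approved_by: Optional[str] = None  # set only after a human reviews it

def send(draft: Draft) -> None:
    # Hard gate: no AI-generated message leaves without human review.
    if draft.approved_by is None:
        raise PermissionError("draft requires human review before sending")
    print(f"Sending to {draft.to}: {draft.body[:40]}...")

draft = Draft(to="buyer@example.com", body="Hi, following up on our conversation ...")
draft.approved_by = "rep@example.com"  # the seller edits and signs off
send(draft)
```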

Second, it has the potential to make every outbound message sound the same.  Generative AI doesn’t replace the need for human skill.  It changes the areas of focus for that skill.  Specifically, the opportunity for humans is to use Generative AI to help generate a higher volume and variety of ideas and then to edit and refine the output.  The returns available to creativity are always high, but they become even higher when everyone is doing the exact same thing in the same way.

Having said that, I see tremendous potential in the technology and think that, used properly, it will be very valuable to revenue professionals.

Salesloft CEO Kyle Porter has long emphasized authenticity and personalization in sales conversations.  Do you see Generative AI potentially undermining trust?

Kyle is absolutely right.  At the end of the day, a sale happens when a seller connects with a buyer to help them solve a problem.  You can’t do that without authentic connection and trust.  Generative AI should not replace that human connection, and I don’t think buyers want it to replace human connection.  A close friend of mine was a sales leader at a now-public PLG-driven SaaS company.  They added sales reluctantly.  When they did, the company learned that buyers both bought more from them and were happier customers.  That company now wishes it had added sales much earlier. How we interact with one another can evolve as technology evolves, but it doesn’t change the fact that humans are wired to connect with each other.  I think emerging tools like Generative AI will help us be more productive, but they won’t replace the need for authentic human connection and trust.