AI’s role has grown far beyond automation, reshaping how organizations generate insight and intelligence. That expansion brings new risks, but also new opportunities to use AI in ways that are ethical, human-centered, and effective.
That balance is familiar to Shawn Overcast, general manager of enterprise solutions at Explorance. The global feedback and insights platform operates in more than 50 countries across five continents and has long focused on using AI to surface human potential, not replace it.
During a thought leadership spotlight titled “Responsible AI as the Intelligence Layer: Turning Employee Sentiment Into a Strategic Advantage” at From Day One’s Miami conference, Overcast shared her company’s history with AI and detailed how the technology can, and should, be used responsibly.
Explorance began research and development in machine learning about eight years ago. “We were at this before AI was cool,” said Overcast.
Explorance launched its AI-centric solution, MLY, three years ago. Short for machine learning that answers the question why, the tool was deliberately developed to inform decisions about people, as well as the actions taken with them, Overcast said.
MLY reflects Explorance’s approach to responsible AI. The employee sentiment analysis tool helps organizations make sense of open-ended feedback, turning employee comments into insight and competitive advantage without losing the human context.
Challenges With AI
“With great potential comes great risk and great challenge,” said Overcast.
Some problems with AI, says Overcast, include bias, transparency, data fragmentation, skill gaps, and privacy. “The data, the algorithms within AI, are only as fair as the data that it’s trained on,” said Overcast. “So, if there is bias in our hiring models, in our promotion data, for example, then the AI algorithm will carry with it inherent bias.”

The transparency challenge stems from the black box problem: the inability to trace why an AI system produced the results it did. Tracing results back to their source is often difficult, but doing so is essential for understanding and trusting them.
Another AI challenge is data fragmentation. “I’ve been at this for a long time and that has always been a problem,” said Overcast. “Working with data silos is a real thing, a real challenge in our organizations, but it also presents a real challenge with being able to integrate all of that together.”
Skill gaps also present a problem with AI use. “This is a real challenge for some organizations, because it’s not necessarily what we hired for,” said Overcast. “We hired more for the people aspect of the role, or the process aspect of the role, but not necessarily how to adopt new technology quickly.”
Lastly, privacy is a critical issue with AI. Employee information must be protected, and businesses have to be cautious about how the data they collect is used.
Weighing these challenges against what AI can provide, Overcast said they shouldn’t cause organizations to step back; rather, they offer insights that can help us proceed more thoughtfully.
The 7 Principles of Responsible AI
As the HR team stands at the intersection of innovation and responsibility, it’s important to know how to pursue responsible AI.
There are seven principles of responsible AI: fairness and inclusion, transparency and interpretability, accountability and governance, accuracy and decision integrity, privacy and consent, purpose and human intent, and reliability and safety.
“AI is not just a technology conversation, it is an ethical conversation, it is a mindset that we need to have, and these help us with quality control about the information we use to make decisions about people,” said Overcast.
When pursuing fairness and inclusion, it’s important to make sure that AI amplifies every voice and that all employees are heard equally. Overcast offered an example of a global manufacturer that wanted to run a sentiment analysis across its manufacturing plants worldwide.
Using AI-powered multilingual analysis, the company discovered that a work group at a Spanish-speaking plant was struggling with workload and wellness. In the past, if there were no Spanish-speaking employees on the main HR team, that insight couldn’t have surfaced quickly, because the data had to be translated and analyzed separately. Now, with AI, all employee comment data goes through the same process, and decisions can be made at the same time.
Transparency and interpretability address the black box problem: data goes in and results come back, but we don’t understand why, or where a recommendation comes from. Questions may arise about the sentiment, the topic, or the tone. When using AI, it’s important that every recommendation is ultimately traceable back to the source comment. It’s vital to be able to trust the data.
The last responsible AI principle Overcast discussed was privacy and consent. It’s vital to protect employee data, and there are ways to do so with AI. Redaction, for example, is one way to safeguard employee privacy. It’s important to ensure the organization is protected, too.
Wherever you are in your AI journey, Overcast advises keeping the seven principles of responsible AI front and center. That includes educating teams on AI’s limitations and recognizing that, while powerful, it is not always accurate. Transparency and human oversight are essential, and responsible AI principles should guide every stage of how the technology is used.
Kristen Kwiatkowski is a professional freelance writer covering a wide array of industries, with a focus on food and beverage and business. Her work has been featured in Eater Philly, Edible Lehigh Valley, Cider Culture, and The Town Dish.
(Photos by Josh Larson for From Day One)
The From Day One Newsletter is a monthly roundup of articles, features, and editorials on innovative ways for companies to forge stronger relationships with their employees, customers, and communities.