FOW Series: AI and the Future of Recruiting, Architecture, and Ethics of AI with John Sumser


AI, the hype, and the reality

Bill Inman, Executive Vice President at SmartSearch, sat down with HR Tech legend John Sumser, Co-founder and Principal Analyst at the HR Examiner, to talk about the hype and the reality of Artificial Intelligence, ethics, and risk.

This video is a part of our Future of Work series of conversations with our clients, vendor partners, industry advocates, and thought leaders on trends, challenges, and solutions for employers on how to build a better workplace. Watch the conversation here, or read excerpts in the summary below.

 

Let's start with the basics. What is AI and why should I care?

There's an academic definition, and then there's a real-world definition. The academic definition covers specific techniques used to consume data, find patterns in it, and make forecasts and predictions. The commercial definition is looser: anything with a sort of predictive capacity may as well be treated as AI, whether or not it meets the academic definition. So when something forecasts your behavior, forecasts an answer or a solution, or tells you how many candidates you need in the pipeline to make the hire, those are all examples of what I would think of as commercial AI.

Why is AI important in our industry now?

AI is primarily a way of taking data and turning it into insight and actionable information. What's important about AI is that this approach, in which the machine weighs in with an opinion on anything you've got, is here to stay, and we need to learn how to manage it. And you don't manage it by letting the machine just do whatever it wants. You manage it by realizing that the machine has an opinion that's as flawed as yours, and then doing what you do with any opinion: you get several of them, and then you arrive at the decision. You never, ever turn all the decisions over to a single decision-maker.
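The advice above, gather several opinions and never hand the decision to a single decision-maker, can be sketched as a simple majority vote. This is a minimal illustration, not any vendor's actual product: the two scoring functions and the candidate fields (`years_experience`, `skills_matched`, `recruiter_opinion`) are hypothetical stand-ins for real models and a human reviewer.

```python
from collections import Counter

# Hypothetical "opinions" on a candidate: two toy models and a human reviewer.
# Each returns "advance" or "reject"; any real model would be far richer.
def model_a(candidate):
    return "advance" if candidate["years_experience"] >= 3 else "reject"

def model_b(candidate):
    return "advance" if candidate["skills_matched"] >= 2 else "reject"

def human_review(candidate):
    return candidate["recruiter_opinion"]

def decide(candidate):
    """Collect several opinions and take the majority, rather than
    letting any single decision-maker settle the question."""
    opinions = [model_a(candidate), model_b(candidate), human_review(candidate)]
    winner, _ = Counter(opinions).most_common(1)[0]
    return winner

candidate = {"years_experience": 5, "skills_matched": 1,
             "recruiter_opinion": "advance"}
print(decide(candidate))  # prints the majority of the three opinions
```

Here the machine's opinion is just one vote among several, which is the point: no single opinion, human or machine, settles the question on its own.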

People forget that AI is a narrow intelligence used for very specific applications. How are the algorithms and AI models built?

The way a model is built is that it starts with a bunch of data. Scientists look at the data that's already available. A big difference between AI and the way science has traditionally been done is that AI starts with data rather than starting with a hypothesis. The data scientists look at all the data, and they imagine what the data might be able to give them as an inference. The way contemporary AI works is that you've got these tools that examine a great big pile of data. The AI looks for patterns, presents the patterns to data scientists, and the data scientists make the funnel tighter and tighter around a specific question. The good news is that you can derive a lot of insight that way. The bad news is that because there was no original hypothesis, all of the data being used is recycled, and there are no real standards for the quality of data that goes into the data-crunching process. So you get all kinds of weirdness around the edges of AI. The answers are not as good as they appear to be at first.
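The data-first workflow described above can be sketched in a few lines: start from a pile of data with no hypothesis, and let the machine surface whatever pattern is strongest. The columns and values below are made up for illustration, and the "pattern finder" is just a pairwise Pearson correlation scan, a deliberately tiny stand-in for real pattern-mining tools.

```python
import math
from itertools import combinations

# A "great big pile" of recycled data: no hypothesis, just columns of
# numbers. All values here are invented for the sake of the sketch.
data = {
    "years_experience": [1, 3, 5, 7, 9, 11],
    "interview_score":  [52, 60, 71, 78, 88, 95],
    "commute_miles":    [30, 4, 22, 9, 16, 2],
}

def pearson(xs, ys):
    """Plain Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# The machine scans every pair of columns and surfaces the strongest
# pattern; a data scientist would then tighten the funnel around it.
best = max(combinations(data, 2),
           key=lambda pair: abs(pearson(data[pair[0]], data[pair[1]])))
print(best, round(pearson(data[best[0]], data[best[1]]), 2))
```

Note what's missing: nothing here checks where the data came from or whether the pattern means anything, which is exactly the quality problem described above. A strong correlation in recycled data can be pure coincidence.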

Can AI be biased? What can AI do, or not do, to address this issue?

There are two primary ways that bias enters into AI. The first is in describing the problem. When you describe a problem, you impose a bias on it, and that bias runs through the entire life of trying to solve the problem, even if you didn't get the problem definition right, which is often the case. The second way bias enters is through the teams building the AI, and it relates to diversity. When teams do not have enough shared life experience to imagine all of the possible implications of their work, you get things like facial recognition systems that categorize some kinds of people as lower-status mammals rather than human beings. You get this weird stuff.

What are the risks of using AI in recruiting-related endeavors?

The biggest risk is taking humans out of the recruiting process. My mom was in recruiting. It's one of the reasons I'm in this industry to begin with. She was at Disney for a good ten years, so she got to watch the people she hired grow into good hires and good managers. And you know what? She's still friends with a lot of the people she hired, friendships she built by doing the job herself. When you take the corporate recruiter out of the process, or hand the job over to tools, you lose that layer of wisdom, the people part of it: who worked out, and why they worked out, which helps you make decisions in the future. We're going to need to infuse that level of wisdom about the "butterfly effect" of hiring somebody and how it works out for the whole organization.

We're seeing more people leave their jobs now than ever before in the Great Resignation. Couple that with recent stats suggesting that 80 million people will essentially have to find new careers by 2025 because of AI automation, a figure expected to reach 800 million by 2031, roughly a tenth of the world's population. AI was supposed to take humans out of dehumanizing jobs. For example, McDonald's is now automating its drive-thru window with AI to speak to customers and take orders, removing people from those jobs. Hopefully, those people can find something they're passionate about and love. We're on a slippery slope, and hopefully AI can bring us back to the point, which is finding the right person for the right job at the right time, which hopefully matches their passion. But we're displacing a lot of workers, and there are ethics around both areas.

What are some of the concerns around ethics in AI?

Do you believe that management can leverage AI ethically to make decisions? What machine learning does, exactly, is treat people like objects. There has to be a countervailing force. I think of that as ethics. It could just as easily be thought of as safety: the idea that AI shouldn't do harm. We need to be better at that than Google was. When they decided they would do no evil, they didn't really seem able to pull it off. With AI, we need to care enough about it that it doesn't make things worse.

As an advisor to a decentralized AI company where benevolent AI is the mantra, I think it's very important to have control over that. HR technology can be dehumanizing. We put things as simple as career sites between one person and another, and then you get a resume black hole, for example, and that makes recruiting and hiring a very un-human selection process. I hope that with AI in our industry, we're able to find that human connection again. That's the goal here, isn't it: to make things more human so we can make better, more ethical decisions.

For more insights from John, you can check out his blogs on his LinkedIn profile: John Sumser 

We hope you enjoyed the interview and watch out for the next segment of our Future of Work series.
