By: Marissa Geist

Episode 16 summary

April 21, 2026 // 35 min, 13 sec

In an AI-everywhere world, who decides how work gets done to meet business goals, by human or by AI? To HR, this may look like a headcount question. To IT, a tech investment decision. Bridging the divide will demand new skills and expertise. Enter the Chief Work Officer.

In this episode:

Cielo's CEO, Marissa Geist, welcomes Jason Scheckner, head of strategic customer engagement and AI at Workday, a Cielo partner. Together, they consider the different kinds of roles and thinking required to capture value in a blended world of humans and AI.

  • Who is best placed to shape your organization’s workforce and build a new ecosystem that blends human with AI capability?
  • On our journey to 2050 and the Chief Work Officer, how can HR and IT collaborate more effectively right now to get the best from their people and their AI investment?
  • Will success come from thinking of AI as labor, not simply tech?
  • How can we change up the conversation from AI as human replacement to appreciate the opportunity AI gives us to create new and different value?
  • If AI frees us from the transactional parts of our jobs, what are the opportunities to upskill and elevate the human experience and contribution in the workplace?
  • Will we ever work for an agentic boss? Will the Chief Work Officer themselves be an AI agent?

Episode transcript

Marissa Geist 

Welcome to the Talent Time Machine, the podcast for talent leaders that takes you on a trip to the world of work in 2050. We're going to think about trends, possibilities, and new realities for talent. I'm Marissa Geist.

It's 2050 and there's a new face at the exec board. Welcome, Chief Work Officer. Gone are the days when CHROs planned their human workforce and CIOs planned their tech investment.

The Chief Work Officer has stepped into the void between them. It's their job to structure how the work a business must perform to meet its goals will be delivered, human and digital.

With me to discuss the different kinds of roles and the thinking we will need in an AI-everywhere future is Jason Scheckner, head of strategic customer engagement and AI at a partner of Cielo’s, you might have heard of them, Workday. Jason, welcome.

Jason Scheckner 

Thanks for having me, Marissa.

Marissa Geist 

I'm going to kick it off by actually contradicting what I just said in the opening, which is there's a new face around the board table, because when we talked about this, you said, don't think of a role, don't think of a person, but more of an idea of the Chief Work Officer. So can you unpack that a little bit?

What's the thinking around that?

Jason Scheckner 

Yeah. And really I can't take credit for this at all. This is born out of a lot of thinking from one of my leaders, Athena Karp, and how we're thinking about this interesting intersection. I mean, you have a Chief Human Resources Officer, CHRO, we're familiar with that role. We all know about CIOs and a variety of other C-suites.

What's interesting is that, and I am sure like me Marissa, you meet with a ton of companies and they've organized their strategy differently, and they have their tech sometimes in the domain area under HR or sometimes all the tech roles under IT and the CIO.

Regardless, we have this problem, which is that as AI gains more agency… so as we think about agents as digital workers, you have an interesting question, which is: is it an IT and a CIO thing, or is it a CHRO thing?

And then once you begin to ask that question, you understand well what we're really talking about is when we think about digital work, we're talking about work in general, which is human and machine.

And then if we really want to understand that we have this sort of intersection between the two, that's the work that's getting done by the company in order to execute their business objectives, whatever their goals are, shareholder value, etc.

And so we're left with this problem of work. And so the Chief Work Officer, whether actual or theoretical, is really about that intersection and collaboration that needs to happen between the people side of the business and the technology side of the business in order to navigate the new territory that we're in today of digital work.

Marissa Geist

It's fascinating, even just defining what AI is, or how it is perceived and reconciled in how we think about work. We love to talk about work and describe our work, how we do it versus why we're doing it. “So what's the value I create? How do I get the work done?” is the conversation that we typically have.

I was actually thinking about this idea in preparation, thinking about the movie industry and pre-Netflix post-Netflix and how, you know, everyone is like “Netflix is going to kill movies.”

And it's like, if you think about it, yeah, as moviegoing, that changed. But at the time of the advent pre-Netflix, around 2000, there were about 300,000 workers in the entertainment or movie-making specifically part of the entertainment industry.

And then Netflix came along. And yes, it definitely changed the way we might see a movie, but it added 150,000 jobs to the movie creation industry. And it changed from studio-centric sort of jobs to just making movies in local communities and consulting and more gig workers.

So the humans found the work around the technology. It didn't decimate an industry, it actually expanded it and expanded a lot of crews and opportunities for people that maybe couldn't go to Hollywood to make it big, but had a passion about moviemaking.

So that whole intersection of forget the how, but what are you trying to deliver and how can AI deliver it for you versus being so locked up in the job title or so locked up in this is the way the humans do it, and this is what the technology does. How does the Chief Work Officer think about defining why you're doing something you're doing versus the way we talk about it now?

Jason Scheckner

Well, by the way, even if we ignore this whole technology problem for a second, I would say the average company, when we just talk about people, has a problem. If you ask a human, “How would you explain the work that you do?” a lot of people have difficulty explaining it.

I always joke about the example of my kids. I have two teenage daughters, and when they try to tell people what I do, they use words they've heard from me; they know words like “Workday” and they know “AI” and “agents.” But if they had to explain what I actually do, they have no idea.

And when I explain it, there's an example I use a lot. Have you seen the show Succession, Marissa?

Marissa Geist 

Yes. Of course.

Jason Scheckner 

Okay. I love that show. One of the great parts about Succession is that Logan is sort of trying out each of his kids to see if they can handle the work, and he's looking to see which of them can fit into it. And part of that problem is inherent in this idea of: how do I find the person to do what I do? How do I explain what I do and put it into words?

And I think that's a fundamental problem for humans, regardless of technology. I think technology exacerbates it, because technology often relies on structure, so it forces us to categorize, to take something unstructured and say, “How do we define that?” And then that puts pressure on the people.

So I think it's more important that we're asking the question than that we have the role defined. And what I tell customers every day is that, in the absence of a role or a person, the CHRO and CIO need to understand there's an inherent requirement for collaboration between those two organizations for companies to be able to stay ahead of this.

Marissa Geist 

I mean, I don't think that's new. I entered the world of human capital management many years ago, and my mom, she's very proud of me, but she has literally never been able to articulate a single job I've ever had. So I think it's a very human thing, like you said.

So now we're grappling with that intersection using constructs that we effectively made in the last 20 years. A Chief Human Resources Officer didn't even exist 15 or 20 years ago. So we're struggling in a moment we created ourselves, wondering why it doesn't reconcile with a new future.

And in the future, 25 years from now, humans know how to work with tech. AI is just your coworker; you're in a multiplayer universe of some humans and some AI doing the work, if we think of AI more as labor than technology.

So what is the Chief Work Officer looking at then? Are they looking at how we can work together more effectively? Or are they looking at changing the product set? What do you think when you fast-forward and get through the hiccups of us learning to work together?

Jason Scheckner 

It's a fascinating question because we have to sort of say, well, what we're trying to do is take organizational goals and then deconstruct that into the work that needs execution. And so what you’d probably start with is thinking about strategic planning and readiness of the business, which is okay, we have an objective. We need to work back from that.

And this has everything to do with investments, capital, headcount and so forth. And then you would say, “Okay, well how would we attack that?” Once we understand the output of work from the human side and the machine side, then we'd be able to start from a strategic planning standpoint, start to say, “Okay, what would that mean?”

“How much machine could I use there? Could I lever some of that up? Could I drill that down?” Certainly cost environments will always be a function of the decision, right? We don't operate in a limitless capital environment, so we have to drive business results at certain costs.

And so those become very interesting questions that are about that partnership. And that's where you have even the CFO and other executives involved in these decisions. There are risk factors to certain decisions, right?

If we decide to leverage more digital work than human work, what does that mean? What are the implications there? Globally that can have implications in certain markets. So I think this is all the kinds of questions that people need to partner on and think about.

But if we were skipping ahead, I view it as probably human-machine collaboration and defining that. I think it's the redefinition of roles and work, in partnership, obviously, with the CHRO and so forth. I think it's the strategic planning we talked about. And I think it's the democratization of skills: how does AI take new people and level them up quickly?

We can talk more about that, Marissa. And then finally, I think there's this whole piece that everybody needs to contribute to around ethical AI governance, responsible AI, whatever you want to call it. I think that's a big part of this as well, because if AI is going to be part of your workforce, we're going to need to care about that.

So I think those would be the big topics that we've thought about. I'm sure there's more I haven't put on that list.

Marissa Geist 

So if you think about the crux of somebody who's going to be a really great partner to this new capability, are there predominant skills that you see persevering? I think about my grandparents' era: it was great to memorize.

If you had a steel trap of a memory, you could get ahead because there were no references and no quick ways to go to Google to just figure out what you're doing.

That skill is not as relevant. I'm not going to say totally irrelevant, but it's not going to be a competitive advantage. So do you see that? Kind of like what goes away, what comes in? What are you thinking?

Jason Scheckner 

What I am seeing in the data right now is that work-related tasks requiring human agency are going to increase in value as more AI enters the workplace. What we are also seeing is an absolutely shrinking demand for things like information-processing skills. Why? Because the computers can do it better than we can. The machine can process not only volumes of data, but at a speed and precision that make it ineffective for the human to spend time on.

So you get emphasis on things that are pretty obvious, I think: interpersonal and organizational skills. And the good news is that high-agency skills actually span diverse aspects of work. So this isn't unique to one area, as if humans are going to be relegated to one type of work; there will still be areas where they can explore their curiosity.

But what we're definitely starting to see: a lot of people talk, Marissa, about AI replacing tasks, and I think that's a logical place for humans to start. The harder concept, and actually this is why the moment for HR is so big right now, is defining how humans get elevated. What I mean by elevated is: if there's work humans do today that they won't need to do anymore, it's not about replacing those humans, but about what we would have them do instead.

And that mapping of that elevation, or new work, or upskilling, however you want to think about it, I think that's the big question a lot of people need to be thinking about.

And you take each of those functional areas and try to figure it out: okay, we bring in a recruiting agent, or a development agent for coding, or a supply chain agent for analysis, or an anomaly agent for payroll, whatever it is. As we bring in agents that replace or augment the work people are doing, what is that elevation? How does that human rise up?

So I think that's the fascinating question. I don't have all the answers; if I did, I'd be a very, very wealthy man. But what I can tell you for sure is that right now is absolutely the time to define that. And I don't think enough people are actually thinking about it.

And I think this is kind of what we're talking about with this whole Chief Work Officer.

Marissa Geist 

So I think that's really fascinating, because as humans, we always see ourselves on top, right? And the focus is on the human side of it. But if I understand the premise correctly, Chief Work Officer means both sides: human and agent.

We pride ourselves on some things that we actually aren't that great at. I think about interviewing in our world of HR: we say we're so great at it, but we're notoriously horrible at it.

So what if the agents all of a sudden start to take over a part of the work, and they elevate it? Is it the work of the Chief Work Officer to make sure the agents are also growing as fast as the human side? How do you think about that balance?

Jason Scheckner 

So when we talk about the output of the agents, how an individual agent performs, the models and things underlying that, I think the output of the work is something everybody at the company needs to care about, the same way we do today. Right?

Like, you run a company, you care about the output of the work, and you change the definitions of the work that needs to be done, or the types of work and specialists you have, to meet the goals of the organization.

I think that's true of digital work too. We're going to try some things. We're going to see where agents do a good job, where people do better. We'll play with that. Maybe in specialized areas, we'll need humans to continue to do things.

Overall, what I see though probably, and I already see examples of this in sort of highly innovative companies because the good news is we have lots of customers, and you and I both have amazing customers and many mutual customers, they're out there sort of already trying to answer these questions.

And what I've seen so far is that they have said, okay, the humans will, for instance, start to supervise some of the output. Which, by the way, isn't a new concept. When robotics came to assembly lines, one of the things they had the humans who used to sit on those lines do was observe the output of the robots, to make sure the quality of the production was the same.

So it's not actually a new concept to say that we would actually start to evaluate that and evaluate the process output, and then look for optimization of the processes that are inputs to the machine. So I think those are the most logical places.

But then the other area that's kind of fascinating is the strategy work. Today, you're an executive, so hopefully you get to spend a decent amount of time on strategy. But a lot of people, day to day, don't get to think about strategy. It's a very small part of what they do. A lot of it is process orientation, and most of it's transactional.

And so, irrespective of all these very logical questions we ask as humans, what we would hope is that the human is freed up to spend more time on strategic work, which would include the skills behind strategic output for the company and growth initiatives.

Process is probably never going to go away, but you would hope that it shifts to maybe process reinvention versus just process adherence. And then transactional administration ideally should be pushed down as much as possible to the agents, either enabling humans to do more of what we just talked about or to actually execute the things we don't want the humans to do.

We have a triangle that describes this sort of value of human and agentic collaboration.

Marissa Geist 

So, two things I heard in there that I think have fueled the debate and the fear we definitely have currently. Our clients are on a pretty wide spectrum of how comfortable they are experimenting. In fact, we are just as likely to see a client experimenting now as waiting to see what everybody else does.

Like, everybody's all over the bell curve. One thing I think about with this idea of agents as labor: are they going to take the work? Are there going to be fewer jobs? I also have kids. I have four boys, so you can imagine the amount of laundry that is done in our house. And 100 years ago, people spent 10 to 12 hours a week doing laundry.

Now it's down to about 1.5 hours a week in the average household; I'll give mine three hours if the average is 1.5. But no one has less housework to do. We find the work. Humans are notorious for finding the work, and that is actually what makes us human: finding a better way and filling our time with better and greater tasks.

So in 2050, if all the administrative work, the places where people don't love to think and sit and work, is hopefully handled by the agents, what do you think humans will be doing? Are we going to be exploring more, inventing more? What do you think this looks like, a post-AI, post-agent world?

Jason Scheckner 

Well, by the way, I feel like you have secretly gotten a hold of some of my material here, because the washing machine is a good example of upper-limit innovation. It can replace the task, it can speed up the task, but it's not going to take on more and more. And what's interesting about AI is that it's actually unlike that upper-limit innovation we had in the past.

So if we think about the example you gave, yes, we'll find more work. But what's interesting is the AI can also find more work, and it will be able to take on more things much more quickly, with less of an upper limit, or potentially no limit at all, by comparison.

So I think that's the one thing that we have to reflect on. And which is again, why it's so important we have a moment to define this, because it's not like we just gain incremental benefit or some…

Marissa Geist 

It's not capped at how well we used to do it, right? Our thinking isn't the limitation at the top of it.

Jason Scheckner 

Exactly, exactly. And so that puts us in a new sort of paradigm here. And again, if we're not thinking about that, we're waiting to see what happens. I think that becomes a very dangerous moment for us.

Like I said, I think this is one of those pivotal moments where the innovation curve needs to be looked at differently, because we already see AI doing things that a lot of the machines we thought of in the past would be unable to do.

Marissa Geist 

It actually goes back to the second part of the question. I heard and saw Workday launching an agent onboarding module, because agents are just like any other workers. And assuming we have agents doing multiple tasks, then there are probably supervisory agents. We've seen the orchestration layer: agents directing other agents' work.

When we think about this whole ecosystem or the multiplayer universe, will humans have agentic bosses? Do I work for a manager that isn't a human? Do we ever see that as a good thing, a bad thing? Like, what's the interplay of management, hierarchy, decision making in 25 years? Mixed?

Jason Scheckner 

So first of all, we can absolutely imagine agents in our org chart. Why? Because they're executing a whole set of tasks and responsibilities that you would have hired others to do. Now, the interesting thing about something in your org chart is that, unlike a human, who can't hold multiple roles, an agent has the ability to do that across multiple orgs.

So let's just say you had an agent that was doing some sort of risk detection. You could actually deploy that across multiple departments. That same skill or agent being leveraged in different use cases could be a part of multiple org charts.

So that's fascinating to think about because that's not a human concept. The closest thing we have to that in humans would be like gigs or projects that we would take on. But the AI can take on projects at a scale humans can’t.

Marissa Geist 

Huge implications for org structures, right? Like, I have to define this role, because someone who's an account manager is very different from an accountant.

So we've defined it around our very human paradigms versus say, agentically, what's the best and highest use of this.

Jason Scheckner 

Yeah. And then there's the question of what is left for the human in those cases; there are all sorts of implications. That's why the opportunity for HR is actually huge, because there are comp implications, there are certainly upskilling implications, all the things we talked about before. In terms of agents that I report to, I think we have to deconstruct what it means to report to somebody.

Is that a human structure, or is it about something else? Somebody to approve my time off, an agent could do that. Somebody to give me a performance review, an agent could probably do some of that. Somebody to give me coaching, an agent can do some of that.

Marissa Geist 

You might actually like it. With all those psychological tools, people are more apt to listen to an agent than to another human, right? If I don't agree with what-

Jason Scheckner 

Well, yeah, there's fascinating data on the medical side about willingness to listen to an agent versus a doctor. So yeah, you're spot on, Marissa. What I'm sure of is that performance, whether you're digital or human, is critical. Now for humans, we do it for other reasons too, some subjective, in terms of career and progression and things like that.

But ultimately we care about the organization as we're tasked with hiring a set of people to perform an output, and the performance is a way to measure that. And we absolutely would need to do that with agents, especially as we consider cost and other implications, because, you know, agents could make mistakes.

They could do great work, but they could do it at a higher cost, maybe because it needs more processing or whatever. I don't know. I suspect that most of the time it'd be less cost. But again, these are things we'll have to evaluate the same way we would evaluate a human and make decisions on.

Marissa Geist 

Okay, so Jason, we talk a lot about the Chief Work Officer doing some tactical things, redefining our structures. But are we missing the point by trying to fit the agents or the AI into the way we work, and again imposing the limitation of what we can do today, versus thinking about the white space of what we could do with them that we cannot do on our own, that we never could imagine doing?

So what does that white space look like? How should we be thinking about that?

Jason Scheckner 

Well, I think it connects to the earlier point, which is that the unfortunate part of being humans is we think like humans sometimes, which is we tend to look at the way we work and then one of the logical things we do is we say, okay, what could the AI take that the human does, right. And we look at that as task replacement, for example.

So task replacement is a classic way; I mean, it's all over marketing materials: the human does this, 35% efficiency, the AI takes over that. Or they'll even go out of their way to say that the human is inefficient in these tasks.

So it really becomes a story of all of the things that the AI can do. And this is one of the reasons fear exists, actually, because we've actually stated that those are the things that we want the AI to take.

And so it's not a far jump for us to say, well, the AI will take all of our work. What I've encouraged a lot of leaders to think about is that AI has different scalability than humans. And why that's important is that it actually connects to this white space question you asked: there's inherently work that we would never have humans do, or even if we wanted them to do it, it would be too costly and ineffective for them to do it.

Marissa Geist 

Or risky. I always say that about Lasik; I had my Lasik surgery done. I cannot imagine having a human take a scalpel to my eye. I would much, much, much rather have a laser programmed by a robot.

Jason Scheckner 

It's a great example, and one example I use all the time that you'll appreciate because we both worked in the hiring business, is that every company in the world inevitably gets tons of applicants every year, and they hire some subset of that, right? They get a million applicants, they hire 20,000. So they're in the business of saying no to the delta, which is 980,000 people.

And so we all agree that it would be great for the human every time they post a job, to go back and look at all the people who’ve applied in the past and say, hey, are any of those any good for the next job?

Marissa Geist 

Silver medalist: the myth of recruiting, right.

Jason Scheckner 

Less waste.

Marissa Geist 

Still in there.

Jason Scheckner 

Still there. I always joke that it's conceptually the most fundamental HR promise anyone has ever made: I'll hold onto your resume and get back to you if a better job comes up. But we can't do it. It's just too costly time-wise, and we have so many other things. So even if we wanted to, or even if we started to, we can't do it at scale.

But an AI could go through a million people instantly and give you a shortlist if there are any. If there aren't any, it can let you know that too, and, by the way, tell you that probably means you have a sourcing problem or a hiring challenge ahead of you.

And if there are some, it can tell you: hey, this person applied two weeks ago, was a silver medalist, to your point, and is very interested in our company. It turns out, by the way, those people convert at a higher rate than any other source.

So it's a great opportunity for AI, and my question to leaders is: what are the AI opportunities, the AI white space, in all of your processes where you've given up on asking a human because it's just not scalable or too costly, and where you would say, but this is a great use case for AI?

And my theory is that there are actually a ton of those in business, which are either revenue opportunities or savings opportunities. And that's not scary, because the humans are never going to do that work; we wouldn't let them do that work. But AI is really well-suited to it.

So it's a great way to think about the opportunities for collaboration and growth space. You described that as “white space,” and I think that's a great way to talk about it.

Marissa Geist 

It's almost like if you asked a five-year-old how work should work, they're always focused on the positive and the coolest part of your job that you could do. And they're not bothered about the fact that I can't do this incredibly boring paperwork or these tedious tasks.

So it's almost like taking that wonder of how the work should be done, how the work is supposed to be done, and saying what can't we do now because it's too costly for a human to do, or it's just not practical?

And it turns out everything could be practical. It could still be too costly, but with the ultimate scalability of AI, it could be like picturing a workday the way you did when you were five years old, with a briefcase and a cup of coffee and your fabulous meetings where you're going to talk about new, cool things. That could be work, if you're doing it right.

Jason Scheckner 

I think you nailed it. And honestly, this is where we can even ask the AI to look and identify processes and opportunities. Humans come up with things that, again, we're sometimes not sure we should do. One where the jury's out for me is AI interview listening. I hear about this all the time, and there are vendors who do it and do it very well.

My problem, and I've been a hiring manager, is that I don't love to take notes during the interview, because I want to be present during the interview. So I get the use case. My issue is I don't actually have the time after the interview to go back and consume the recordings. Like, essentially every meeting I have today is recorded.

I don't really consume them. It's nice to have the data, so maybe the data is the valuable output. Maybe. I mean, there are compliance risks with the data and other things; you mentioned risk is a factor in all these decisions. But that's a simple example: sometimes we come up with ideas that would be great to give to an AI, and I'm not sure the value of the output is great, but it's an interesting experiment to see.

And again, it's probably a low-cost experiment given the quality of AI. So there are interesting places like that where I know, hey, it could be interesting, but I'm not sure what the output is on the other side. And I gave you two hiring examples mainly because, you know, this is about talent a lot of the time, but there are so many of these in other functional areas of the business as well.

Marissa Geist 

I'm sure it's a “just because you can, should you?” type of question: does someone find value when you actually do it? I think that's probably a big task for the Chief Work Officer until the end of time, right?

Humans have ideas of what will make their lives better, and then you put it in place and the outcome doesn't change, well then, was that the thing to be doing or not?

When you think about who gets to decide this is, you know, a very human thing, right? Who's making the decisions, who's structuring the questions, who's creating the value in the organization? How do you think that changes, is that the work of the Chief Work Officer?

Jason Scheckner 

I think governance is a big part of it, if that's what you're referring to, and process definition becomes part of that.

Marissa Geist 

Yeah, and value. I guess if you're co-creating a strategy with an agent, who gets to say, “This is the decision, and this is what we're going to place value on in the organization?” That still seems pretty human.

Jason Scheckner 

I think it's important that humans participate in that, especially until we have better evidence about what the agents are doing and the output of that: the bias and the content and the hallucinations. So I think we need human oversight for the foreseeable future.

There are already AIs that monitor the AIs, with layers over that. But whether it's orchestration layers, governance layers, or compliance layers, I think we need humans involved.

And I think one of the opportunities, and one of the new skills that will come out of this, is overseeing and evaluating the AI. There's already a lot of work underway there. Most companies I work with today have some version of an AI center of excellence that is cross-functionally evaluating AI across the company. So this kind of thing, whether that's, again… to me, it doesn't matter if it's under the Chief Work Officer.

I think this is an example of the cross-team collaboration that needs to happen. We're really talking about the same problem: the way work is defined may not be set up well, and that will need to be evaluated. And that includes how we think about AI as part of that team. So yes, where that happens and who that happens with, I don't know. Could it be this Chief Work Officer?

Probably maybe. Who knows? But it needs to happen.

Marissa Geist 

Okay, Jason. Well, we've been talking a lot about this concept of a CWO. What makes a good one?

Jason Scheckner 

So in all of the theories of where this person could come from, I think they could come from different sides of the business. You know, there are people from a legal side. There are certainly people from an AI side. You could say they need to understand the HR piece, the IT piece. Whatever it is, I think it's going to be somebody who clearly demonstrates aptitude across a lot of areas.

So they're going to have to have a deep interest in cross-functional business knowledge across those domains I talked about. Again, whether they're an HR person who is incredibly technical or a technical person who's incredibly people-oriented, it's got to be a hybrid of some kind.

Or you could have a CWO reporting under the IT side or under the HR side, potentially. So I think that's really important.

I do think that understanding human-machine collaboration means they have to have practical experience with AI, and be thinking about that. Otherwise, too much information can get missed in terms of the work design.

But I think this person has to be able to take apart the business objectives.

So I think there's also that business orientation. This person also has to be, very clearly, somebody who's a disruptor, not afraid to take risks, but also mindful of risks. So, an interesting dichotomy.

Marissa Geist 

I was going to ask, do you need a disruptor or an evangelist or a disrupting evangelist? I don't know if those exist. Is it more bleeding edge, or is this somebody who comes in as the first follower, do you think?

Jason Scheckner 

I could see both versions. And I think there will be multiple people who are different versions of those, and that's okay. We don't know what great looks like yet. We just know that there is a defined need. You know, the best examples I've seen so far are the couple of companies out there who have started to merge their HR and IT departments.

Moderna, for example, had a big article about this and how they're starting to take that leap. That's one example, directionally. We can find proxies for that in contingent and full-time workforces, where you have HR owning contingent work. Again, it's about contemplating the total work that needs to get done and the people.

So I think it's similar to that. What we know is that the people who do this work are going to need to be aware of all these things. And they're going to need to either do both themselves or collaborate effectively across those teams.

Marissa Geist 

That makes total sense, because it's not one role today, so you can't just lift and shift one person or one profile. It's kind of like when the COO job came up: where you pull from depends on what the company needs.

But I guess my biggest question then is does it have to be a person or could it be AI?

Jason Scheckner 

I think it'll be a person for now, but I reserve the right to come back and change that.

Marissa Geist 

Okay, noted, will reserve the right. We can place bets.

Okay. So thank you for the discussion. I think before we sign off, I do want to do a quick-fire round of what you think this role should be: AI or human? And we actually asked AI what AI thinks, but are you up for a game? If I tell you a role, can you say AI or human?

Jason Scheckner 

I'll do my best.

Marissa Geist 

Okay. No wrong answers, no future facts.

Jason Scheckner 

I'm not using AI. I'm not collaborating with AI to give these answers. So, I should qualify, these are purely human answers.

Marissa Geist 

100% pure human answers.

Jason Scheckner 

Limited by my own capability.

Marissa Geist 

Okay. Weekly grocery shopper.

Jason Scheckner 

I mean, that could definitely be AI, especially with all the delivery services. So yeah, AI for sure.

Marissa Geist 

AI agrees with you. Okay. Driving instructor.

Jason Scheckner 

Yeah. I mean, we're not too far away. Again, if you think about autonomous vehicles and where they are, I would actually say some of the AI instructors might even be kinder and gentler than some of the ones my kids have had over the years.

Now, I happen to think I'm a great driving instructor. I don't know if my girls would agree, but I think sadly, probably AI is better suited here.

Marissa Geist 

So AI agreed with you. Blended. It said that, to begin with, autonomous vehicles are a good representation of that. But if you want to learn old-school manual driving as a hobby, that's probably not great for the AI. Who knows. Okay, musician.

Jason Scheckner 

I mean, listen, I haven't listened to AI-produced music yet. I think it's like anything else within art and anything that's creative: there will always be things that are dynamically, uniquely human. And I think the opportunity here, actually, is to invent new things. Those things are always uniquely human.

I think AI can probably produce amazing things at a fidelity that's pretty good compared to what humans can do today. But it's like anything else, whether it's wine or, you know, any of these things: humans will gravitate toward the things that are unique. Maybe the AI version becomes commoditized, and the thing that is unique is the human one, and we ascribe value to that. I don't know.

Marissa Geist 

That's a really interesting lens to put on all of this actually. Okay, receptionist.

Jason Scheckner 

AI.

Marissa Geist 

AI concurs.

Jason Scheckner 

It's already there.

Marissa Geist 

Yes. Sometimes preferred, when you're just like, "just let me type into the screen." Okay. Project manager.

Jason Scheckner 

Yeah. I mean, I've got AI doing some project management for me right now. I've got a couple of workflow assistants keeping me on track with projects. So this is where that idea of elevation comes in. I think you'd want the humans to be the cohesive project managers, and then AI could execute day-to-day projects or sub-projects and things like that, at least for now.

But I think AI is definitely capable of staying on task with timelines and deliverables and alerting people.

Marissa Geist 

Okay, last one, politician.

Jason Scheckner 

You know, listen, I think this is one of those examples where the nuance of humanity comes in. Politics, I think, is a very human arena. For political decision making, there are a lot of use cases for AI, certainly in finding the best nonpartisan path to the right thing for constituents. But the politicians themselves, they'll be people for now.

Marissa Geist 

AI also said the same thing. More focus on research and speechwriting, but human politicians; it's an inherently people-centric role, I guess.

Thank you so much Jason. This has been an insightful conversation. Lots of things to think about, lots of changes that we can't even picture yet.

But I think there are a couple of really great concepts here: don't let your own humanity limit how you think about the coworkers of the future, and the Chief Work Officer is the person who figures out how the partnership between AI and humans can unlock things we can't even imagine doing today. This role matters for the future, but we probably need to be thinking about it today, because some of this is very real and very here.

Again, this has been Jason Scheckner, head of strategic customer engagement and AI at our partner Workday. Thank you Jason.

Jason Scheckner 

Thanks, Marissa.

Marissa Geist 

And join me next time on the Trip to Work 2050, on the Talent Time Machine. To catch the next episode or hear from my previous guests, be sure to follow us on your favorite podcast platform.

This episode was edited by Matt Covarrubias and produced by Dusty Weis at Podcamp Media, with the support of Sarah Smelik and John McCarron of Heavenly, and the team at Cielo of Sally Hunter and Annamarie Andrews.

Thanks for listening, I'm Marissa Geist.

About the experts

Marissa Geist headshot
Marissa Geist

Chief Executive Officer, Cielo

Marissa is the Chief Executive Officer of Cielo, the world’s leading global talent acquisition partner. She joined Cielo in 2015 as Senior Vice President of Global Operations, where she was instrumental in scaling Cielo’s delivery model.

LinkedIn connect
Headshot of Jason Scheckner
Jason Scheckner

Head of Strategic Customer Engagement for AI at Workday

Jason Scheckner is a 15-year industry veteran building HR Technology solutions. He has a strong track record in bringing TA Tech and HCM vendor solutions to market, including HiredScore, WayUp, and BountyJobs. He is an expert in Talent Acquisition, Talent Management, DEI&B, Artificial Intelligence for HR, and the broader HR Technology space.

LinkedIn connect