Building the Core Levers of AI-Ready HR: A Conversation with Laura Maffucci of G-P

As organisations scale AI adoption, responsible use is becoming critical. Laura Maffucci highlights why human judgment, critical thinking, and validation remain essential in an AI-driven workplace.

AI is no longer a distant disruption; it is a present reality reshaping how work gets done, how roles are defined, and how organisations think about talent. As it accelerates change, it also raises complex questions around ethics, trust, and responsibility, from bias and surveillance to data privacy.


For HR leaders, the challenge is no longer just adoption; it is intent. How do you move beyond reactive workforce strategies and actively shape the responsible development and use of AI?

In this episode of People Matters Unplugged, Cheshta Dora, Head of Content, Research and Community at People Matters, speaks with Laura Maffucci, VP and Head of HR at G-P. Known for her pragmatic yet forward-looking approach, Laura brings a grounded perspective on navigating ambiguity, enabling adoption, and building organisations that balance innovation with accountability.


AI: Inevitable, Imperfect, and In Need of Intent


Laura doesn’t romanticise AI, and that’s precisely what makes her perspective valuable.


“No matter how you feel about a new technology, you have to embrace it. It’s a let go or be dragged situation,” she says, drawing parallels with the early days of social media. But unlike previous waves of disruption, AI is moving faster and reaching deeper into how we live and work.


At the same time, she’s clear-eyed about its limitations. “I have more concerns than anything about it. But it’s not going away, so the question becomes: how do you use it in the right way?”


This balance, between acceptance and caution, emerges as a defining theme. AI, in her view, is not just a capability shift, but a mindset shift. One that requires organisations to rethink not only skills, but judgment.


The Real Risk: Not Technology, But Readiness


While much of the conversation around AI focuses on talent shortages or job displacement, Laura points to a more immediate gap: the ability of organisations to help people process and adapt to change.


“The biggest risk isn’t failing to hire AI talent. It’s failing to help people understand and work with what’s already here.”


Many organisations, she observes, rushed into AI adoption with a top-down mandate ("everyone has to use AI") without answering a more fundamental question: what problem are we trying to solve?


At G-P, the approach was more measured. The company established an AI council early on, not to drive adoption, but to build guardrails around policy, security, and ethics. From there, the focus shifted to enablement: internal experts leading sessions, prompting workshops, and creating ongoing learning mechanisms embedded in daily work.


The takeaway is clear: AI readiness isn't built through one-time training. It's built through continuous, contextual learning.


Culture Before Capability: Creating Psychological Safety


One of the most overlooked barriers to AI adoption, Laura argues, isn't fear of job loss; it's fear of judgment.

“People worry, will I be seen as less smart? As lazy? Or will more work just be piled onto me because I can do things faster?”

Addressing this requires more than tools; it requires trust.


At G-P, this translated into deliberate cultural signals: open Slack channels to share use cases, simplified approval processes for new tools, and even “AI Awesomeness Awards” to recognise experimentation.


These aren’t just engagement tactics. They are mechanisms to normalise curiosity and reduce hesitation.

“People need to feel safe to be curious,” she says. “That’s what drives adoption.”


Learning in the Flow of Work, Not Outside It


If there’s one area where traditional HR approaches fall short, it’s training.


“This isn’t something you can teach in a one-time session and expect people to go use it,” Laura notes. “AI is the perfect example of learning in the flow of work.”


At G-P, employees are encouraged to explore, experiment, and even bring in new tools, all within a structured framework. In one instance, a benefits team member independently leveraged AI to build complex comparison tools, simply because the environment enabled her to do so.


What stands out is not the technology itself, but the autonomy and learning agility behind it. “The skill we need most isn’t technical expertise, it’s curiosity and the ability to learn fast and apply it.”


The Ethics Imperative: Why Human Judgment Still Matters


As organisations scale AI adoption, the question of responsible use becomes unavoidable. And here, Laura is unequivocal: human judgment cannot be outsourced.


“AI can be wrong, and often is. You have to question it, validate it, and apply discernment.”


She points to a critical blind spot: many AI outputs are shaped by unreliable or informal sources. Without understanding where information comes from, organisations risk building decisions on shaky foundations.


This makes critical thinking, not coding, the most essential skill in an AI-driven workplace.

“We’ve lost some of that critical thinking as a population. Now we need to get it back.”


Beyond Hype: Rethinking Roles, Tools, and Talent


AI is also reshaping how roles evolve, especially at the entry level. Interestingly, Laura believes organisations may not need to redesign roles as aggressively as they think.


“The people coming into these roles already think and work with AI. It’s second nature to them.”


However, this doesn’t reduce the need for intentional design. From emerging roles in ethics and compliance to the growing complexity of tool ecosystems, organisations must strike a balance between flexibility and focus.


“You may need more tools than before, but you also need strong oversight to keep them relevant and not overwhelming.”


The parallel she draws is telling: just as benefits evolved from quantity to personalisation, AI tools will need to follow the same path, from abundance to relevance.


A Measured Approach to an Accelerating Future


If there’s one principle Laura returns to, it’s this: stay grounded.


“AI isn’t going to solve everything. You have to be measured in your approach and never lose sight of critical thinking and discernment.”


It's a reminder that while AI may be accelerating change, leadership still comes down to clarity of intent. For HR, that means moving beyond experimentation toward shaping how AI integrates into culture, capability, and decision-making.


The Monday Morning Action


For leaders wondering where to begin, Laura keeps it simple:


“Focus on how you can encourage and enable people, while staying strong on security and discernment.”


In a space evolving as rapidly as AI, there are no fixed answers. But as this conversation highlights, the organisations that will move forward are not the ones with the most advanced tools, but the ones creating the right conditions for people to learn, question, and adapt.


Because in the end, AI readiness isn’t just about technology. It’s about how humans choose to use it.
