Recruitment

Deel’s Alan Price on balancing AI, culture, and human judgement at scale

Price outlines how Deel embeds culture into hiring itself — through dedicated culture interviews and live onboarding sessions that connect new joiners globally and turn “together everywhere” into a lived experience

As AI increasingly shapes how candidates are screened, assessed, and shortlisted, talent leaders face a critical question: how do you scale hiring without losing fairness, judgment, or humanity? 


In this in-depth conversation, Alan Price, Global Head of Talent Acquisition at Deel, explains how the company uses AI to eliminate low-value work while keeping humans firmly responsible for decisions that shape careers and culture. He also discusses why transparency with candidates is non-negotiable, how to avoid bias at scale, and what responsible AI adoption looks like in real-world hiring.


Read on for Price's exclusive insights:


Q. Many organisations talk about hiring efficiency, but what specific friction or failure points in traditional hiring convinced Deel to rethink the model?


When you’re handling 1.3 million applications a year for 3,000 roles, you quickly discover that traditional hiring cannot scale to meet this level of demand. Before we reworked our model, recruiters were spending the bulk of their time on manual CV review and admin instead of advising hiring managers. Candidates in high-volume markets across APAC could wait weeks without a meaningful update. 


Perhaps most damaging, manual screening rewarded résumés with prestigious logos and whoever happened to be in the first 150 applicants, rather than consistently surfacing the best-fit, high‑potential talent from non‑traditional backgrounds. We didn't just adopt AI to chase efficiency metrics; we adopted it because the old model couldn't deliver quality, fairness or a coherent global talent strategy at the pace the business and candidates deserved.


Q. As AI increasingly shapes early screening and interviews, where do you believe technology should stop and human judgment must take over? Where do you see AI enhancing decision-making rather than just accelerating it?


Today, AI is essential in eliminating tedious manual tasks and is an effective digital support for human-led decision-making. At Deel, AI is used to manage high-volume admin tasks like screening, role rematching and generating interview notes so recruiters can focus on conversations, but it never makes final hiring decisions. Intuition matters, and humans must own critical judgments on motivation, values alignment and offers.

That said, AI can enhance the decision-making process by surfacing overlooked candidates, flagging interview inconsistencies and providing insights that boost diversity and interviewer calibration. 


The recruiter of the future interrogates AI output, distils it into useful insights for global hiring managers and applies human judgment that combines experience and data to shape company culture.


Q. With AI now influencing candidate outcomes, what level of transparency do you believe companies owe candidates about how they’re assessed, and how is Deel approaching that responsibly?


When AI is influencing candidate outcomes, companies owe people more than a footnote in the privacy policy; they owe clear, plain‑language explanations of what the technology does and where humans stay firmly in charge. 


Candidates should know which stages are AI‑assisted, what those tools are optimising for and where decisions are made by human recruiters and hiring managers. At Deel, AI is strictly an assistive layer: it helps us sift through a huge global volume of applications, surface matches we might otherwise miss, and generate better interview notes, but it does not make final hire or no‑hire calls. 


Q. Mass hiring initiatives often risk turning applicants into numbers. What does “respecting the candidate” look like when you’re hiring at a global scale?


At scale, it’s very easy for candidates to become just data points in a funnel, and that’s exactly what we’re trying to avoid. Respecting candidates means three things for us: first, making opportunities genuinely global - open roles that don’t default to where the talent sits or who’s in their network; second, being explicit about what the process looks like, including where AI is involved; and third, committing to closure, so people aren’t left hanging in a black box process. 


We’re still learning how to do this perfectly in very high-volume contexts, but the bar we hold ourselves to is simple: if we were the candidate, would this feel like a fair, honest use of our time?


Q. Accelerating hiring is appealing, but how do you protect long-term quality of hire, cultural alignment, and retention when speed is a core objective?


Speed and quality aren't trade-offs if you redefine what 'speed' actually means. For us, it's not about filling a seat faster. Rather, it's about eliminating the low-value work - manual CV triage, scheduling ping-pong, note-taking that eats into the time recruiters need for human-led judgment. 


We've cut initial screening time by up to 90 percent, which frees our team to focus on structured, skills-based conversations that actually predict performance and fit. A clear, standardised rubric defines what 'good' looks like before a single application lands, so AI and humans are calibrated to the same bar. 


Once someone accepts, we apply the same thinking to onboarding: remote-first, structured, and transparent. New hires get clear expectations, access to async resources that explain how we work across time zones, and deliberate touchpoints to build relationships with their team from the first weeks, so speed doesn’t come at the cost of belonging.


Q. Deel is referenced as a global-first, fully remote employer, but what are the lesser-discussed challenges of operating and scaling culture in such a model?


The lesser‑discussed challenge of being global‑first and fully remote isn’t tools or time zones – it’s that culture no longer happens by accident. In an office, you can rely on corridor updates, coffee runs and first‑day office tours; in our world, there’s no reception, no desk walkabout, so every moment of connection has to be designed. 


That’s why we start culture from the moment an offer is accepted, with e‑learning, video content and leadership messages that explain Deel’s mission, values and origin story, plus playful assets like our ‘joke playbook’ that swaps classic office instructions – ‘go to IT on the second floor’ – for remote equivalents like ‘head to the Ask‑IT Slack channel’. 


We then make values real through a dedicated culture interview in hiring and live onboarding sessions where new joiners from all over the world talk to each other, experience our ‘together everywhere’ value in practice, and see that a truly global team is something to celebrate. 


From there, we reinforce connection with smaller, high‑touch rituals – personal travel budgets to visit colleagues, local ‘Deel dinners’, and a global all‑hands roll‑up with a ‘work from anywhere’ photo competition – while treating Slack as the office and investing in internal communications so people feel informed and included even when they never share a physical space. 


Q. With AI, borderless hiring, and virtual recruitment becoming mainstream, what new capabilities do you believe the next generation of TA leaders must build?


Operating AI will soon be table stakes. The next generation of TA leaders will be distinguished by how well they interrogate it. That means building three capabilities you rarely see on a job spec today.

First, strategic fluency: understanding where AI actually adds value – top‑of‑funnel efficiency, smarter sourcing, interview insights – and where it should never be trusted to make final calls on its own.

Second, prompt craft: knowing how to frame the problem and ask the right questions so AI surfaces overlooked candidates.

Third, storytelling with data: translating AI‑generated insights into narratives hiring managers trust and act on. The recruiter of the future isn’t competing with algorithms; they’re the person who can explain an AI recommendation, spot when the model is drifting, and advocate for the candidate the machine might have missed.
