Edited by: Serena Raymond, Constantin Jacob, Mike de Libero
In an age where technology is rapidly evolving, the talent acquisition landscape faces a growing threat: a flood of deepfake and fraudulent candidates. Reflecting on the events of the past few years, it’s hard to believe that we once defaulted to trusting candidates’ authenticity, whereas today we put every candidate through a rigorous verification process to confirm they are legitimate.
At DNSFilter, we have seen fraudulent activity from deepfake candidates increase significantly in the past year. Our own experiences have forced us to develop a set of defensive tools, but given the global scale of this issue and the lucrative upside for those coordinating these attacks, we have partnered with other talent and security teams to verify the efficacy of our tooling. Through these partnerships we have also been able to validate our strategies and exchange information.
We hope to continue those partnerships and share what we’ve learned with the broader cybersecurity community. This post will discuss what candidate fraud is, why it is an urgent threat for any teams hiring technical talent, and what you can do to help mitigate the risk.
The Rising Threat of Candidate Fraud
Candidate fraud, defined as the intentional misrepresentation of oneself during the hiring process, is becoming increasingly prevalent. Gartner predicts that by 2028, 1 in 4 job candidates could be fake, though we assume that this may already be the case in certain industries. Let’s think about that statistic and let the panic sink in for anyone responsible for hiring top tech talent in the next few years. Cue the sleepless nights, panic attacks and uncontrollable sweating. Not only is this the biggest issue facing teams that hire technical talent, but it stands to reason that it will become more pervasive due to the proliferation of AI-generated profiles and sophisticated interview tactics. If candidate fraud wasn’t on your radar as a talent leader or hiring manager, it should be.
Fraudulent hires are not merely annoyances to the recruiting team; they can lead to severe consequences, including direct financial loss, security breaches, IP theft, malware installation, and reputational harm. Ask most talent leaders what keeps them up at night, and you’ll likely hear the fear of advocating to hire a candidate only to find out that the person isn’t real, that the new hire has malicious intentions, or, even worse, that the person you hired breached secure or highly confidential systems in your organization.
Why is This Happening and Getting Worse?
Several factors contribute to the rise in candidate fraud:
- Remote work and global hiring: The increase in remote work and virtual interviewing makes it easier for deepfake candidates to hide their true identity, location, and even the fact that someone else might be involved in the interview process. They can pass phone screens, video interviews, background checks, and virtual I-9 verifications with ease.
- Generative AI: AI tools enable the easy creation of fake profiles, tailored resumes, plausible interview answers, and deepfakes. Our very own CEO, Ken Carnesi, recently highlighted the proliferation of deepfake candidates on our dnsUNFILTERED podcast.
- Job market imbalances: High demand for certain skill sets, combined with economic pressures, can lead individuals or groups to use deceptive tactics to secure employment.
- Sophistication of bad actors: Organized crime and state-sponsored groups are exploiting the hiring process with tactics like identity theft and coordinated efforts to bypass security measures. Notably, North Korean IT worker scams have funneled $250MM - $600MM annually to the regime since 2018, often funding sanctioned activities. The misconception is that these are individual bad actors, but in reality they are oftentimes sophisticated operations with years of experience and resources fueling fraudulent activity.
Talent Teams as the First Line of Defense
Talent acquisition teams and hiring managers are crucial in defending against candidate fraud. As Benjamin Sesser, CEO of BrightHire, states: “Talent teams are on the front lines of security, whether they realize it or not.” It is essential to develop a “spidey sense” for potential fraud and to be vigilant during the hiring process.
At DNSFilter, our talent team has focused on risk mitigation, updating interview training and creating educational materials to share across the business. These efforts have enabled us to identify and weed out dozens of illegitimate candidates at the beginning of the hiring process.
Actionable Tips for Recruiters to Identify Deepfake Candidates
- Be vigilant during interviews: Pay attention to background noise, static in speech or video, whether the candidate appears to be reading from a script, and reluctance to turn on the camera. On the initial phone screen, note any background noise or lag suggesting an international call rather than a domestic one. Recruiters frequently work with U.S.-based candidates who have unreliable phone networks or unstable internet connections, and our team (and many others) has found a noticeable difference between a U.S.-based candidate who needs an internet boost and a fraudulent candidate claiming to connect from a U.S. city while actually calling in from an international location.
- Look for conflicting information: Watch for inconsistencies between the candidate's resume, social media profiles, and interview responses. This can also include inconsistencies between experience and salary expectations; for example, a candidate may present themselves as a senior-level employee but ask for compensation commensurate with a more junior role.
- Scrutinize online profiles: Check for newly created LinkedIn profiles with few connections or minimal activity. Use reverse image searches on profile pictures to check for stock photos or use by other individuals. Verify personal websites, LinkedIn, and GitHub profiles for basic or recently created information.
- Assess communication: Note any speech patterns that don't align with the candidate's claimed background. Be wary of interviews that sound like they are being conducted from a call center. Listen for unnatural pauses and delays.
- Observe behavior: Watch for a lack of eye contact, signs of being coached, or signs of reading from a script. Look for unnatural facial movements, poor lip-syncing, flickering around the face, and a lack of normal blinking or emotional expression. Full disclosure: these “telltale signs” are becoming less common as the schemes grow more sophisticated. It is getting much harder to spot these visual cues because AI is getting better at mimicking human interaction and the nation-state actors perpetrating these operations are getting more tech savvy.
- Reluctance for in-person interactions: Be wary of candidates who refuse an in-person interview or any work-related travel.
- Verify location: Check for mismatches between the candidate's stated location and the IP address they connect from, and use tools to identify VoIP numbers or VPNs. A minimal scripted check is sketched after this list.
- Require cameras on: Mandate cameras during video interviews and ask candidates to briefly turn off background blur if there is suspicion.
- Ask probing questions: Ask specific questions about their claimed location or delve deeper into project details mentioned on their resume, and note evasive or vague responses. Open-ended questions that require storytelling are a great way to probe into a candidate’s background, and non-routine questions force candidates to break away from prepared remarks and off-screen coaching.
- Verify provided numbers, email, and addresses: You can leverage publicly available information to see if accounts owned by the candidate have ever been associated with other individuals.
- Record interviews (with consent): Use tools to record interviews for review and to verify consistency in appearance and answers. Many basic recording tools are built into the operating systems your team already uses, so you can get started without waiting on tooling evaluations or budget approvals.
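To make the location check above concrete, here is a minimal sketch of how a talent-ops or security engineer might compare a candidate's stated location against the IP address they connected from. It assumes ipinfo.io as the lookup provider (any IP intelligence service with a similar JSON response would work), that your interview or ATS platform can surface the connecting IP, and that the hosting/VPN keyword check is a rough heuristic rather than proof of fraud.

```python
# Sketch: flag mismatches between a candidate's stated location and the
# geolocation of the IP they connected from. ipinfo.io is assumed as the
# lookup provider; swap in whatever IP intelligence service you already use.
import requests

# Keywords that often indicate a hosting provider or VPN exit node (heuristic only).
HOSTING_KEYWORDS = ("vpn", "hosting", "cloud", "datacenter", "digitalocean", "ovh")

def lookup_ip(ip: str, token: str | None = None) -> dict:
    """Fetch basic geolocation and ASN data for an IP address from ipinfo.io."""
    resp = requests.get(
        f"https://ipinfo.io/{ip}/json",
        params={"token": token} if token else {},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def check_candidate_ip(ip: str, stated_country: str, stated_city: str = "") -> list[str]:
    """Return human-readable red flags; an empty list means nothing stood out."""
    info = lookup_ip(ip)
    flags = []
    if info.get("country", "").upper() != stated_country.upper():
        flags.append(f"country mismatch: stated {stated_country}, IP resolves to {info.get('country')}")
    if stated_city and info.get("city", "").lower() != stated_city.lower():
        flags.append(f"city mismatch: stated {stated_city}, IP resolves to {info.get('city')}")
    org = (info.get("org") or "").lower()  # e.g. "AS13335 Cloudflare, Inc."
    if any(keyword in org for keyword in HOSTING_KEYWORDS):
        flags.append(f"IP belongs to a hosting/VPN provider: {info.get('org')}")
    return flags

if __name__ == "__main__":
    # 203.0.113.10 is a documentation-range placeholder; substitute the real
    # connecting IP surfaced by your interview platform or ATS.
    for flag in check_candidate_ip("203.0.113.10", stated_country="US", stated_city="Austin"):
        print("RED FLAG:", flag)
```

In practice, route anything this flags to manual review rather than auto-rejecting: corporate VPNs and legitimate travel produce plenty of innocent mismatches.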
Risks of Hiring Fraudulent Candidates
Hiring a fraudulent candidate poses significant risks:
- Financial loss: Salaries paid to underperformers or deepfakes and costs associated with rehiring.
- Security breaches & IP theft: Fraudulent hires may exfiltrate sensitive company data, customer information, or trade secrets.
- Malware and system damage: Fraudulent hires may install malware, such as ransomware, on company systems, including backdoors that let them retain access after they have been let go from the company.
- Negative team impact: Fraud-related hires are significantly more disruptive than typical mishires.
- Reputational harm: Fraudulent hires can devastate brand trust, especially when linked to data breaches or public incidents.
Mitigation Strategies and Best Practices
To combat these threats, organizations should implement the following:
- Robust vetting processes: Implement strong identity verification, including using specialized providers for sensitive roles, and cross-reference contact information with lists of known deepfake aliases.
- Enhanced interview procedures: Train hiring panels to recognize suspicious behavior, mandate cameras during video interviews, ask probing questions, and note evasiveness.
- Technical controls: Utilize tools to identify VoIP numbers and VPNs. Using a VPN for your business can also prevent deepfake candidates from collecting private information about your organization, such as location metadata. We recommend using Guardian Firewall + VPN.
- Do not download resumes as attachments: Resumes sent directly from unverified sources can be used to deliver malware. View candidate profiles and resumes through the applicant tracking system, and require referrals to be submitted through the ATS as well; recruiters should no longer accept referral resumes via Slack, email, or any channel other than the ATS. Consider safe browsing options and the use of a VPN for your talent team while accessing these resources, so that unintentional mistakes, like accidentally clicking a link, still leave your team's digital lives unharmed, for example by preventing their physical location from being exposed through the IP address used to access a resource.
- Partner with talent and security teams outside of your organization to share information: We maintain a growing list of unique identifiers tied to fraudulent candidates that we share with industry partners. These lists can be cross-referenced against data in your applicant tracking system; a simple sketch of that cross-referencing follows this list.
- Explore tools that detect deepfakes: There are new tools that can pair with video interviewing software that claim to detect deepfake activity. If it’s within budget, consider implementing something like this.
- Principle of least privilege: Limit new hires' access to systems and data, especially initially, and monitor for unusual activity. You might consider DNS filtering or behavioral analytics tools for this.
- Cross-functional collaboration: Equip frontline business units—from recruiting to sales—with shared threat context and clear escalation paths to surface anomalies early. Conduct routine training for hiring teams to educate them on the growing risk of fraudulent candidates.
- Secure equipment shipping: Only ship company laptops and equipment to verified residential addresses or use secure pickup locations.
- Require in-person onboarding: If feasible, request employees to be in-person for new hire training and/or team off-sites.
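As an illustration of the cross-referencing described above, here is a minimal sketch that checks an ATS candidate export against a shared list of identifiers (emails, phone numbers) tied to previously flagged fraudulent applications. The file names and column headers are assumptions; adapt them to whatever export format your applicant tracking system produces.

```python
# Sketch: cross-reference an ATS export (CSV) against a shared list of
# identifiers associated with previously flagged fraudulent candidates.
import csv

def load_known_identifiers(path: str) -> set[str]:
    """Load one identifier per line (email, phone, etc.), normalized to lowercase."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_candidates(ats_export_path: str, known: set[str]) -> list[dict]:
    """Return rows from the ATS export whose email or phone appears on the shared list."""
    matches = []
    with open(ats_export_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            identifiers = {
                (row.get("email") or "").strip().lower(),
                (row.get("phone") or "").strip().lower(),
            }
            identifiers.discard("")  # ignore blank fields
            if identifiers & known:
                matches.append(row)
    return matches

if __name__ == "__main__":
    # "shared_fraud_identifiers.txt", "ats_export.csv", and the "email"/"phone"
    # column names are placeholders; adjust them to your own ATS export.
    known = load_known_identifiers("shared_fraud_identifiers.txt")
    for candidate in flag_candidates("ats_export.csv", known):
        print("Review manually:", candidate.get("name"), candidate.get("email"))
```

A match here is a prompt for manual review, not an automatic rejection; treat the shared list as threat intelligence to prioritize scrutiny.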
Stay Vigilant and Proactive
Candidate fraud is an evolving threat that requires constant vigilance and proactive measures. By staying informed, training teams to spot red flags, implementing robust verification processes, and leveraging technology thoughtfully, Talent teams can protect their organizations from the risks of hiring fraudulent candidates.
Ensure you have visibility into your remote network and get a free trial of DNSFilter.