You found an AI detector API that claims 99% accuracy. The integration looks simple. But before you send your first resume to that endpoint, there is a question nobody on the vendor’s landing page is asking: what happens to the candidate data after you send it?
If you are processing candidate resumes through a third-party AI detection API, you are creating a data processing relationship that GDPR and other privacy laws have specific opinions about.
What you are actually sending
When your ATS calls an AI detector API, it sends the text content of a candidate’s resume — or a significant portion of it — to a third-party server. That text contains personal data: names, contact information, employment history, education, sometimes demographic details.
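To make that concrete, here is a minimal sketch of the kind of payload such an integration assembles. The endpoint, field names, and resume content are all illustrative assumptions, not any real vendor's API; the point is that everything in the payload leaves your infrastructure the moment you POST it.

```python
import json

# Hypothetical resume text as extracted by an ATS (illustrative data).
resume_text = (
    "Jane Doe\n"
    "jane.doe@example.com | +1 555 010 0199\n"
    "Senior Accountant, Acme Corp, 2019-2024\n"
    "BSc Accounting, State University"
)

# Hypothetical request body for a third-party AI detection endpoint.
payload = json.dumps({"text": resume_text})

# Name, contact details, and employment history are all in transit here:
# requests.post("https://detector.example.com/v1/scan", data=payload)
print(payload)
```

Note that the payload is personal data in its entirety, which is why the vendor's retention and security practices, covered below, matter as much as the detection result.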
Most AI detector APIs are not built for HR data. They were originally designed for academic plagiarism detection and content marketing. Their privacy policies, data retention practices, and security standards were written for essay-checking, not candidate screening.
Before you integrate, read the vendor’s privacy policy as if you were a data protection officer. Because that is who will be asking questions if something goes wrong.
GDPR: the basics you cannot skip
If you hire in the EU or EEA, or process the data of EU residents, GDPR applies to every system that touches candidate data. Here is what that means for an AI detector integration:
Lawful basis: You need a legal reason to process candidate data through the detector. The two common options are legitimate interest (GDPR Article 6(1)(f)) and explicit consent. Legitimate interest requires you to document why detection is necessary and proportionate. Consent requires the candidate to agree before their resume is scanned — and that consent must be specific, informed, and revocable (GDPR Advisor, 2026).
Transparency: Candidates must be told that their data will be sent to a third-party AI detection service, what data is shared, why, and how long it is retained. Your privacy notice needs to name the purpose explicitly. “We may use third-party tools” is not specific enough.
Data processing agreement: GDPR Article 28 requires a written agreement between you (the data controller) and the API vendor (the data processor). This agreement must specify what data is processed, for what purpose, how long it is stored, and what security measures are in place. If the vendor does not offer a DPA, that is a red flag.
Recent keyword data from DataForSEO Labs (United States, English) shows “ai resume screening” at roughly 390 monthly searches. Many of those searchers are evaluating tools without considering the compliance layer underneath.
US privacy laws: not just a European problem
GDPR gets the headlines, but US privacy requirements are expanding. Several states have passed or proposed comprehensive privacy laws:
- California (CCPA/CPRA): Requires disclosure of data sharing with third parties and gives consumers the right to opt out of the sale or sharing of personal data.
- New York City Local Law 144: Requires bias audits for automated employment decision tools. If your AI detector contributes to a hiring decision, it may fall under this law.
- Illinois AI Video Interview Act and BIPA: Impose specific requirements on biometric data and AI tools used in hiring.
The regulatory landscape is moving toward more disclosure, more auditing, and more candidate rights. Adding an unvetted third-party API to your pipeline increases your surface area for compliance issues.
Five questions for the API vendor
Before signing a contract, get written answers to these:
- Where are the servers? If the vendor's infrastructure is outside your jurisdiction, cross-border data transfer rules apply. For EU data sent to US servers, you need Standard Contractual Clauses or an adequacy decision.
- How long is candidate data retained? Some APIs cache input text for model improvement. If your candidates' resumes are stored on the vendor's servers indefinitely, that conflicts with data minimization principles.
- Is candidate data used for model training? Many AI companies use input data to improve their models. If resume text is used for training, you are sharing candidate data for a purpose that has nothing to do with hiring — and almost certainly outside the scope of your stated lawful basis.
- What security certifications does the vendor hold? SOC 2 Type II, ISO 27001, or equivalent. If the vendor cannot demonstrate security standards, you are sending personal data into a system you cannot audit.
- Will the vendor sign a GDPR-compliant DPA? If the answer is no, or "we don't usually do that for this product tier," walk away.
The data minimization alternative
GDPR’s data minimization principle says you should process only the data necessary for a specific purpose. An AI detection scan processes the entire resume text to answer one question: “Was this written by AI?” That is a lot of personal data for a low-value signal.
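One practical minimization measure — a sketch of one possible approach, not a prescribed implementation — is to strip direct identifiers before any text is shared with an outside service. The regex patterns below are deliberately simple and would need hardening for production use:

```python
import re

# Illustrative redaction patterns (assumptions, not production-grade PII
# detection): a basic email matcher and a loose phone-number matcher.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace emails and phone numbers with placeholders before sharing."""
    text = EMAIL.sub("[email]", text)
    text = PHONE.sub("[phone]", text)
    return text

resume = "Jane Doe | jane.doe@example.com | +1 555 010 0199"
print(redact(resume))  # direct contact identifiers no longer leave your systems
```

Redaction reduces exposure, but it does not change the underlying point: a full-text scan still processes far more data than a targeted qualification check needs.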
A criteria-based approach processes less data for a more useful result. CriteriaMatch evaluates specific qualifications — certifications, work rights, language skills, experience thresholds — against defined requirements. The screening is targeted, the result is actionable, and the data processing is proportionate to the purpose.
You do not need to send entire resumes to a third-party server to decide if a candidate meets your requirements.
For more on building hiring workflows that respect both candidate rights and team efficiency, see our guide on responsible AI in recruiting.
Map the compliance before you sign
The AI detector API market is moving fast. Vendors are selling urgency — “AI resumes are flooding your pipeline” — without addressing the privacy infrastructure required to use their products responsibly.
Before you evaluate accuracy, evaluate compliance. The technical integration is the easy part. The data governance is where most teams realize the cost is not worth the signal.