
How Apna Detects and Removes Suspicious Job Listings in Real-Time


The listing looked perfect. “Customer Support Executive, Work From Home, ₹18,000 to ₹22,000/month, Freshers Welcome.” Posted by a company called Nexus Digital Services. Logo looked professional. Job description was detailed. Salary was realistic for the role. A 21-year-old B.A. graduate in Indore applied within an hour of seeing it. He got a call the next morning. “Congratulations, you’ve been shortlisted.” Quick phone screening. Basic English questions. He answered well. Then: “Your training starts Wednesday. We just need ₹900 for your onboarding kit, employee ID, and login credentials.”

Nexus Digital Services didn’t exist. The logo was stolen from a real company’s website. The job description was copied, word for word, from a legitimate listing on another portal. The salary was realistic because the scammer had done their homework. Everything was engineered to pass the smell test long enough for the ₹900 to change hands.

That listing would have survived for days on a platform that doesn’t check. On Apna, it wouldn’t have made it past Tuesday afternoon.

This blog explains how. Not the marketing version. The mechanical version. What actually happens between a recruiter uploading a job listing and a candidate seeing it on their screen. Because fake job detection on Apna isn’t a feature someone turns on. It’s a set of systems running continuously, checking things that candidates would never think to check, catching listings that look real to the human eye but carry signals that only a machine scanning thousands of postings simultaneously can spot.


Why Fake Listings Have Gotten So Hard to Spot

The low-effort scams still exist. The ones with broken grammar and ₹50,000/month for data entry with no experience. Those are easy. A 16-year-old could spot them. They still catch people, but mostly through WhatsApp forwards and Telegram channels where nobody’s checking anything.

The ones that are actually dangerous in 2026 look indistinguishable from real job listings. And they look that way because scammers have gotten very good at reverse-engineering what real listings look like.

A scammer who wants to post a convincing fake listing for a customer support role doesn’t write the job description from scratch. They go to a legitimate job portal, find a real listing for a similar role at a real company, copy the description, change the company name, adjust the salary by ₹1,000 to ₹2,000 so it doesn’t match exactly, and post it. The result is a listing with proper formatting, correct industry terminology, realistic requirements, and a salary that passes the “does this make sense?” test. Because it was built from a real listing. The body is legitimate. The identity attached to it isn’t.

They’ll register a company name that sounds like a hundred other companies. Nexus Digital Services. Greenfield Solutions. PrimeConnect Global. Names generic enough that a candidate won’t bother verifying them. Names that return no Google results but also trigger no alarm because plenty of small, real companies also return no Google results.

Some go further. They build a one-page website. An about page with stock photos. A contact page with a form that nobody monitors. Just enough digital presence that a candidate who does a quick check thinks “okay, they have a website, seems fine.” The website cost ₹2,000 to build on a template. The scam will earn ₹50,000 before anyone catches it. The economics work in the scammer’s favour at every step.

This is why candidate awareness, while important, isn’t sufficient. A candidate can be careful, sceptical, and diligent, and still not catch a listing that was built specifically to pass all of those personal filters. The listing doesn’t trigger suspicion because it was constructed from real components. Only a system checking things that the candidate can’t check, like the recruiter’s Aadhaar record, the company’s GST registration, and the pattern of activity across hundreds of similar listings, catches these reliably.


What Happens Before a Listing Goes Live on Apna

This is the part that most candidates never see. The space between a recruiter creating a listing and that listing appearing in a candidate’s feed. On a platform with no verification, that space is empty. Recruiter posts. Listing appears. Done. On Apna, that space is where the first round of fake job detection happens.

The recruiter verification happens before the listing even enters the pipeline. We covered this in detail in the previous blog about Apna Safety, but the short version: the recruiter’s identity gets checked against Aadhaar. The company’s existence gets verified through GST, PAN, and CIN (Corporate Identity Number) records. If the person isn’t verifiable or the company doesn’t exist in government databases, the account can’t post. The most common scam setup, an anonymous person pretending to be from a fictitious company, dies at this step. Before a single listing is created.

But verification alone doesn’t catch everything. A scammer might use a real company’s registration details, a recruiter at a legitimate company might go rogue, or someone might pass verification and then change their listing content after approval. These scenarios require a second layer. And that second layer is where things get interesting.

The listing itself gets examined. Not by a person reading every word: at the volume Apna operates, with over 2.7 crore job applications facilitated in Q3 2025 alone, human review of every listing isn’t physically possible. By AI systems trained to recognise patterns that correlate with fraud.

What does the AI check? It’s looking for mismatches. A listing offering ₹35,000/month for a role that typically pays ₹15,000 to ₹18,000 in that city. That’s a salary anomaly. The AI knows what customer support roles in Indore typically pay because it has data from thousands of legitimate listings for the same role in the same market. When a new listing sits far outside that range without a clear reason (a senior variant, a specialised skill requirement), it gets flagged.
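Apna hasn’t published its detection code, but a salary-anomaly check of this general shape can be sketched in a few lines. Everything here is illustrative: the `salary_anomaly_score` function, the 3-sigma threshold, and the market data are assumptions, not Apna’s actual implementation.

```python
from statistics import mean, stdev

def salary_anomaly_score(offered: float, market_salaries: list[float]) -> float:
    """How many standard deviations the offer sits from the market mean."""
    mu = mean(market_salaries)
    sigma = stdev(market_salaries)
    if sigma == 0:
        return 0.0
    return abs(offered - mu) / sigma

# Hypothetical market data: customer-support salaries in one city.
market = [15000, 16000, 17000, 18000, 16500, 15500, 17500]
flagged = salary_anomaly_score(35000, market) > 3.0  # flag offers far outside range
```

A production system would use far richer market data (percentiles per role, city, and experience level), but the core idea is the same: measure how far an offer sits from the distribution of comparable legitimate listings.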

It checks for description patterns associated with scams. Phrases that appear disproportionately in listings that later get reported as fraudulent. “No interview required” in combination with “start immediately” in combination with a salary above market average. Individually, each of those phrases is fine. A real listing might say “start immediately” because the company needs someone urgently. But the combination of all three, weighted by the AI across historical data about which listings turned out to be fake, produces a risk score. High enough risk score and the listing gets held for review before it reaches any candidate.
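One hedged way to picture that weighting: each phrase carries a small weight, and co-occurrence adds a bonus, so no single phrase condemns a listing. The `RISK_PHRASES` weights and the combination bonus below are invented for illustration, not taken from Apna’s model.

```python
# Illustrative phrase weights; a real model would learn these from data.
RISK_PHRASES = {
    "no interview required": 0.30,
    "start immediately": 0.15,
    "registration fee": 0.50,
}

def phrase_risk(description: str, salary_above_market: bool) -> float:
    """Score a listing: individual phrases add a little, combinations add more."""
    text = description.lower()
    score = sum(w for phrase, w in RISK_PHRASES.items() if phrase in text)
    if salary_above_market:
        score += 0.25
    # Co-occurrence bonus: three or more independent signals together are
    # far more suspicious than any one of them alone.
    hits = sum(1 for p in RISK_PHRASES if p in text) + int(salary_above_market)
    if hits >= 3:
        score += 0.30
    return score
```

Note how a listing that says only “start immediately” scores low, while the same phrase combined with “no interview required” and an above-market salary crosses into flag territory.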

It checks whether the same job description has been posted under multiple company names. One of the lazier scammer techniques: copy the same listing text, post it as “Nexus Digital Services” on Monday, “Greenfield Solutions” on Wednesday, “PrimeConnect Global” on Friday. Same words. Different identity each time. A human scanning listings wouldn’t notice because they’d see each one individually. The AI sees all three simultaneously and recognises the duplication. Flagged.
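A minimal sketch of how duplicate-body detection might work: normalise each description, hash it, and look for fingerprints that appear under more than one company name. The `body_fingerprint` helper and exact-match approach are illustrative; a real system would likely use fuzzy similarity to catch near-duplicates with minor edits.

```python
import hashlib
import re
from collections import defaultdict

def body_fingerprint(description: str) -> str:
    """Hash of the description with case and whitespace differences removed."""
    normalised = re.sub(r"\s+", " ", description.lower()).strip()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def duplicate_identities(listings: list[dict]) -> list[str]:
    """Fingerprints whose body text was posted under more than one company."""
    companies_by_body = defaultdict(set)
    for listing in listings:
        fp = body_fingerprint(listing["description"])
        companies_by_body[fp].add(listing["company"])
    return [fp for fp, names in companies_by_body.items() if len(names) > 1]
```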

It checks whether the company’s claimed location matches the recruiter’s digital footprint. A listing says “office in Bangalore, Koramangala.” The recruiter’s verified details place them in a different city with no connection to Bangalore. That’s a geographic inconsistency. Not proof of fraud on its own. But another signal that, combined with others, pushes the risk score higher.

None of these checks are binary. The system doesn’t say “this listing is a scam” or “this listing is safe.” It generates a confidence score based on how many signals align with patterns that historically correlated with fraud. High confidence of legitimacy? The listing goes live immediately. Medium confidence? The listing gets queued for human moderation before appearing. Low confidence? The listing gets blocked and the recruiter gets flagged for review.
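The tiered outcome can be sketched as a simple routing function. The 0.3 and 0.7 thresholds below are placeholders; the actual cut-offs aren’t disclosed and would be tuned continuously against moderation outcomes.

```python
def route_listing(risk_score: float) -> str:
    """Route a listing by aggregate risk score (thresholds are placeholders)."""
    if risk_score < 0.3:
        return "publish"       # high confidence of legitimacy: goes live immediately
    if risk_score < 0.7:
        return "human_review"  # borderline: queued for moderation first
    return "block"             # strong fraud signals: blocked, recruiter flagged
```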

That layered approach matters because fake listings exist on a spectrum. Some are obvious scams that the system blocks instantly. Some are borderline cases that need a human to look at the context. And some are legitimate listings that happen to trigger one or two signals (a startup with no GST registration yet, a real company offering above-market salary for a hard-to-fill role). The system needs to catch the scams without blocking the legitimate postings. That calibration is ongoing, refined every week as new data comes in about which flagged listings turned out to be real and which turned out to be fraud.


What Happens After a Listing Is Live

Getting past the initial checks doesn’t mean a listing is safe forever. Some scams are slow burns. The listing looks clean at posting. The description is normal. The salary is reasonable. But 3 days later, the recruiter starts messaging candidates with “congratulations, you’ve been selected, just send ₹800 for your training kit.”

This is where the real-time monitoring layer kicks in. Apna’s AI system doesn’t just check listings at the gate. It watches what happens after.

Chat conversations between recruiters and candidates on the platform get monitored for patterns associated with fraud. When a recruiter sends a message containing phrases like “registration fee,” “send ₹” followed by a number, “security deposit,” “training charge,” “payment required before joining,” or “transfer to this account,” the system flags the conversation. Not after the candidate reports it. Before. The AI is reading the text patterns in real time. A human moderator reviews the flagged conversation. If the violation is confirmed, the recruiter is restricted, the listing is removed, and every candidate who interacted with that recruiter or listing gets notified.
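As a rough sketch, the gate for those phrases could be as simple as a pattern match over each message, with any hit handed to a human moderator. The patterns below mirror the phrases listed above; the real system is presumably far more sophisticated, and would need to handle multiple languages and transliterations.

```python
import re

# Patterns mirror the fee-request phrases described above (illustrative only).
FEE_PATTERNS = [
    r"registration fee",
    r"security deposit",
    r"training charge",
    r"send\s*₹\s*\d+",
    r"payment required before joining",
    r"transfer to this account",
]
FEE_RE = re.compile("|".join(FEE_PATTERNS), re.IGNORECASE)

def flag_message(text: str) -> bool:
    """True if a chat message matches any known fee-request pattern."""
    return FEE_RE.search(text) is not None
```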

That notification piece is something candidates don’t think about, but it matters. If a listing gets removed, the candidates who applied to it aren’t left wondering why the recruiter went silent. They get a notification explaining that the listing was found to be suspicious. That turns a confusing silence into useful information. And it reinforces the habit of applying through verified platforms instead of unverified WhatsApp groups, where a removed listing just vanishes and nobody tells you what happened.

The system also watches for velocity patterns. A recruiter who posts 15 listings in 2 days across different job categories and different company names. A recruiter whose listings consistently get very high application volumes but whose response rate to candidates is near zero (because they’re collecting data, not hiring). A recruiter who messages 50 candidates on the same day with identical text. Each of these patterns, on its own, could have an innocent explanation. A large staffing agency posts high volumes legitimately. But the combination, weighted against what the system has learned from thousands of confirmed fraud cases, produces a signal.
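Those velocity checks can be pictured as simple aggregations over a recruiter’s recent activity. The 15-listings-in-2-days threshold, the field names, and the sample data below are invented for illustration.

```python
from datetime import datetime, timedelta

def velocity_flags(posts: list[dict], now: datetime) -> list[str]:
    """Flag abnormal posting behaviour in a recruiter's recent activity."""
    recent = [p for p in posts if now - p["posted_at"] <= timedelta(days=2)]
    flags = []
    if len(recent) >= 15:                        # unusually high posting volume
        flags.append("high_posting_rate")
    if len({p["company"] for p in recent}) > 1:  # identity switching
        flags.append("multiple_company_names")
    return flags

# Hypothetical activity: 15 posts in two days under two different company names.
now = datetime(2026, 1, 1)
posts = [{"posted_at": now - timedelta(hours=i),
          "company": "Nexus Digital Services" if i % 2 else "Greenfield Solutions"}
         for i in range(15)]
```

Each flag alone is weak evidence, which is why a sketch like this would feed a combined score rather than trigger a block directly.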

And then there’s the feedback loop. Every time a candidate reports a listing or a recruiter, and the moderation team confirms the fraud, that data goes back into the AI model. The phrases the scammer used. The listing patterns they followed. The timing of their messages. The system gets slightly better at catching the next version of the same scam. Which is necessary because scammers adapt. They change their language when they realise certain phrases get flagged. They adjust their salary ranges when they see that extreme outliers get caught. The AI adapts back. It’s a continuous cycle, not a one-time build.


Where Candidates Fit Into the System

The automated systems do the heavy lifting. But candidates see things machines don’t. A machine reads text patterns. A candidate reads intent. A machine flags “send ₹800.” A candidate notices that the recruiter’s tone shifted from professional to pushy after the “selection” call. A machine detects salary anomalies across thousands of listings. A candidate notices that the “office address” the recruiter gave them doesn’t show up on Google Maps.

That’s why the reporting system exists. Not as a suggestion box. As an active detection layer that works alongside the AI.

Every listing on Apna has a report function. Every chat conversation has one. If something feels wrong, even if the candidate can’t articulate exactly what’s wrong, even if there’s no money request yet, the report enters the moderation pipeline. Multiple reports against the same recruiter trigger accelerated review. A recruiter who gets reported by 3 candidates in a week gets reviewed within hours, not days.
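That escalation rule can be sketched as a threshold over recent distinct reporters. The 3-reports-in-a-week cut-off matches the figure mentioned above; the `review_priority` function name, return values, and data shape are assumptions for illustration.

```python
from datetime import datetime, timedelta

def review_priority(reports: list[tuple[str, datetime]], now: datetime) -> str:
    """reports: (reporter_id, timestamp) pairs filed against one recruiter."""
    recent_reporters = {rid for rid, t in reports if now - t <= timedelta(days=7)}
    # Three distinct reporters within a week triggers accelerated review.
    return "review_within_hours" if len(recent_reporters) >= 3 else "standard_queue"
```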

During the Apna Safety pilot in August and September 2025, the system verified over 1,46,000 recruiters. Reported fraud exposure dropped 45%. Safety complaints dropped 60%. The platform’s Play Store rating climbed to 4.7. And more than half of the verified cases involved opportunities targeting freshers and first-time job seekers, the demographic most frequently targeted because they’re the most actively searching and the least experienced at spotting scams.

Fun Fact: According to the Indian Cyber Crime Coordination Centre, job-related scams were the largest single category of cyber fraud complaints in India in 2024, with estimated losses of ₹5,100 crore.

The candidate’s role in this system is simple but it matters: if something feels off, report it. Don’t wait until money is requested. Don’t wait until you’re sure it’s a scam. Report the feeling. The moderation team will determine whether the feeling was right. And if it was, every candidate who would have been contacted by that recruiter next week just got protected by 30 seconds of your time.


Fake job detection on Apna isn’t one thing. It’s a stack of things running simultaneously. Recruiter identity verification against government databases before any listing gets created. AI analysis of listing content checking for salary anomalies, description plagiarism, geographic mismatches, and known fraud phrase patterns. Real-time monitoring of recruiter-candidate conversations watching for fee requests and suspicious messaging behaviour. Velocity and pattern analysis tracking recruiter activity across the platform over time. And candidate reporting feeding human intelligence into a moderation pipeline that results in real consequences for confirmed violators.

Each layer catches a different type of scam. The identity verification catches the anonymous fraudsters. The listing analysis catches the sophisticated ones who use real-looking descriptions. The conversation monitoring catches the ones who pass the initial checks but reveal themselves when they ask for money. The candidate reports catch the edge cases that fall between the algorithmic cracks.

No system catches everything. Scammers adapt. They change language when old phrases get flagged. They build better fake websites. They find new ways to make fraud look like employment. But each adaptation the scammer makes forces them to invest more effort, take more risk, and operate under more pressure. And a scammer who has to work harder to look legitimate on Apna will eventually move to a platform where they don’t have to work at all. That migration, pushing fraud off verified platforms and onto channels where it’s someone else’s problem, isn’t the final solution. But it makes Apna materially safer for the 60 million+ people who use it to look for work.

₹5,100 crore lost to job scams in 2024. Every fake listing caught before a candidate sees it is a small piece of that number that doesn’t compound into next year’s total. That’s what the system is for. Not perfection. Prevention at scale.


FAQs About Apna’s Suspicious Job Listing Detection

How does fake job detection on Apna work? Through multiple layers running simultaneously. Recruiter identity is verified against Aadhaar before they can post. Company existence is checked through GST, PAN, and CIN records. Listing content is analysed by AI for salary anomalies, description plagiarism, and known scam patterns. Conversations are monitored in real time for fee requests and suspicious behaviour. And candidate reports feed into a human moderation pipeline that can block recruiters within hours of a confirmed violation.

Can a fake listing still appear on Apna? The system catches the vast majority but no detection system is 100% foolproof. Some sophisticated scammers pass initial verification and only reveal themselves through their behaviour on the platform. That’s why the real-time conversation monitoring and candidate reporting layers exist: to catch what the gate-level checks miss. During the pilot, the combination of all layers reduced fraud exposure by 45%.

What happens when a listing gets flagged? It depends on the confidence score. High-confidence fraud signals result in immediate blocking. Moderate signals queue the listing for human moderation before it reaches candidates. Low-confidence flags may result in the listing going live with the recruiter placed under elevated monitoring. Confirmed fraud results in listing removal, recruiter suspension or permanent blocking, and notification to all candidates who interacted with that listing.

How can candidates help detect fake listings? By reporting anything that feels suspicious through the in-app report function. Early signals matter: a vague job description, a recruiter who’s evasive about the company address, a listing that promises unrealistic pay. Candidates don’t need to be certain it’s a scam to report. The moderation team investigates. Multiple reports from different candidates against the same listing trigger accelerated review.

What if a scam listing appeared on WhatsApp or Instagram claiming to be from Apna? Verify the recruiter’s phone number through the Apna Safety lookup tool on the app or at apna.co/apna-safety. The system will show whether the recruiter is active (verified), blocked, or unregistered. If the result is blocked or unregistered, the person is not a verified recruiter on Apna regardless of what they claimed. Report the number through the tool and disengage.


