Blog · 02
Detecting fluorescence after wash: where vision AI wins.
Why post-wash fluorescence is harder than it looks. ~6 min read.
A Tier-1 supplier ships a batch of machined parts to an OEM. Two weeks later the OEM calls: there's residual machining oil on the components, and the safety-critical assembly process can't tolerate it. The supplier's QA team had no way to see the contamination at the wash line. By the time it surfaced at OEM intake, three days of production were on a quality hold.
What post-wash fluorescence inspection actually does
After a component is machined or formed, it carries oils, swarf, and other residues from the production process. For safety-critical parts — anything in the powertrain, brake, or steering paths — these need to be removed before assembly. Plants run a wash line for exactly this purpose, and the verification step is fluorescence inspection: the part is illuminated under UV, and any remaining oil residue (or, optionally, any fluorescent penetrant that has wicked into surface flaws) glows.
It's a deceptively complex inspection task. You're looking for two things at once:
- Cleaning failure — residual contaminant from incomplete wash. This is a process problem.
- Penetrant indications — actual surface defects (cracks, porosity) revealed by the dye penetrant. This is a part problem.
Both glow under UV. They look superficially similar. A naive system flags both as "fail" and the line ends up either over-rejecting good parts or under-rejecting bad ones, depending on how the threshold is set.
The complexity that breaks off-the-shelf vision
Three things make this inspection hard for typical vision systems:
- High dynamic range. The fluorescent indication is bright; the rest of the part is dim. Standard sensors either blow out the indication or can't see the part it's on.
- Geometry variability. Crankshafts, axles, gears, suspension components, brake discs — all go through the same wash line, and all look completely different under UV. A model trained on one component generalizes badly to the next.
- The base rate problem. Most parts pass. The model has to be calibrated for very low base rates of failure, where the cost of a false positive (line stop, manual re-inspection, OEE hit) is real, but the cost of a false negative (shipped contamination, customer reject, supplier rating drop) is much higher.
Most off-the-shelf machine vision systems fail on at least one of these three. Some cost ₹50L+ per line and still require a human to verify every flagged indication. The economics don't work for an Indian Tier-2 making components at single-digit margins.
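The base-rate point is worth making concrete. A quick Bayes' rule sketch, with illustrative numbers (the 0.5% defect rate, 99% sensitivity, and 98% specificity below are assumptions for the example, not measured figures from any line):

```python
# Sketch: why a low failure base rate dominates flag quality.
# All numbers here are illustrative assumptions, not plant data.

def flag_precision(base_rate: float, sensitivity: float, specificity: float) -> float:
    """P(part is truly bad | system flags it), via Bayes' rule."""
    true_pos = base_rate * sensitivity          # bad parts correctly flagged
    false_pos = (1 - base_rate) * (1 - specificity)  # good parts wrongly flagged
    return true_pos / (true_pos + false_pos)

# At a 0.5% failure rate, even a strong detector flags mostly good parts:
p = flag_precision(base_rate=0.005, sensitivity=0.99, specificity=0.98)
print(f"precision of a flag: {p:.1%}")  # roughly 20% -- about 4 of 5 flags are false alarms
```

This is why calibration matters more than raw accuracy here: at realistic base rates, most of what an uncalibrated system flags is good product, and every false flag costs line time.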
Where vision AI actually wins
Despite the complexity, this is one of the inspections where modern vision AI has a real edge over both manual inspection and traditional rule-based vision. Three reasons:
- Modern sensors and tone mapping handle the dynamic range. This was a bigger problem ten years ago than it is today.
- Per-component fine-tuning is fast. Once you have the pre-trained backbone, fine-tuning to a new component is a matter of days, not months.
- Anomaly classification, not just detection. A trained model can do what's hardest for the unaided eye: separate cleaning failures from penetrant indications, and rank both by severity.
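To illustrate the dynamic-range point: a minimal sketch of the tone-mapping idea using the classic Reinhard operator. This is not our production pipeline, just the principle — compress the bright indication into range instead of clipping it, so the dim part surface around it stays visible. The frame values below are synthetic:

```python
import numpy as np

def reinhard_tonemap(radiance: np.ndarray) -> np.ndarray:
    """Compress an HDR radiance map into [0, 1) with the classic
    Reinhard operator L / (1 + L). Bright values are compressed
    rather than clipped; dim background detail is preserved."""
    return radiance / (1.0 + radiance)

# Synthetic frame: dim part surface (~0.05) with one bright glow (~50.0),
# a ~1000:1 contrast ratio that would clip on a standard 8-bit exposure.
frame = np.full((8, 8), 0.05)
frame[3, 3] = 50.0

mapped = reinhard_tonemap(frame)
# After tone mapping, both regions fit in [0, 1) and stay distinguishable:
# the glow lands near 0.98 and the surface near 0.048.
print(mapped[3, 3], mapped[0, 0])
```

A sensor exposed for the glow would render the surface black; exposed for the surface, the glow saturates. Operating in a tone-mapped HDR space sidesteps that choice, which is what makes "bright indication on a dim part" tractable at all.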
Our pilot at an auto component wash line in Aurangabad has been running for four months. Cleanliness verification at 100% line coverage, with cleaning failures and penetrant indications classified separately and reported into the existing QA system. The Tier-1 supplier's OEM-mandated cleanliness target was met within eight weeks of go-live.
What we're not claiming
Vision AI does not replace cleanliness specs, ASTM standards, or Level II/III certified inspectors. Those are still the system of record. What it does is shift the inspection from sample-based to 100% coverage, and from end-of-shift batch reports to real-time alerting — without doubling the headcount.
If you're running a wash line and the next inspection step is a UV booth, book a pilot. We'll come look at your line, your components, and your defect rates. If we're not a fit, we'll say so.