A recently enabled feature in our services caused an unexpected increase in source integrity breakdown rejections on Facial Similarity reports. These rejections were raised by our payload integrity verification, a check used to flag network injection attacks.
This anomaly resulted in 5K reports being incorrectly flagged as “consider”. Impact was limited to customers using the Standard and Photo Fully Auto variants of Facial Similarity reports who also leverage the Device Intelligence report.
We determined the root cause to be a race condition in the live photo upload module of the Smart Capture Web SDK which, for clients with the Device Intelligence product enabled, could cause an incorrect payload signature to be computed.
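The SDK internals are not public, so the following is a minimal sketch of this class of failure under assumed names (`signPayload`, `collectDeviceIntel`, `uploadLivePhoto`, the `X-Payload-Signature` header, and the `/live-photo` endpoint are all hypothetical): an asynchronous device intelligence task mutates the payload in place while the upload path signs and serializes it, so the bytes that reach the server can differ from the bytes that were signed.

```typescript
// Minimal sketch of the failure mode; all names are hypothetical, since the
// actual Smart Capture Web SDK internals are not public.

interface UploadPayload {
  metadata: Record<string, unknown>;
}

// Illustrative HMAC-SHA-256 signature over the serialized metadata.
async function signPayload(payload: UploadPayload, key: CryptoKey): Promise<string> {
  const bytes = new TextEncoder().encode(JSON.stringify(payload.metadata));
  const sig = new Uint8Array(await crypto.subtle.sign("HMAC", key, bytes));
  return btoa(String.fromCharCode(...sig));
}

// Hypothetical device-intelligence task that enriches the payload in place.
async function collectDeviceIntel(payload: UploadPayload): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 50)); // stand-in for async signal gathering
  payload.metadata.deviceIntel = { screen: `${screen.width}x${screen.height}` };
}

async function uploadLivePhoto(payload: UploadPayload, key: CryptoKey): Promise<void> {
  // BUG (sketched): the enrichment task is started but not awaited before
  // signing, so signing races against the in-place mutation it performs.
  void collectDeviceIntel(payload);

  const signature = await signPayload(payload, key);

  // If collectDeviceIntel resolves between signing and serializing here,
  // the uploaded bytes no longer match `signature`, and the server-side
  // integrity check rejects the report as a suspected network injection.
  await fetch("/live-photo", {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-Payload-Signature": signature },
    body: JSON.stringify(payload.metadata),
  });
}
```

Because the mutation only happens when Device Intelligence is enabled, and only wins the race some of the time, this is consistent with the partial, client-specific impact described above.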
2025-12-10 11:12 GMT: A code change was rolled out to start collecting new device intelligence data as an additional fraud prevention measure.
2025-12-11 10:50 GMT: A spike in payload integrity flagging was identified, which initially looked like a rise in network injection attacks. It turned out to be a false signal for fraud, but a true signal that something was wrong with our deployment. Identification took longer than usual for two reasons: a misconfiguration in our alerting stack meant on-call personnel were wrongly not paged for these alerts, and the number of affected reports was low because of the progressive rollout strategy we used to deliver the feature incrementally. To reduce impact, the rollout had targeted only a portion of traffic on our latest Web SDK version, further limited to the Standard and Photo Fully Auto variants of Facial Similarity reports for clients that leverage the Device Intelligence report.
2025-12-11 12:00 GMT: The code change and the feature flags used for the rollout were reverted.
2025-12-11 16:52 GMT: All the affected reports were rerun.
2025-12-15 15:42 GMT: After a lengthy investigation that revealed the problem to be a race condition, a fix was implemented (not yet deployed).
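The actual fix has not been published; a minimal sketch of the usual remedy for this class of bug, reusing the hypothetical names from the sketch above, is to await all enrichment tasks before signing, and to sign exactly the serialized bytes that will be uploaded:

```typescript
async function uploadLivePhotoFixed(payload: UploadPayload, key: CryptoKey): Promise<void> {
  // Wait for enrichment to finish before anything is signed.
  await collectDeviceIntel(payload);

  // Serialize once, and sign exactly the bytes that will be sent, so no
  // later mutation can desynchronize the body from its signature.
  const body = JSON.stringify(payload.metadata);
  const bytes = new TextEncoder().encode(body);
  const sig = new Uint8Array(await crypto.subtle.sign("HMAC", key, bytes));

  await fetch("/live-photo", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-Payload-Signature": btoa(String.fromCharCode(...sig)),
    },
    body,
  });
}
```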