
Camera integrity certification for facial biometrics and physical documents attestation #13

Open
Guar1s opened this issue Sep 16, 2022 · 5 comments

Comments

@Guar1s

Guar1s commented Sep 16, 2022

We are Allowme, a business unit of Tempest Security Intelligence, a Brazilian cybersecurity company with more than 22 years in operation. Allowme's mission is to help companies protect the digital identities of their legitimate customers through a complete fraud-prevention platform.

Through our Security by Design culture, AllowMe has become the most trusted platform on the market, protecting valuable data and the reputation of innovative businesses.

We facilitate faster and more accurate decision making, optimizing flows to scale business sustainably.

Threat Context
Security mechanisms are needed to attest and protect digital identities. To achieve this, many authentication factors are used, including facial biometric authentication, which is becoming more common.

One of the main attack paths compromising facial validation is the use of external mechanisms to inject third-party photos and videos instead of the native camera feed, impersonating someone else. These are known as face spoofing attacks.

An engaged attacker could easily collect a target's social media photos and use them to open accounts at digital banks in LATAM on the target's behalf, without the owner's consent or authorization. Know Your Client (KYC) processes for financial services usually require many documents, making document picture upload a mandatory feature, which has become a possible pathway for identity falsification attacks and fake account creation.

Proposal
Provide a safe and reliable way to know whether the photo of a face or of a physical document was taken by a physical native camera or not. In other words, the proposal is a way to attest the underlying method used to capture that data (the picture), in order to prevent spoofing.

Privacy implications and safeguards
No PII is used to identify the origin of the picture.

Safeguard #1
The API could return only a single bit indicating whether the image was captured through the native physical hardware or not.
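To make the single-bit safeguard concrete, here is a hypothetical shape such an API might take. No such Web API exists today; the interface and function names below are purely illustrative, not a proposed specification.

```typescript
// Hypothetical, illustrative API shape only -- no such Web API exists today.
interface CaptureAttestation {
  // The single bit: true only when the user agent attests that the frame
  // came from physical camera hardware, with no other metadata exposed.
  readonly fromPhysicalCamera: boolean;
}

// How a relying page (e.g. a KYC upload flow) might gate on that one bit.
function acceptUpload(attestation: CaptureAttestation): boolean {
  return attestation.fromPhysicalCamera;
}
```

Because only a boolean crosses the API boundary, the verifier learns nothing about the camera model, device, or user beyond hardware-origin yes/no.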

@dvorak42
Member

Does this sort of capability primarily need to be used in the case where the physical device is actively being streamed/used or do a lot of these flows allow a user to upload an image/video that they may have taken before or via a different device?

One concern with this capability: given that the attacker has to be fairly skilled and have control of the device, even if they use a real device, they could just run an extension or malware on the device that changes the contents of the image being transmitted over the wire. You'd likely need something where each camera/video device signs or attests to the image data coming from that individual camera module, which would be a fairly expansive ecosystem challenge. Even then, an attack where the real camera is pointed at a screen showing the "fake" image might bypass most of the value of this capability.
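The per-device signing idea above could, in outline, mean verifying a tag over the raw frame bytes before trusting them. A minimal sketch, using Node's `crypto` module with an HMAC shared secret standing in for the camera's key (a real design would use an asymmetric key held in a secure element, provisioned at manufacture; everything here is an illustrative assumption, not an existing mechanism):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative stand-in for a per-device key provisioned at manufacture.
// A real camera module would sign with a private key in a secure element.
const DEVICE_KEY = Buffer.from("per-device-secret-provisioned-at-manufacture");

// "Camera" side: tag the captured frame bytes at the hardware boundary.
function signFrame(frame: Buffer): Buffer {
  return createHmac("sha256", DEVICE_KEY).update(frame).digest();
}

// Verifier side: accept the frame only if the tag matches, so any bytes
// injected or modified after capture fail verification.
function verifyFrame(frame: Buffer, tag: Buffer): boolean {
  const expected = createHmac("sha256", DEVICE_KEY).update(frame).digest();
  return tag.length === expected.length && timingSafeEqual(tag, expected);
}
```

Note that this only binds the bytes to the key holder; it does nothing against the analog hole (pointing the real camera at a screen), which is exactly the residual attack described above.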

Can you go into the sorts of threat actors you tend to see, so we can understand where their capabilities to bypass this sort of measure lie? Is this mostly on devices they fully control, or via malware/intrusion into real users' devices?

@Guar1s
Author

Guar1s commented Oct 24, 2022

@dvorak42 the main threat to avoid is a local change of the camera's image source. Many security systems have implemented facial biometrics and physical document capture, so fraudsters inject fake images stolen from the internet to impersonate someone else.

@dvorak42
Member

We'll be briefly (3-5 minutes) going through open proposals at the Anti-Fraud CG meeting this week. If you have a short 2-4 sentence summary/slide you'd like the chairs to use when representing the proposal, please attach it to this issue otherwise the chairs will give a brief overview based on the initial post.

@npdoty

npdoty commented Oct 28, 2022

I suspect this will be hard to provide, and it generally runs counter to our Web platform design to prevent users from choosing the images they want to provide to a particular API. It seems both hard to guarantee against attackers and, if provided, potentially very harmful in the ways it could be deployed against people.

@dvorak42
Member

From the CG meeting, there's also some concern that to fully protect here you'd need hardware-level support, since there are real hardware devices that allow injecting images/video feeds at the hardware level, and fraudsters are likely to have those devices or be able to buy them. Clarity on how prevalent the case is where the attacker is skilled enough to make local changes but not to mount hardware-level attacks would be useful.

Work involving getting hardware vendors to provide an attestation would likely fall out of scope for the CG, but if there's a use for the weaker camera attestation that UAs/clients could afford without hardware support, that would maybe fall in scope, though there are also the concerns raised about DRM and the harm of requiring users to use specific technology.
