PassiveFaceLiveness (Deprecated)

A smart camera that uses artificial intelligence to capture a reliable selfie of your user, detecting and rejecting spoof attempts such as photos and recordings. Ideal for your onboarding flow.

Required permissions

In the Info.plist file, add the permission below:

  • Privacy - Camera Usage Description: to capture the user's selfie
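In Xcode's property-list editor this permission appears as "Privacy - Camera Usage Description"; in the raw Info.plist XML it corresponds to the NSCameraUsageDescription key. A minimal entry (the description string is just an example):

```xml
<key>NSCameraUsageDescription</key>
<string>To capture the user selfie</string>
```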



First, instantiate an object of type **PassiveFaceLiveness**:

let passiveFaceLiveness = PassiveFaceLiveness.Builder(mobileToken: "mobileToken")
    // see the table below




String mobileToken

Usage token associated with your CAF account


.setPeopleId(peopleId: String?)

Identifier of the user, used to detect fraudulent profiles

No, only used for analytics

.setAnalyticsSettings(useAnalytics: Bool)

Enables/disables data collection for analytics

No, the default is true

.setStabilitySensorSettings(message: String?, stabilityThreshold: Double?)

Changes the default settings of the stability sensor. The threshold is compared against the difference between the last two acceleration readings collected from the device.

No. The defaults are "Keep the phone still" and 1.5, respectively
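As an illustration only (this is not the SDK's internal code, and the function name is hypothetical), a stability check against a threshold like the default 1.5 can be sketched as:

```swift
import Foundation

// Hypothetical sketch: the device counts as stable when the change between
// the two most recent acceleration magnitudes stays below the threshold.
func isDeviceStable(previousAcceleration: Double,
                    currentAcceleration: Double,
                    threshold: Double = 1.5) -> Bool {
    return abs(currentAcceleration - previousAcceleration) < threshold
}
```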

.setCaptureSettings(beforePictureInterval: TimeInterval?)

Changes the settings used for capturing the selfie. The parameter sets the time, in seconds, between the correct face fitting and the actual capture of the picture.

No. The default is 2.0 (seconds)

.setLayout(layout: PassiveFaceLivenessLayout)

Changes the capture masks for the success, failure, and neutral states.

Also allows you to change the sound and close buttons at the top of the screen


.setColorTheme(color: UIColor)

Changes the color of the sound and close buttons at the top of the screen, as well as the color of the popup buttons shown before each capture.


.enableSound(enableSound: Bool)

Enables/disables sounds and the sound icon in the SDK

No. The default is true

.showStepLabel(show: Bool)

Show/hide the label at the bottom center (which contains the step name)

No. The default is true

.showStatusLabel(show: Bool)

Show/hide the central label (which contains the status)

No. The default is true


Changes the default network settings (request timeout)

No. The default is 60 (seconds)

.setProxySettings(proxySettings: ProxySettings?)

Sets the proxy settings

No. The default is nil

.showPreview(_ show: Bool, title: String?, subtitle: String?, confirmLabel: String?, retryLabel: String?)

Enables/disables the capture preview. If show is true, after each capture the SDK shows a screen where the user can approve the capture or retake it. For the remaining parameters, pass nil to use the default value or a String for custom text.

No. The default is false

.setCompressSettings(compressionQuality: CGFloat)

Allows you to set the quality used in the compression process. By default, all captures are compressed. The method expects a value between 0 and 1.0 as a parameter, with 1.0 being the best-quality compression (recommended).

No. The default is 1.0

.setMessageSettings(waitMessage: String?, stepName: String?, faceNotFoundMessage: String?, faceTooFarMessage: String?, faceNotFittedMessage: String?, holdItMessage: String?, invalidFaceMessage: String?, multipleFaceDetectedMessage: String?, sensorStabilityMessage: String?, verifyingLivenessMessage: String?, eyesClosedMessage: String?)

Allows you to customize the messages displayed in the "status" balloon during the capture and analysis process.

No. The defaults are listed in the table below

.setPersonCPF(personCPF: String)

Binds a proof-of-life attempt to a CPF

No. The default is nil

.setPersonName(personName: String)

Binds a proof-of-life attempt to a name

No. The default is nil

.setManualCaptureSettings(enable: Bool, time: TimeInterval)

Enables/disables manual capture. The time parameter sets how long to wait before manual capture becomes available.

No. The default is disabled

.enableMultiLanguage(_ enable: Bool)

Enable/disable multi-language support

No. The default is enabled

.setVideoCaptureSettings(time: TimeInterval)

Allows you to enable and configure video capture.

No. The default is disabled

.setGetImageUrlExpireTime(expireTime: String)

Sets how long the image URL remains valid on the server before it expires. It accepts a time interval from "30m" to "30d".


  • setGetImageUrlExpireTime("30m"): To set minutes only

  • setGetImageUrlExpireTime("24h"): To set only hour(s)

  • setGetImageUrlExpireTime("1h 10m"): To set hour(s) and minute(s)

  • setGetImageUrlExpireTime("10d"): To set day(s)

No. The default is 3h
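The accepted strings can be validated client-side before calling the builder. The helper below is not part of the SDK; it is a sketch that converts the documented formats ("30m", "24h", "1h 10m", "10d") into seconds:

```swift
import Foundation

// Hypothetical helper, not an SDK API: parses expire-time strings made of
// space-separated tokens ending in "m" (minutes), "h" (hours) or "d" (days).
// Returns nil for strings that do not match the documented format.
func expireTimeInSeconds(_ value: String) -> Int? {
    let multipliers: [Character: Int] = ["m": 60, "h": 3600, "d": 86400]
    var total = 0
    for token in value.split(separator: " ") {
        guard let unit = token.last,
              let multiplier = multipliers[unit],
              let amount = Int(token.dropLast()) else { return nil }
        total += amount * multiplier
    }
    return total > 0 ? total : nil
}
```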

.setImageCaptureSettings(beforePictureInterval: TimeInterval!, enableManualCapture: Bool, timeManualCapture: TimeInterval)

Allows you to configure image capture. The beforePictureInterval attribute sets how long the user's face must stay fitted in the mask before capture. The enableManualCapture attribute enables or disables manual capture, and timeManualCapture sets how long to wait before manual capture becomes available.

No. Image capture is enabled by default. The default for beforePictureInterval is 2 (seconds), for enableManualCapture it is false, and for timeManualCapture it is 20 (seconds).

.setMask(type: MaskType)

Sets the type of mask used in the captures:

  • .standard, with the dotted pattern in the face format;

  • .empty, which removes the mask entirely.

No. The default is .standard

.setCurrentStepDoneDelay(currentStepDoneDelay: TimeInterval)

Delays the activity after the completion of each step. This can be used, for example, to display a success message on the screen right after the capture.

No. The default is disabled

.setEyesClosedSettings(threshold: Double, isEnable: Bool)

Allows you to customize the SDK's open-eye validation. The method takes isEnable to enable or disable the validation, and threshold, a value between 0.0 and 1.0.

No. The defaults are true and 0.5, respectively

.setResolutionSettings(resolution: Resolution)

Allows you to set the capture resolution. The method takes as parameter a Resolution, which has the following options:

No. The default is hd1280x720




  • Capture settings for output video and audio bit rates suitable for 3G sharing

  • Capture settings for output video and audio bit rates suitable for sharing over WiFi

  • Capture settings for high-quality video and audio output

  • Capture settings for high-resolution photo-quality output

  • Capture settings for 720p (1280 x 720 pixels) video output

  • Capture settings for 1080p (1920 x 1080 pixels) video output

  • Capture settings for 2160p (3840 x 2160 pixels) video output

.setStage(stage: CAFStage)

Allows you to choose the environment in which the SDK will run (production or beta). The method takes as parameter an enum CAFStage to select the environment:

No. The default is .PROD




  • .PROD: uses the Trust Platform production environment to register the SDK executions.

  • .BETA: uses the Trust Platform beta environment to register the SDK executions.

Each environment (beta and production) requires its own specific mobileToken, generated in the Trust platform of the respective environment.
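Putting some of the options above together, a builder configuration might look like the sketch below. The values are illustrative, only methods listed in this section are used, and the peopleId is a hypothetical example:

```swift
// Illustrative configuration only; replace "mobileToken" and the other
// values with the ones appropriate for your account and flow.
let passiveFaceLiveness = PassiveFaceLiveness.Builder(mobileToken: "mobileToken")
    .setPeopleId(peopleId: "user-123")               // hypothetical identifier
    .enableSound(enableSound: false)                 // mute SDK sounds
    .setCompressSettings(compressionQuality: 1.0)    // best-quality compression
    .setGetImageUrlExpireTime(expireTime: "3h")      // image URL lifetime
    .setStage(stage: .PROD)                          // production environment
```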



Below are the message parameters, their descriptions, and their default values:

waitMessage: String

Message displayed when SDK is in the process of opening


stepName: String

Static label present at the bottom of the activity

"Registro facial"

faceNotFoundMessage: String

Message displayed when the algorithm does not recognize a face

"Não encontramos nenhum rosto"

faceNotFittedMessage: String

Message displayed when the face is not fitted correctly to the mask

"Encaixe seu rosto"

faceTooFarMessage: String

Message displayed when there is a very small face

"Aproxime o rosto"

multipleFaceDetectedMessage: String

Message displayed when more than one face is detected

"Mais de um rosto detectado"

holdItMessage: String

Message displayed when the user is in the correct position for capture

"Segure assim"

invalidFaceMessage: String

Message displayed when proof-of-life verification rejects the selfie

"Não conseguimos capturar seu rosto. Tente novamente."

verifyingLivenessMessage: String

Message displayed during proof-of-life verification.

"Verificando selfie…"

captureProcessingErrorMessage: String

Message displayed when a processing problem or error occurs in the API response.

"Ops, tivemos um problema ao processar sua imagem. Tente novamente."

eyesClosedMessage: String

Message displayed when both eyes are closed.

"Não use óculos escuros e mantenha os olhos abertos."

After creating an object of type PassiveFaceLiveness, start the PassiveFaceLivenessController by passing this object as a parameter in the constructor:

let passiveVC = PassiveFaceLivenessController(passiveFaceLiveness: passiveFaceLiveness)
passiveVC.passiveFaceLivenessDelegate = self
present(passiveVC, animated: true, completion: nil)

Getting the result

To get the result, you must implement the PassiveFaceLivenessControllerDelegate delegate in your controller:

class YourController: UIViewController, PassiveFaceLivenessControllerDelegate {
    // MARK: - PassiveFaceLiveness Delegates
    func passiveFaceLivenessController(_ passiveFacelivenessController: PassiveFaceLivenessController, didFinishWithResults results: PassiveFaceLivenessResult) {
        // Called when the process finishes successfully
        // The results variable contains the data obtained
    }

    func passiveFaceLivenessControllerDidCancel(_ passiveFacelivenessController: PassiveFaceLivenessController) {
        // Called when the process is canceled by the user
    }

    func passiveFaceLivenessController(_ passiveFacelivenessController: PassiveFaceLivenessController, didFailWithError error: PassiveFaceLivenessFailure) {
        // Called when the process terminates with an error
        // The error variable contains info about the error
    }
}



The PassiveFaceLivenessResult object contains the fields below, each noted with whether it can be null:

image: UIImage?

Selfie picture taken, or the best frame extracted from the video when the video capture format is enabled.

Yes, in case of error

capturePath: String?

Path of the video on the device

Yes, in case of error

imageUrl: String

Url containing the jpeg selfie on our temporary server

Yes, in case of error

signedResponse: String

Signed response from the CAF server confirming that the captured selfie contains a real face (not a photo or video spoof). Use this parameter if you want an extra layer of security: verify that the response signature is intact; a broken signature is a strong indication that the request was intercepted.

Yes, in case of error or server unavailability

trackingId: String?

Identifier of this run on our servers. If possible, save this field and send it along to our API. This way we will have more data about how the user behaved during the execution

Yes, if the user sets useAnalytics = false or the analytics calls do not work

lensFacing: Int

Indicates which camera was used. Use PassiveFaceLivenessResult.LENS_FACING_FRONT or PassiveFaceLivenessResult.LENS_FACING_BACK to validate.
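The format of signedResponse is defined by the CAF server; assuming it is a JWT-style token (three base64url sections separated by dots, which is an assumption and not something this page specifies), its payload can be decoded as sketched below. This only decodes the content; real signature verification must use the signing key provided by CAF:

```swift
import Foundation

// Sketch under the assumption that signedResponse looks like
// "header.payload.signature" with base64url-encoded sections.
func decodeJWTPayload(_ signedResponse: String) -> Data? {
    let parts = signedResponse.split(separator: ".")
    guard parts.count == 3 else { return nil }
    // Convert base64url to standard base64 and restore stripped padding.
    var base64 = String(parts[1])
        .replacingOccurrences(of: "-", with: "+")
        .replacingOccurrences(of: "_", with: "/")
    while base64.count % 4 != 0 { base64 += "=" }
    return Data(base64Encoded: base64)
}
```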



PassiveFaceLivenessFailure is the superclass of the errors that shut down the SDK. To find out the reason, check the concrete class of the object with the **isKindOfClass()** method, equivalent to instanceof in Java and is in Dart:





  • The token entered is not valid for the corresponding product. Example: parameterizing "test123" as the token in the SDK builder.

  • A mandatory permission to run the SDK is missing. Example: starting the SDK without the camera permission granted.

  • Internet connection failure. Example: the user was without internet during the process.

  • An SDK request received a failure status code. In theory, it shouldn't happen. If it does, let us know!

  • There is no space on the user device's internal storage. Example: running out of internal storage while capturing the photo.


Customizing the Layout

You can customize the layout by creating an object of type PassiveFaceLivenessLayout and passing it as a parameter to PassiveFaceLivenessBuilder:

let layout = PassiveFaceLivenessLayout()

// Assumption: a changeMaskImages method analogous to changeSoundImages below;
// the original snippet was missing the method call for the mask images.
layout.changeMaskImages(greenMask: UIImage(named: "my_green_mask"),
                        whiteMask: UIImage(named: "my_white_mask"),
                        redMask: UIImage(named: "my_red_mask"))

layout.changeSoundImages(soundOn: UIImage(named: "my_sound_on_image"),
                         soundOff: UIImage(named: "my_sound_off_image"))

layout.closeImage = UIImage(named: "my_close_image")

layout.setFont = "my_font" // font name, not an image

let passiveFaceLiveness = PassiveFaceLiveness.Builder(mobileToken: "mobileToken")
    .setLayout(layout: layout)



2023 © Caf. - All rights reserved