Face Detection & Smile Detection using GoogleMLKit/FaceDetection | iOS | Swift

Izaan Saleem
4 min read · Dec 28, 2023

Welcome to an exciting journey into the world of mobile vision technology! In this tutorial, we’ll explore the powerful capabilities of Google ML Kit for iOS, focusing on face detection and smile recognition. Whether you’re a beginner eager to dive into the fundamentals or an experienced developer seeking advanced insights, this guide is designed to cater to all skill levels.

Note: Building on the latest advancements, this tutorial focuses on creating an interactive Face and Smile Detection App in iOS using Swift, emphasizing the updated and enhanced capabilities of GoogleMLKit. Recognizing the deprecation of FirebaseMLKit in iOS, we’ll explore the modern replacement and leverage the latest features for a seamless development experience.

In this tutorial:

  1. Face detection
  2. Smile detection

GoogleMLKit/FaceDetection installation

Let’s open the project directory in Terminal and create a Podfile using the “pod init” command.

Open the Podfile, add ‘GoogleMLKit/FaceDetection’, and set the deployment target to 13.0:

# Uncomment the next line to define a global platform for your project
# platform :ios, '9.0'

target 'LivenessFaceDetection' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # Pods for LivenessFaceDetection
  pod 'GoogleMLKit/FaceDetection'

  post_install do |installer|
    installer.pods_project.targets.each do |target|
      target.build_configurations.each do |config|
        config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '13.0'
      end
    end
  end
end

Save the Podfile, run the “pod install” command in Terminal, and open the generated .xcworkspace file.

Setup UI

Place a UIView on the UIViewController and connect an @IBOutlet named “videoPreview”.

[Screenshot: UIViewController with the UIView added]

Let’s jump into coding :)

1. Import Frameworks

import MLKitVision
import MLKitFaceDetection

2. Connect an @IBOutlet for the UIView named ‘videoPreview’:

 @IBOutlet weak private var videoPreview: UIView!

3. Download the ‘SharedFaceDetection.swift’ class from here and add it to your project; we’ll use it as a singleton.

SharedFaceDetection.swift contains the functionality to set up the face detector, run prediction on a frame, and detect faces (a rough sketch of what it might contain follows below).

Drag & Drop into project > Make sure to check ‘Add to targets’ and ‘Copy items if needed’
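
Since the file itself is linked externally, here is a minimal sketch of what such a SharedFaceDetection singleton could look like. The class and method names mirror how they’re used later in this tutorial; the body is an assumption based on the standard MLKit FaceDetector API. Note that classificationMode must be set to .all for smilingProbability to be populated.

import UIKit
import MLKitVision
import MLKitFaceDetection

// A rough sketch, not the author's exact file: a singleton that owns the
// MLKit face detector and runs detection on camera frames.
final class SharedFaceDetection {

    static let shared = SharedFaceDetection()

    private var faceDetector: FaceDetector?
    private let ciContext = CIContext()

    private init() {}

    func setupFaceDetection() {
        let options = FaceDetectorOptions()
        options.performanceMode = .fast
        // Classification is required for smilingProbability to be populated.
        options.classificationMode = .all
        faceDetector = FaceDetector.faceDetector(options: options)
    }

    func predictUsingVision(pixelBuffer: CVPixelBuffer,
                            completion: @escaping ([Face], UIImage) -> Void) {
        // Convert the pixel buffer to a UIImage so we can run detection
        // and also hand the frame back for drawing.
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
        let image = UIImage(cgImage: cgImage)

        let visionImage = VisionImage(image: image)
        visionImage.orientation = image.imageOrientation

        faceDetector?.process(visionImage) { faces, error in
            guard error == nil, let faces = faces, !faces.isEmpty else { return }
            // Deliver results on the main thread, since the caller updates UI.
            DispatchQueue.main.async {
                completion(faces, image)
            }
        }
    }
}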

4. Download the ‘CameraPreview.swift’ class from here and add it to your project; it turns the UIView into a camera preview.

CameraPreview.swift contains the functionality to set up the camera session for capturing video, start and stop recording, configure the video quality, and so on (a condensed sketch follows below).

Drag & Drop into project > Make sure to check ‘Add to targets’ and ‘Copy items if needed’
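
Because this file is also linked externally, here is a condensed sketch of what such a class typically looks like, built on AVCaptureSession. The names (CameraPreview, CameraPreviewDelegate, setUp, start, fps, previewLayer) mirror how the view controller uses them below; the implementation details are assumptions.

import AVFoundation
import UIKit

// Delegate that receives throttled camera frames as pixel buffers.
protocol CameraPreviewDelegate: AnyObject {
    func videoCapture(_ capture: CameraPreview,
                      didCaptureVideoFrame pixelBuffer: CVPixelBuffer?,
                      timestamp: CMTime)
}

final class CameraPreview: NSObject {

    weak var delegate: CameraPreviewDelegate?
    var fps = 15
    var previewLayer: AVCaptureVideoPreviewLayer?

    private let session = AVCaptureSession()
    private let videoOutput = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera-preview-queue")
    private var lastTimestamp = CMTime.zero

    func setUp(sessionPreset: AVCaptureSession.Preset,
               completion: @escaping (Bool) -> Void) {
        queue.async {
            let success = self.configureSession(preset: sessionPreset)
            DispatchQueue.main.async { completion(success) }
        }
    }

    private func configureSession(preset: AVCaptureSession.Preset) -> Bool {
        session.beginConfiguration()
        session.sessionPreset = preset

        // Use the front camera, since we're detecting the user's own face.
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else {
            session.commitConfiguration()
            return false
        }
        session.addInput(input)

        videoOutput.alwaysDiscardsLateVideoFrames = true
        videoOutput.setSampleBufferDelegate(self, queue: queue)
        guard session.canAddOutput(videoOutput) else {
            session.commitConfiguration()
            return false
        }
        session.addOutput(videoOutput)
        session.commitConfiguration()

        let layer = AVCaptureVideoPreviewLayer(session: session)
        layer.videoGravity = .resizeAspectFill
        previewLayer = layer
        return true
    }

    func start() {
        queue.async {
            if !self.session.isRunning { self.session.startRunning() }
        }
    }

    func stop() {
        queue.async {
            if self.session.isRunning { self.session.stopRunning() }
        }
    }
}

extension CameraPreview: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Throttle frame delivery to roughly `fps` frames per second.
        let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        guard timestamp - lastTimestamp >= CMTimeMake(value: 1, timescale: Int32(fps)) else { return }
        lastTimestamp = timestamp

        delegate?.videoCapture(self,
                               didCaptureVideoFrame: CMSampleBufferGetImageBuffer(sampleBuffer),
                               timestamp: timestamp)
    }
}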

5. Set up the camera to detect faces.

Add the following code to your ViewController:

private func setUpCamera() {
    SharedFaceDetection.shared.setupFaceDetection()

    videoCapture = CameraPreview()
    videoCapture?.delegate = self
    videoCapture?.fps = 15
    videoCapture?.setUp(sessionPreset: .vga640x480) { success in
        if success {
            // Add the preview layer on top of the videoPreview view
            if let previewLayer = self.videoCapture?.previewLayer {
                previewLayer.frame = self.videoPreview.bounds
                self.videoPreview.layer.addSublayer(previewLayer)
            }
            // Start the video preview once setup is done
            self.videoCapture?.start()
        }
    }
}
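
The snippet above assumes videoCapture (and, later, label and squareLayer) are stored properties of the view controller. The article doesn’t show their declarations, so here is a minimal sketch of how they could be wired up, with setUpCamera() kicked off from viewDidLoad:

class ViewController: UIViewController {

    @IBOutlet weak private var videoPreview: UIView!

    private var videoCapture: CameraPreview?
    // Used later when drawing the detection results.
    private let label = UILabel()
    private let squareLayer = CALayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        setUpCamera()
    }
}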

6. Create a ViewController extension to adopt the delegate.

extension ViewController: CameraPreviewDelegate {
    func videoCapture(_ capture: CameraPreview,
                      didCaptureVideoFrame pixelBuffer: CVPixelBuffer?,
                      timestamp: CMTime) {
        if let pixelBuffer = pixelBuffer {
            SharedFaceDetection.shared.predictUsingVision(pixelBuffer: pixelBuffer) { faces, pickedImage in
                self.drawSquareOnFace(faces: faces, in: pickedImage)
            }
        }
    }
}

Each captured video frame arrives as a ‘CVPixelBuffer’. We pass the pixelBuffer to predictUsingVision(pixelBuffer:), and its callback returns the detected faces along with the captured image. Since the next step touches UIKit, make sure the callback is delivered on the main thread.

Finally, we draw a square around each face so the user can see and verify the detection.

7. Use the following code to draw a square around the face and detect a smile:

private func drawSquareOnFace(faces: [Face], in originalImage: UIImage) {
    for face in faces {
        let boundingBox = face.frame
        let imageSize = originalImage.size

        // Mirror the x-axis (front camera) and nudge the box so it lines up
        // with the preview view; the constants are manual layout tweaks.
        let faceRectConverted = CGRect(
            x: imageSize.width - boundingBox.origin.x - boundingBox.size.width - 46,
            y: boundingBox.origin.y + 50,
            width: boundingBox.size.width,
            height: boundingBox.size.height + 20
        )

        let labelText = face.smilingProbability > 0.3 ? "Smiling 🙂" : "👀"

        self.label.numberOfLines = 0
        self.label.textColor = .green
        self.label.text = labelText
        self.label.font = UIFont.systemFont(ofSize: 20)
        self.label.sizeToFit()
        self.label.center = CGPoint(x: faceRectConverted.midX,
                                    y: faceRectConverted.maxY + self.label.frame.height / 2 + 5)
        self.label.frame.size.width = face.frame.width

        // Add the label to your image view or any other container
        self.view.addSubview(self.label)

        self.squareLayer.bounds = faceRectConverted
        self.squareLayer.position = CGPoint(x: faceRectConverted.midX, y: faceRectConverted.midY)
        self.squareLayer.borderWidth = 2.0
        self.squareLayer.borderColor = UIColor.green.cgColor

        // Add the square layer to your image view or any other container
        self.view.layer.addSublayer(self.squareLayer)
    }
}
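
Two things worth noting: smilingProbability only carries a meaningful value when the detector’s classificationMode is set to .all (as in the setup sketch earlier), and the hard-coded offsets (-46, +50, +20) are manual tweaks that roughly align the detector’s image coordinates with the preview view. A production app would convert coordinates properly, for example via the preview layer’s layerRectConverted(fromMetadataOutputRect:).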

8. Add the following to your Info.plist to ask for permission to use the camera:

<key>NSCameraUsageDescription</key>
<string>Allow camera access to detect your face</string>
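
With the usage description in place, iOS shows the permission prompt automatically the first time the capture session starts. If you prefer to handle the flow yourself, you can check the authorization status explicitly before calling setUpCamera(); a small optional sketch:

switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    setUpCamera()
case .notDetermined:
    // Ask once, then set up the camera on the main thread if granted.
    AVCaptureDevice.requestAccess(for: .video) { granted in
        if granted {
            DispatchQueue.main.async { self.setUpCamera() }
        }
    }
default:
    break // Denied or restricted; direct the user to Settings.
}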

By the end of this tutorial, you’ll not only have a fully functional iOS app capable of face and smile detection but also a solid understanding of the principles behind it. Let’s embark on this exciting journey together and elevate our iOS development skills with Google ML Kit!

You can download the full code from my GitHub repository.

If you found this tutorial helpful, give it some 👏 and consider sharing it with others to spread the knowledge. Your support makes a difference! Thank you for your attention, and as always, happy coding! 🚀
