3 Examples of Image Processing in iOS

My previous article gave a detailed look at image processing in iOS. In this blog, I am sharing a few examples to make the concept clearer for the iOS enthusiasts out there.

Core Image and Examples

All the built-in filters are divided into 21 categories, such as color adjustment, blur, color effects, distortion, generator, and stylize.

Core Image provides a single class, CIFilter, which is used to create all the filters. The framework offers methods for querying the system for the names of all the available filters, and it’s these names that are used to create instances of the filters.

CIFilter.filterNames(inCategory: kCICategoryBlur)

let blurFilter = CIFilter(name: "CIZoomBlur")

If you pass nil as the category, you get the list of all the available filters.
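
For example, here is a minimal sketch that lists every registered filter and dumps one filter’s parameter metadata (the attributes dictionary is part of CIFilter):

import CoreImage

// Passing nil as the category returns every registered filter name.
let allFilters = CIFilter.filterNames(inCategory: nil)
print("Available filters: \(allFilters.count)")

// Each filter describes its inputs via an attributes dictionary.
if let sepia = CIFilter(name: "CISepiaTone") {
    print(sepia.attributes)
}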

Let’s Dive into Core Image

Core Image has three classes that support image processing on iOS: CIImage, CIFilter, and CIContext. CIFilter is the class that provides the premade filters you apply to an image.

1. Creating and Applying a Filter


// Wrap the UIImage's backing CGImage in a CIImage so Core Image can work on it.
guard let image = imageView?.image, let cgimg = image.cgImage else {
    print("imageView doesn't have an image!")
    return
}
let coreImage = CIImage(cgImage: cgimg)

// Create the sepia filter by name and set its input image and intensity.
let filter = CIFilter(name: "CISepiaTone")
filter?.setValue(coreImage, forKey: kCIInputImageKey)
filter?.setValue(0.5, forKey: kCIInputIntensityKey)

// Read back the filtered result and display it.
if let output = filter?.value(forKey: kCIOutputImageKey) as? CIImage {
    let filteredImage = UIImage(ciImage: output)
    imageView?.image = filteredImage
} else {
    print("image filtering failed")
}

After building and running the above code, you can see the sepia tone applied to the image in the output.
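
One caveat worth knowing: UIImage(ciImage:) defers the actual filtering until the image is drawn, and such an image does not behave like a normal bitmap everywhere. If you need a concrete bitmap, render through a CIContext instead. A minimal sketch (the context should be created once and reused, since creating one is expensive):

// Render the filter output into a real CGImage-backed UIImage.
let context = CIContext() // create once and reuse
if let output = filter?.value(forKey: kCIOutputImageKey) as? CIImage,
   let cgOutput = context.createCGImage(output, from: output.extent) {
    imageView?.image = UIImage(cgImage: cgOutput)
}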


2. Using Multiple Filters and Creating a Filter Chain

Core Image already ships with a huge number of filters, but sometimes the effect we want just isn’t possible with a single built-in filter. By chaining multiple filters together, with the right combination, we can get almost any result.

Let’s look at an example where we combine a sepia filter with a brightening filter to create a brightened sepia image:


// Create a Core Image context backed by OpenGL ES for GPU rendering.
// (EAGLContext is deprecated on recent iOS versions; a plain CIContext() also works.)
let openGLContext = EAGLContext(api: .openGLES2)
let context = CIContext(eaglContext: openGLContext!)

// Reuse the CGImage from the previous example as the chain's input.
let coreImage = CIImage(cgImage: cgimg)

// First filter: full-strength sepia tone.
let sepiaFilter = CIFilter(name: "CISepiaTone")
sepiaFilter?.setValue(coreImage, forKey: kCIInputImageKey)
sepiaFilter?.setValue(1, forKey: kCIInputIntensityKey)

if let sepiaOutput = sepiaFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
    // Second filter: brighten the sepia output by one stop of exposure.
    let exposureFilter = CIFilter(name: "CIExposureAdjust")
    exposureFilter?.setValue(sepiaOutput, forKey: kCIInputImageKey)
    exposureFilter?.setValue(1, forKey: kCIInputEVKey)

    if let exposureOutput = exposureFilter?.value(forKey: kCIOutputImageKey) as? CIImage {
        // Render the final CIImage into a CGImage and display it.
        let output = context.createCGImage(exposureOutput, from: exposureOutput.extent)
        let result = UIImage(cgImage: output!)
        imageView?.image = result
    }
}

Likewise, you can chain as many filters as your effect needs.
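
As a more compact alternative to the CIFilter plumbing above, CIImage offers applyingFilter(_:parameters:), which makes a chain read top to bottom. A minimal sketch of the same sepia-plus-exposure chain (reusing cgimg and context from above):

// Each call returns a new CIImage with the named filter applied.
let chained = CIImage(cgImage: cgimg)
    .applyingFilter("CISepiaTone", parameters: [kCIInputIntensityKey: 1.0])
    .applyingFilter("CIExposureAdjust", parameters: [kCIInputEVKey: 1.0])
if let cgOutput = context.createCGImage(chained, from: chained.extent) {
    imageView?.image = UIImage(cgImage: cgOutput)
}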


3. Detecting Faces in an Image

Core Image can analyze and find human faces in an image. It performs face detection, not recognition. Face detection is the identification of rectangles that contain human facial features, whereas face recognition is the identification of specific human faces. After Core Image detects a face, it can provide information about facial features, such as eye and mouth positions. It can also track the position of an identified face in a video.

Knowing where the faces are in an image lets you perform other operations, such as cropping or adjusting the image quality of the face (tone balance, red-eye correction and so on).

func detectFaces() {
    // Grab the displayed image and wrap it in a CIImage for analysis.
    guard let imageView = imageView, let cgimg = imageView.image?.cgImage else { return }
    let faceImage = CIImage(cgImage: cgimg)

    // Create a high-accuracy face detector.
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace,
                                  context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let faces = faceDetector?.features(in: faceImage) as? [CIFaceFeature] ?? []
    print("Number of faces: \(faces.count)")

    // Core Image uses a bottom-left origin; flip into UIKit's top-left origin.
    let transformScale = CGAffineTransform(scaleX: 1, y: -1)
    let transform = transformScale.translatedBy(x: 0, y: -faceImage.extent.height)

    for face in faces {
        var faceBounds = face.bounds.applying(transform)

        // Map the face rectangle into the aspect-fit frame the image
        // occupies inside the image view.
        let imageViewSize = imageView.bounds.size
        let scale = min(imageViewSize.width / faceImage.extent.width,
                        imageViewSize.height / faceImage.extent.height)
        let dx = (imageViewSize.width - faceImage.extent.width * scale) / 2
        let dy = (imageViewSize.height - faceImage.extent.height * scale) / 2

        faceBounds = faceBounds.applying(CGAffineTransform(scaleX: scale, y: scale))
        faceBounds.origin.x += dx
        faceBounds.origin.y += dy

        // Draw a red box over the detected face.
        let box = UIView(frame: faceBounds)
        box.layer.borderColor = UIColor.red.cgColor
        box.layer.borderWidth = 2
        box.backgroundColor = UIColor.clear
        imageView.addSubview(box)
    }
}

Let’s walk through the code now:

  • First, we wrap the image’s CGImage in a CIImage, since the detector analyzes CIImage inputs.
  • Then we create an options dictionary to specify the accuracy of the detector.
  • Then we create a detector for faces, passing nil for the context so a default one is used.
  • An options dictionary can also be passed when finding faces; it’s important to let Core Image know the image orientation so the detector knows where it can find upright faces (see the sketch after this list).
  • Finally, we use the detector to find features in the image; the image you provide must be a CIImage.
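
The orientation hint is passed through the options of features(in:options:). A minimal sketch of how the detection line above could be extended; the fallback value of 1 (upright, in EXIF terms) is an assumption for images that carry no orientation metadata:

// Requires: import ImageIO (for kCGImagePropertyOrientation).
// A CIImage built straight from a CGImage may have no orientation metadata,
// so fall back to 1 (upright).
let orientation = faceImage.properties[kCGImagePropertyOrientation as String] ?? 1
let options: [String: Any] = [CIDetectorImageOrientation: orientation]
let faces = faceDetector?.features(in: faceImage, options: options) as? [CIFaceFeature] ?? []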

Auto Enhancing Images

This feature analyzes an image for its histogram, face region contents, and metadata properties. Then it returns an array of CIFilter objects whose input parameters are already set to values that will improve the analyzed image.

The list below shows the filters Core Image uses when automatically enhancing images; a short code sketch follows the descriptions:

  • CIRedEyeCorrection
  • CIFaceBalance
  • CIVibrance
  • CIToneCurve
  • CIHighlightShadowAdjust

CIRedEyeCorrection: Repairs red, amber, or white eye artifacts caused by the camera flash.

CIFaceBalance: Adjusts the color of a face to give a flawless skin tone.

CIVibrance: Increases the saturation of an image without distorting skin tones.

CIToneCurve: Adjusts the image contrast.

CIHighlightShadowAdjust: Adjusts the shadow details.
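
In code, you ask a CIImage for these filters with autoAdjustmentFilters() and apply whatever comes back in a loop; each returned filter arrives with its input parameters already set. A minimal sketch (cgimg is assumed to be the CGImage from the earlier examples):

let inputImage = CIImage(cgImage: cgimg)
var enhanced = inputImage

// Ask Core Image which enhancement filters apply to this image.
for filter in inputImage.autoAdjustmentFilters() {
    filter.setValue(enhanced, forKey: kCIInputImageKey)
    if let output = filter.outputImage {
        enhanced = output
    }
}
imageView?.image = UIImage(ciImage: enhanced)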

That covers the basics of the Core Image framework. Using the above techniques, you should be able to apply some neat filters to your images.

You can also create your own custom filter algorithms with the help of Core Image kernels, which act on every single pixel of the destination (output) image individually, as sketched below. Until then, I hope you will make your pictures more beautiful, vibrant, and amazing with the above Core Image techniques.
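
As a small taste of that, here is a minimal custom-filter sketch using CIColorKernel and the Core Image Kernel Language; recent iOS versions steer custom kernels toward Metal, so treat this as illustrative (coreImage is the CIImage from the earlier examples):

// A trivial color kernel that swaps the red and green channels of every pixel.
let kernelSource = "kernel vec4 swapRedGreen(__sample s) { return s.grba; }"
if let kernel = CIColorKernel(source: kernelSource),
   let swapped = kernel.apply(extent: coreImage.extent, arguments: [coreImage]) {
    imageView?.image = UIImage(ciImage: swapped)
}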

DEV IT is a renowned iOS app development company that can help you understand the importance of image processing and build iOS apps with better, more advanced image processing. For more information, contact us today!