1. Non-functional requirements:
1.1. This project’s goal is to deliver an iOS Framework (i.e., a component that can be embedded in other iOS projects) that uses Google’s ML Kit library, specifically its Face Detection feature.
1.2. Only the Face Detection feature is to be included in this implementation; the other features (such as text recognition, pose detection, etc.) must NOT be implemented, in order to keep the code as simple and clean as possible.
1.3. The expected deliverable is an Xcode Workspace containing two projects: the Framework itself and a simple “demo” App.
1.3.1. The Demo project’s goal is to demonstrate calling the Framework.
1.3.2. The demo app should be as simple as possible: just a ViewController with a button that invokes the Framework’s use case.
1.4. The whole implementation must be done in Swift (native project), and must NOT use SwiftUI (only UIKit/Storyboard allowed).
2. Regarding the Use Case (functional requirements):
2.1. The Framework must expose a method (say, “startDetection(camera_to_use)”) that invokes its screen (the Framework’s only screen).
2.2. That screen (View) must show the camera’s Live Preview for face detection (filling the whole screen) and, on top of it, a Label (UILabel) that continuously shows the attributes of the currently detected face in the Live Preview, as provided by ML Kit whenever available: Euler X, Y, Z angles; smiling probability; eye-blinking probability; etc.
2.3. The screen must display a dashed “oval contour” to guide the user to position his/her head inside it (dimming the screen area outside this contour with a shadow), as depicted in the attached image.
2.4. It must also automatically save the current frame (as a JPEG image in an in-memory array) whenever the current face’s smiling probability is >= 80%. (This array must be available as the Framework’s result, as detailed in requirement 2.6 below.)
2.5. The camera to use (front or back) must be passed as a parameter in the call to the Framework. Once the Live Preview has started, the camera cannot be changed (i.e., no “switch camera” button in the UI).
2.6. There must be a “finish capturing” button that closes this screen and returns to the calling App. Once finished, the Framework must provide the array of captured images as its result. The demo app must save the images locally (as JPEG files in internal storage, for the sake of simplicity).
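Requirement 2.1 can be sketched as a single public entry point. The spec only fixes the idea of one method taking the camera to use; the names below (CameraPosition, FaceCaptureResult, the completion parameter) are illustrative assumptions, not mandated by the requirements:

```swift
import Foundation

/// Which camera the host app requests (requirement 2.5). Hypothetical type name.
public enum CameraPosition { case front, back }

/// Result handed back when the user taps "finish capturing" (requirement 2.6):
/// the JPEG frames captured automatically while smiling (requirement 2.4).
public typealias FaceCaptureResult = [Data]

/// Sketch of the Framework's single public method (requirement 2.1).
/// In the real framework this would present the capture screen; here it is
/// only a signature sketch.
public func startDetection(camera: CameraPosition,
                           completion: @escaping (FaceCaptureResult) -> Void) {
    // The framework would start an AVCaptureSession for `camera`,
    // run ML Kit face detection on each frame, and call `completion`
    // with the captured JPEG array when the user finishes.
}
```

A completion handler (rather than a return value) fits the asynchronous, user-driven nature of requirement 2.6: the array only exists once the user taps “finish capturing”.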
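For requirements 2.2 and 2.4, ML Kit’s documented iOS face-detection API (FaceDetectorOptions, FaceDetector, VisionImage, and the Face properties) looks roughly as follows; the per-frame plumbing and the `jpegData(from:)` helper are assumptions, not part of ML Kit:

```swift
import AVFoundation
import UIKit
import MLKitFaceDetection
import MLKitVision

// classificationMode = .all enables smiling / eye-open probabilities
// (requirement 2.2); .fast favors live-preview throughput.
let options = FaceDetectorOptions()
options.performanceMode = .fast
options.classificationMode = .all
let detector = FaceDetector.faceDetector(options: options)

/// Called for each camera frame from the AVCaptureSession delegate.
func process(_ sampleBuffer: CMSampleBuffer,
             orientation: UIImage.Orientation,
             capture: @escaping (Data) -> Void) {
    let image = VisionImage(buffer: sampleBuffer)
    image.orientation = orientation
    detector.process(image) { faces, error in
        guard error == nil, let face = faces?.first else { return }
        // Euler angles + smiling probability for the on-screen label (req. 2.2).
        let info = String(format: "X %.1f  Y %.1f  Z %.1f  smile %.2f",
                          face.headEulerAngleX, face.headEulerAngleY,
                          face.headEulerAngleZ, face.smilingProbability)
        print(info)
        // Requirement 2.4: auto-capture when smiling probability >= 80%.
        if face.hasSmilingProbability, face.smilingProbability >= 0.8 {
            // Converting the sample buffer to JPEG Data is app-specific;
            // `jpegData(from:)` is a hypothetical helper:
            // capture(jpegData(from: sampleBuffer))
        }
    }
}
```

Throttling captures (e.g., at most one frame per second while the probability stays above the threshold) may be worth agreeing on, since 2.4 as written would otherwise store every frame of a sustained smile.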
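Requirement 2.3 (dashed oval with a dimmed exterior) has a common UIKit realization, sketched here as an assumption rather than a mandated design: an overlay view whose dark fill uses the even-odd rule to punch out the oval, plus a dashed CAShapeLayer for the contour. The inset ratios are arbitrary placeholders:

```swift
import UIKit

/// Overlay for requirement 2.3: dashed oval guide, shadow outside it.
final class OvalOverlayView: UIView {
    override func layoutSubviews() {
        super.layoutSubviews()
        layer.sublayers?.forEach { $0.removeFromSuperlayer() }

        // Oval centered on screen; inset ratios are illustrative.
        let ovalRect = bounds.insetBy(dx: bounds.width * 0.15,
                                      dy: bounds.height * 0.2)
        let ovalPath = UIBezierPath(ovalIn: ovalRect)

        // Shadow outside the oval: full-screen rect minus oval (even-odd fill).
        let shadowPath = UIBezierPath(rect: bounds)
        shadowPath.append(ovalPath)
        let shadow = CAShapeLayer()
        shadow.path = shadowPath.cgPath
        shadow.fillRule = .evenOdd
        shadow.fillColor = UIColor.black.withAlphaComponent(0.5).cgColor
        layer.addSublayer(shadow)

        // Dashed oval contour on top.
        let contour = CAShapeLayer()
        contour.path = ovalPath.cgPath
        contour.fillColor = UIColor.clear.cgColor
        contour.strokeColor = UIColor.white.cgColor
        contour.lineWidth = 3
        contour.lineDashPattern = [8, 6]
        layer.addSublayer(contour)
    }
}
```

This view would sit between the camera preview layer and the UILabel of requirement 2.2, with `isUserInteractionEnabled = false` so touches pass through.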
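The demo-app side of requirement 2.6 (persisting the returned JPEG array to internal storage) is the one piece that can be sketched with pure Foundation; the function name and the `capture_N.jpg` naming are arbitrary choices:

```swift
import Foundation

/// Writes each JPEG in `jpegs` to `directory` as capture_0.jpg, capture_1.jpg, …
/// and returns the written file URLs (requirement 2.6, demo-app side).
func saveCapturedImages(_ jpegs: [Data], to directory: URL) throws -> [URL] {
    var urls: [URL] = []
    for (index, data) in jpegs.enumerated() {
        let url = directory.appendingPathComponent("capture_\(index).jpg")
        try data.write(to: url)   // overwriting is acceptable for a demo
        urls.append(url)
    }
    return urls
}
```

In the demo app, `directory` would typically be the app’s Documents directory, i.e. `FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]`.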