Classify images offline using Watson Visual Recognition and Core ML
Read this in other languages: 中文, 日本語.
Classify images with Watson Visual Recognition and Core ML. The images are classified offline using a deep neural network that is trained by Visual Recognition.
This project includes the QuickstartWorkspace.xcworkspace workspace with two projects: Core ML Vision Simple and Core ML Vision Custom.
Make sure that you have Xcode 10 or later installed and that you are targeting iOS 11.0 or later. These versions are required to support Core ML.
Use GitHub to clone the repository locally, or download the .zip file of the repository and extract the files.
Identify common objects with a built-in Visual Recognition model. Images are classified with the Core ML framework.
Open QuickstartWorkspace.xcworkspace in Xcode.
Select the Core ML Vision Simple scheme and run the app in the simulator or on a device.
Tip: This project also includes a Core ML model to classify trees and fungi. You can switch between the two included Core ML models by uncommenting the model you would like to use in ImageClassificationViewController.
Source code for ImageClassificationViewController.
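To see roughly what the Core ML Vision Simple project does under the hood, here is a minimal classification sketch with Vision and Core ML. The MobileNet class and the classify function are hypothetical stand-ins for illustration, not the project's actual identifiers:

```swift
import UIKit
import CoreML
import Vision

// Minimal sketch of offline classification with Vision and Core ML.
// `MobileNet` is a placeholder; the workspace bundles its own models,
// and the generated Swift class is named after the .mlmodel file.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: MobileNet().model) else {
        return
    }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else {
            return
        }
        // Report the top three labels and their confidence scores.
        for observation in results.prefix(3) {
            print("\(observation.identifier): \(observation.confidence)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
}
```

Vision scales and converts the image to the model's expected input format, which is why the request-based API is simpler than calling the model's prediction method directly.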
The second part of this project builds from the first part and trains a Visual Recognition model (also called a classifier) to identify common types of cables (HDMI, USB, etc.). Use the Watson Swift SDK to download, manage, and execute the trained model. By using the Watson Swift SDK, you don't have to learn about the underlying Core ML framework.
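The flow looks roughly like this: updateLocalModel downloads and compiles the latest trained model, and classifyWithLocalModel runs it offline. Exact signatures have changed between SDK releases, and the version date, apikey, and classifier ID below are placeholders, so treat this as a sketch rather than drop-in code:

```swift
import Foundation
import VisualRecognition

// Sketch of the custom-model flow with the Watson Swift SDK. All
// credentials and IDs are placeholders, and method signatures differ
// across SDK releases, so check the docs for the version you install.
let visualRecognition = VisualRecognition(version: "2018-03-19", apiKey: "{apikey}")

func refreshAndClassify(imageData: Data) {
    // Download the latest trained model from Watson if a newer one exists.
    visualRecognition.updateLocalModel(classifierID: "{classifierID}") { _, error in
        if let error = error { print(error) }
    }

    // Classify the image offline with the locally stored Core ML model.
    visualRecognition.classifyWithLocalModel(
        imageData: imageData,
        classifierIDs: ["{classifierID}"],
        threshold: 0.5
    ) { classifiedImages, error in
        if let error = error { print(error) }
        if let classifiedImages = classifiedImages { print(classifiedImages) }
    }
}
```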
After you sign up or log in, you'll be on the Visual Recognition instance overview page in Watson Studio.
Tip: If you lose your way in any of the following steps, click the IBM Watson logo on the top left of the page to return to the Watson Studio home page. From there you can access your Visual Recognition instance by clicking the Launch tool button next to the service under "Watson services".
If a project is not yet associated with the Visual Recognition instance you created, a project is created. Name your project Custom Core ML and click Create.
Tip: If no storage is defined, click refresh.
Navigate to the Assets tab and upload each .zip file of sample images from the Training Images directory onto the data pane on the right side of the page. Add the hdmi_male.zip file to your model by clicking the Browse button in the data pane. Also add the usb_male.zip, thunderbolt_male.zip, and vga_male.zip files to your model.
After the files are uploaded, select Add to model from the menu next to each file, and then click Train Model.
In the Visual Recognition instance overview page in Watson Studio, click the Credentials tab, and then click View credentials. Copy the api_key or the apikey of the service.
Important: Instantiation with api_key works only with Visual Recognition service instances created before May 23, 2018. Visual Recognition instances created after May 22, 2018 use IAM.
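In code, the two credential types map to different initializers in the Watson Swift SDK; in the releases current when IAM was introduced, the parameter order distinguished them. This is a sketch only, so verify against the SDK version you install:

```swift
import VisualRecognition

// Sketch only: keys and dates are placeholders, and initializer shapes
// differ between Watson Swift SDK releases.

// Instances created before May 23, 2018 authenticate with api_key:
let serviceCF = VisualRecognition(apiKey: "{api_key}", version: "2018-03-19")

// Instances created after May 22, 2018 authenticate with an IAM apikey:
let serviceIAM = VisualRecognition(version: "2018-03-19", apiKey: "{apikey}")
```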
Use the Cocoapods dependency manager to download and build the Watson Swift SDK. The Watson Swift SDK can also be installed via Carthage and Swift Package Manager.
Open a terminal window and navigate to the Core ML Vision Custom directory.
Run the following command to download and build the Watson Swift SDK:
```bash
pod install
```
Tip: Regularly download updates of the SDK so you stay in sync with any updates to this project. If you have updated to a new version, you may need to run pod repo update before installing.
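For reference, a minimal Podfile for this step might look like the sketch below. The pod name matches the per-service pods IBM published for the Swift SDK, but the target name and platform pin are assumptions; defer to the Podfile shipped with the project:

```ruby
# Minimal Podfile sketch; the target name and platform are placeholders.
platform :ios, '11.0'
use_frameworks!

target 'Core ML Vision Custom' do
  pod 'IBMWatsonVisualRecognitionV3'
end
```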
Open QuickstartWorkspace.xcworkspace in Xcode.
Select the Core ML Vision Custom scheme and run the app in the simulator or on a device.
Pull new versions of the visual recognition model with the refresh button in the bottom right.
Tip: The classifier must be in the Ready state before you can use it. Check the classifier status in Watson Studio on the Visual Recognition instance overview page.
Source code for ImageClassificationViewController.
Add another Watson service to the custom project with the Core ML Visual Recognition with Discovery project.