JavaScript library for precise tracking of facial features via Constrained Local Models
clmtrackr is a JavaScript library for fitting facial models to faces in videos or images. It is currently an implementation of constrained local models fitted by regularized landmark mean-shift, as described in Jason M. Saragih's paper. clmtrackr tracks a face and outputs the coordinate positions of the face model as an array, following the numbering of the reference face model.
The library provides some generic face models that were trained on the MUCT database and some additional self-annotated images. Check out clmtools for building your own models.
For tracking in video, it is recommended to use a browser with WebGL support, though the library should work on any modern browser.
For some more information about Constrained Local Models, take a look at Xiaoguang Yan's excellent tutorial, which was of great help in implementing this library.
Download the minified library `clmtrackr.js` and include it in your webpage.
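A minimal include (the `js/` path is an assumption; adjust it to wherever you placed the file):

```html
<!-- clmtrackr library -->
<script src="js/clmtrackr.js"></script>
```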
The following code initializes clmtrackr with the default model (see the reference for alternative models) and starts the tracker running on a video element.
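The element id used here is an assumption; point it at whatever `<video>` element your page uses:

```javascript
// the video element to track faces in
var videoInput = document.getElementById('videoel');

// create a tracker with the default face model, then start tracking
var ctrack = new clm.tracker();
ctrack.init();
ctrack.start(videoInput);
```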
You can now get the positions of the tracked facial features as an array via `getCurrentPosition()`:
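(A sketch; `ctrack` is the tracker instance created above.)

```javascript
// array of [x, y] coordinates, one per model point,
// or false if no face has been tracked yet
var positions = ctrack.getCurrentPosition();
```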
You can also use the built-in function `draw()` to draw the tracked facial model on a canvas:
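(A sketch, assuming a `<canvas>` element with id `canvas` overlaying the video.)

```javascript
var canvasInput = document.getElementById('canvas');
var cc = canvasInput.getContext('2d');

// redraw the tracked model on every animation frame
function drawLoop() {
  requestAnimationFrame(drawLoop);
  cc.clearRect(0, 0, canvasInput.width, canvasInput.height);
  ctrack.draw(canvasInput);
}
drawLoop();
```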
See the complete example here.
First, install node.js with npm.
In the root directory of clmtrackr, run `npm install`, then run `npm run build`. This will create `clmtrackr.js` and `clmtrackr.module.js` in the `build` folder.
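Put together:

```sh
npm install
npm run build
```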
To test the examples locally, you need to run a local server. One easy way to do this is to install `http-server`, a small node.js utility: `npm install -g http-server`. Then run `http-server` in the root of clmtrackr and go to `http://localhost:8080/examples` in your browser.
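In short:

```sh
npm install -g http-server
http-server
```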
clmtrackr is distributed under the MIT License.