For tracking in video, it is recommended to use a browser with WebGL support, though the library should work on any modern browser.
For some more information about Constrained Local Models, take a look at Xiaoguang Yan's excellent tutorial, which was of great help in implementing this library.
Download the minified library clmtrackr.js, and include it in your webpage.
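Including the library might look like the following; the `js/clmtrackr.js` path is an assumption and should point to wherever you placed the downloaded file:

```html
<!-- clmtrackr library -->
<script src="js/clmtrackr.js"></script>
```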
The following code initiates the clmtrackr with the default model (see the reference for some alternative models), and starts the tracker running on a video element.
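A minimal sketch of that setup is shown below. It uses clmtrackr's documented API (`clm.tracker`, `init()`, `start()`); the element id, script path, and variable names are assumptions for illustration:

```html
<video id="videoel" width="400" height="300" autoplay></video>
<script src="js/clmtrackr.js"></script>
<script>
  // Grab the video element to track
  var videoInput = document.getElementById('videoel');

  // Create a tracker with the default face model and start tracking
  var ctracker = new clm.tracker();
  ctracker.init();
  ctracker.start(videoInput);
</script>
```

In a real page you would typically feed the video element from the webcam via `navigator.mediaDevices.getUserMedia` before starting the tracker.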
You can now get the positions of the tracked facial features as an array via `getCurrentPosition()`.
You can also use the built-in function `draw()` to draw the tracked facial model on a canvas:
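A typical drawing loop might look like this, assuming a tracker instance named `ctracker` has already been started on a video element as described above (the canvas id and variable names are illustrative):

```html
<canvas id="overlay" width="400" height="300"></canvas>
<script>
  var canvas = document.getElementById('overlay');
  var ctx = canvas.getContext('2d');

  function drawLoop() {
    requestAnimationFrame(drawLoop);
    ctx.clearRect(0, 0, canvas.width, canvas.height);

    // getCurrentPosition() returns an array of [x, y] feature
    // coordinates, or false if no face is currently tracked
    var positions = ctracker.getCurrentPosition();
    if (positions) {
      // Draw the fitted face model onto the canvas
      ctracker.draw(canvas);
    }
  }
  drawLoop();
</script>
```

Positioning the canvas directly over the video element lets the drawn model overlay the tracked face.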
See the complete example here.
First, install node.js with npm.
In the root directory of clmtrackr, run `npm install`, then run `npm run build`. This will create the built library files.
To test the examples locally, you need to run a local server. One easy way to do this is to install `http-server`, a small node.js utility: `npm install -g http-server`. Then run `http-server` in the root of clmtrackr and go to `http://localhost:8080/examples` in your browser.
clmtrackr is distributed under the MIT License.