<img src="./Documentation/logo/HomographyJSLogoWhite.png" height=25px> Homography.js
<img src="./Documentation/logo/HomographyJSLogo.png" width="20%" align="left"> Homography.js is a lightweight <a href="#performance">High-Performance</a> library for implementing homographies in Javascript or Node.js. It is designed to be easy to use (even for developers who are not familiar with Computer Vision) and able to run in real-time applications (even on low-spec devices such as budget smartphones). It allows you to perform <a href="https://en.wikipedia.org/wiki/Affine_transformation" target="_blank">Affine</a>, <a href="https://en.wikipedia.org/wiki/Homography" target="_blank">Projective</a> or <a href="https://en.wikipedia.org/wiki/Piecewise_linear_function" target="_blank">Piecewise Affine</a> warpings over any <code>Image</code> or <code>HTMLElement</code> in your application by setting only a small set of reference points. Additionally, image warpings can be made persistent (independent of any CSS property), so they can be easily drawn on a canvas, mixed or downloaded. Homography.js is built in a way that frees the user from all the <i>pain-in-the-ass</i> details of homography operations, such as thinking about output dimensions, input coordinate ranges, dealing with unexpected shifts, pads, crops or unfilled pixels in the output image, or even knowing what a <a href="https://en.wikipedia.org/wiki/Transformation_matrix">Transform Matrix</a> is.
Features
<ul> <li>Apply different warpings to any <code>Image</code> or <code>HTMLElement</code> by just setting two sets of reference points.</li> <li>Perform <a href="https://en.wikipedia.org/wiki/Affine_transformation" target="_blank">Affine</a>, <a href="https://en.wikipedia.org/wiki/Homography" target="_blank">Projective</a> or <a href="https://en.wikipedia.org/wiki/Piecewise_linear_function" target="_blank">Piecewise Affine</a> transforms, or just set <b>Auto</b> and let the library decide which transform to apply depending on the reference points you provide.</li> <li>Simplify how you deal with canvas drawings or subsequent Computer Vision problems by making your <code>Image</code> transforms persistent and independent of any CSS property.</li> <li>Forget all the <i>pain-in-the-ass</i> details of homography operations, even if you only have a fuzzy idea of what a homography is.</li> <li>Avoid warping delays in real-time applications thanks to its design focused on <a href="#performance">High-Performance</a>.</li> <li>Support for running in the backend with Node.js.</li> </ul>
Install
To use as a <b>module</b> in the browser (Recommended):
<script type="module">
import { Homography } from "https://cdn.jsdelivr.net/gh/Eric-Canas/Homography.js@1.4/Homography.js";
</script>
If you don't need to perform <b>Piecewise Affine Transforms</b>, you can also use a very lightweight UMD build that exposes the <code>homography</code> global variable and loads faster:
<script src="https://cdn.jsdelivr.net/gh/Eric-Canas/Homography.js@1.4/HomographyLightweight.min.js"></script>
...
// And then in your script
const myHomography = new homography.Homography();
// Don't name your own variables "homography", or you will shadow the global exposed by the build
Via npm:
$ npm install homography
...
import { Homography } from "homography";
Usage
In the Browser
Perform a basic <b>Piecewise Affine Transform</b> from four <i>source points</i>.
// Select the image you want to warp
const image = document.getElementById("myImage");
// Define the reference points. In this case using normalized coordinates (from 0.0 to 1.0).
const srcPoints = [[0, 0], [0, 1], [1, 0], [1, 1]];
const dstPoints = [[1/5, 1/5], [0, 1/2], [1, 0], [6/8, 6/8]];
// Create a Homography object for a "piecewiseaffine" transform (it could be reused later)
const myHomography = new Homography("piecewiseaffine");
// Set the reference points
myHomography.setReferencePoints(srcPoints, dstPoints);
// Warp your image
const resultImage = myHomography.warp(image);
...
<p align="center"><img src="./Documentation/exampleImages/PiecewiseAffineExampleSimple.PNG" width="50%"></p>
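Reference points can be expressed either in normalized coordinates (from 0.0 to 1.0, as above) or in image coordinates (pixels). The two conventions differ only by a per-axis scale; here is a small <i>hypothetical</i> helper (not part of the library's API) that illustrates the conversion between them:

```javascript
// Convert points between normalized ([0, 1]) and image (pixel) coordinates.
// Illustrative helpers only: the library accepts both ranges directly.
function toImageCoords(points, width, height) {
  return points.map(([x, y]) => [x * width, y * height]);
}

function toNormalizedCoords(points, width, height) {
  return points.map(([x, y]) => [x / width, y / height]);
}

const normalized = [[0, 0], [0, 1], [1, 0], [1, 1]];
const pixels = toImageCoords(normalized, 400, 300);
// pixels → [[0, 0], [0, 300], [400, 0], [400, 300]]
```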
Perform a complex <b>Piecewise Affine Transform</b> from a large set of <code>pointsInY * pointsInX</code> reference points.
...
// Define a set of reference points that follow a sinusoidal form.
// In this case in image axes (x : from 0 to w, y : from 0 to h) for convenience.
let srcPoints = [], dstPoints = [];
for (let y = 0; y <= h; y += h/pointsInY){
    for (let x = 0; x <= w; x += w/pointsInX){
        srcPoints.push([x, y]); // Add (x, y) as a source point
        dstPoints.push([x, amplitude+y+Math.sin((x*n)/Math.PI)*amplitude]); // Apply a sine function to y
}
}
// Set the reference points (reuse the previous Homography object)
myHomography.setReferencePoints(srcPoints, dstPoints);
// Warp your image. As no image is given, it will reuse the one from the previous example.
const resultImage = myHomography.warp();
...
<p align="center"><img src="./Documentation/exampleImages/PiecewiseAffineExampleSinusoidal.PNG" width="80%"></p>
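A <b>Piecewise Affine</b> warp triangulates the reference points and applies an independent affine map to each triangle. As an <i>illustrative sketch</i> (not the library's internal code), a point can be mapped through a single triangle via its barycentric coordinates:

```javascript
// Barycentric coordinates of a point with respect to a triangle [a, b, c].
function barycentric([px, py], [[ax, ay], [bx, by], [cx, cy]]) {
  const det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy);
  const u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det;
  const v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det;
  return [u, v, 1 - u - v];
}

// Map a point from a source triangle onto a destination triangle: compute
// its barycentric coordinates in srcTri, then reuse them as weights in dstTri.
function mapThroughTriangle(point, srcTri, dstTri) {
  const [u, v, w] = barycentric(point, srcTri);
  const x = u * dstTri[0][0] + v * dstTri[1][0] + w * dstTri[2][0];
  const y = u * dstTri[0][1] + v * dstTri[1][1] + w * dstTri[2][1];
  return [x, y];
}

const srcTri = [[0, 0], [1, 0], [0, 1]];
const dstTri = [[0, 0], [2, 0], [0, 3]];
// The source triangle's centroid maps onto the destination triangle's centroid.
const warped = mapThroughTriangle([1 / 3, 1 / 3], srcTri, dstTri); // ≈ [2/3, 1]
```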
Perform a simple <b>Affine Transform</b> and apply it on a <code>HTMLElement</code>.
...
// Set the reference points from which to estimate the transform
const srcPoints = [[0, 0], [0, 1], [1, 0]];
const dstPoints = [[0, 0], [1/2, 1], [1, 1/8]];
// Don't specify the type of transform to apply; let the library decide it by itself.
const myHomography = new Homography(); // Default transform value is "auto".
// Apply the transform over an HTMLElement from the DOM.
myHomography.transformHTMLElement(document.getElementById("inputText"), srcPoints, dstPoints);
...
<p align="center"><img src="./Documentation/exampleImages/AffineTransformOnHTMLElement.PNG" width="30%"></p>
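For context, estimating an <b>Affine</b> transform amounts to solving for the six coefficients of a 2x3 matrix from three point correspondences. With the unit-triangle source points used above, the coefficients can even be read off directly. This is an <i>illustrative sketch</i> of the underlying math, not the library's internal code:

```javascript
// Estimate the 2x3 affine matrix mapping the unit-triangle source points
// (0,0), (0,1), (1,0) onto three destination points d0, d1, d2.
// Transform: [x', y'] = [a*x + c*y + e, b*x + d*y + f]
function affineFromUnitTriangle([d0, d1, d2]) {
  const e = d0[0], f = d0[1];          // image of (0, 0): the translation
  const c = d1[0] - e, d = d1[1] - f;  // image of (0, 1) minus the translation
  const a = d2[0] - e, b = d2[1] - f;  // image of (1, 0) minus the translation
  return { a, b, c, d, e, f };
}

function applyAffine({ a, b, c, d, e, f }, [x, y]) {
  return [a * x + c * y + e, b * x + d * y + f];
}

const dstPoints = [[0, 0], [1 / 2, 1], [1, 1 / 8]]; // same as the example above
const M = affineFromUnitTriangle(dstPoints);
console.log(applyAffine(M, [1, 0])); // → [1, 0.125]
```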
Calculate 250 different <b>Projective Transforms</b>, apply them over the same input <code>Image</code> and draw them on a canvas.
const ctx = document.getElementById("exampleCanvas").getContext("2d");
// Build the initial reference points (in this case, in image coordinates just for convenience)
const srcPoints = [[0, 0], [0, h], [w, 0], [w, h]];
let dstPoints = [[0, 0], [0, h], [w, 0], [w, h]];
// Create the homography object (it is not necessary to set transform as "projective" as it will be automatically detected)
const myHomography = new Homography();
// Set the static parameters of the whole transform sequence (this improves the performance of subsequent warpings)
myHomography.setSourcePoints(srcPoints);
myHomography.setImage(inputImg);
// Set the parameters for building the future dstPoints at each frame (5 movements of 50 frames each)
const framesPerMovement = 50;
const movements = [[[0, h/5], [0, -h/5], [0, 0], [0, 0]],
[[w, 0], [w, 0], [-w, 0], [-w, 0]],
[[0, -h/5], [0, h/5], [0, h/5], [0, -h/5]],
[[-w, 0], [-w, 0], [w, 0], [w, 0]],
[[0, 0], [0, 0], [0, -h/5], [0, h/5]]];
for(let movement = 0; movement<movements.length; movement++){
for (let step = 0; step<framesPerMovement; step++){
// Create the new dstPoints (in Computer Vision applications these points will usually come from webcam detections)
for (let point = 0; point<srcPoints.length; point++){
dstPoints[point][0] += movements[movement][point][0]/framesPerMovement;
dstPoints[point][1] += movements[movement][point][1]/framesPerMovement;
}
// Update the destiny points and calculate the new warping.
myHomography.setDestinyPoints(dstPoints);
const img = myHomography.warp(); // warp() without parameters reuses the previously set image
// Clear the canvas and draw the new image (using putImageData instead of drawImage for performance reasons)
ctx.clearRect(0, 0, w, h);
ctx.putImageData(img, Math.min(dstPoints[0][0], dstPoints[2][0]), Math.min(dstPoints[0][1], dstPoints[2][1]));
await new Promise(resolve => setTimeout(resolve, 0.1)); // Just a trick for forcing canvas to refresh
}
}
<i>*Just pay attention to the use of <code>setSourcePoints(srcPoints)</code>, <code>setImage(inputImg)</code>, <code>setDestinyPoints(dstPoints)</code> and <code>warp()</code>. The rest of the code just generates a coherent sequence of destiny points and draws the results.</i>
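For context, estimating a <b>Projective</b> transform from four correspondences amounts to solving an 8x8 linear system for the coefficients of a 3x3 matrix (a Direct Linear Transform with the last coefficient fixed to 1). The following is an <i>illustrative sketch</i> of that math in plain Javascript, not the library's internal code:

```javascript
// Solve A·x = b by Gauss-Jordan elimination with partial pivoting.
function solveLinear(A, b) {
  const n = b.length;
  const M = A.map((row, i) => [...row, b[i]]); // augmented matrix
  for (let col = 0; col < n; col++) {
    let pivot = col;
    for (let r = col + 1; r < n; r++)
      if (Math.abs(M[r][col]) > Math.abs(M[pivot][col])) pivot = r;
    [M[col], M[pivot]] = [M[pivot], M[col]];
    for (let r = 0; r < n; r++) {
      if (r === col) continue;
      const factor = M[r][col] / M[col][col];
      for (let k = col; k <= n; k++) M[r][k] -= factor * M[col][k];
    }
  }
  return M.map((row, i) => row[n] / row[i]); // diagonal system: back-substitute
}

// Estimate the 3x3 homography (flattened, row-major, h22 = 1) that maps
// four source points onto four destination points.
function projectiveFromPoints(src, dst) {
  const A = [], b = [];
  for (let i = 0; i < 4; i++) {
    const [x, y] = src[i], [X, Y] = dst[i];
    A.push([x, y, 1, 0, 0, 0, -x * X, -y * X]); b.push(X);
    A.push([0, 0, 0, x, y, 1, -x * Y, -y * Y]); b.push(Y);
  }
  return [...solveLinear(A, b), 1];
}

// Apply the homography to a point (divide by the projective coordinate).
function applyHomography(H, [x, y]) {
  const w = H[6] * x + H[7] * y + H[8];
  return [(H[0] * x + H[1] * y + H[2]) / w, (H[3] * x + H[4] * y + H[5]) / w];
}
```

Given <code>H = projectiveFromPoints(src, dst)</code>, <code>applyHomography(H, src[i])</code> returns <code>dst[i]</code> up to floating point error.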
<p align="center"><img src="./Documentation/exampleImages/ProjectiveTransformVideo.gif" width="30%"></p>
API Reference
new Homography([transform = "auto", width, height])
Main class for performing geometrical transformations over images.
Homography is in charge of applying <a href="https://en.wikipedia.org/wiki/Affine_transformation" target="_blank">Affine</a>, <a href="https://en.wikipedia.org/wiki/Homography" target="_blank">Projective</a> or <a href="https://en.wikipedia.org/wiki/Piecewise_linear_function" target="_blank">Piecewise Affine</a> transformations over images, in a way that is as transparent and simple to the user as possible. It is especially intended for <i>real-time applications</i>; for this reason, the class keeps an internal state to avoid redundant operations when reused, so it performs best when multiple transformations are applied over the same <i>image</i>.