In late 2010 I gave a presentation on the then-new HTML5
canvas element. I briefly touched on the pixel manipulation capabilities it offered, which inspired me to play around further afterwards.
I realized that the ability to run arbitrary functions over every pixel in a defined area meant that common image manipulation effects were a real possibility, so I rolled up my sleeves and set to work turning this vague idea into something concrete.
Over a couple of months of evenings I went deep on image processing algorithms and performance optimization, learning how to implement things like convolution matrices and bit shifting to create a lightweight, browser-based image manipulation library. This was also the first true open source project I launched, which garnered a lot of attention and some useful contributions back upstream.
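Convolution matrices were one of the core techniques involved. As a rough illustration of the idea (this is a generic sketch, not PaintbrushJS's actual code; the function name and edge-clamping behaviour are my own choices), a 3×3 kernel is swept across the image and each output pixel becomes a weighted sum of its neighbourhood:

```javascript
// Minimal 3x3 convolution over a grayscale pixel buffer.
// Illustrative only -- not the library's real API.
function convolve3x3(pixels, width, height, kernel) {
  const out = new Array(pixels.length).fill(0);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let sum = 0;
      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          // Clamp sampling at the edges so border pixels reuse their neighbours.
          const sy = Math.min(height - 1, Math.max(0, y + ky));
          const sx = Math.min(width - 1, Math.max(0, x + kx));
          sum += pixels[sy * width + sx] * kernel[(ky + 1) * 3 + (kx + 1)];
        }
      }
      // Clamp to the 0-255 byte range canvas pixel data requires.
      out[y * width + x] = Math.min(255, Math.max(0, Math.round(sum)));
    }
  }
  return out;
}

// An identity kernel leaves the image unchanged;
// a box-blur kernel averages each pixel with its neighbours.
const identity = [0, 0, 0, 0, 1, 0, 0, 0, 0];
const boxBlur = new Array(9).fill(1 / 9);
```

Swapping the kernel swaps the effect: blurs, sharpens, and edge detection all fall out of the same loop, which is what makes convolution such a good foundation for a small filter library.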
Beyond the main image processing functions, another problem to solve was the interface: pulling a document's images into the
canvas element and getting them back into the document without requiring much configuration on the author's part.
The technique I hit upon has the script check for a defined list of filter classes that tell it where to apply each filter. For each matching element the script pulls in the image data, whether it finds an
img element or a CSS background image, renders it to an off-screen
canvas element, applies the filter, captures the canvas contents as a data URI, and replaces the original source with the processed image data.
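That pipeline can be sketched roughly like this. The class name, function names, and the brightness filter here are all illustrative stand-ins rather than PaintbrushJS's real API, and a production version would also need to handle CSS background images and the same-origin restrictions on reading canvas pixels:

```javascript
// Pure pixel step: brighten RGBA data in place.
function brighten(data, amount) {
  for (let i = 0; i < data.length; i += 4) {
    data[i] = Math.min(255, data[i] + amount);         // red
    data[i + 1] = Math.min(255, data[i + 1] + amount); // green
    data[i + 2] = Math.min(255, data[i + 2] + amount); // blue
    // data[i + 3] is alpha; left untouched
  }
  return data;
}

// DOM wiring: find tagged images, run them through an off-screen
// canvas, and swap the processed result back in as a data URI.
function applyBrightnessFilters(amount) {
  document.querySelectorAll("img.filter-brightness").forEach(img => {
    const canvas = document.createElement("canvas"); // never attached to the page
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    const ctx = canvas.getContext("2d");
    ctx.drawImage(img, 0, 0);
    const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    brighten(imageData.data, amount);
    ctx.putImageData(imageData, 0, 0);
    img.src = canvas.toDataURL(); // replace the original source
  });
}
```

The nice property of this approach is that authors only add a class name to markup they already have; all of the canvas plumbing stays hidden inside the script.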
The more recent W3C filter effects spec has made PaintbrushJS mostly redundant today, but in 2010 this was the only way to accomplish these effects on the fly without the assistance of a heavy server-side library.