This demo shows a photomosaic (& videomosaic) implemented in hardware, i.e., in a fragment shader:

(press r to randomly pick another painting)

Strategy

A palette is built in software: an array of paintings sorted in ascending order by a selected lightness metric. The low-resolution texel produced by the pixelator (specifically, texture2D(source, stepCoord)) is then used as a lookup key to sample the palette when the fragment shader renders the mosaic tiles.

flowchart LR
  A[/mosaic shader/] --> E[texture mapping]
  B[create palette] --> E
  C[/paintings/] --> B
  D[/"source (target image or video)"/] --> E
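For instance, the tile-filling lookup in the fragment shader might boil down to the sketch below. The varyings stepCoord and tileCoord are assumed to be supplied by the pixelator (the stepped key coordinate and the tile-local coordinate, respectively), and the uniform n is the palette size; all names besides texture2D are illustrative:

precision mediump float;

// assumed to be supplied by the pixelator's vertex stage
varying vec2 stepCoord; // tile-stepped (quantized) texture coordinate
varying vec2 tileCoord; // local coordinate within the current tile

uniform sampler2D source;  // target image or video
uniform sampler2D palette; // n paintings rendered side by side
uniform float n;           // number of paintings in the palette

void main() {
  // one low-resolution texel per mosaic tile: the lookup key
  vec4 key = texture2D(source, stepCoord);
  // lightness of the key in [0, 1] (Rec. 601 luma, for illustration)
  float luma = dot(key.rgb, vec3(0.299, 0.587, 0.114));
  // the palette is sorted by lightness, so darker keys map to darker paintings
  float index = floor(luma * (n - 1.0) + 0.5);
  // sample the selected painting at the tile's local coordinate
  gl_FragColor = texture2D(palette, vec2((index + tileCoord.x) / n, tileCoord.y));
}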

Palette creation using p5.quadrille.js

Each painting in the data set is first pushed onto a paintings array:

let paintings;
let n; // number of paintings in the data set

function preload() {
  // ...
  paintings = [];
  for (let i = 1; i <= n; i++) {
    paintings.push(loadImage(`p${i}.jpg`));
  }
}

The resulting paintings array is used to create a palette quadrille, which is sorted in ascending order by cell-average LUMA (other lightness metrics may be employed as the sorting criterion as well), and then rendered side by side onto a single offscreen p5.Graphics buffer:

const SAMPLE_RES = 90; // sampling resolution of each palette cell, in pixels
let palette;
let pg;

function setup() {
  // ...
  palette = createQuadrille(paintings);
  // one row of cells, SAMPLE_RES pixels per cell
  pg = createGraphics(SAMPLE_RES * palette.width, SAMPLE_RES);
  // sort cells in ascending order of cell-average LUMA
  palette.sort({ ascending: true, cellLength: SAMPLE_RES });
  // render the sorted palette side by side onto the offscreen buffer
  drawQuadrille(palette, { graphics: pg, cellLength: SAMPLE_RES, outlineWeight: 0 });
}
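For reference, the sorting key amounts to the average luma of each painting, which p5.quadrille.js computes internally; a hand-rolled equivalent (using Rec. 601 coefficients, one common choice) would look like this sketch:

function averageLuma(img) {
  // a minimal sketch of the sorting criterion, assuming Rec. 601 luma
  img.loadPixels();
  let sum = 0;
  // img.pixels is a flat RGBA array
  for (let i = 0; i < img.pixels.length; i += 4) {
    sum += 0.299 * img.pixels[i] + 0.587 * img.pixels[i + 1] + 0.114 * img.pixels[i + 2];
  }
  return sum / (img.pixels.length / 4);
}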

The resulting pg, which should look like this:

is emitted as the uniform sampler2D palette to the shader, where it is sampled to fill the mosaic tiles.
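Host-side, the wiring in draw() could look like the sketch below, where mosaicShader and video are assumptions (a shader loaded with loadShader() and a source image or video, respectively); note that setUniform accepts a p5.Graphics as a texture and that shader() requires a WEBGL canvas:

let mosaicShader; // assumed: loaded in preload() via loadShader()
let video;        // assumed: the source target image or video

function draw() {
  shader(mosaicShader);
  // feed the palette strip and the source as texture uniforms
  mosaicShader.setUniform('palette', pg);
  mosaicShader.setUniform('source', video);
  // a full-canvas quad (WEBGL-centered coords) so every fragment runs the lookup
  rect(-width / 2, -height / 2, width, height);
}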
