I am creating this post as notes from the Wes Bos JavaScript course, which you can sign up for and do with me here: https://beginnerjavascript.com/. Here is a link to Wes' GitHub readme.md.
Today we are going to build a Face Detection and Censorship App that you can use to show your webcam but with a blurred out face!
If you haven't already tried out the Etch-A-Sketch Project, we lean on some of the knowledge built there with `canvas` and `ctx`. It would be a good project to undertake before attempting this one.
This should be pretty fun, but there are limitations on which browsers support `FaceDetector()`, which at the time of writing is a proposed API. I suggest using Chrome or the Android browser. Here are some prerequisites:
1. You may need to enable flags. Head over to `about:flags`, search for "Experimental Web Platform Features", enable it, and restart your browser. Test that it's enabled by running `typeof FaceDetector` in the console; you should see `"function"`.
2. You also need a server. Due to security features, you will need to load this project from a server like Parcel's dev server or AWS Cloud9.
Getting started:
If you don't already have `node` and `npm` installed (you can check by running `node -v && npm -v` in the terminal), you need to install them before continuing. Note: when I say "Run", I mean from the terminal while in the directory of this project. I recommend creating a folder named `face-detection-and-censorship` and staying in that folder for the duration of this project.
At the time of writing, my versions are v13.5.0 and 6.13.4 respectively.
Package.json
Next, we will want to create our `package.json` file to allow for easy installation later. This file is super helpful: if you followed my previous guide, TIL: "HOW TO NOT PUSH YOUR `NODE_MODULES` TO GITHUB", you'll know it's not a good idea to include your `node_modules` in GitHub commits. The `package.json` file basically says "these are the dependencies for this project", and running `npm install` will reinstall the `node_modules` again later.
Run: `cd face-detection-and-censorship && npm init -y`

This will create the `package.json` file inside the `face-detection-and-censorship` directory without prompting you to edit any fields, thanks to the `-y` flag.
Install Parcel
Run: `npm install parcel --save`

This will install Parcel and add an entry to your `package.json` file.
Create some files
Create an HTML file named `index.html`:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Censorship</title>
  <link rel="stylesheet" href="base.css">
</head>
<body>
  <div class="wrap">
    <video class="webcam"></video>
    <canvas class="video"></canvas>
    <canvas class="face"></canvas>
  </div>
  <script src="app.js"></script>
</body>
</html>
```
Create a CSS file named `base.css`:

```css
/* normalize.css v8.0.1 | MIT License | github.com/necolas/normalize.css */
button,hr,input{overflow:visible}progress,sub,sup{vertical-align:baseline}[type=checkbox],[type=radio],legend{box-sizing:border-box;padding:0}html{line-height:1.15;-webkit-text-size-adjust:100%}body{margin:0}details,main{display:block}h1{font-size:2em;margin:.67em 0}hr{box-sizing:content-box;height:0}code,kbd,pre,samp{font-family:monospace,monospace;font-size:1em}a{background-color:transparent}abbr[title]{border-bottom:none;text-decoration:underline;text-decoration:underline dotted}b,strong{font-weight:bolder}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative}sub{bottom:-.25em}sup{top:-.5em}img{border-style:none}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;line-height:1.15;margin:0}button,select{text-transform:none}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner,button::-moz-focus-inner{border-style:none;padding:0}[type=button]:-moz-focusring,[type=reset]:-moz-focusring,[type=submit]:-moz-focusring,button:-moz-focusring{outline:ButtonText dotted 1px}fieldset{padding:.35em .75em .625em}legend{color:inherit;display:table;max-width:100%;white-space:normal}textarea{overflow:auto}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}[hidden],template{display:none}

/* Variables */
html {
  --grey: #e7e7e7;
  --gray: var(--grey);
  --blue: #0072B9;
  --pink: #D60087;
  --yellow: #ffc600;
  --black: #2e2e2e;
  --red: #c73737;
  --green: #61e846;
  --text-shadow: 2px 2px 0 rgba(0,0,0,0.2);
  --box-shadow: 0 0 5px 5px rgba(0,0,0,0.2);
  font-size: 62.5%;
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
  box-sizing: border-box;
}

*, *:before, *:after { box-sizing: inherit; }

body {
  font-size: 2rem;
  line-height: 1.5;
  background-color: var(--blue);
  background-image: url("data:image/svg+xml,%3Csvg width='20' height='100' xmlns='http://www.w3.org/2000/svg'%3E%3Cpath d='M0 21.184c.13.357.264.72.402 1.088l.661 1.768C4.653 33.64 6 39.647 6 50c0 10.271-1.222 15.362-4.928 24.629-.383.955-.74 1.869-1.072 2.75v6.225c.73-2.51 1.691-5.139 2.928-8.233C6.722 65.888 8 60.562 8 50c0-10.626-1.397-16.855-5.063-26.66l-.662-1.767C1.352 19.098.601 16.913 0 14.85v6.335zm20 0C17.108 13.258 16 8.077 16 0h2c0 5.744.574 9.951 2 14.85v6.334zm0 56.195c-2.966 7.86-4 13.123-4 22.621h2c0-6.842.542-11.386 2-16.396v-6.225zM6 0c0 8.44 1.21 13.718 4.402 22.272l.661 1.768C14.653 33.64 16 39.647 16 50c0 10.271-1.222 15.362-4.928 24.629C7.278 84.112 6 89.438 6 100h2c0-10.271 1.222-15.362 4.928-24.629C16.722 65.888 18 60.562 18 50c0-10.626-1.397-16.855-5.063-26.66l-.662-1.767C9.16 13.223 8 8.163 8 0H6z' fill='%23fff' fill-rule='nonzero' fill-opacity='.1' opacity='.349'/%3E%3C/svg%3E%0A");
  background-size: 15px;
}

/* Table Styles */
table { border-radius: 5px; overflow: hidden; margin-bottom: 2rem; border-collapse: collapse; }
td, th { border: 1px solid var(--grey); padding: 0.5rem; }

/* Helper Divs */
.wrapper { max-width: 1000px; margin: 4rem auto; padding: 2rem; background: white; }
.box, .wrapper { box-shadow: 0 0 3px 5px rgba(0,0,0,0.08653); }

a { color: var(--blue); text-decoration-color: var(--yellow); }
a.button, button, input[type="button"] { color: white; background: var(--pink); padding: 1rem; border: 2px solid transparent; text-decoration: none; font-weight: 600; font-size: 2rem; }
:focus { outline-color: var(--pink); }
fieldset { border: 1px solid black; }
input:not([type="checkbox"]):not([type="radio"]), textarea, select { display: block; padding: 1rem; border: 1px solid var(--grey); }
.success { border: 1px solid red; }
h1, h2, h3, h4, h5, h6 { color: white; margin-top: 0; line-height: 1; text-shadow: var(--text-shadow); }
.wrapper h1, .wrapper h2, .wrapper h3, .wrapper h4, .wrapper h5, .wrapper h6 { color: var(--black); text-shadow: none; }

* { box-sizing: border-box; }
body { margin: 0; }
.wrap { position: relative; min-height: 100vh; display: grid; justify-content: center; align-items: center; }
.wrap > * { grid-column: 1; grid-row: 1; }
.face { position: absolute; }
```
Lastly, create a blank `app.js` file.
Head to the `package.json` file, edit the scripts line (should be line 7), and save. It should look something like this:
{ "name": "face-detection-and-censorship", "version": "1.0.0", "description": "", "main": "app.js", "scripts": { "start": "parcel index.html" }, "author": "", "license": "ISC", "dependencies": { "parcel": "^1.12.4" } }
Run `npm start`, which will run `parcel index.html` (note: to run this, you have to be in this project folder). You should get a link in your terminal to the live project on your own computer (for example: http://localhost:1234).
Create The Video In the Browser Window
Open up your `app.js` file and make sure it's working by adding `console.log('it works');` or something like that, then checking the console on the live server page.
Write A Function That Will Populate The User's Video
To run `navigator.mediaDevices` with async/await through Parcel (which transpiles that syntax), you need a node module you should already have installed called `regenerator-runtime`. To avoid indecipherable errors later, simply create a constant and require the package like this:

```js
const regeneratorRuntime = require('regenerator-runtime');
```
You will need to select the video element from the HTML like this:

```js
const video = document.querySelector('.webcam');
```
Create a function to get the user's video. In it, we create a constant called `stream` that is populated on load: we pass `getUserMedia()` options (you could request audio, but in this case we only want video), specifically the video's width and height inside a `video` property object. Then we set the video element's source object (`srcObject`) to the `stream` variable we just created, and from there we tell the video to play. Check out what that looks like below:
```js
const regeneratorRuntime = require('regenerator-runtime');

const video = document.querySelector('.webcam');

// POPULATE THE VIDEO AND PLAY IT
async function populateVideo() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 },
  });
  video.srcObject = stream; // VIDEO SOURCE
  await video.play(); // PLAY IT
}

populateVideo(); // CALL THE FUNCTION
```
Note: the function above makes a promise and `await`s it. This isn't covered here, but hopefully you can see that it creates an asynchronous (or `async`) function that waits for the promise to resolve before assigning the video source. I'll be covering this topic more in depth in a later post and will update this post with a link to it.
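If promises are brand new to you, here is a tiny standalone sketch of the same pattern; `fakeCamera` is a made-up stand-in for `getUserMedia()`, just for illustration:

```js
// A promise that resolves after one second, standing in for a slow API call.
function fakeCamera() {
  return new Promise(resolve => setTimeout(() => resolve('the stream'), 1000));
}

async function demo() {
  const stream = await fakeCamera(); // execution pauses here until the promise resolves
  console.log(stream); // logs 'the stream' after ~1 second
}

demo();
```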
Run `npm start`. You should now see a popup requesting permission for the browser to use your camera; accept it! You should then see your web camera in the browser window.
Aside: Global Variables Through A Bundler:
You will likely not be able to access the `populateVideo()` function globally via the console because you are using the Parcel bundler. An easy way to work around this is to `console.log(populateVideo);`, then two-finger/right-click the logged function in the console and select "Store as global variable".
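Alternatively, you can expose the function on `window` yourself while developing. This is just a debugging convenience (my own habit, not from the course), not something you'd ship:

```js
// Make populateVideo reachable from the DevTools console despite the bundler's module scope.
window.populateVideo = populateVideo;
```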
Size The Canvases To Be The Same Size As The Video
As mentioned above, we lean on knowledge built in the Etch-A-Sketch Project; since you are already here, I'll be glossing over the `ctx` and `canvas` bits.
From the terminal, press ⌃C to kill the running server for now.
If you run `console.log(video.videoWidth, video.videoHeight);` you will see `1280 720`. So let's make the canvas sizes match the video size. We are going to create some new variables for the canvases and then match the canvas `width` and `height` properties to the properties we just console logged.
```js
const regeneratorRuntime = require('regenerator-runtime');

const video = document.querySelector('.webcam');
const canvas = document.querySelector('.video');
const faceCanvas = document.querySelector('.face');

// POPULATE THE VIDEO AND PLAY IT
async function populateVideo() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 },
  });
  video.srcObject = stream; // VIDEO SOURCE
  await video.play(); // PLAY IT
  // RESIZE CANVAS SIZE TO MATCH VIDEO SIZE
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  faceCanvas.width = video.videoWidth;
  faceCanvas.height = video.videoHeight;
}

populateVideo(); // CALL THE FUNCTION
```
Face Detection
Let's create a function for detecting a face.
```js
const regeneratorRuntime = require('regenerator-runtime');

const video = document.querySelector('.webcam');
const canvas = document.querySelector('.video');
const faceCanvas = document.querySelector('.face');
const faceDetector = new window.FaceDetector();

// POPULATE THE VIDEO AND PLAY IT
async function populateVideo() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 },
  });
  video.srcObject = stream; // VIDEO SOURCE
  await video.play(); // PLAY IT
  // RESIZE CANVAS SIZE TO MATCH VIDEO SIZE
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  faceCanvas.width = video.videoWidth;
  faceCanvas.height = video.videoHeight;
}

// FACE DETECTION
async function detect() {
  const faces = await faceDetector.detect(video);
  console.log(faces);
}

populateVideo().then(detect); // removed the bare populateVideo() call
```
You will be able to start the server again and see `[DetectedFace]` in the console log. But this only happens on load. We need to re-run the `detect()` function more often; we will do this with `requestAnimationFrame()`, recursively passing `detect` as the callback.
It's like saying, "Hey browser, if you're going to the store later, pick up a couple of things for me."
Wes Bos
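Stripped of the face detection, the recursive `requestAnimationFrame` pattern is just this minimal sketch:

```js
function loop() {
  // ...do the per-frame work here...
  requestAnimationFrame(loop); // ask the browser to call loop again before the next repaint
}
requestAnimationFrame(loop); // kick off the loop
```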
```js
const regeneratorRuntime = require('regenerator-runtime');

const video = document.querySelector('.webcam');
const canvas = document.querySelector('.video');
const faceCanvas = document.querySelector('.face');
const faceDetector = new window.FaceDetector();

// POPULATE THE VIDEO AND PLAY IT
async function populateVideo() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 },
  });
  video.srcObject = stream;
  await video.play();
  // RESIZE CANVAS SIZE TO MATCH VIDEO SIZE
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  faceCanvas.width = video.videoWidth;
  faceCanvas.height = video.videoHeight;
}

// FACE DETECTION
async function detect() {
  const faces = await faceDetector.detect(video);
  console.log(faces.length);
  // REQUEST WHEN THE NEXT ANIMATION FRAME IS AND THEN RUN DETECT()
  requestAnimationFrame(detect); // recursive - it will run until it is stopped explicitly
}

populateVideo().then(detect);
```
If anyone has issues getting the `FaceDetector` to detect their face, try passing in the option `fastMode: true` to the constructor. My `faces` array was empty for a long time but this fixed it! The constructor will look like this: `const faceDetector = new FaceDetector({ fastMode: true });`
Mark Sauer-Utley (slack link included)
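For reference, the proposed Shape Detection spec also accepts a `maxDetectedFaces` option alongside `fastMode`; as I read the draft, the constructor can look like the sketch below (treat the exact option names as subject to change while the API is experimental):

```js
// Trade some accuracy for speed, and cap how many faces are returned per detect() call.
const faceDetector = new FaceDetector({
  fastMode: true,
  maxDetectedFaces: 1,
});
```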
On load, you should see a continuously updating count of detected faces in the console log.
Draw Your Face
First, you have to run a loop for each face like this:
```js
// FACE DETECTION
async function detect() {
  const faces = await faceDetector.detect(video);
  faces.forEach(drawFace);
  requestAnimationFrame(detect);
}
```
Create a function that will output the dimensions of the detected face like this:
```js
function drawFace(face) {
  const { width, height, top, left } = face.boundingBox;
  console.log({ width, height, top, left });
}
```
It'll look something like this:
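The exact numbers depend on where your face sits in the frame; an illustrative console entry might be:

```js
// Example console output (values are illustrative and update every frame):
// { width: 317, height: 317, top: 211, left: 466 }
```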
Draw A Box Around Your Face
Add two variables at the top of the page for the `ctx` of each canvas (remember this from the Etch-A-Sketch project?):
```js
const ctx = canvas.getContext('2d');
const faceCtx = faceCanvas.getContext('2d');
```
Inside the `drawFace` function, start drawing the box:
```js
function drawFace(face) {
  const { width, height, top, left } = face.boundingBox;
  // console.log({ width, height, top, left });
  ctx.strokeRect(left, top, width, height); // draws a box around your face
}
```
The box around the face is a little hard to see, so let's change the color to yellow and clear out the old boxes so you only see the active one, making it look a little more like this:
```js
function drawFace(face) {
  const { width, height, top, left } = face.boundingBox;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.strokeStyle = '#ffc600';
  ctx.lineWidth = 5;
  ctx.strokeRect(left, top, width, height);
}
```
Censor Your Face Box
To pixelate the face on top of the video, let's create a `censor()` function. You will also want to run another `forEach` loop in the face detection section, like this:
```js
const regeneratorRuntime = require('regenerator-runtime');

const video = document.querySelector('.webcam');
const canvas = document.querySelector('.video');
const faceCanvas = document.querySelector('.face');
const ctx = canvas.getContext('2d');
const faceCtx = faceCanvas.getContext('2d');
const faceDetector = new window.FaceDetector({ fastMode: true });

// POPULATE THE VIDEO AND PLAY IT
async function populateVideo() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 },
  });
  video.srcObject = stream; // VIDEO SOURCE
  await video.play(); // PLAY IT
  // RESIZE CANVAS SIZE TO MATCH VIDEO SIZE
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  faceCanvas.width = video.videoWidth;
  faceCanvas.height = video.videoHeight;
}

// FACE DETECTION
async function detect() {
  const faces = await faceDetector.detect(video);
  // console.dir(faces.length);
  faces.forEach(drawFace);
  faces.forEach(censor); // *************
  // REQUEST WHEN THE NEXT ANIMATION FRAME IS AND THEN RUN DETECT
  requestAnimationFrame(detect); // recursive (calls itself) - runs until stopped explicitly
}

// DRAW A BOX AROUND THE FACE
function drawFace(face) {
  const { width, height, top, left } = face.boundingBox;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.strokeStyle = '#ffc600';
  ctx.lineWidth = 5;
  ctx.strokeRect(left, top, width, height);
}

// BLUR THE FACE
function censor({ boundingBox: face }) {
  console.log(face);
}

populateVideo().then(detect);
```
The question is how to censor it. The best way to do it, in Wes' experience, is to take a super small snapshot of the face (causing you to lose the resolution) and then blow it back up. Let's see how that looks:
```js
const regeneratorRuntime = require('regenerator-runtime');

const video = document.querySelector('.webcam');
const canvas = document.querySelector('.video');
const faceCanvas = document.querySelector('.face');
const ctx = canvas.getContext('2d');
const faceCtx = faceCanvas.getContext('2d');
const faceDetector = new window.FaceDetector({ fastMode: true });
// ****IMPORTANT DON'T MISS THIS
const SIZE = 10;
const SCALE = 2;
// ****IMPORTANT DON'T MISS THIS

// POPULATE THE VIDEO AND PLAY IT
async function populateVideo() {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 1280, height: 720 },
  });
  video.srcObject = stream; // VIDEO SOURCE
  await video.play(); // PLAY IT
  // RESIZE CANVAS SIZE TO MATCH VIDEO SIZE
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  faceCanvas.width = video.videoWidth;
  faceCanvas.height = video.videoHeight;
}

// FACE DETECTION
async function detect() {
  const faces = await faceDetector.detect(video);
  faces.forEach(drawFace);
  faces.forEach(censor);
  requestAnimationFrame(detect); // recursive - runs until stopped explicitly
}

// DRAW A BOX AROUND THE FACE
function drawFace(face) {
  const { width, height, top, left } = face.boundingBox;
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.strokeStyle = '#ffc600';
  ctx.lineWidth = 5;
  ctx.strokeRect(left, top, width, height);
}

// PIXELATE THE FACE
function censor({ boundingBox: face }) {
  faceCtx.clearRect(0, 0, faceCanvas.width, faceCanvas.height);
  faceCtx.imageSmoothingEnabled = false;
  // draw the face at a tiny size, throwing away resolution
  faceCtx.drawImage(
    video, // source
    face.x, face.y, face.width, face.height, // where to sample from
    face.x, face.y, SIZE, SIZE, // draw it at SIZE x SIZE pixels
  );
  // scale the tiny version back up over the face
  const width = face.width * SCALE;
  const height = face.height * SCALE;
  faceCtx.drawImage(
    faceCanvas, // source is now the tiny face we just drew
    face.x, face.y, SIZE, SIZE,
    face.x - (width - face.width) / 2,
    face.y - (height - face.height) / 2,
    width, // blows it back up
    height,
  );
}

populateVideo().then(detect);
```
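If you want chunkier or finer censorship, those two constants at the top are the knobs to turn: `SIZE` is how many pixels the snapshot gets squashed down to (smaller means blockier), and `SCALE` controls how far the censored square spills past the detected box. For example, a sketch of my own tweaks (not Wes' values):

```js
const SIZE = 6;    // squash the face to 6x6 pixels - chunkier blocks
const SCALE = 1.5; // censored area is 1.5x the detected face box
```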
My "Ask" and Final Thoughts
Making this project was rather challenging; I had to work through some major errors I came across. One was that my blog wouldn't allow editing; the others I tried to outline in the quotes above - specifically, the detector wasn't finding my face. I'd almost entirely given up and was going to scrap this project, but a quick search in the Wes Bos Slack channel found the problem - which highlights one of the benefits of this course. Having other developers/students to lean on when you have trouble is so incredibly valuable.
If you found this article helpful, share/retweet it and follow me on Twitter @codingwithdrewk! There is so much more in Wes' courses that I think you will find valuable, as I have. I'm learning so much and really enjoying the course; Wes has an amazingly simple way of explaining the difficult bits of JavaScript that other courses I've taken could only wish for. You can view the course over at WesBos.com. (I am in no way getting any referrals or kickbacks for recommending this.)
Drew is a seasoned DevOps Engineer with a rich background that spans multiple industries and technologies. With foundational training as a Nuclear Engineer in the US Navy, Drew brings a meticulous approach to operational efficiency and reliability. His expertise lies in cloud migration strategies, CI/CD automation, and Kubernetes orchestration. Known for a keen focus on facts and correctness, Drew is proficient in a range of programming languages including Bash and JavaScript. His diverse experiences, from serving in the military to working in the corporate world, have equipped him with a comprehensive worldview and a knack for creative problem-solving. Drew advocates for streamlined, fact-based approaches in both code and business, making him a reliable authority in the tech industry.