Sketch 4: Facemosh
Image: Still image from playtesting YMOC Sketch 3: Facemosh
Facemosh is a two-person sketch. Each person is represented by an image of just their face, extracted using the MediaPipe Facemesh algorithm. The face images are layered directly on top of each other with partial opacity so that they blend. The point between each participant’s eyes (as determined by the algorithm) is aligned at the center of the canvas. As a user moves their head left to right across the canvas, the opacity of the images changes. If they move their head all the way to the left, they see only their own face. If they move their head all the way to the right, they see only their partner’s face. In the middle they see a 50/50 blend of the two faces.
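The crossfade described above can be sketched as a simple mapping from the tracked between-the-eyes x position to a pair of opacity weights. This is a minimal illustration, not the actual implementation; the function name and parameters are hypothetical:

```javascript
// Hypothetical helper: map the x position of the point between the eyes
// to blend weights for the two face images.
// eyeX = 0 (far left)  -> only my face
// eyeX = canvasWidth   -> only my partner's face
function blendWeights(eyeX, canvasWidth) {
  // Clamp to [0, 1] so heads that leave the frame don't produce weird values.
  const t = Math.min(Math.max(eyeX / canvasWidth, 0), 1);
  return { self: 1 - t, partner: t }; // the two weights always sum to 1
}
```

In a p5.js draw loop, weights like these could feed `tint(255, w * 255)` before each `image()` call to fade the two layered faces against each other.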
Image: Still image from playtesting with Pierre.
Image: Still image from playtesting with Pierre. As I move my head to the right my image fades and Pierre’s becomes more prominent.
I playtested this sketch with Pierre Drescsher and Renee Carmichael. The playtest with Pierre was successful; the playtest with Renee was not (more on that later). Pierre and I spent approximately five minutes on the site together. We were on the same local network, and we both experienced little to no visible latency. He was using a MacBook Pro/Chrome. I was using an MSI gaming PC/Chrome. Here are my notes from the experience:
- The experience was surprisingly fun and playful. We spent several minutes and lost ourselves in the experience.
- The Facemesh algorithm provides consistently good, fast results.
Image: Still image from playtesting with Renee. Facemesh was showing the outline of Renee’s face with pixels from a part of the image that was not her face.
While playtesting with Renee I ran into a number of bugs. First, the WebRTC connection was not working. I received several errors that I had not seen before. There appeared to be a problem with the TURN server I was using, and there was also a CORS/HTTPS error. I had previously tested this same server/client code with Nun in Brooklyn without errors. I need to investigate what caused these errors.
We were eventually able to get the sketch working using HTTPS in an Incognito window in Chrome. Unfortunately, we continued to run into errors with the pixels displayed from Facemesh. I believe this is tied to an issue in the Facemesh implementation in ml5, which always returns face key points at a 640×480 resolution, regardless of the actual webcam resolution. I posted a GitHub issue about this on the ml5 repo and will track the issue. Renee and I considered whether the problem was caused by the webcam resolution of her new MacBook Pro (with the M1 chip). However, I looked this up, and her webcam’s resolution matches that of two other webcams I tested. I will continue to research and test this issue.
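If the key points really do come back in a fixed 640×480 space, one possible interim workaround is to rescale them to the actual video dimensions before using them. This is a sketch under that assumption; the function and the exact keypoint format are illustrative:

```javascript
// Assumed workaround: rescale a Facemesh keypoint from the fixed
// 640x480 coordinate space to the real webcam resolution.
// Keypoints are assumed to be [x, y, z] arrays; z is left untouched.
function scaleKeypoint([x, y, z], videoWidth, videoHeight) {
  return [x * (videoWidth / 640), y * (videoHeight / 480), z];
}
```

For example, a point at the center of the 640×480 space would map to the center of a 1280×720 feed. This only masks the underlying bug, but it would let the face outline line up with the right pixels until the ml5 issue is resolved.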
While I originally thought of working only with full-body detection algorithms (e.g. PoseNet, BodyPix) for this project, I’m glad to have started working with Facemesh. Facemesh is surprisingly performant, and I found it very fun to play around with. I would like to continue to build experiences with this algorithm. In this particular sketch I am currently using opacity to blend the face images. As a next step I will average the pixels of the images directly. I think this would make for a more interesting/refined output.
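The pixel-averaging idea could look something like the following. This is one possible approach, not the planned implementation, and it assumes both frames are the same size and available as RGBA byte buffers (like p5’s `pixels` array or an `ImageData.data`):

```javascript
// Sketch of per-pixel averaging of two same-sized RGBA frames.
// Each channel of the output is the integer average of the two inputs.
function averagePixels(a, b) {
  const out = new Uint8ClampedArray(a.length);
  for (let i = 0; i < a.length; i++) {
    out[i] = (a[i] + b[i]) >> 1; // average each R, G, B, and A byte
  }
  return out;
}
```

Unlike the opacity blend, which is computed by the browser’s compositor, averaging in the pixel buffer would open the door to other mixing functions (per-channel weights, max/min, thresholding) at the cost of doing the work per frame in JavaScript.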