Sample the color of a specified pixel (or something else recognizable in the streaming format) every 30 frames of the original video.
Store the collection of pixels in a database and share it over a peer-to-peer network, or host it on Invidious instances. Because each sample is tiny, and the database can be split up by YouTube channel, the overall size and traffic should stay low.
When streaming a YouTube video, if the plugin detects that the pixel in the video doesn't match the one in the database, automatically skip ahead until the pixel matches the database again.
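The sampling-and-skip idea above could be sketched roughly like this. Frames are modeled here as nested lists of RGB tuples; the pixel coordinate, the 30-frame interval, and the function names are all illustrative assumptions, not a real plugin API:

```python
# Sketch: record a reference pixel every 30 frames, then scan a playing
# stream for the first sampled frame whose pixel deviates from the
# reference (where an inserted ad would likely begin).

SAMPLE_INTERVAL = 30  # sample every 30th frame (assumed)
PIXEL = (0, 0)        # (row, col) of the reference pixel (assumed)

def build_reference(frames):
    """Reference pixel color for every 30th frame of the original video."""
    row, col = PIXEL
    return [f[row][col] for f in frames[::SAMPLE_INTERVAL]]

def first_mismatch(frames, reference):
    """Frame index of the first sampled pixel that deviates from the
    reference, or None if every sample matches."""
    row, col = PIXEL
    for i, f in enumerate(frames[::SAMPLE_INTERVAL]):
        if i < len(reference) and f[row][col] != reference[i]:
            return i * SAMPLE_INTERVAL
    return None
```

A real plugin would compare against decoded frames from the stream, but the matching logic would be essentially this loop.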
That is prone to error; a single pixel is too small a sample. I would prefer something with hashes, e.g. a sha1sum of the current frame every 5 seconds. It could be computed while buffering the video, and the player could wait until the ad is over to splice in the correct region.
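The per-segment hashing variant could look like this minimal sketch; the frame rate, the 5-second segment length, and treating each frame as raw bytes are illustrative assumptions:

```python
import hashlib

FPS = 30              # assumed frame rate
SEGMENT_SECONDS = 5   # one hash per 5 seconds, as proposed above

def segment_hashes(frames):
    """SHA-1 digest of the frame at the start of each 5-second segment.

    frames: sequence of bytes-like objects, one per decoded frame.
    """
    step = FPS * SEGMENT_SECONDS
    return [hashlib.sha1(bytes(f)).hexdigest() for f in frames[::step]]
```

These digests are what would be shipped in the shared database and compared against the stream while buffering.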
The problem with (good) hashes is that when the input changes even slightly (say, a different compression algorithm is used), the hash changes drastically.
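This avalanche effect is easy to demonstrate: changing a single byte of the input produces a digest that differs in almost every position (the input strings here just stand in for two encodings of the same frame):

```python
import hashlib

original = b"the same frame, re-encoded"
altered = b"the same frame, re-encodee"  # one byte changed

h1 = hashlib.sha1(original).hexdigest()
h2 = hashlib.sha1(altered).hexdigest()

# Count hex digits that differ between the two 40-character digests;
# for a good hash this is roughly 37 or 38 of the 40 on average.
differing = sum(a != b for a, b in zip(h1, h2))
```

So an exact-hash database would reject the legitimate video whenever YouTube re-encodes it, which is why a perceptual measure is needed instead.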
Yes, that's why I'm proposing it, as opposed to just one pixel, to differentiate between ad and video. YouTube videos are already separated into sections; just add some metadata with a hash to each one.
I think that downsizing the scene to something like 8×8 pixels (basically taking the average color of multiple sections of the scene) would mostly work. To go undetected, the ad would have to match (or at least be close to) the average color of each section, which I think would be difficult: you would need to tailor each ad to each video timestamp individually.
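This is close to how perceptual "average hash" schemes work. A rough sketch, assuming grayscale frames whose dimensions divide evenly into the grid; the grid size and per-cell tolerance are illustrative choices:

```python
GRID = 8        # downsize to an 8x8 grid of averages, as suggested above
TOLERANCE = 16  # max per-cell brightness difference still counted a match

def signature(frame):
    """8x8 average-brightness signature of a frame.

    frame: list of rows of brightness values (0-255); height and width
    are assumed to be multiples of GRID.
    """
    h, w = len(frame), len(frame[0])
    ch, cw = h // GRID, w // GRID
    sig = []
    for gy in range(GRID):
        for gx in range(GRID):
            cell = [frame[y][x]
                    for y in range(gy * ch, (gy + 1) * ch)
                    for x in range(gx * cw, (gx + 1) * cw)]
            sig.append(sum(cell) // len(cell))
    return sig

def matches(sig_a, sig_b):
    """True if every grid cell is within TOLERANCE of the reference."""
    return all(abs(a - b) <= TOLERANCE for a, b in zip(sig_a, sig_b))
```

Unlike an exact hash, this tolerates re-encoding noise (small per-cell shifts) while still rejecting a frame whose overall composition differs.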
Yes, that could be an alternative to computing hashes; I don't know which option would be less resource-intensive.
Imagine thinking they can’t detect when you try to skip forward during an ad.
They can't. They have no idea where you currently are in the video, and even if they did run some client-side script, you could easily spoof it.