Cocoa in the Shell

Philips Hue + mpv = Ambilight

It’s time for my annual post. Once again I won’t talk about iOS or Cocoa stuff, since my interest in Apple is at a historical low these days.

A colleague of mine was kind enough to lend me his Philips Hue lamps. Having colored lamps is cool and all, but as a developer the really fun stuff is coding for them, not just lighting them and changing the color via an app on your phone. And so I decided to try implementing ambilight for mpv.

So I coded a dynamic library in C++ which analyzes frames and lights the lamps accordingly. It’s only designed to work with the Hue Starter Kit, so it handles only three lamps.

There are videos showing the result at the end of the post if you don’t want to read the annoying stuff.

mpv

If built with the correct switch, mpv can load a dynamic library which will be called before frames are displayed; this allows developing custom filters, for example.

Here is how to build it correctly:

git clone https://github.com/mpv-player/mpv.git
cd mpv
./bootstrap.py
./waf configure --enable-vf-dlopen-filters
./waf build
sudo ./waf install
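
With that in place, the filter library can then be loaded with something like this (libhue.so being whatever you named your build):

mpv --vf=dlopen=/path/to/libhue.so video.mkv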

Workflow

The workflow is pretty simple and can be decomposed like this:

  • The plugin is called with frame data as planar YUV (yuv420p).
  • Convert yuv420p to RGBA.
  • Grab 4 columns of pixels from the left and right borders, and a 128x128 square from the middle.
  • Compute the dominant color for each of these three zones.
  • Convert each RGBA color to xy.
  • Send the result to the Hue bridge.
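
Stitched together, a per-frame entry point would look roughly like this. This is only a sketch: yuv420p_to_rgba, dominant_color, rgb_to_xy and send_to_bridge are hypothetical names standing in for the pieces described in the following sections.

// Sketch of the per-frame pipeline (hypothetical helper names)
void on_frame(const uint8_t* const planes[3], const size_t width, const size_t height)
{
    std::vector<rgba_pixel_t> rgba(width * height);
    yuv420p_to_rgba(planes, width, height, rgba.data()); // colorspace conversion

    std::unordered_multiset<rgba_pixel_t> left, right, middle;
    get_edges(rgba.data(), width, height, left, right, 4); // 4 columns per border
    get_middle(rgba.data(), width, height, middle, 128);   // 128x128 center square

    size_t lamp_id = 1;
    for (auto* zone : {&left, &middle, &right})
    {
        const rgba_pixel_t dominant = dominant_color(*zone); // most frequent color
        send_to_bridge(lamp_id++, rgb_to_xy(dominant));      // JSON over the REST API
    }
}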

yuv420p to RGBA

I didn’t know much about video colorspaces before (I still don’t), so to convert yuv420p to RGBA I first used the ffmpeg APIs, which are easy to use and very efficient. The advantage is that mpv already comes with all the ffmpeg stuff, so it doesn’t add any dependency.
Then I decided to code my own function for the fun of it. At first it was as slow as Apple fixing bugs (they are not slow, they just don’t fix them); then I implemented it in OpenCL and got satisfying results.
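
For reference, the scalar version boils down to the standard BT.601 formulas. Here is a minimal sketch in fixed-point arithmetic (assuming tightly packed planes and an rgba_pixel_t with r/g/b/a byte members; this is not the OpenCL version):

#include <algorithm>
#include <cstdint>

// yuv420p to RGBA, BT.601 full range. The chroma planes are subsampled 2x2,
// hence the x/2, y/2 indexing with a width/2 stride.
void yuv420p_to_rgba(const uint8_t* yp, const uint8_t* up, const uint8_t* vp, const size_t width, const size_t height, rgba_pixel_t* out)
{
    const auto clamp = [](const int v) { return (uint8_t)std::min(std::max(v, 0), 255); };
    for (size_t y = 0; y < height; ++y)
    {
        for (size_t x = 0; x < width; ++x)
        {
            const int Y = yp[y * width + x];
            const int U = up[(y / 2) * (width / 2) + (x / 2)] - 128;
            const int V = vp[(y / 2) * (width / 2) + (x / 2)] - 128;
            rgba_pixel_t& px = out[y * width + x];
            px.r = clamp(Y + ((91881 * V) >> 16));             // Y + 1.402 * V
            px.g = clamp(Y - ((22554 * U + 46802 * V) >> 16)); // Y - 0.344 * U - 0.714 * V
            px.b = clamp(Y + ((116130 * U) >> 16));            // Y + 1.772 * U
            px.a = 255;
        }
    }
}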

Compute dominant colors

For that I used the STL container unordered_multiset.
As I said, for the right and left lamps I take the colors from the right and left edges of the frame. For the middle lamp I use a 128x128 square at the center of the frame.

My famous image editing skillz.

Below is my implementation of these two functions.

void hue_controller_t::get_edges(rgba_pixel_t* pixels, const size_t width, const size_t height, std::unordered_multiset<rgba_pixel_t>& left_edge, std::unordered_multiset<rgba_pixel_t>& right_edge, const size_t col)const
{
    const size_t size = height * width;
    const size_t step = width - 1;
    size_t z = 0;
    // y indexes the first pixel of each row, z the last one
    for (size_t y = 0; y < size; y += width)
    {
        z = y + step;
        // Grab `col` pixels inwards from each border
        for (size_t x = 0; x < col; ++x)
        {
            left_edge.insert(pixels[y + x]);
            right_edge.insert(pixels[z - x]);
        }
    }
}

void hue_controller_t::get_middle(rgba_pixel_t* pixels, const size_t width, const size_t height, std::unordered_multiset<rgba_pixel_t>& middle, const size_t wh)const
{
    // Top-left corner of the wh x wh square centered in the frame
    const size_t ox = (width / 2) - (wh / 2);
    const size_t oy = (height / 2) - (wh / 2);
    // Bottom-right corner
    const size_t mx = ox + wh;
    const size_t my = oy + wh;
    size_t index = 0;
    for (size_t y = oy; y < my; y++)
    {
        for (size_t x = ox; x < mx; x++)
        {
            index = x + y * width;
            middle.insert(pixels[index]);
        }
    }
}

Once the three multisets are filled, we need to find the dominant color for each one. That part is a bit slow I guess; I need to bench it seriously sometime.

counted_pixel_vector_t colors;
const int random_colors_threshold = (int)(height * COLOR_THRESHOLD_MIN_PERCENTAGE); // 0.005
// Equivalent keys are adjacent in an unordered_multiset, so after counting
// a color we can skip past all its duplicates instead of re-counting each one
for (auto it = edge.cbegin(); it != edge.cend();)
{
    const int color_count = (int)edge.count(*it);
    if (color_count > random_colors_threshold) // prevent using random colors
        colors.emplace_back(*it, color_count);
    std::advance(it, color_count);
}
// Most frequent colors first
std::sort(colors.begin(), colors.end(), [](const counted_pixel_vector_t::value_type& i1, const counted_pixel_vector_t::value_type& i2) {
    return (i1.second > i2.second);
});
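
Once sorted, the winner is simply the first entry; when colors ends up empty (everything was below the threshold), the previous color is kept. Something like this, previous_color being a hypothetical member holding the last result:

// Most frequent color, or the previous one if nothing passed the threshold
const rgba_pixel_t dominant = (!colors.empty()) ? colors.front().first : previous_color;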

RGBA to xy

Hue doesn’t understand RGB, and that’s truly fucking annoying. Luckily, Philips provides code to do the conversion in the Hue SDK for OS X, so I just transformed the Objective-C code into C++ and that was it.
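
The conversion itself follows the steps from Philips’ sample code: inverse sRGB gamma, a wide-gamut D65 RGB-to-XYZ matrix, then normalization to xy. A sketch with the coefficients from the Hue developer documentation (xy_color_t is a hypothetical struct; the per-lamp gamut clamping is omitted):

#include <cmath>

struct xy_color_t { float x; float y; };

// RGB (components in 0..1) to CIE xy, as in Philips’ conversion code
xy_color_t rgb_to_xy(float r, float g, float b)
{
    // Inverse sRGB gamma
    const auto lin = [](const float c) {
        return (c > 0.04045f) ? std::pow((c + 0.055f) / 1.055f, 2.4f) : (c / 12.92f);
    };
    r = lin(r); g = lin(g); b = lin(b);

    // Wide gamut D65 conversion matrix
    const float X = r * 0.649926f + g * 0.103455f + b * 0.197109f;
    const float Y = r * 0.234327f + g * 0.743075f + b * 0.022598f;
    const float Z = r * 0.000000f + g * 0.053077f + b * 1.035763f;

    const float sum = X + Y + Z;
    return (sum > 0.0f) ? xy_color_t{X / sum, Y / sum} : xy_color_t{0.0f, 0.0f};
}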

Light me already

I coded a simple wrapper around BSD sockets that just sends a JSON payload to the Hue bridge via its REST API (sounds so cool and hipster). There are no buffer overflow checks and whatnot, but I don’t care, I just did this for the fun after all; fuck the security-paranoid guys.
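
For reference, setting a lamp’s color is a single PUT on the bridge (the username and lamp id depend on your setup); the payload looks like this:

PUT /api/<username>/lights/1/state HTTP/1.1
Host: <bridge_ip>
Content-Type: application/json

{"on": true, "xy": [0.4051, 0.3912], "bri": 254}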

Done.

And that’s all. These steps run every 24 frames, because I only have 24 fps videos (except when I’m doing motion interpolation), so the colors change once a second. Increasing the frequency would probably make the video stutter (at 1080p).
When the dominant color is black, the lamps simply go white, because black light is kinda hard to do… When no matching color is found, the previous one is kept.
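
The throttling itself is nothing fancy; roughly this, with frame_count being a counter incremented on every frame (a sketch):

// Run the whole analysis only once every 24 frames (~1 second at 24 fps)
if ((++frame_count % 24) != 0)
    return;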

Since I feel you are bored, and I am too, here are three example videos:

some animu
more animu
still animu

Conclusion

It was fun, but it’s not super viable I think. The best way to achieve a nice ambilight setup is to buy some LED strips; Hue lamps are not exactly made for that anyway, and the color range is not good enough in my opinion.

Also there’s clearly room for improvement, like the dominant color detection algorithm; mine is really trivial (dumb, even). But as I said, I just did this for the fun, so…

And as usual, because I’m freaking cool, I put this ugly code on GitHub.

See ya in 2016, or not.

PS: Fucking stupid Apple event on March 9, A FUCKING MONDAY. Like I have nothing better to do than waste an evening on a fucking idiotic watch.
