
SYNLIGHT
Homemade Ambilight system for Windows

(2016 - 2021)


1. PRESENTATION

The idea behind this project was to create my own Ambilight system. Such systems use LEDs on the rear of the screen to replicate the colors of its edges. This creates a pleasant experience while watching movies but is also enjoyable in daily use, as it reduces visual fatigue. I love this feature but unfortunately it is only available on Philips TVs. I had spent some time playing around with individually addressable RGB LEDs (WS2812B, also known as NeoPixel from Adafruit) and I regularly write software for Windows, so why not try to combine the two? Furthermore, I felt there was much to learn throughout the process.

In addition, I could add several features of my own along the way (see the updates at the end of this page).

You can find the whole code for this project on my dedicated GitHub page.

For this read, I advise you to have some knowledge of the .NET Framework and the Arduino environment.

HOW:

In order to change the LEDs' colors according to the pixels at the edges of my monitor, I have to take screenshots, process them, send the information to a microcontroller, and have it drive every LED accordingly.

DISCLAIMER AND LIMITS:

My solution is a piece of software that runs on a Windows computer. It is not a physical solution with a video input and LEDs as output. I've seen such solutions on the Internet and even tried some, but for several reasons hardware solutions don't please me, so I keep my setup as presented below. I tried to use DirectX direct capture with SmallDX but it neither improved the maximum frequency of the process nor allowed me to capture in-game screenshots. As my programming knowledge is limited when it comes to GPU access, GPU-driven fullscreen mode (generally gaming) is not supported for now and is processed as a dimmed white frame. The CPU load is light enough to let it run in the background and forget about it on most computers. This is the solution I went with.


2. PC PART

To program the PC software I went with the C# .NET language and Microsoft Visual Studio Community. I used the Model-View-ViewModel (MVVM) architecture instead of the simpler Windows Forms approach, as learning it was part of the project.

a. THE USER INTERFACE

Here is a view of the user interface, which holds the controls needed to match any setup.

Each of these values can be changed in real time while the program is running, and can also be set manually within the parameter file to let the program run in the background.

b. CAPTURING A FRAME:

In order to capture a frame we need a container. Let's declare a simple Bitmap matching the size of our screen:

int screenWidth = (int)SystemParameters.PrimaryScreenWidth;
int screenHeight = (int)SystemParameters.PrimaryScreenHeight;
Bitmap bmpScreenshot = new Bitmap(screenWidth, screenHeight);

Now let's fill this fresh Bitmap with the pixels of the screen using a Graphics object:

Graphics gfxScreenshot = Graphics.FromImage(bmpScreenshot);
gfxScreenshot.CopyFromScreen(0, 0, 0, 0, bmpScreenshot.Size);

Almost done. Now a nice tip regarding the Bitmap constructor: it is possible to create a new Bitmap from another one while changing its size. For a better understanding, take a look at the following line:

// Width = number of LEDs along the X axis, Height = number of LEDs along the Y axis
Bitmap scaledBmpScreenshot = new Bitmap(bmpScreenshot, Width, Height);

This creates a new Bitmap whose width is the number of LEDs along the X axis and whose height is the number of LEDs along the Y axis, which is way simpler to process! The resulting Bitmap looks like a rather small 2-dimensional array of pixels.

The three images above show the desktop screenshot, the resized one and finally the relevant parts of the resized screenshot: its edges.

c. SCANNING:

Scanning the pixels consists of appending, for every pixel, the R component (red), then the G component (green), then the B component (blue) to the outgoing byte collection. This can be summed up by the following code:

List<byte> byteToSend = new List<byte>(600); // capacity for up to 200 LEDs (3 bytes each)
byteToSend.Add(scaledBmpScreenshot.GetPixel(x, y).R);
byteToSend.Add(scaledBmpScreenshot.GetPixel(x, y).G);
byteToSend.Add(scaledBmpScreenshot.GetPixel(x, y).B);

As you can see it is straightforward: each pixel fills 3 bytes of the outgoing data. The way the scaled Bitmap is scanned depends on the position of the first LED and on the direction of rotation.
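To make the scanning order concrete, here is a minimal sketch of one possible scan, assuming the first LED sits in the top-left corner and the strip runs clockwise (both are configurable in the actual program; byteToSend and scaledBmpScreenshot come from the snippets above):

// Minimal sketch of a clockwise edge scan starting at the top-left corner.
int w = scaledBmpScreenshot.Width;
int h = scaledBmpScreenshot.Height;

// Top edge, left to right
for (int x = 0; x < w; x++)
{
    Color p = scaledBmpScreenshot.GetPixel(x, 0);
    byteToSend.Add(p.R); byteToSend.Add(p.G); byteToSend.Add(p.B);
}
// Right edge, top to bottom (the corner pixel was already added)
for (int y = 1; y < h; y++)
{
    Color p = scaledBmpScreenshot.GetPixel(w - 1, y);
    byteToSend.Add(p.R); byteToSend.Add(p.G); byteToSend.Add(p.B);
}
// Bottom edge, right to left
for (int x = w - 2; x >= 0; x--)
{
    Color p = scaledBmpScreenshot.GetPixel(x, h - 1);
    byteToSend.Add(p.R); byteToSend.Add(p.G); byteToSend.Add(p.B);
}
// Left edge, bottom to top
for (int y = h - 2; y >= 1; y--)
{
    Color p = scaledBmpScreenshot.GetPixel(0, y);
    byteToSend.Add(p.R); byteToSend.Add(p.G); byteToSend.Add(p.B);
}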

Some tricks can be performed as well. For example, a software low-pass filter merges the current pixel array (byteToSend) and the previous one (lastBytToSend) to generate the array to be sent (newByteToSend). This prevents flickering and smooths out color changes.
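As an illustration, here is a minimal sketch of such a filter, assuming a simple fixed-weight average (the 0.7 / 0.3 weights are an assumption, not the program's actual values), with the pixel collections treated as plain byte arrays of equal length:

// Minimal sketch of the low-pass filter (assumed weights).
static byte[] LowPassFilter(byte[] byteToSend, byte[] lastBytToSend)
{
    byte[] newByteToSend = new byte[byteToSend.Length];
    for (int i = 0; i < byteToSend.Length; i++)
    {
        // Blend the previous frame with the current one to smooth out transitions.
        newByteToSend[i] = (byte)(0.7 * lastBytToSend[i] + 0.3 * byteToSend[i]);
    }
    return newByteToSend;
}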

d. WIRELESS COMMUNICATION:

This system targets local networks. The PC software can find an ESP8266 on the network (provided it runs the right code) by pinging every local address (192.168.0.X and 192.168.1.X). The IP address to send to can also be set manually. This is all done within the constructor of the AutoNodeMCU class. At the end of every process loop, once the resized image has been scanned, the pixel array is ready to be sent. This is done by this line of code:

sock.SendTo(newByteToSend, endPoint);

Here, endPoint represents both the IP address of the NodeMCU and the port it is listening on.
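For context, here is a minimal sketch of how such a socket and endpoint could be set up with System.Net and System.Net.Sockets (the IP address and port below are placeholders, not necessarily the values discovered or configured by the program):

// UDP socket pointed at the NodeMCU (the address and port are placeholder values).
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
EndPoint endPoint = new IPEndPoint(IPAddress.Parse("192.168.0.42"), 8888);

// The pixel array built earlier can then be sent in a single call.
sock.SendTo(newByteToSend, endPoint);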


3. NODEMCU PART

As stated before, the NodeMCU can connect to a Wi-Fi network, and can listen for and send data. There are many ways to connect to a network; I went with the WiFiManager library from Tzapu for its ease of use. After importing the library, only three simple lines of code are required in the main sketch:

#include <WiFiManager.h>

void setup()
{
   /*
    * OTHER COMMANDS
    */

    WiFiManager wifiManager;
    wifiManager.autoConnect( "AutoConnectAP");
}

Visit the official WiFiManager GitHub page for more information.

This gives you a Wi-Fi connection manager with a fallback web configuration portal. On top of that, it retains the login information across reboots. Configure it once with a smartphone or a laptop, and it's done.

Once connected, the NodeMCU waits for incoming UDP packets. Upon receiving one that is not a special command, it assigns the values to the corresponding LEDs:

int packetSize = UDP.parsePacket();
if (packetSize > 0)
{
   UDP.read(packetBuffer, UDP_TX_PACKET_MAX_SIZE);

   // Each group of 3 bytes holds the R, G and B values of one LED
   for (int n = 0; n <= packetSize - 3; n += 3)
   {
      red   = packetBuffer[n];
      green = packetBuffer[n+1];
      blue  = packetBuffer[n+2];
      strip.SetPixelColor( n/3, RgbColor(red, green, blue) );
   }
   strip.Show();
}

And voilà! When starting the PC program while the NodeMCU is connected to the local network, the LEDs start to shine within a second.


4. CONCLUSION

Throughout this project, I've experimented a lot with the .NET Framework. Communication over Wi-Fi (UDP) is a powerful tool that I'll be using more and more. As time went on, I optimized the frame-capturing part: a more efficient screenshot method and the use of clever delays give the right balance between CPU workload and refresh frequency. In case of heavy CPU load from other tasks, the program automatically throttles itself and will never be the one overloading the CPU. Dictionaries are used for the multilingual user interface (currently supplied with English [default] and French files). This project also gave me an understanding of the MVVM architecture.


UPDATE 05/04/2017 - Major performance improvement

Thanks to AloyseTech's suggestion, I was able to change the frame-capturing method. I used to grab the whole screen (A), but now I only capture the useful parts (B):

Even though 4 capture operations are needed, it is still faster due to the much lower total captured area and pixel count. This improvement means lower CPU usage and a faster loop, hence a higher refresh frequency.
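As an illustration, here is a minimal sketch of one of those four captures, for the top edge (edgeThickness is a placeholder for the height in pixels of the captured band, not the program's actual setting; the three other edges are handled the same way):

// Capture only a thin band along the top edge of the screen.
int edgeThickness = 50;
Bitmap bmpTopEdge = new Bitmap(screenWidth, edgeThickness);
using (Graphics gfxTopEdge = Graphics.FromImage(bmpTopEdge))
{
    gfxTopEdge.CopyFromScreen(0, 0, 0, 0, bmpTopEdge.Size);
}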


UPDATE 06/10/2017 - LED circular shifting

I received a request on GitHub to update the program in order to accommodate setups where the first LED is not in a corner. Each LED of offset corresponds to a 3-byte left or right circular shift of the array to be sent. Here is an illustration of a 3-byte right shift of a 100-LED array:
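In code, such a shift could look like this minimal sketch (an illustrative implementation, not necessarily the one used by the program; shiftLeds is the LED offset, positive for a right shift):

// Circular shift of the outgoing array by whole LEDs (3 bytes per LED).
static byte[] CircularShift(byte[] source, int shiftLeds)
{
    int length = source.Length;
    int shiftBytes = ((shiftLeds * 3) % length + length) % length;
    byte[] shifted = new byte[length];
    for (int i = 0; i < length; i++)
    {
        shifted[(i + shiftBytes) % length] = source[i];
    }
    return shifted;
}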


UPDATE 09/11/2017 - Custom static color

Not part of the original ambient lighting system, but nonetheless interesting to have: static colors. Here is a view of the static color tab:

The color displayed by the LEDs is built from the 3 sliders representing red, green, and blue. I tried to implement a color picker, like the ones found in various photo editing programs, but that will be for another time.
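A minimal sketch of how the slider values could be turned into the outgoing byte array (redValue, greenValue, blueValue and ledCount are placeholder names, not the program's actual identifiers):

// Repeat the same RGB triplet for every LED.
byte[] staticColorBytes = new byte[ledCount * 3];
for (int i = 0; i < ledCount; i++)
{
    staticColorBytes[i * 3]     = (byte)redValue;
    staticColorBytes[i * 3 + 1] = (byte)greenValue;
    staticColorBytes[i * 3 + 2] = (byte)blueValue;
}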


UPDATE 02/03/2018 - Multiple payloads and visual refresh

Sending UDP payloads to the NodeMCU is limited to arrays no longer than 1470 bytes; this was discovered after a question asked on GitHub. Both the C# program and the controller were updated to support multiple payloads containing the LED colors, followed by a terminal payload telling the NodeMCU to refresh the colors.

Here is an example corresponding to 1870 LEDs:
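On the PC side, the splitting could look like this minimal sketch (the 1470-byte limit comes from the description above; the terminal payload shown here, a single byte, is an assumption about its format):

// Split the LED byte array into chunks of at most 1470 bytes and send them in order.
const int MaxPayloadSize = 1470;

for (int offset = 0; offset < newByteToSend.Length; offset += MaxPayloadSize)
{
    int chunkSize = Math.Min(MaxPayloadSize, newByteToSend.Length - offset);
    byte[] chunk = new byte[chunkSize];
    Array.Copy(newByteToSend, offset, chunk, 0, chunkSize);
    sock.SendTo(chunk, endPoint);
}

// Terminal payload telling the NodeMCU to refresh (the actual command format is assumed).
sock.SendTo(new byte[] { 0xFF }, endPoint);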

The program also got a visual refresh:
- "Style" and "Template" elements are applied to the controls of the view
- Some checkboxes have been replaced by toggle switches
- The static color can blend with the ambient light and is integrated in a single tab
- Custom top bar to remove the icon