Linux Format

Entering the atomic age


LXF: A while ago KMS came along, and that was exciting. Now I understand something else exciting called Atomic Modesetting has happened, but I don’t really understand what that is.

DS: Originally, KMS was a lot of the display control layer from X just transplanted straight into the kernel. To be fair, it made things a lot better – it gave us hot-pluggable displays – but it was pretty creaky in a lot of places.

One of the things it didn’t really do well was overlay planes. On a normal 3D composited desktop, everything goes through your GPU, so if you’re just playing Netflix you’re smashing your power budget by keeping this huge GPU active when all it’s doing is moving pixels around for no reason.

In a display controller you have overlay planes that can convert to RGB and then do really nice scaling at much better quality than your GPU can. That’s all built into the GPU too, but the way KMS was done made it really hard to use them: a) at all, and b) even when you could use them, there was no API guarantee of timing. So you had no idea when your frame was actually displayed. Some of them would overwrite the current frame, some of them would block and cut your framerate in half. The other thing is that even though these overlay planes are there and great, there are some weird restrictions on how you can use them.

But we want to make use of these overlay planes, especially on low-power systems. I think we measured on one device that you got three hours of video playback without overlays and five with them. But we can’t just put everything into a plane and know that it’ll work. So the first thing Atomic Modesetting did was give us the ability to test a config beforehand. The way things like Weston work is we try one possible config, test it, and if it doesn’t work throw it away and try it on another plane.
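To make that a bit more concrete, here is a minimal sketch of the test step in C against libdrm’s atomic API. The plane, framebuffer, CRTC and property IDs are placeholders you would look up at runtime, and a real compositor would also stage the source and destination rectangles; this only shows the test-only commit idea.

#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Ask the driver whether putting this framebuffer on this plane would
 * work, without programming any hardware. */
int try_plane_config(int fd, uint32_t plane_id,
                     uint32_t prop_fb_id, uint32_t prop_crtc_id,
                     uint32_t fb_id, uint32_t crtc_id)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    int ret;

    if (!req)
        return -1;

    /* Stage the desired state: this buffer on this plane, scanned out
     * by this CRTC. */
    drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);
    drmModeAtomicAddProperty(req, plane_id, prop_crtc_id, crtc_id);

    /* TEST_ONLY means: check it, but do not touch the display. */
    ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);

    drmModeAtomicFree(req);
    return ret; /* 0 means the driver would accept this configuration */
}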

On Android you have Hardware Composer, which has a platform-specific backend that ‘knows’ the optimal configuration for anything. So it’ll look at the scene and say ‘this video should be on plane three because it’ll fit there and it has a better scaler. And this overlay should be on plane one because if we keep that small it’s better for local and global bandwidth’.

But that’s insanely hardware-specific, so all we can really do in generic userspace is brute-force it. With the old, pre-atomic API you had setting a mode as one call, displaying the buffer as another call, and configuring the planes as yet another call. So you’d do one bit, then get to the end and find out that it didn’t work. Then you’d have to go back to the start and reconfigure everything.
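That brute-force search is easy to sketch on top of the test call above. The try_plane_config() helper is the hypothetical one from the previous sketch, and the arrays of plane and property IDs are assumed to have been gathered when enumerating the hardware.

/* Walk the candidate planes and keep the first one the driver accepts. */
int pick_plane(int fd, const uint32_t *plane_ids, int n_planes,
               const uint32_t *prop_fb_ids, const uint32_t *prop_crtc_ids,
               uint32_t fb_id, uint32_t crtc_id)
{
    for (int i = 0; i < n_planes; i++) {
        if (try_plane_config(fd, plane_ids[i], prop_fb_ids[i],
                             prop_crtc_ids[i], fb_id, crtc_id) == 0)
            return i;  /* this plane will take the buffer */
    }
    return -1;         /* nothing fits; fall back to GPU composition */
}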

Atomic makes it possible to do all this in one shot. It means we can have the content and the overlays synchronised. It also means we can configure multiple displays at once with a single call. Previously, if you had two displays and wanted to change resolution on one display, then you’d have to disable both then re-enable them.
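As a rough illustration of the one-shot idea, the sketch below changes the mode on two displays with a single commit; if the driver can’t do it, nothing changes on either. The CRTC IDs, MODE_ID property IDs and mode blobs (created beforehand with drmModeCreatePropertyBlob()) are all placeholders.

/* Set new modes on two CRTCs with one atomic commit. */
int set_both_modes(int fd,
                   uint32_t crtc_a, uint32_t crtc_a_mode_prop, uint32_t blob_a,
                   uint32_t crtc_b, uint32_t crtc_b_mode_prop, uint32_t blob_b)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    int ret;

    if (!req)
        return -1;

    /* Both displays' new modes go into the same request... */
    drmModeAtomicAddProperty(req, crtc_a, crtc_a_mode_prop, blob_a);
    drmModeAtomicAddProperty(req, crtc_b, crtc_b_mode_prop, blob_b);

    /* ...and one commit applies, or rejects, the lot. ALLOW_MODESET is
     * needed because this is a full mode change, not just a page flip. */
    ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);

    drmModeAtomicFree(req);
    return ret;
}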

The other thing was that most hardware in X had its own userspace driver, and in the kernel KMS was a loosely defined API, where everyone implemented their own semantics. But Atomic has strict API semantics that are in this common core, so we can write generic userspace against Atomic and know that it works portably. We now have the closest thing to KMS conformance testing. We’ve got a solid, huge test suite for userspace now that covers all the API guarantees.

It was possible to use KMS well before Atomic, but usually you needed some kind of platform knowledge, and even then it was a pain. Atomic is also quite nicely extensible. Before, you had modesetting, changing your buffer and changing overlay planes. Then if you also wanted to turn your monitor on or off, or change colour settings or something, those would all be separate calls again. Atomic enables us to chain everything together, so it’s much more smooth and seamless.
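Chaining works the same way as everything else in Atomic: anything the driver exposes as a property can be appended to the same request before it is committed. This hypothetical fragment assumes the ACTIVE, CTM and rotation property IDs and the colour-matrix blob have already been looked up or created.

/* Add power, colour and rotation changes to an existing atomic request. */
void stage_extras(drmModeAtomicReq *req,
                  uint32_t crtc_id, uint32_t prop_active,
                  uint32_t prop_ctm, uint32_t ctm_blob,
                  uint32_t plane_id, uint32_t prop_rotation, uint64_t rotation)
{
    drmModeAtomicAddProperty(req, crtc_id, prop_active, 1);      /* turn the display on */
    drmModeAtomicAddProperty(req, crtc_id, prop_ctm, ctm_blob);  /* colour transform matrix */
    drmModeAtomicAddProperty(req, plane_id, prop_rotation, rotation);
}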

