A lot of the system is asleep most of the time. But in any case, a 700MHz ARM is still, by historical standards, a pretty beefy device. Because we're pinned in one place in our hardware, we're doing more work on the software side to get all of the juice out of it.
So we've spent a lot of time optimising system-level components, and hopefully upstreaming those optimised versions for Linux - so, pixman, and optimised versions of things like memcpy and memset. We had an interesting debate about X acceleration. We don't have an X accelerator; we don't have an acceleration driver.
We have a lot of components on the chip, a lot of subsystems, which could be used to implement an X server accelerator. And it's actually OK. Software X is OK. I'm always surprised by how OK it is. It's the ARM moving a pixel at a time, although now we've done this pixman stuff it's the ARM going 'pixels - bang, bang, bang'. Insofar as a fairly low-performance ARM can move pixels fast, we now move pixels fast.
We've had this debate about whether we do hardware X acceleration. And I think we've come down on the side that we shouldn't. What we should do is move to the brave new world.
LXF: Is that because it's easier?
EU: I think it is easier. I think we can afford to do one or the other … and given the choice I'd rather spend my money on the brave new world - given that that's where we want to go. People want to go [there] so I think we're going to try and go quite quickly.
We've done some work with a company here in Cambridge called Collabora. Pek [Pekka Paalanen], who works for them and is a big Wayland developer, has been doing a back-end for Weston, the reference Wayland compositor, which drives our hardware composition engine. We have a very powerful video scaler in the device, what we call the HVS, which is effectively a big hardware sprite engine.
It just gives you a vast number of big hardware sprites and what you do is you put a window in each hardware sprite and then you [claps] bang them together onto the display. You can stack up a lot of these before you run out of memory bandwidth. At the point where you run out of memory bandwidth, there's a fallback which does offscreen composition. It's really cool. And that gives you pure 60Hz window drag.
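The scheme described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Weston/HVS code: the function name, the per-pixel bandwidth model, and the budget number are all invented for the example. The idea is simply that each window gets its own hardware sprite until an estimated memory-bandwidth budget is exhausted, and anything beyond that falls back to offscreen composition:

```python
def plan_composition(windows, bandwidth_budget):
    """Decide which windows go on hardware sprites and which fall back.

    windows: list of (name, width, height) tuples.
    bandwidth_budget: crude cap on pixels the scaler can scan per frame.
    Returns (on_sprites, offscreen) lists of window names.
    """
    on_sprites, offscreen = [], []
    used = 0
    for name, width, height in windows:
        # Crude proxy: bandwidth cost ~ pixels read per frame for this sprite.
        cost = width * height
        if used + cost <= bandwidth_budget:
            on_sprites.append(name)  # window gets its own hardware sprite
            used += cost
        else:
            offscreen.append(name)   # fall back to offscreen composition
    return on_sprites, offscreen
```

With this split, dragging a window that lives on a sprite only changes the sprite's position, which is why no pixels move.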
In a world where every window is a hardware sprite, dragging a window doesn't move any pixels. We've got this hardware video scaler, which is probably about the same size as the ARM; bigger than the ARM, in fact, because the HVS has a lot of buffer memory to control the sprites.
Given we've got [something] dedicated to building composited render hierarchies and we've got a compositing window manager, we should do the software work to join them together. It's a good example of where we can put software in and get a much improved user experience without minting a new chip. And it also gives us access to all that lovely Wayland.
LXF: This sounds like the first high-profile use of Wayland we've heard of.
EU: I believe we're the first upstreamed non-GL backend for Wayland. It's typically on top of GL, but GL is a really lousy thing to do composition on top of because the scaling filters in GL are primitive.
Even for downscaling you have to generate a bunch of mipmaps in order to do that, whereas what ours does is this: suppose I scale something down by a factor of 3:1; the problem is that's part-way between 2:1 and 4:1. If you do it with GL, you either use the 2:1 mipmap and you may get some noise, or you use the 4:1 mipmap and then you get blur. Or you try linearly blending them together and then you get something a little bit better, but still not great.
What ours does (if you do 3:1) is build each pixel by averaging together a 3x3 grid of pixels from the original image, every frame, for free.
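What the HVS computes in hardware can be written down as a simple box filter. This is an illustrative software sketch under stated assumptions (a single-channel image whose dimensions divide evenly by 3); the function name is invented, and the real hardware does this per frame with no CPU cost:

```python
def box_downscale_3to1(img):
    """Downscale a grayscale image 3:1 by averaging each 3x3 block.

    img: list of rows of pixel values, height and width divisible by 3.
    Every output pixel is the mean of a 3x3 grid of source pixels, so
    there is no mipmap level to choose and hence no extra blur (from the
    4:1 level) or aliasing noise (from the 2:1 level).
    """
    height, width = len(img), len(img[0])
    out = []
    for y in range(0, height, 3):
        row = []
        for x in range(0, width, 3):
            total = sum(img[y + dy][x + dx]
                        for dy in range(3) for dx in range(3))
            row.append(total / 9)  # average of the 3x3 block
        out.append(row)
    return out
```

A GL compositor, by contrast, would approximate this by blending the nearest two power-of-two mipmap levels, which is exactly the compromise described above.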
Learn more about the foundation's work with Wayland and more by reading a further 4,000 words of this exclusive Eben Upton interview on tuxradar.com. Also check out our guide to Raspberry Pi operating systems.