How Windows Server is changing to better match cloud app development


Microsoft has been hard at work with Docker recently, supporting Docker containers on Azure and making the Docker engine run natively on the next version of Windows Server, so developers who work with the Docker APIs get the same functionality on Windows Server.
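For developers, that means the familiar Docker client should work against a Windows host much as it does against a Linux one. A minimal sketch – the host name and the unencrypted default port are assumptions for illustration:

    # Point the standard Docker client at a Windows Server host that is
    # running the Docker engine (host name and port are assumptions).
    docker -H tcp://winserver:2375 version

    # The everyday commands are the same ones used against a Linux host:
    docker -H tcp://winserver:2375 ps
    docker -H tcp://winserver:2375 images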

But containers optimised for microservices aren't the only area where Microsoft is building features to help you scale quickly. The new Nano Server SKU and the new Hyper-V Containers will give you more ways to build apps and services designed for the cloud world.

Nano Server is a smaller, faster, more secure installation option for the next version of Windows Server that needs fewer patches, fewer reboots and fewer system resources. According to Microsoft, Nano Server would have had 92% fewer critical patches and 80% fewer reboots than Windows Server over the last year. Put it in a virtual machine and the VHD file is 93% smaller, and running 1001 virtual machines on a 160-core, 1TB server would need less than 10% of the memory – leaving far more resources for the applications you're running the server for.

Out with the GUI

How did Microsoft make Nano so much smaller and more efficient? By taking things out, starting with the actual windows. It turns out that Windows Server uses a lot of resources just running the graphical user interface, so there's no GUI in Nano – you can't log into it locally and you can't get into it with Remote Desktop. Instead, you do all the management remotely, using WMI, PowerShell and Desired State Configuration (DSC). Think of it as a smaller, heavily refactored version of Server Core, designed for the cloud.
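In practice, that remote management looks like ordinary PowerShell remoting. A minimal sketch, assuming Nano Server machines named nano01 and nano02 (the names are hypothetical):

    # Open an interactive remote session - Nano Server has no local logon,
    # so this is how you get a prompt on the machine at all.
    $cred = Get-Credential
    Enter-PSSession -ComputerName nano01 -Credential $cred

    # Or fan a command out to several Nano Server machines at once:
    Invoke-Command -ComputerName nano01, nano02 -Credential $cred -ScriptBlock {
        Get-Process | Sort-Object WorkingSet -Descending | Select-Object -First 5
    }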

"As we did the refactoring work in Nano Server, we went back and looked at what caused reboots historically," Mike Neil (the general manager of the enterprise cloud team at Microsoft) told TechRadar Pro. "What are the dependencies? There were pieces of functionality that frankly were not paramount capabilities for a server and lots of the refactoring was driven by how to reduce that.

"The trade-off with that is you want to make sure it can run people's applications and provide functionality in those environments, and also provide the necessary infrastructure to build out cloud-style environments. The key thing for us was to make sure it runs Hyper-V, because we want to be able to use that as the base operating system."

So Nano Server runs Hyper-V, and your applications run on Hyper-V, in virtual machines or in the new Hyper-V containers – and that's all installed and managed and even debugged remotely, Neil emphasised, which again reduces what's in Nano.

"You're going to use Nano Server as the base OS image for containers and workload machines and then Desired State Configuration provides the mechanism for the configuration of those things. We're moving away from the traditional Microsoft Installer approach and moving to using DSC to configure the server and make sure the right binaries are there and your app can run."

The Windows Server team also removed 'legacy' systems – like WOW64 for running 32-bit applications. "32-bit support isn't a primary concern for born-in-the-cloud applications," Neil says, and those are what the server team expects customers to run in Nano Server.

Hyper-V containers: between VMs and Docker

Those applications might be running in the new Hyper-V Containers, which you can think of as a blend between traditional virtual machines and the higher-level abstractions of Docker containers. "The fundamental technologies are virtual machine technologies. That's an abstraction layer we're all very accustomed to and it's down at the hardware layer, handling disk blocks and network packets and that kind of thing," says Neil.

"Containers at the OS layer make that abstraction further up the stack. Instead of instructing at the disk block level, it's at the file level. Instead of being at the packet level, it's at the network interface. The advantage is that being further up the stack provides the ability to share more resources between containers. The Hyper-V container is a blend [of those].

"We use the hypervisor to provide the isolation mechanism; that's tried and tested, it uses VT, it's based in a hardware root of trust. It's very much a core function of the hypervisor to provide that isolation. We then provide higher level abstraction for network and file systems within that boundary. We blend the two together, so you get some of the benefits of virtual machines, that highly isolated hardware solution. But you also get the higher level abstraction in containers that have more shared resources and less overhead associated with them."
