Google slams Linux kernel
The tech behemoth believes the Linux kernel is in need of major investment to improve security.
Kees Cook’s recent blog post,
Linux Kernel Security Done Right (https://bit.ly/lxf281googleblog)
makes for interesting reading. Cook, who’s part of Google’s Open Source Security Team, spells out what he sees as the major security issues facing the Linux kernel, while also offering advice on how to fix them.
While noting that “Linux still remains the largest collaborative development project in the history of computing,” and complimenting its large community, Cook says, “What’s still missing, though, is sufficient focus to make sure that Linux fails well too,” and that “When flaws do manifest, it’s important to handle them effectively.”
He highlights areas that need addressing, starting with a call for substantial investment to make the code as robust as possible, so that bugs appear less often than they currently do.
As Cook says, “Rather than only taking a one-bug-at-a-time perspective, preemptive actions can stop bugs from having bad effects.” Linux will need to adapt to do this, especially given that the kernel is written in the C language, which, the author warns, means it’ll continue to “have a long tail of associated problems.”
In the blog, Cook says that the stable, bug-fix-only releases of the kernel ship with about 100 new fixes every week. This high rate leaves downstream vendors with three choices: ignore all the fixes, prioritise the “important” ones, or apply them all.
Applying them all, Cook argues, is the only realistic option – but taking this approach has serious implications. While it ensures every important bug fix lands, it can also introduce regressions – and many vendors are unable to test the updates for such problems before deploying them.
So, what can be done? Cook suggests this is a “simple resource allocation problem, and is more easily accomplished than might be imagined: downstream redundancy can be moved into greater upstream collaboration.” Essentially, he’s calling for more engineers to review the code and fix bugs earlier, as well as test the kernels during development. According to the blog, the Linux kernel and its toolchains are “underinvested by at least 100 engineers, so it’s up to everyone to bring their developer talent together upstream. This is the only solution that will ensure a balance of security at reasonable long-term cost.”
This is, of course, easier said than done. Employing more engineers may be something a huge company like Google can do, but for smaller projects and businesses, this “simple resource allocation problem” may not be as simple as Google makes out.