A little-known bit of NT history:
https://www.techmonitor.ai/technology/dec_forced_microsoft_into_alliance_with_legal_threat
NT comes from VMS & MICA (and RSX-11, etc.), as Unix comes from Multics. Except, of course, Dennis, Brian and Ken didn't steal source.
Inside joke among DEC engineers (folks that developed VMS):
Windows NT or WNT is an upgrade of VMS. Bump up each character in VMS:
V->W
M->N
S->T
You get WNT!
Related: A 3 hour interview conducted Oct 21, 2023 with Dave Cutler is available on the YouTube channel 'Dave's Garage': https://www.youtube.com/watch?v=xi1Lq79mLeE
More on Dave Plummer at Wikipedia: https://en.wikipedia.org/wiki/Dave_Plummer
> Try to wait for I/O and process completion on a Unix system at once; it’s painful.
Oh yeah, super painful.. hmmph..
#!/usr/bin/env perl
use strict;
use warnings;
use IO::Select;
use IO::File;

my $sel = IO::Select->new();
my %files;

# Watch a child process's output alongside any files named on the command line.
my $pid = open(my $fh, "ls -al 2>&1 |") or die("cannot spawn ls: $!");
$files{$fh} = "ls -al";
$sel->add($fh);

foreach my $arg (@ARGV) {
    die("File not found: $arg") unless -e $arg;
    my $ff = IO::File->new($arg, "r") or die("cannot open $arg: $!");
    $files{$ff} = $arg;
    $sel->add($ff);
}

while ($sel->count()) {
    foreach my $fh ($sel->can_read(1)) {
        my $buf = '';
        if (sysread($fh, $buf, 20_000)) {
            print "$files{$fh}: $buf\n";
        } else {
            print "Handle closed: $files{$fh}\n";
            $sel->remove($fh);    # stop selecting on a closed handle
            close($fh);
        }
    }
}
dextius, you are not waiting for process completion here. What was meant, I guess, is that in UNIX you cannot pass pids to select/poll. But in linux you can, see pidfd_open(2).
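For anyone curious, a minimal sketch of what that looks like (assuming Linux 5.3+ for pidfd_open(2); the raw syscall is invoked directly since older glibc versions lack a wrapper):

#define _GNU_SOURCE
#include <poll.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                        /* child: stand-in for real work */
        execlp("sleep", "sleep", "2", (char *)NULL);
        _exit(127);
    }
    /* pidfd_open(2) turns a pid into a pollable descriptor (Linux 5.3+). */
    int pidfd = (int)syscall(SYS_pidfd_open, pid, 0);
    if (pidfd < 0) { perror("pidfd_open"); return 1; }

    /* The pidfd becomes readable when the process exits, so it can sit in
       the same poll set as sockets, pipes, and files. */
    struct pollfd pfd = { .fd = pidfd, .events = POLLIN };
    poll(&pfd, 1, -1);
    printf("child %d finished\n", (int)pid);
    return 0;
}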
(Reposting my comment from HN)
Great article, especially loved the focus on history! I’ve subscribed.
> Lastly, as much as we like to bash Windows for security problems, NT started with an advanced security design for early Internet standards given that the system works, basically, as a capability-based system.
I’m curious as to why the NT kernel’s security guarantees don’t seem to result in Windows itself being more secure. Or maybe Windows is actually just as secure, it’s just much more of a juicy target? It’s impossible to ask about this without getting swarmed by people with unsubstantiated opinions, so I’d love your perspective.
Thanks for reading and subscribing!
There are various things involved here and I don't have great answers, but let me try to think through them.
One that comes to mind is that the kernel is often not the attack vector. Buffer overflows, untrusted code execution, etc. tend to impact user-space applications. If you look at many high-profile attacks, they took advantage of Word macros, hidden file extensions in the File Explorer, auto-run of user-supplied binaries on removable media, an OpenSSL security issue, etc.
Another is that, while the design may be good from a security perspective, people didn't write defensive code back in the 1990s. The Internet was a "safe space," so remote attacks were not a thing until later. For example, a buffer overflow could have been thought of as merely a bug and not something that could actually lead to a vulnerability, whereas today we'd immediately recognize that it falls into the latter category. At some point in the early 2000s (I think?) there was a top-down mandate at Microsoft for everyone to read "Writing Secure Code" precisely to try to cover these gaps in implementation.
Another is that, until Windows Vista, consumer Windows editions didn't separate privileged operations from unprivileged ones. So it was trivial for an attacker to compromise a machine: all they had to do was convince you to click on a malicious link on a website and you were toast. UAC was not well received because the ecosystem wasn't ready for it (no apps expected a clear boundary between these two roles), but it was the solution to this issue. Actually, UAC is still here today but nobody "notices" anymore because the ecosystem has improved.
And another, as you say, is Windows being a juicier target.
I'm probably missing other reasons. Hmm... and this could deserve its own full article ;)
Because the ACL security model isn't strong enough.
In NT, there is still ambient authority: once a userspace program is compromised, rogue code can piggyback on the process's permissions. ACL-based systems are very difficult to configure so that you can't do this. The general solution requires something as strong as an object-capabilities approach.
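A rough sketch of the distinction (for illustration only, using Unix file descriptors, which behave much like capabilities; NT handles play a similar role):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Ambient authority: open() succeeds for *any* path the process's user
       may access, so compromised code in the process can reach all of it. */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Capability style: the descriptor itself is the authority. A sandboxed
       child that inherits only this fd (and is denied open()) can read this
       one object and nothing else. */
    char buf[128];
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read %zd bytes through the handle, not the name\n", n);
    return 0;
}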
FYI, JFS shipped with AIX 3.1 in 1990 as a journaled file system. Veritas’ VxFS came out a year later.
Love this. Showstopper is a fascinating book. Somehow what I learned from that keeps being relevant 20 years after I read it. Inside Windows NT and the Internals books were a revelation when I read them. They really made me appreciate the blue screen and crash dump analysis.
This really took me back.
Wow! Very nice article, Julio!
I learned several things I didn't know, and it brought back some memories.
And yes: "...bloat in the UI doesn’t let the design shine through. The sluggishness of the OS..."
What a mess!
One point I miss is a terminal comparison. It is not per se an OS internals thing, but, being used daily by admins, power users, and programmers, it is really important in the design of an OS. It is difficult to understand how Windows does not have a decent native terminal in 2024...
PS: Enjoyed Kubrick's nod in Nikita's comment
#include <stdio.h>
int main(void) { for (const char *p = "WNT"; *p; p++) { printf("%c", *p - 1); } return 0; }
Nice article, Julio, thank you! The design of the NT kernel was heavily influenced by the earlier DEC kernels developed by D. Cutler. In fact, "VAX/VMS Internals and Data Structures" is the best book on the NT design; try decrementing the characters in "WNT". :-)
By the way, "paged executive" is in fact *older* not newer development: Multics and early Unixes used to have pageable u-area (I believe FreeBSD and NetBSD had pageable kernel stacks initially, as well as VMS), but the associated complexity became unreasonable by the time NT (and later Linux) started.