When the Linux kernel boots, it needs to be told what to run next. That program, the “init process”, a.k.a. “PID 1”, is the first process to run, and it is responsible for readying the system for every other process that follows. Normally this is something like System V init or systemd, a fully-featured system that does quite a lot.

But at its core, what does the init process need to do to provide a “fully-operational” Linux system? It needs to do a few things, in order (a rough C sketch follows the list):

  1. Mount /proc and /sys
  2. Dig through /proc and /sys to see what devices exist
  3. Mount /dev
  4. Populate /dev with device nodes (special files, not regular files) that represent hardware devices.
  5. (Maybe start some background processes (daemons)?)
  6. (Maybe do some login stuff?)

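To make that list concrete, here is a rough sketch of what a bare-bones PID 1 could look like in C. It is illustrative only: it assumes a kernel built with devtmpfs and a /bin/sh on the root filesystem, and it skips most error handling.

```c
/* A minimal, illustrative PID 1 -- not a real init.
 * Assumes devtmpfs support in the kernel and /bin/sh on the root fs. */
#include <sys/mount.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Make sure the mount points exist (they usually already do). */
    mkdir("/proc", 0555);
    mkdir("/sys",  0555);
    mkdir("/dev",  0755);

    /* Steps 1-2: mount the kernel's views of processes and devices. */
    mount("proc",  "/proc", "proc",  0, NULL);
    mount("sysfs", "/sys",  "sysfs", 0, NULL);

    /* Steps 3-4: mounting devtmpfs asks the kernel to populate /dev
     * with device nodes for the hardware it already knows about. */
    if (mount("devtmpfs", "/dev", "devtmpfs", 0, NULL) != 0) {
        /* Fallback: create a couple of nodes by hand with mknod(2),
         * using the standard major/minor numbers for console and null. */
        mknod("/dev/console", S_IFCHR | 0600, makedev(5, 1));
        mknod("/dev/null",    S_IFCHR | 0666, makedev(1, 3));
    }

    /* Steps 5-6: a real init would launch daemons and getty/login here.
     * This sketch just hands the console to a shell... */
    pid_t child = fork();
    if (child == 0) {
        execl("/bin/sh", "sh", (char *)NULL);
        _exit(127);
    }

    /* ...and then does the other job unique to PID 1: reaping orphans. */
    for (;;) {
        if (wait(NULL) < 0)
            pause();   /* nothing to reap right now; wait for a signal */
    }
}
```

Everything else a real init system does (service supervision, dependency ordering, logging) is layered on top of this small handful of system calls.
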
Having a /dev directory is a core part of POSIX, the operating-system standard that Unix and Linux conform to. But if /dev is core to POSIX, and the Linux kernel doesn’t handle it itself, is the kernel on its own really POSIX-compliant?

Note also that /proc and /sys don’t exist until the init process mounts them, and each one is a single mount() system call. So why doesn’t the kernel do it itself?

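Isolated from the sketch above, the entire userspace side of “make /proc exist” looks roughly like this (run as root, of course):

```c
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* The whole of "make /proc exist", as seen from userspace:
     * one call asking the kernel to expose procfs at /proc.
     * (/sys is the same call with "sysfs" and "/sys".) */
    if (mount("proc", "/proc", "proc", 0, NULL) != 0)
        perror("mount /proc");
    return 0;
}
```
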
I’m sure the answer to this is “because modularity”, but does it really make sense for such basic functionality to be swappable? It seems that originally, Linux distros (if such a thing existed yet, and people weren’t cobbling Linux installs together from scratch) just shipped a hard-coded /dev directory: /dev/tty0, /dev/tty1, and so on. Or maybe people created the nodes by hand. Who knows. Sometime in the late 1990s, however, someone wrote devfs, a filesystem that auto-populates /dev. In 2000 it was merged into the kernel itself. Some years later it was replaced by devtmpfs, which seems functionally equivalent.

So if you're looking for an answer to the title... I think the answer is "nobody knows, but it's too late to change it now".

Sources: