Unix: How Does It Work?

The heart of the operating system, the kernel controls the hardware and turns parts of the system on and off at the programmer's command. If you ask the computer to list (ls) all the files in a directory, the kernel tells the computer to read all the files in that directory from the disk and display them on your screen.

There are several types of shell, most notably the command-driven Bourne shell and the C shell (no pun intended), as well as menu-driven shells that make it easier for beginners to use. Whatever shell is used, its purpose remains the same: to act as an interpreter between the user and the computer. The shell also provides the functionality of "pipes," whereby a number of commands can be linked together by a user, permitting the output of one program to become the input to another program.

There are hundreds of tools available to UNIX users, although some have been written by third-party vendors for specific applications. Typically, tools are grouped into categories for certain functions, such as word processing, business applications, or programming. This real-time sharing of resources makes UNIX one of the most powerful operating systems ever.

Multiusers

The same design that permits multitasking permits multiple users to use the computer.

System portability

A major contribution of the UNIX system was its portability, permitting it to move from one brand of computer to another with a minimum of code changes.

UNIX tools

UNIX comes with hundreds of programs that can be divided into two classes: integral utilities that are absolutely necessary for the operation of the computer, such as the command interpreter, and tools that aren't necessary for the operation of UNIX but provide the user with additional capabilities, such as typesetting and e-mail.

UNIX Communications

E-mail is commonplace today, but it has only come into its own in the business community within the last 10 years.

Applications libraries

UNIX as it is known today didn't just develop overnight.

How UNIX is organized

The UNIX system is functionally organized at three levels: the kernel, which schedules tasks and manages storage; the shell, which connects and interprets users' commands, calls programs from memory, and executes them; and the tools and applications that offer additional functionality to the operating system. [Figure: The three levels of the UNIX system: kernel, shell, and tools and applications.]

The kernel

The heart of the operating system, the kernel controls the hardware and turns parts of the system on and off at the programmer's command.

The kernel is the core, responsible for interaction with the file system and devices. It also handles process scheduling, task execution, memory management, and access control. The kernel exposes API calls, known as system calls, for anything built on top of it to leverage.

The most popular ones are exec, fork, and wait. Another layer up are the Unix utilities: super helpful programs that let us interact with the kernel via the system calls, like exec and fork, that it provides. They include python, gcc, vi, sh, ls, cp, mv, cat, and awk. You can invoke most of them from the shell. One utility that people find daunting is the text editor Vim. Another is the shell itself, which covers the kernel in a protective … shell.

Remember how the shell is a process? When run from the terminal, its stdin is connected to the keyboard input. What you write is passed into the terminal. This happens via a file called the teletypewriter, or tty. You can find out which file your terminal is attached to via the tty command. Now you can do something funky: since the shell reads from this file, you can get another shell to write to this file too, clobbering the shells together. Remember how to redirect files from the process section above?

Try echoing ls, the command to list files, this time. Remember, only input coming in via stdin is passed as input to the shell; everything else is just displayed on the screen. The natural extension of the above, then, is that when you redirect stdin, the commands should run.

This is an undefined state, but on my Mac, one character went to one terminal, the other character went to the second, and this continued. Which was funny, because to exit the new shell I had to type eexxiitt. And then I lost both shells.

We never specified the output stream, only the input stream. This happens because processes inherit from their parent process. Every time you write a command on the terminal, the shell creates a duplicate process via fork. As fork(2) puts it, these descriptors reference the same underlying objects, so that, for instance, file pointers in file objects are shared between the child and the parent, so that an lseek(2) on a descriptor in the child process can affect a subsequent read or write by the parent.

This descriptor copying is also used by the shell to establish standard input and output for newly created processes as well as to set up pipes.

Once forked, this new child process inherits the file descriptors from the parent, and then calls exec (execve) to execute the command. This replaces the process image. From execve(2): "File descriptors open in the calling process image remain open in the new process image, except for those for which the close-on-exec flag is set." Thus, our file descriptors are the same as those of the original bash process, unless we change them via redirection.

While this child process is executing, the parent waits for the child to finish. When this happens, control is returned back to the parent process. With ls, the process returns as soon as it has output the list of files to stdout. Note: not all commands on the shell result in a fork and exec.

You can find the list here. Have you ever thought about how weird it is that while something is running and outputting stuff to the terminal, you can write your next commands and have them work as soon as the existing process finishes? I used sleep 10; to demonstrate, because the other commands happen too quickly for me to type anything before control returns to the parent bash process. Now is a good time to try out the exec builtin command (it replaces the current process, so it will kill your shell session).

Armed with the knowledge of how the shell works, we can venture into the world of the pipe. Remember the philosophy we began with? Do one thing, and do it well. Now that all our utilities work well, how do we make them work together? This is where the pipe, |, pipes in. It represents the pipe system call, and all it does is redirect stdin and stdout for processes.

Since things have been designed so well, this otherwise complex function reduces to very little (what follows is a bit of a simplification of the pipe redirection). You know how the shell works now, so you know that the top bash forks another bash connected to the tty, which produces the output of ls.

You also know that the top bash was forked from the lower one, which is why it inherited the file descriptors of the lower one.

A similar pipeline can figure out the largest file in the current directory and output its size. Who knew sorting by size was built into ls already? Notice how stderr is always routed directly to the tty? What if you wanted to redirect stderr instead of stdout to the pipe?

You can switch streams before the pipe. Source: this beauty.

Local variables are ones you can create in a shell. Environment variables (env vars) are like global variables: they are passed to children. Try this: call bash from bash from bash. The first bash is waiting on the second bash to exit, while the second one is waiting for the third.

When you call exec, the exit happens automatically. If not, you want to type exit yourself to send the exit code to the parent. Now, a thought experiment: what happens when you type ls into a shell? You know the fork, exec, and wait cycle that occurs, along with the tty. But even before this happens, ls is just another utility, right?

Remember how the file system tree is hierarchical? All the base-level directories have a specific function. /bin stands for binaries. This is enough knowledge for us, so you can run a utility directly via its absolute path. But how did the shell know to look for ls in /bin? This is where the magical environment variable PATH comes in. PATH is a colon-separated list of directories; the shell searches them in order and executes the first file it finds. Without PATH, nothing would work without an absolute path.

Calendars and time systems measure time starting at some significant point in the past, such as a cosmological event, the founding of an empire, or the success of a revolution.

In operating systems, an arbitrary time and date are chosen as the point from which the counting starts. This is the epoch for that operating system. Unix used a 32-bit unsigned integer to hold the count of 60ths of a second since the epoch. That sounds like a lot, but with a rate of consumption of 60 numbers per second, the counter would have hit its maximum value on April 8, 1973, a little less than 829 days later. Needless to say, this was acted upon rapidly.

The unsigned integer was replaced with a 32-bit signed integer. It might seem a surprising choice, because a signed integer is able to hold a smaller number of positive values (2,147,483,647, or 2^31 - 1) than an unsigned integer. However, the speed of consumption was also reduced from 60ths of a second to whole seconds. It takes longer to count from 0 to 2,147,483,647 at one number per second than it does to count from 0 to 4,294,967,295 at 60 counts per second.

And by quite a margin. This seemed so far in the future that the epoch was even reset to an earlier point in time. The new epoch was set to midnight on Jan. 1, 1970. The overflow point, 68 years in the future back then, is now unnervingly close. Using a single integer to count the number of time steps from a given point in time is an efficient way to store time.

Multiplying the number in the integer by the size of the time step (in this case, one second) gives you the time since the epoch, and converting from that to locale-specific formats with time-zone adjustments is relatively trivial. It does give you a built-in upper limit, though. At the time of writing this article, the year 2038 is only 17 years away. There were some issues in the first few days of Jan. Because Linux and all Unix-lookalike operating systems share the same issue, the year 2038 problem has been taken seriously for some time, with fixes being added to the kernel for several years. This is ongoing, with fixes still being added to the kernel as recently as this Jan.

Of course, a working Linux computer contains a lot more than a kernel. All of the operating utilities and userland applications that make use of system time through the various APIs and interfaces need to be modified to expect 64-bit values. File systems, too, must be updated to accept 64-bit timestamps for files and directories. Linux is everywhere. A catastrophic failure in Linux would mean failures in all sorts of computer-based systems.

Linux runs most of the web, most of the public cloud, and even spacecraft. It runs smart homes and self-driving cars. Smartphones have a Unix-derived kernel at their heart. Practically anything with an embedded operating system inside, like network firewalls, routers, and broadband modems, is likely to run on Linux.

But what are the chances that all of those devices will be patched and updated? Devices that never get patched should be a tiny minority, though. The vast majority of systems will see the crunch time come and go without incident.

At least, until the year 2106 approaches, bringing with it the exact same problem for systems that use unsigned 32-bit integers to count the time since the epoch.

We can use the date command to verify that Linux and other Unix derivatives still use the original, simple scheme of storing the time value as the number of seconds since the epoch. Using the date command without any parameters prints the current date and time to the terminal window; date +%s prints the raw count of seconds since the epoch.


