The basics of working in the UNIX operating system. Linux basics. The Linux command line

A good place to start would be to have a basic understanding of what Linux is and how it works.

You can start with Introduction to Linux (SXW). There are other introductions as well, for example this one. Here is the document by R. S. Klochkov and N. A. Korshenin, UNIX and Linux Basics (SXW), (PDF).

UNIX basics. Training course ... (SXW) (PDF)
Copyleft (no c) - Fuck copyright! 1999-2003 V. Kravchuk, OpenXS Initiative
This short introductory course (nominally 16 hours, of which 6 are hands-on sessions) is designed to introduce you to the architecture, capabilities, and basic facilities of UNIX. If successfully mastered, the course will allow you to work freely and productively in UNIX as a user and to continue studying the administration or programming of this operating system.
The presentation is conducted mainly without reference to the peculiarities of any version of UNIX, but if it is necessary to specify it, it is done for SVR4 systems, in particular, Solaris 8 OS.
I also suggest a book Andrey Robachevsky "UNIX operating system"
Here is what the author writes: "This book is not a replacement for reference books and the various manuals for the UNIX operating system. Moreover, the information presented in the book is sometimes difficult to find in the documentation supplied with the operating system. Those publications are full of practical recommendations, scrupulous descriptions of the settings of particular subsystems, command invocation formats, and so on. At the same time, questions such as the internal architecture of individual system components, their interaction, and their principles of operation often remain behind the scenes. Without knowledge of this 'anatomy', working in the operating system turns into using memorized commands, and inevitable mistakes lead to inexplicable consequences. Conversely, much less attention is paid in this book to UNIX administration, the configuration of specific subsystems, and the commands used. The purpose of this book is to set out the basic organization of the UNIX operating system. It should be borne in mind that the name UNIX denotes a sizable family of operating systems, each of which has its own name and unique features. This book attempts to isolate what is common in the UNIX 'genotype', namely: the basic user and programming interfaces, the purpose of the main components, their architecture and interaction, and on this basis to present the system as a whole. At the same time, references to a specific version of UNIX are provided where relevant."

Personally, I am helped a lot by reading the excellent book by Viktor Alekseevich Kostromin, "Linux for the User", which I can offer you (kos1, kos2, kos3, kos4, kos5, kos6, kos7, kos8, kos9, kos10, kos11, kos12, kos13, kos14, kos15, kos16, kos17, kos18).
And here is the same book, but in PDF (kos1, kos2, kos3, kos4, kos5, kos6, kos7, kos8, kos9, kos10, kos11, kos12, kos13, kos14, kos15, kos16, kos17, kos18).
And now also in SXW (kos1, kos2, kos3, kos4, kos5, kos6, kos7, kos8, kos9, kos10, kos11, kos12, kos13, kos14, kos15, kos16, kos17, kos18).
If you prefer documents in HTML format, then the links above will take you to a page where you are invited to download the archives of the book chapters in this format.

From fundamental books I can also recommend the excellent guide by Carla Schroder, "Linux Cookbook". I warn you right away that this link is for the book in .pdf format, and it weighs 50 MB. An alternative is also possible: the same book, only in .txt format.

The abstract of the book reads: "The proposed edition contains a unique collection of tips, tools, and scripts; you will find a number of ready-made, debugged solutions to complex problems faced by any administrator setting up a Linux server; these solutions will come in handy both when setting up small networks and when building powerful distributed data warehouses. The book is written in the popular form of O'Reilly cookbooks, in the Problem-Solution-Discussion format. For experienced users, programmers, system administrators, university students, graduate students, and teachers." If the link suddenly ceases to exist, please let me know, and I will probably post the .pdf file on my website.

I really like the series of articles and notes by Alexey Fedorchuk, Vladimir Popov and a number of other authors, which I take from here: http://unix.ginras.ru/. Here are some interesting materials about Linux in general and its individual components in particular (Linux-all.zip, Linux-all2.zip, Linux-all3.zip, Linux-all4.zip).
SXW - (Linux-all.zip, Linux-all2.zip, Linux-all3.zip, Linux-all4.zip),
And also a book by Alexey Fedorchuk, "The POSIX Saga, or An Introduction to POSIXism", which covers general questions about a number of systems, primarily UNIX-like ones. The name speaks for itself. According to the author, the book is intended for users (including beginners). Here are the files - Part 1, Part 2, Part 3, Part 4.
And in SXW - Part1, Part2, Part3, Part4.

And if you are interested in the history of free systems, you can read a selection of articles under the general title "A Road Open to All" (SXW), which, according to the author, covers general issues of Open Source, POSIX systems, and the history of UNIX, BSD, and Linux.

Also, for understanding the operating principles of the OS, the concept of a process is, along with the concept of a file, certainly one of the most important. This is the subject of an article by V. A. Kostromin, "Linux Processes and Daemons" (SXW).

Text-Terminal-HOWTO (SXW) v 0.05, June 1998
This document explains what text terminals are, how they work, how to install and configure them, and gives some information on how to repair them. It can be of some use even if you don't have a terminal manual. Although this work was written for real terminals on a Linux system, some of it also applies to terminal emulators and/or other Unix-like systems.

It is also very useful to read the beautifully illustrated guide for faster and easier mastery of the console - Working with Command History (SXW).

Here are the materials on command shells, or command interpreters, also referred to simply as shells. First of all, a selection of articles that are combined under the title Shell and utilities (SXW), (PDF).

The most popular shell today is Bash, short for Bourne Again SHell. I advise you to read the BASH synopsis (SXW), (PDF).
Date of creation: 16.12.97.

And How bash works (SXW), (PDF).
The document briefly summarizes what Bash inherits from the Bourne shell: shell control structures, built-in commands, variables, and other features. It also lists the most significant differences between Bash and the Bourne shell.

The shell interpreter (SXW), (PDF) is a command language that can execute both commands entered from the terminal and commands stored in a file.

Shell Programming (UNIX) (SXW), (PDF)

If Windows freezes, the user makes a few gestures and then, convinced of the "vanity of vanities" of this world, presses RESET with a calm heart. Not so in Linux. That is what the article Hanging? Let's Take It Down! (SXW) is about.

Kppp faq (SXW)

An article by V. A. Kostromin, "Hierarchy of Directories and Filesystems in Linux" (SXW), which describes the standard, developed within the Open Source project, for the directory structure of UNIX-like operating systems (meaning Linux and BSD systems).

About files (and in Linux even directories and devices are, in fact, files), but from a slightly different angle, see the manual Files and Access Rights to Them (SXW).
Highly recommended. Everything is chewed over in great detail.

Linux Commands and Abbreviations (SXW).
This is a practical collection of programs that we use most often, find useful, and which are present in our Linux distributions (RedHat or Mandrake).

UNIX consoles (SXW) - Notes on various consoles.

And here is a hefty guide - the Mandrake Linux 9.0 Command Line Guide (SXW).

Mount filesystems from devices and files (SXW) (PDF)
Document creation date: 26.07.2004
Date of last change: 20.08.2004
Author: Knyazev Alexey.

UNIX (Unix) is a family of portable, multitasking, multiuser operating systems. The first Unix operating system was developed in the late 1960s and early 1970s at the American research firm Bell Laboratories. Initially it was aimed at minicomputers, but it later came to be used on computers of all classes, including mainframes and microcomputers. This was facilitated by the 1990 adaptation of Unix to Intel's 32-bit microprocessors. The functionality and flexibility of Unix made it suitable for use in heterogeneous automated systems, and led to the creation of dozens of standards for computer manufacturers. Operating systems of the Unix family:

Linux is a version of the Unix operating system for computing platforms based on Intel processors;
HP-UX - Hewlett-Packard's version; constantly evolving and notable for compatibility with IA-64, the new standard for 64-bit architecture;
SGI IRIX - a Silicon Graphics operating system based on System V Release 3.2 with BSD elements; on this version of Unix, Industrial Light & Magic created the films Terminator 2 and Jurassic Park;
SCO Unix - the Santa Cruz Operation's version for the Intel platform, independent of hardware manufacturers;
IBM AIX - based on System V Release 2 with some BSD extensions;
DEC Unix - an operating system with cluster support; focused on working together with Windows NT;
NeXTStep-4.3 BSD - OS based on the Mach kernel used in NeXT computers; owned by Apple Computer and serves as the operating system for Macintosh computers;
Sun Solaris is an operating system for SPARC stations based on System V Release 4 with numerous additions.

The Unix operating system appeared during the development of minicomputers. In 1969, the research firm Bell Labs began developing a compact operating system for the 18-bit DEC PDP-7 minicomputer from Digital Equipment Corporation. The system was originally written in assembler, and the birth of Unix is dated January 1, 1970. In 1973, it was rewritten in the C language, which had been developed at Bell Labs. The official presentation of the operating system then took place. Its authors, Bell Labs employees Ken Thompson and Dennis M. Ritchie, called their brainchild a "universal time-sharing operating system".

Unix is based on a hierarchical file system. Each process was viewed as the sequential execution of program code within an autonomous address space, and working with devices was treated as working with files. The first version implemented the key concept of the process; system calls (fork, wait, exec, exit) appeared later. In 1972, the introduction of pipes provided data pipelining.

By the late 1970s, Unix had become a popular operating system, aided by favorable terms of distribution in the university environment. Unix was ported to many hardware platforms, and flavors began to emerge. Over the years, Unix became the standard not only for professional workstations but also for large corporate systems. The reliability and configuration flexibility of UNIX earned it popularity, especially among system administrators. It played an active role in the spread of global networks, above all the Internet.

Thanks to the source-code disclosure policy, numerous free Unix dialects running on the Intel x86 platform (Linux, FreeBSD, NetBSD, OpenBSD) became widespread. Full control over the source code made it possible to create systems with special requirements for performance and security. Unix also assimilated elements of other operating systems, resulting in the POSIX and X/Open programming interfaces.

There are two independently developed branches of UNIX, System V and Berkeley, from which the Unix dialects and Unix-like systems descend. BSD 1.0, which became the basis for non-commercial UNIX dialects, was released in 1977 at the University of California, Berkeley, based on the UNIX V6 source code. In 1982-1983, the first commercial dialects of Unix, System III and System V, were released by Unix System Laboratories (USL). The System V version formed the basis for most subsequent commercial variants. In 1993, AT&T sold the rights to Unix, together with the USL lab, to Novell, which developed the System V-based UnixWare dialect, later passed to the Santa Cruz Operation as SCO UnixWare. The Unix trademark is owned by the X/Open Company.

Unix gained popularity thanks to its ability to run on different hardware platforms: its portability, or mobility. The problem of mobility in UNIX was solved by unifying the architecture of the operating system and by using a single language environment. The C language, developed at Bell Labs, became the link between the hardware platform and the operating environment.

Many portability issues in Unix were addressed through a unified software and user interface. Two organizations tackle the problem of harmonizing the multiple dialects of Unix: the IEEE Portable Applications Standards Committee (PASC) and the X/Open Company (The Open Group). These organizations develop standards that enable the integration of heterogeneous operating systems, including those unrelated to Unix (IEEE PASC - POSIX 1003; X/Open - Common API). Thus OpenVMS, Windows NT, and OS/2 are POSIX-compatible systems.

As a system targeting a wide range of hardware platforms, Unix bases its portability on a modular structure with a central kernel. Initially, the UNIX kernel contained a set of tools responsible for process scheduling, memory allocation, working with the file system, support for external device drivers, and networking and security facilities.

Later, by separating the minimum required set of tools from the traditional kernel, the microkernel was developed. The best-known Unix microkernel implementations are Amoeba, Chorus (Sun Microsystems), and QNX (QNX Software Systems). The Chorus microkernel is 60 KB; QNX is 8 KB. Based on QNX, a 30 KB POSIX-compatible microkernel, Neutrino, was developed. The Mach microkernel, developed at Carnegie Mellon University in 1985, is used in NeXT OS (NeXT), MachTen (Mac), OS/2, AIX (for the IBM RS/6000), OSF/1, Digital UNIX (for Alpha), Windows NT, and BeOS.

In Russia, the Unix operating system is used as a network technology and operating environment for various computer platforms. The infrastructure of the Russian Internet was formed on the basis of Unix. Starting in the early 1980s, domestic work on the Unix operating system was carried out at the I. V. Kurchatov Institute of Atomic Energy (KIAE) and at the Institute of Applied Cybernetics of the Ministry of Aviation Industry. The merger of these teams gave birth to the DEMOS operating system (Dialogue Unified Mobile Operating System), which, in addition to domestic analogues of the PDP-11 (SM-4, SM-1420), was ported to the ES EVM and Elbrus. Despite its versatility, Unix ceded the personal computer market to Microsoft's Windows family. The Unix operating system maintains its position in the field of mission-critical systems with a high degree of scalability and fault tolerance.

In 1965, Bell Telephone Laboratories (a division of AT&T), together with General Electric Company and the Massachusetts Institute of Technology (MIT), began developing a new operating system called MULTICS (MULTiplexed Information and Computing Service). The goal of the project participants was to create a multitasking time-sharing operating system capable of supporting several hundred users. Bell Labs was represented by two contributors, Ken Thompson and Dennis Ritchie. Although the MULTICS system was never completed (Bell Labs withdrew from the project in 1969), it became the forerunner of the operating system later called Unix.

However, Thompson, Ritchie, and a number of other employees continued working on a convenient programming system. Using ideas and developments that emerged from the work on MULTICS, in 1969 they created a small operating system that included a file system, a process control subsystem, and a small set of utilities. The system was written in assembler and ran on the PDP-7 computer. The operating system was named UNIX, a pun on MULTICS coined by another member of the development team, Brian Kernighan.

While the early version of UNIX showed much promise, it could not have realized its full potential without being applied to some real project. Such a project was found. When Bell Labs' patent department needed a word-processing system in 1971, UNIX was chosen as the operating system. By that time it had been ported to the more powerful PDP-11, and the system itself had grown a little: 16K was occupied by the system itself, 8K was allocated to application programs, and the maximum file size was set at 64K with 512K of disk space.

Soon after the first assembler versions were created, Thompson began working on a compiler for the FORTRAN language, and as a result developed the language B. It was an interpreter, with all the limitations inherent in interpreters, and Ritchie reworked it into another language, called C, which allowed machine-code generation. In 1973, the operating system kernel was rewritten in the high-level language C, an unheard-of step that had a huge impact on the popularity of UNIX. This meant that UNIX could now be ported to other hardware platforms in a matter of months, and making changes was not too difficult. Bell Labs now had more than 25 running UNIX systems, and the UNIX System Group (USG) was formed to maintain UNIX.

Research Versions (AT&T Bell Labs)

In accordance with US federal law, AT&T did not have the right to commercialize UNIX and used it for its own needs, but starting in 1974, the operating system was transferred to universities for educational purposes.

The operating system was modernized, and each new version was supplied with the corresponding edition of the Programmer's Manual, after which the versions themselves came to be called editions. A total of 10 editions were issued from 1971 to 1989. The most important editions are listed below.

Edition 1 (1971)

The first version of UNIX, written in assembler for the PDP-11. It included the B language and many well-known commands and utilities, including cat, chdir, chmod, cp, ed, find, mail, mkdir, mkfs, mount, mv, rm, rmdir, wc, who. It was mainly used as a word-processing tool for the Bell Labs patent department.

Edition 3 (1973)

The cc command appeared in the system, launching the C compiler. The number of installed systems reached 16.

Edition 4 (1973)

The first system in which the kernel is written in a high-level C language.

Edition 6 (1975)

The first version of UNIX available outside Bell Labs. The system was completely rewritten in C. From then on, new versions not developed at Bell Labs began to appear, and the popularity of UNIX grew. This version of the system was installed at the University of California, Berkeley, and the first version of BSD (Berkeley Software Distribution) UNIX was soon released on its basis.

Edition 7 (1979)

Included the Bourne shell and the C compiler from Kernighan and Ritchie. The kernel was rewritten for portability to other platforms. The license for this version was bought by Microsoft, which developed the XENIX operating system on its basis.

UNIX grew in popularity, and by 1977 more than 500 systems were running. In the same year, the system was first ported to a computer other than the PDP.

The UNIX genealogy

There is no "standard" UNIX system; all UNIX-like systems have their own specific features and capabilities. But behind the different names and features it is still easy to discern the architecture, user interface, and programming environment of UNIX. The explanation is quite simple: all these operating systems are close or distant relatives. The brightest representatives of this family are described below.

System III (1982)

Not wanting to lose the initiative with UNIX, AT&T merged several existing versions of the OS in 1982 and created a version called System III.

This release was intended to be distributed outside Bell Labs and AT&T, and marked the beginning of a powerful branch of UNIX that is still alive and well today.

System V (1983)

In 1983, System V was released, followed later by several more releases:

  • SVR2 (1984): interprocess communication (IPC) - shared memory, semaphores
  • SVR3 (1987): STREAMS I/O subsystem, File System Switch, shared libraries
  • SVR4 (1989): NFS, FFS, BSD sockets. SVR4 combines the capabilities of several well-known UNIX versions - SunOS, BSD UNIX, and the previous System V releases.

Many components of this system were supported by the ANSI, POSIX, X/Open, and SVID standards.

UNIX BSD (1978) (Based on UNIX 6th Edition)

  • 1981: by order of DARPA, the TCP/IP stack was built into BSD UNIX (in 4.2BSD)
  • 1983: networking technology was actively used, and the system could connect to the ARPANET
  • 1986: version 4.3BSD released
  • 1993: 4.4BSD and BSD Lite released (the last versions released).

OSF / 1 (1988) (Open Software Foundation)

In 1988, IBM, DEC, and HP teamed up to create a version of UNIX independent of AT&T and Sun, and founded an organization called OSF. The result of this organization's activities was the OSF/1 operating system.

Standards

The more different variants of UNIX appeared, the more obvious the need for system standardization became. Having standards facilitates application portability and protects both users and manufacturers. As a result, several standards organizations have emerged and a number of standards have been developed that have influenced the development of UNIX.

IEEE POSIX (Institute of Electrical and Electronics Engineers Portable Operating System Interface)

  • 1003.1 (1988): standardization of the OS API (Application Programming Interface)
  • 1003.2 (1992): definitions of the shell and utilities
  • 1003.1b (1993): real-time application API
  • 1003.1c (1995): definitions of threads

ANSI (American National Standards Institute)

  • X3.159 standard (1989)
  • C syntax and semantics
  • Content of the standard libc library

X/Open

  • 1992: X Window standard
  • 1996: creation, together with OSF, of the CDE (Common Desktop Environment) user interface and its integration with the Motif graphical shell

SVID (System V Interface Definition)

Describes the external interfaces of the System V versions of UNIX. In addition to SVID, SVVS (System V Verification Suite) was released - a set of test programs that allows one to determine whether a system is SVID-compliant and worthy of the proud name System V.

Known UNIX versions

  • IBM AIX - based on SVR2 with many features of SVR4, BSD, OSF/1
  • HP-UX - version from HP
  • IRIX - version from Silicon Graphics, similar to SVR4
  • Digital UNIX - DEC's version, based on OSF/1
  • SCO UNIX (1988) - one of the first UNIX systems for the PC, based on SVR3.2
  • Solaris - version of UNIX SVR4 from Sun Microsystems

Before you can master Linux, you must be fluent in its basic system concepts. Learning to work with Linux is a very useful skill, because a great many websites, email and other internet services run on Linux servers.

In this section we explain the basic concepts related to Linux. We assume that you already have an idea of computer systems in general, including components such as the central processing unit (CPU), random access memory (RAM), motherboard, and hard disk drive (HDD), as well as other controllers and related devices.

3.1 What Is Linux and What Is It Doing?

The term "Linux" is often used to refer to the entire operating system, but in reality Linux is the operating system kernel, which is started by a boot loader, itself launched by the BIOS/UEFI. The kernel takes on a role similar to that of a conductor in an orchestra: it ensures that hardware and software work together. This role involves managing the hardware, users, and file systems. The kernel is the common base for the other programs running on the system, and it most often runs in ring zero, also known as kernel space.

User space

We use the term "user space" to refer to everything that happens outside of the kernel.

User-space programs include many of the core utilities from the GNU Project, most of which are designed to be run from the command line. You can use them in scripts to automate various tasks. For more information on the most important commands, see section 3.4.

Let's take a quick look at the various tasks performed by the Linux kernel.

3.1.1 Managing the Hardware

The purpose of the kernel, first of all, is to manage and control the computer's main components. It detects and configures them when the computer is turned on, and when a device is mounted or removed (for example, a USB device). It also makes them accessible to higher-level software through a simplified programming interface, so applications can take advantage of devices without having to deal with details such as which expansion slot a card is inserted into. The programming interface also provides a level of abstraction; this allows, for example, video-conferencing software to use a webcam regardless of its model and manufacturer. The software can use the Video for Linux (V4L) interface, and the kernel will translate the interface calls into the actual hardware commands required by the particular webcam.

The kernel exports data about detected hardware through the /proc/ and /sys/ virtual filesystems. Applications often access devices through special files created in /dev/. These files represent disks (for example, /dev/sda), partitions (/dev/sda1), mice (/dev/input/mouse0), keyboards (/dev/input/event0), sound cards (/dev/snd/*), serial ports (/dev/ttyS*), and other components.

There are two types of device files: block and character. The former have the characteristics of a block of data: they have a finite size, and you can access bytes at any position in the block. The latter behave like a stream of characters: you can read and write characters, but you cannot seek to a given position or change arbitrary bytes. To find out the type of a device file, check the first letter of the output of ls -l. It is either b for block devices or c for character devices:

As you might have guessed, disks and partitions use block device files, while mice, keyboards, and serial ports use character device files. In both cases, the programming interface includes device-specific commands that can be invoked through the ioctl system call.
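Since the original listing did not survive, here is a minimal sketch of the same check. It uses /dev/null, which is a character device on any Linux system; /dev/sda is only an example and may not exist on your machine:

```shell
# The first character of `ls -l` output reveals the device type:
# b = block device, c = character device.
ls -l /dev/null                 # first column starts with "c"

# The same test can be scripted with the shell's file-test operators:
if [ -c /dev/null ]; then
    echo "/dev/null is a character device"
fi
if [ -b /dev/sda ]; then        # only true if you actually have such a disk
    echo "/dev/sda is a block device"
fi
```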

3.1.2 Merging filesystems

File systems are a prominent aspect of the kernel. Unix-like systems merge all file stores into a single hierarchy, which allows users and applications to access data by knowing its location within that hierarchy.

The starting point of this hierarchical tree is called the root, represented by the "/" character. This directory can contain named subdirectories. For example, the home subdirectory of "/" is called /home/. This subdirectory can, in turn, contain other subdirectories, and so on. Each directory can also contain files, which is where the actual data is stored. Thus /home/buxy/Desktop/hello.txt refers to a file named hello.txt stored in the Desktop subdirectory of the buxy subdirectory of the home directory, which is present in the root. The kernel translates between this naming system and the storage location on a disk.

Unlike other systems, Linux has only one such hierarchy, and it can integrate data from several disks. One of these disks becomes the root, and the others are mounted onto directories in the hierarchy (the Linux command for this is mount); these other disks then become available under those mount points. This allows user home directories (normally stored in /home/) to be kept on a separate hard disk, which will contain the buxy directory (along with the home directories of other users). Once the disk is mounted on /home/, these directories become accessible at their usual locations, and paths such as /home/buxy/Desktop/hello.txt keep working.
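A small sketch of the mechanism just described. The device name /dev/sdb1 is hypothetical, and attaching a filesystem requires root privileges, so that command is shown commented out; listing current mounts works for any user:

```shell
# Attach the filesystem on a second disk's first partition at /home/
# (illustrative only; needs root and an actual /dev/sdb1):
# mount /dev/sdb1 /home

# Listing the currently mounted filesystems needs no privileges;
# each line reads "device on mount-point type fstype (options)".
mount | head -n 3
```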

There are many file system formats, corresponding to the many ways in which data is physically stored on disks. The most widely known are ext2, ext3, and ext4, but others exist. For instance, VFAT is the file system that was historically used by DOS and Windows operating systems. Linux support for VFAT allows hard drives to be accessible under both Kali and Windows. In any case, you must prepare the filesystem on the disk before you can mount it, and this operation is called formatting.

Commands such as mkfs.ext3 (where mkfs stands for MaKe FileSystem) handle formatting. These commands require, as a parameter, a device file representing the partition to be formatted (for example, /dev/sda1, the first partition on the first disk). This operation destroys all data and should be run only once, unless of course you deliberately want to wipe the filesystem and start afresh.
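Because formatting is destructive, a safe way to experiment is to format a plain file instead of a real partition. This sketch assumes mkfs.ext4 is installed (it is skipped otherwise); the image path is arbitrary:

```shell
# Create a 16 MB empty image file and put an ext4 filesystem on it.
truncate -s 16M /tmp/disk.img
if command -v mkfs.ext4 >/dev/null 2>&1; then
    # -F allows a regular file as the target; -q suppresses chatter.
    mkfs.ext4 -q -F /tmp/disk.img
    echo "image formatted"
fi
# A real partition would be formatted the same way, destroying its data:
# mkfs.ext4 /dev/sda1
```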

There are also network filesystems, such as NFS, which do not store data on a local disk. Instead, data is transmitted over the network to a server that stores it and delivers it on demand. Thanks to the filesystem abstraction, you don't have to worry about how such a drive is mounted, because the files remain accessible at their usual hierarchical paths.

3.1.3 Process management

A process is a running instance of a program, which requires memory to store both the program itself and its operating data. The kernel is in charge of creating and tracking processes. When a program starts, the kernel first allocates some memory, loads the executable code from the filesystem into it, and then starts the code running. It keeps information about this process, the most notable of which is an identification number known as the process identifier (PID).
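PIDs are easy to observe from the shell itself; a small sketch (the ps call is guarded, since minimal installations may lack it):

```shell
# $$ expands to the PID of the current shell process.
echo "current shell PID: $$"

# ps (if installed) can report the PID and command name of that process:
if command -v ps >/dev/null 2>&1; then
    ps -o pid=,comm= -p $$
fi
```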

Most modern operating systems, namely those of the Unix family, including Linux, are capable of multitasking. In other words, they allow the system to run many processes at the same time.

In reality, only one process runs at any given time, but the kernel divides CPU time into small slices and runs each process in turn. Since these time slices are very short (on the order of milliseconds), they create the appearance of processes running in parallel, although they are really active only during their own time slice and idle the rest of the time. The kernel's job is to tune its scheduling mechanisms to maintain this appearance while maximizing system performance. If the time slices are too long, the system may not appear as responsive as desired; if they are too short, the system wastes too much time switching between processes.

Such decisions can be adjusted through process priorities, where higher priority processes will run for longer periods of time and at more frequent time slices than lower priority processes.
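On the command line, these priorities are exposed through a process's niceness, adjusted with the nice utility; a sketch (niceness runs from -20, highest priority, to 19, lowest):

```shell
# With no arguments, nice prints the current niceness (usually 0).
nice

# Run a command at lower priority (a higher niceness value):
nice -n 10 sleep 0.1
echo "low-priority job finished with status $?"

# Lowering niceness (raising priority) requires root privileges:
# nice -n -5 some_command
```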

Multiprocessor Systems (and other options)

The limitation described above, that only one process can run at a time, does not apply in every situation. It would be more accurate to say that a single core can only work with one process at a time. Multiprocessor, multicore, or hyper-threaded systems allow several processes to run in parallel. The same time-slicing system is nevertheless used to handle cases where there are more active processes than available processor cores. This is not unusual: a basic system, even a mostly idle one, almost always has dozens of running processes.

The kernel allows several independent instances of the same program to run, but each is allowed access only to its own time slices and memory. Their data thus remains independent.

3.1.4 Rights management

Unix systems support multiple users and groups and allow control over access rights. In most cases, a process is identified by the user who started it, and that process is only permitted to take actions allowed to its owner. For example, opening a file requires the kernel to check the process's identity against the required access rights (for more information on this particular example, see section 3.4.4, "Rights Management").
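The ownership and permission check described here can be observed directly; a sketch using a temporary file (stat -c is the GNU coreutils form of the command):

```shell
# Create a scratch file and restrict it: read/write for the owner,
# read-only for the group, nothing for anyone else.
f=$(mktemp)
chmod 640 "$f"

ls -l "$f"            # the mode column shows -rw-r-----
stat -c '%a %U' "$f"  # octal mode and owner name

rm -f "$f"
```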

3.2 Linux command line

By "command line" we mean a text-based interface that allows you to enter commands, execute them, and view the results. You can launch a terminal (a text screen inside a graphical desktop, or a text console outside of any graphical interface) and a command interpreter inside it ( shell).

3.2.1 How to Get a Command Line

When your system is working properly, the simplest way to access the command line is to launch a terminal in a graphical desktop session.


Figure 3.1 Starting the GNOME Terminal

For example, on a default Kali Linux system the GNOME Terminal can be launched from the list of favorites. Alternatively, you can type "terminal" in the Activities window (the window that is activated when you move the mouse to the upper-left corner) and click the application icon that appears (Figure 3.1).

If your graphical interface is broken or misbehaving, you can still launch a command line on the virtual consoles (up to six of them are accessible through the key combinations CTRL+ALT+F1 through CTRL+ALT+F6; the CTRL key may be omitted if you are already in text mode, outside of the Xorg or Wayland graphical interface).

You get a regular login screen where you enter your username and password before accessing the command line with its shell:

The program that interprets your input and executes your commands is called a shell (or command-line interpreter). The default shell provided in Kali Linux is Bash (the name stands for Bourne Again SHell). The trailing $ or # character indicates that the shell is awaiting your input. It also indicates whether Bash sees you as a regular user (the former case, with the dollar sign) or as the superuser (the latter case, with the hash).

3.2.2 Command Line Basics

This section provides only a short overview of some commands; each of them has many options and capabilities not covered here, so please refer to the extensive documentation available in their respective man pages. In penetration tests, you will most often gain access to a system through a shell after a successful exploit, rather than through a graphical user interface. Knowing how to use the command line proficiently is essential if you want to succeed as a security professional.

Once a session is started, the pwd command (which stands for print working directory) displays your current location in the filesystem. Your current location can be changed with the cd directory-name command (cd stands for change directory). If you do not specify a directory to go to, you are automatically taken to your home directory. If you type cd -, you return to your previous working directory (the one you were in before the last cd command). The parent directory is always named .. (two dots), while the current directory is denoted by . (one dot). The ls command lets you list the contents of a directory. If you do not specify any additional parameters, ls displays the contents of the current directory.
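A short session illustrating these navigation commands (the /tmp directory is used here only as a convenient example destination):

```shell
pwd        # print the current working directory, e.g. /home/user
cd /tmp    # change to the /tmp directory
pwd        # now prints /tmp
cd -       # return to the previous working directory (its name is printed)
ls -a      # list the current directory, including the hidden . and .. entries
```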

You can create a new directory with mkdir directory-name, and remove an existing (empty) directory with rmdir directory-name. The mv command lets you move and rename files and directories; a file can be deleted with rm file-name, and copying a file is done with cp source-file target-file.
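Putting these file-management commands together (the directory and file names below are arbitrary examples):

```shell
mkdir demo                 # create a new directory
echo "hello" > demo/a.txt  # create a small file inside it
cp demo/a.txt demo/b.txt   # copy the file
mv demo/b.txt demo/c.txt   # rename the copy
rm demo/a.txt demo/c.txt   # delete both files
rmdir demo                 # remove the now-empty directory
```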

The shell executes each command by running the first program with the given name that it finds in the directories listed in the PATH environment variable. Most often, these programs are located in /bin, /sbin, /usr/bin, or /usr/sbin. For example, the ls command is found at /bin/ls. Sometimes a command is handled directly by the shell itself, in which case it is called a shell builtin (cd and pwd are among them); the type command lets you query the type of each command.
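You can see this lookup mechanism for yourself (the exact path reported for ls may differ between distributions):

```shell
type cd      # reports that cd is a shell builtin
type ls      # reports the full path of ls, such as /bin/ls
echo "$PATH" # the colon-separated list of directories searched for commands
```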

Note the use of the echo command, which simply prints a string to the terminal. Here it is used to display the contents of an environment variable on the screen, because the shell automatically substitutes variables with their values before executing the command line.
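The substitution happens before echo ever runs, and quoting controls it, assuming a Bash-like shell:

```shell
echo Hello, world  # prints: Hello, world
echo "$PATH"       # double quotes: the variable is expanded to its value
echo '$PATH'       # single quotes: no expansion, prints the literal text $PATH
```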

Environment Variables

Environment variables allow you to store global settings for the shell or other programs. They are contextual, but inheritable. Each process has its own set of environment variables (they are contextual), while shells, such as login shells, can declare variables that will be passed on to the programs they execute (they are inheritable).

These variables can be defined system-wide in /etc/profile or per user in ~/.profile, but variables that are not specific to command-line interpreters are better placed in /etc/environment, since those variables will be injected into all user sessions thanks to the Pluggable Authentication Modules (PAM) mechanism, even when no shell is running.
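The "contextual but inheritable" behavior is easy to demonstrate; GREETING below is a made-up example variable, not a standard one:

```shell
export GREETING="hello"     # define and export a variable in the current shell
echo "$GREETING"            # visible here (contextual)
bash -c 'echo "$GREETING"'  # also visible in a child process (inherited)
unset GREETING              # remove it again
```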

3.3 Linux file system

3.3.1 Filesystem Hierarchy Standard

Like other Linux distributions, Kali Linux is organized according to the Filesystem Hierarchy Standard (FHS), allowing users of other Linux distributions to find their way around Kali with ease. The FHS defines the purpose of each directory. The top-level directories are described as follows.

  • /bin/: main programs
  • /boot/: Kali Linux kernel and other files needed for its early boot process
  • /dev/: device files
  • /etc/: configuration files
  • /home/: users' personal files
  • /lib/: core libraries
  • /media/*: mount points for removable devices (CD-ROMs, USB drives, etc.)
  • /mnt/: temporary mount points
  • /opt/: additional applications provided by third parties
  • /root/: the administrator's (root's) personal files
  • /run/: volatile runtime data that does not persist across reboots (not yet included in the FHS)
  • /sbin/: system programs
  • /srv/: data used by servers hosted on this system
  • /tmp/: temporary files (this directory is often emptied at reboot)
  • /usr/: applications (this directory is further subdivided into bin, sbin, and lib, following the same logic as in the root directory). In addition, /usr/share/ contains architecture-independent data. The /usr/local/ directory is intended for the administrator's manual installation of applications without overwriting files handled by the packaging system (dpkg).
  • /var/: variable data handled by daemons. This includes log files, queues, buffers, and caches.
  • /proc/ and /sys/ are specific to the Linux kernel (and not part of the FHS). They are used by the kernel to export data to user space.

3.3.2 User home directory

The contents of a user's home directory are not standardized, but there are a few noteworthy conventions nonetheless. One is that a user's home directory is often referred to by a tilde ("~"). This is very useful to know because command interpreters automatically replace the tilde with the correct directory (which is stored in the HOME environment variable, and whose usual value is /home/user/).
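Tilde expansion in action (the exact directory printed depends on the current user):

```shell
echo ~    # the shell prints the value of $HOME, e.g. /home/user
cd ~      # change to the home directory
pwd       # prints the same directory as echo ~
```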

Traditionally, application configuration files are often stored directly in your home directory, but their filenames usually start with a dot (for example, the mutt email client stores its configuration in ~/.muttrc). Note that filenames starting with a dot are hidden by default: the ls command lists them only when the -a option is used, and graphical file managers must be explicitly configured to show hidden files.

Some programs also use multiple configuration files organized in one directory (for example, ~/.ssh/). Some applications (such as the Firefox web browser) also use their own directory to store a cache of downloaded data, which means these directories can end up consuming a lot of disk space.

These configuration files stored directly in your home directory, often collectively called dotfiles, have long proliferated to the point that home directories can become quite cluttered with them. Fortunately, a collective effort under the umbrella of FreeDesktop.org resulted in the XDG Base Directory Specification, a convention that aims to clean up these files and directories. This specification states that configuration files should be stored under ~/.config, cache files under ~/.cache, and application data files under ~/.local (or their subdirectories). This convention is slowly gaining momentum.

Graphical desktops most often use shortcuts to display the contents of the ~/Desktop/ directory (or whatever its exact translation is on systems that do not use English). Finally, the email system sometimes stores incoming mail in the ~/Mail/ directory.

It is interesting:

Linux basics

Linux is inspired by the Unix operating system, which appeared in 1969 and is still used and developed today. Much of the internals of UNIX exist in Linux, which is key to understanding the fundamentals of the system.

Unix focused primarily on the command line interface, which is what Linux inherited. Thus, the graphical user interface with its windows, images and menus is built on top of the main interface - the command line. It also means that the Linux file system is built to be easily manageable and accessible from the command line.

Directories and file system

File systems in Linux and Unix are organized in a hierarchical, tree-like structure. The top level of the filesystem is /, the root directory. This means that all other files and directories (including other drives and partitions) live inside the root directory. In UNIX and Linux, everything is treated as a file, including hard drives, their partitions, and removable media.

For example, /home/jebediah/cheeses.odt shows the full path to the cheeses.odt file. The file is located in the jebediah directory, which is located in the home directory, which in turn is located in the root directory (/).

Within the root directory (/), there are a number of important system directories found in most Linux distributions; their purposes are described in the Filesystem Hierarchy Standard section above.

Access rights

All files in Linux have permissions that allow or deny reading, modifying, or executing them. The superuser "root" has access to every file on the system.

Each file has the following three access sets, in order of importance:

    owner

    refers to the user who owns the file

    group

    refers to the group associated with the file

    others

    applies to all other users of the system

Each of the three sets defines access rights. The rights, as well as how they are applied to various files and directories, are shown below:

    read

    files can be displayed and opened for reading

    directory contents can be viewed

    write

    files can be modified or deleted

    directory contents can be modified

    execute

    executable files can be run as programs

    directories can be entered
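These permission sets can also be inspected and changed from the command line with ls -l and chmod (the file name below is an arbitrary example; the exact permission string shown depends on your umask):

```shell
touch example.sh     # create an empty file
ls -l example.sh     # shows the three permission sets, e.g. -rw-r--r--
chmod u+x example.sh # grant the owner (u) execute (x) permission
ls -l example.sh     # the owner's set now includes x, e.g. -rwxr--r--
rm example.sh        # clean up
```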

To view and edit the permissions on files and directories, open the Applications → Accessories → Home Folder and right-click on a file or directory. Then select Properties. The permissions exist under the Permissions tab and allow for the editing of all permission levels, if you are the owner of the file.

To learn more about file permissions in Linux, read the file permissions page in the Ubuntu Wiki.

Terminals

Working at the command line is not as daunting a task as you might think. No special knowledge is needed to use the command line: it is a program like any other. Most things in Linux can be done from the command line, and although graphical tools exist for most programs, sometimes they are just not enough. This is where the command line comes in handy.

The Terminal is located in Applications → Terminal. The terminal is often called the command prompt or the shell. In days gone by, this was the way the user interacted with the computer. However, Linux users have found that the use of the shell can be quicker than a graphical method and still holds some merit today. Here you will learn how to use the terminal.

The terminal was originally used for file management, and indeed it is still used as a file browser if the graphical environment does not work. You can use the terminal as a browser to manage files and undo changes that have been made.

Basic commands

View directory contents: ls

Command ls shows a list of files, highlighting different file types with colors and text formatting.

Create directories: mkdir (directory name)

Command mkdir creates a new directory.

Go to a directory: cd (/path/to/directory)

Command cd allows you to go to any directory you specify.

Copy a file or directory: cp (source file or directory) (destination)

Command cp copies any selected file. Command cp -r copies any selected directory with all contents.

Remove files or directories: rm (file or folder name)

Command rm deletes any selected file. Command rm -rf deletes any selected directory with all contents.

Rename or move a file or directory: mv (current name) (new name or destination)

Command mv renames or moves the selected file or directory.

Find directories and files: locate (directory or file name)

Command locate allows you to find a given file on your computer. It uses a file index to speed up the search. To update the index, run the updatedb command. It runs automatically every day if the computer is turned on. Superuser rights are required to run updatedb (see "Root user and sudo command").

You can also use wildcards to match more than one file, such as "*" (which matches any sequence of characters) or "?" (which matches a single character).
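A quick demonstration of wildcard matching (the directory and file names are arbitrary examples):

```shell
mkdir glob-demo && cd glob-demo
touch file1.txt file2.txt notes.log
ls *.txt       # matches file1.txt and file2.txt
ls file?.txt   # ? matches exactly one character: file1.txt file2.txt
ls *.log       # matches notes.log
cd .. && rm -r glob-demo
```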

For a more thorough introduction to the Linux command line, please read the command line introduction on the Ubuntu wiki.

Editing text

All of the configurations and settings in Linux are saved in text files. Even though you most often can edit configurations through the graphical interface, you may occasionally have to edit them by hand. Mousepad is the default Xubuntu text editor, which you can launch by clicking Applications → Accessories → Mousepad on the desktop menu system.

Sometimes Mousepad is run from the command line using gksudo, which launches Mousepad with administrative privileges, allowing configuration files to be modified.

If you need a text editor on the command line, you can use nano, an easy-to-use text editor. When running it from the command line, always use the following command to disable automatic word wrap:

nano -w

For more information about how to use nano , refer to the guide on the wiki.

There are also quite a few other terminal-based editors available in Ubuntu. Popular ones include VIM and Emacs (the pros and cons of each are cause for much friendly debate within the Linux community). These are often more complex to use than nano , but are also more powerful.

Root user and sudo command

The root user in GNU/Linux is the user who has administrative access to your system. Normal users do not have this access for security reasons. However, Ubuntu does not enable the root user. Instead, administrative access is given to individual users, who may use the "sudo" application to perform administrative tasks. The first user account you created on your system during installation will, by default, have access to sudo. You can restrict and enable sudo access to users with the Users and Groups application (see "Managing Users and Groups" for more information).

When you open a program that requires superuser rights, sudo will ask you for your password. This will ensure that malicious applications cannot damage your system, and it will also remind you that you are about to perform actions that require extra care!

To use sudo on the command line, just type "sudo" before the command you want to run. You will then need to enter your password.

Sudo will remember your password for 15 minutes (by default). This feature was designed to allow users to perform multiple administrative tasks without being asked for a password each time.

Be careful when doing administrative tasks - you might damage your system!

Some other tips for using sudo include:

    To use the terminal as a super user (root), type "sudo -i" at the command line

    The entire suite of default graphical configuration tools in Ubuntu already use sudo, so they will prompt you for your password if needed.

    When running graphical applications, "gksudo" is used instead of "sudo". It prompts the user for a password in a small graphical window. The "gksudo" command is handy if you want to add a launch button for Synaptic to your panel, or something similar.

    For more information on the sudo program and the absence of a root user in Ubuntu, read the sudo page on the Ubuntu wiki.