The Concept of Process


  • An operating system executes a variety of programs:

    - Batch system – jobs
    - Time-shared systems – user programs or tasks

  • Textbook uses the terms job and process almost interchangeably
  • Process – a program in execution; process execution must progress in sequential fashion
  • A process includes:
    -program counter
    -stack
    -data section

a.) Process State - As a process executes, it changes state:
  • new: The process is being created
  • running: Instructions are being executed
  • waiting: The process is waiting for some event to occur
  • ready: The process is waiting to be assigned to a processor
  • terminated: The process has finished execution

b.) Process Control Block - a process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process including:

  • The current state of the process, i.e., whether it is ready, running, waiting, and so on.
  • Unique identification of the process in order to track "which is which" information.
  • A pointer to parent process.
  • Similarly, a pointer to child process (if it exists).
  • The priority of process (a part of CPU scheduling information).
  • Pointers to locate memory of processes.
  • A register save area.
  • The processor it is running on.

The PCB is a central store of information that allows the operating system to locate key information about a process; thus, the PCB is the data structure that defines a process to the operating system. During a context switch, the running process is stopped and another process is given a chance to run. The kernel must stop the execution of the running process, copy out the values in the hardware registers to its PCB, and update the hardware registers with the values from the PCB of the new process. Since the PCB contains critical information for the process, it must be kept in an area of memory protected from normal user access. In some operating systems the PCB is placed at the beginning of the kernel stack of the process, since that is a convenient protected location.
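As a rough illustration, a PCB can be sketched as a C structure. The field names below are hypothetical and greatly simplified; real kernels (for example, Linux's task_struct) contain far more fields.

    /* A simplified, hypothetical process control block (PCB). */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    typedef struct pcb {
        int           pid;            /* unique process identifier            */
        proc_state_t  state;          /* current scheduling state             */
        int           priority;       /* CPU-scheduling priority              */
        struct pcb   *parent;         /* pointer to parent process            */
        struct pcb   *first_child;    /* pointer to a child process, if any   */
        void         *page_table;     /* pointers to locate process memory    */
        unsigned long registers[16];  /* register save area (PC, SP, ...)     */
        int           cpu;            /* processor the process is running on  */
    } pcb_t;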

c.) Threads - a thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads allow multiple streams of execution within one process and are a popular way to improve application performance through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (running, blocked, ready, or terminated). Each thread has its own stack, since threads generally call different procedures and therefore have different execution histories. In an operating system with a thread facility, the basic unit of CPU utilization is a thread. A thread consists of a program counter (PC), a register set, and a stack space. Unlike processes, threads are not independent of one another: a thread shares its code section, data section, and OS resources (such as open files and signals), collectively known as a task, with the other threads of the same process.
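A minimal POSIX-threads sketch (assuming a POSIX system with pthreads; compile with -pthread): two threads of the same process share the global counter from the data section, while each thread's local variables live on its own private stack.

    #include <pthread.h>
    #include <stdio.h>

    int shared_counter = 0;                 /* data section: shared by all threads */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        /* local variables live on this thread's private stack */
        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&lock);
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %d\n", shared_counter);   /* prints 2000 */
        return 0;
    }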

Process Scheduling


The assignment of physical processors to processes allows processors to accomplish work. The problem of determining when processors should be assigned and to which processes is called processor scheduling or CPU scheduling.

When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.

Goals of Scheduling (objectives)
In this section we try to answer the following question: what does the scheduler try to achieve?
Many objectives must be considered in the design of a scheduling discipline. In particular, a scheduler should consider fairness, efficiency, response time, turnaround time, throughput, and so on. Some of these goals depend on the kind of system being used (for example, a batch, interactive, or real-time system), but some goals are desirable in all systems.
a.) Scheduling Queues
  • Job queue – set of all processes in the system
  • Ready queue – set of all processes residing in main memory, ready and waiting to execute
  • Device queues – set of processes waiting for an I/O device
  • Processes migrate among the various queues

Ready Queue And Various I/O Device Queues

b.) Schedulers

  • Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
  • Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates CPU.
  • Medium-term scheduler – swaps processes out of memory and later back in, reducing the degree of multiprogramming

Addition of Medium Term Scheduling

  • Short-term scheduler is invoked very frequently (milliseconds) ⇒ (must be fast)
  • Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ (may be slow)
  • The long-term scheduler controls the degree of multiprogramming
  • Processes can be described as either:
    - I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
    - CPU-bound process – spends more time doing computations; few very long CPU bursts

c.) Context Switch

  • When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process
  • Context-switch time is overhead; the system does no useful work while switching
  • Time dependent on hardware support
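In outline, a context switch copies the running process's hardware state into its PCB and reloads the hardware from the new process's PCB. The sketch below is schematic C, reusing the hypothetical pcb_t from earlier; save_registers and load_registers are placeholders for architecture-specific assembly, not real library functions.

    /* Placeholders provided by architecture-specific assembly. */
    void save_registers(unsigned long *regs);
    void load_registers(unsigned long *regs);

    void context_switch(pcb_t *old, pcb_t *new)
    {
        save_registers(old->registers);   /* copy CPU registers into old PCB     */
        old->state = READY;               /* (or WAITING, if the process blocked) */

        new->state = RUNNING;
        load_registers(new->registers);   /* restore registers from the new PCB;  */
                                          /* execution resumes in 'new'           */
    }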

CPU switch from Process 0 to Process 1

Operations on Processes


a.) Process Creation
  • Parent process creates children processes, which, in turn, create other processes, forming a tree of processes
  • Resource sharing
    - Parent and children share all resources
    - Children share subset of parent’s resources
    - Parent and child share no resources
  • Execution
    - Parent and children execute concurrently
    - Parent waits until children terminate
  • Address space
    - Child duplicate of parent
    - Child has a program loaded into it
  • UNIX examples
    - fork system call creates new process
    - exec system call used after a fork to replace the process’ memory space with a new program
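A minimal UNIX example of the fork/exec pattern described above: the parent creates a child, the child replaces its memory image with a new program (here, ls), and the parent waits until the child terminates.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();               /* create a new process             */

        if (pid < 0) {                    /* fork failed                      */
            perror("fork");
            exit(1);
        } else if (pid == 0) {            /* child: duplicate of the parent   */
            execlp("ls", "ls", "-l", (char *)NULL);  /* load a new program    */
            perror("execlp");             /* reached only if exec fails       */
            exit(1);
        } else {                          /* parent                           */
            wait(NULL);                   /* wait until the child terminates  */
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }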

Process Creation

b.) Process Termination

  • Process executes last statement and asks the operating system to delete it (exit)
    - Output data from child to parent (via wait)
    - Process’ resources are deallocated by operating system
  • Parent may terminate execution of children processes (abort)
    - Child has exceeded allocated resources
    - Task assigned to child is no longer required
    - If parent is exiting
      - Some operating systems do not allow a child to continue if its parent terminates
      - All children terminated: cascading termination
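A short sketch of normal termination: the child passes a status code back to the parent through exit(), and the parent retrieves it via wait()/waitpid(), after which the child's resources are deallocated by the operating system.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            exit(42);                         /* child: last statement, returns status 42 */
        } else if (pid > 0) {
            int status;
            waitpid(pid, &status, 0);         /* parent collects the child's status       */
            if (WIFEXITED(status))
                printf("child exited with %d\n", WEXITSTATUS(status));  /* prints 42      */
        }
        return 0;
    }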

Cooperating Process


  • Independent process cannot affect or be affected by the execution of another process
  • Cooperating process can affect or be affected by the execution of another process
  • Advantages of process cooperation
    - Information sharing
    - Computation speed-up
    - Modularity
    - Convenience

Interprocess Communication (IPC)


  • Mechanism for processes to communicate and to synchronize their actions
  • Message system – processes communicate with each other without resorting to shared variables
  • IPC facility provides two operations:
    - send(message) – message size fixed or variable
    - receive(message)
  • If P and Q wish to communicate, they need to:
    - establish a communication link between them
    - exchange messages via send/receive
  • Implementation of communication link
    - physical (e.g., shared memory, hardware bus)
    - logical (e.g., logical properties)

Quiz #3


1.) Q: What are the major activities of the OS with regards to process management?
A: The major activities of the OS with regards to process management are the ff:
-process creation and deletion
-process suspension and resumption
-provision mechanisms for:
-process synchronization
-process communication
-deadlock handling
2.) Q: What are the major activities of the OS with regards to memory management?
A: The major activities of the OS with regards to memory management are the ff:
-keep track of which parts of the memory are currently being used and by whom
-decide which process to load when memory space is available
-allocate and deallocate memory space as needed
3.) Q: What are the major activities of the OS with regards to storage management?
A: The major activities of the OS with regards to storage management are the ff:
-free space management
-storage allocation
-disk scheduling
4.) Q: What are the major activities of the OS with regards to file management?
A: The major activities of the OS with regards to file management are the ff:
-file creation and deletion
-directory creation and deletion
-support of primitives for manipulating files and directories
-mapping files onto secondary storage
-file backup on stable (non-volatile) storage media
5.) Q: What is the purpose of the command interpreter?
A: It serves as the interface between the OS and the user
-user-friendly, mouse-based windows environment in the Macintosh and Microsoft OS
-in MS-DOS and UNIX, commands are typed on the keyboard and displayed on screen
Many commands are given to the OS by control statements which deal with:
-process creation and management
-I/O handling
-secondary-storage management
-main-memory management
-file-system access
-protection
-networking
The program that reads and interprets control statements is called:
-command-line interpreter
-shell (UNIX)
Its function is to get and execute the next command statement

Virtual Machine


A virtual machine is a tightly isolated software container that can run its own operating systems and applications as if it were a physical computer. A virtual machine behaves exactly like a physical computer and contains its own virtual (i.e., software-based) CPU, RAM, hard disk, and network interface card (NIC). An operating system can’t tell the difference between a virtual machine and a physical machine, nor can applications or other computers on a network. Even the virtual machine thinks it is a “real” computer. Nevertheless, a virtual machine is composed entirely of software and contains no hardware components whatsoever. As a result, virtual machines offer a number of distinct advantages over physical hardware.
Implementation
  • Compact. This is the biggest requirement: that the implementation be compact. How compact you need it to be depends entirely on what you want to do with it. But in general, the smaller the footprint, the more you can do with it. In the past, I’ve targeted 64K-128K of payload size, and considered that pretty good. But the smaller the implementor makes it, the more likely it is to be useful in more contexts such as first stage injection. This also extends to the libraries and bytecode that is generated for the virtual machine, as the virtual machine by itself is useless without logic to drive it. This is why I’m targeting a much smaller payload size in my next implementation, of a few kilobytes.
  • Portability. This is another huge design goal that I think many people who operate in this space miss out on. The implementation of the virtual machine should be portable between platforms. Behaviors should be as similar as possible and abstracted out so as not to distract the programmer-user from the task at hand. A task such as opening a socket on a Windows machine should work on a Unix machine as well. When we have multiple implementations of a virtual machine, we run into the risk of different behaviors, defeating the entire purpose of a portable bytecode virtual machine.
  • Dynamic. The virtual machine should be dynamic. We should be able to modify the behavior of the program and language while executing. By being self-modifying, the language and core functionality can start out very small, but be extended upon. Features that are not needed early in the process such as garbage collection can be added. Squeeze a payload in, and watch it expand, much like a ship in the bottle.
  • Secure. Using the virtual machine for injection purposes should never introduce a state of insecurity to the target machine or network! While it is all but impossible to make a virtual machine immune to examination by a local user who controls the machine, the virtual machine should not be easily attackable from the outside or use a cleartext control channel.
  • Elegance. This cannot be overstated. While it is easy to write a functional and spartan language for the purpose, it is not going to be pleasant for the programmer. Programmers are happiest when they have a language that does not get in their way. If the language and virtual machine environment do not have this characteristic, then people are not going to use this tool. Elegant design also lends itself well to other desired traits such as dynamism and compactness.
Benefits
Virtual Machines Benefits
In general, VMware virtual machines possess four key characteristics that benefit the user:
  • Compatibility: Virtual machines are compatible with all standard x86 computers
  • Isolation: Virtual machines are isolated from each other as if physically separated
  • Encapsulation: Virtual machines encapsulate a complete computing environment
  • Hardware independence: Virtual machines run independently of underlying hardware
Examples
Language        IR              Implementation(s)
Java            JVM bytecode    Interpreter, JIT
C#              MSIL            JIT (but may be pre-compiled)
Prolog          WAM code        compiled, interpreted
Forth           bytecode        interpreted
Smalltalk       bytecode        interpreted
Pascal          p-code          interpreted
                --              compiled
C, C++          --              compiled (usually)
Perl 6          PVM             interpreted
                Parrot          interpreted, JIT
Python          --              interpreted
sh, bash, csh   original text   interpreted

System Generation


An operational system is a combination of the z/TPF system, application programs, and people. People assign purpose to the system and use the system. The making of an operational system depends on three interrelated concepts:

  • System definition: The necessary application and z/TPF system knowledge required to select the hardware configuration and related values used by the z/TPF system software.
  • System initialization: The process of creating the z/TPF system tables and configuration-dependent system software.
  • System restart and switchover: The procedures used by the z/TPF system software to ready the configuration for online use.

The first two items are sometimes collectively called system generation (also referred to as installing and implementing). System definition is sometimes called design. System restart is the component that uses the results of a system generation to place the system in a condition to process real-time input. The initial startup is a special case of restart, and for this reason system restart is sometimes called initial program load, or IPL. System restart uses values found in tables set up during system generation and changed during the online execution of the system. A switchover implies shifting the processing load to a different central processing complex (CPC), and requires some additional procedures on the part of a system operator. A restart or switchover may be necessary because of a detected hardware failure, a detected software failure, or operator option. In any event, system definition (design), initialization, restart, and switchover are related to error recovery. This provides the necessary background to use this information, which is the principal reference to be used to install the z/TPF system.

System Boot


The typical computer system boots over and over again with no problems, starting the computer's operating system (OS) and identifying its hardware and software components that all work together to provide the user with the complete computing experience. But what happens between the time that the user powers up the computer and when the GUI icons appear on the desktop?

In order for a computer to successfully boot, its BIOS, operating system and hardware components must all be working properly; failure of any one of these three elements will likely result in a failed boot sequence.

When the computer's power is first turned on, the CPU initializes itself, which is triggered by a series of clock ticks generated by the system clock. Part of the CPU's initialization is to look to the system's ROM BIOS for its first instruction in the startup program. The ROM BIOS stores the first instruction, which is the instruction to run the power-on self test (POST), in a predetermined memory address. POST begins by checking the BIOS chip and then tests CMOS RAM. If the POST does not detect a battery failure, it then continues to initialize the CPU, checking the inventoried hardware devices (such as the video card), secondary storage devices, such as hard drives and floppy drives, ports and other hardware devices, such as the keyboard and mouse, to ensure they are functioning properly.

Once the POST has determined that all components are functioning properly and the CPU has successfully initialized, the BIOS looks for an OS to load.

The BIOS typically looks to the CMOS chip to tell it where to find the OS, and in most PCs, the OS loads from the C drive on the hard drive even though the BIOS has the capability to load the OS from a floppy disk, CD or ZIP drive. The order of drives that the CMOS looks to in order to locate the OS is called the boot sequence, which can be changed by altering the CMOS setup. Looking to the appropriate boot drive, the BIOS will first encounter the boot record, which tells it where to find the beginning of the OS and the subsequent program file that will initialize the OS.

Once the OS initializes, the BIOS copies its files into memory and the OS basically takes over control of the boot process. Now in control, the OS performs another inventory of the system's memory and memory availability (which the BIOS already checked) and loads the device
drivers that it needs to control the peripheral devices, such as a printer, scanner, optical drive, mouse and keyboard. This is the final stage in the boot process, after which the user can access the system’s applications to perform tasks.

System Components


Operating Systems Process Management


In operating systems, a process is defined as “a program in execution”. A process can be considered as an entity that consists of a number of elements, including: identifier, state, priority, program counter, memory pointers, context data, and I/O requests. The above information about a process is usually stored in a data structure, typically called a process block.

Figure 1 shows a simplified process block. Because process management involves scheduling (CPU scheduling, I/O scheduling, and so on), state switching, and resource management, the process block is one of the most commonly accessed data types in an operating system. Its design directly affects the efficiency of the operating system. As a result, in most operating systems, there is a data object that contains information about all the current active processes. It is called the process controller.


Figure 2 shows the structure of a process controller, which is implemented as a linked-list of process blocks.

In order to achieve high efficiency, the process controller is usually implemented as a global variable that can be accessed by both kernel modules and nonkernel modules. For example, any time a new process (task) is created, the module that created this process should be able to access the process controller to add the new process. Therefore, the process controller – the data object that controls the current active processes – is usually implemented as a category-5 global variable. This means both kernel modules and nonkernel modules can access the process controller to change its fields, and these changes can affect the uses of the process controller in kernel modules.
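A simplified sketch of the process controller as a linked list of process blocks, in the spirit of Figure 2. The field and function names are illustrative only, not taken from a real kernel.

    /* Simplified process block, linked into the process controller list. */
    typedef struct process_block {
        int                   pid;
        int                   state;
        int                   priority;
        struct process_block *next;       /* link to the next active process */
    } process_block_t;

    /* Global process controller: head of the list of active processes. */
    static process_block_t *process_controller = NULL;

    /* Called when a new process (task) is created, by kernel or nonkernel code. */
    void add_process(process_block_t *pb)
    {
        pb->next = process_controller;    /* insert at the head of the list  */
        process_controller = pb;
    }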


Main Memory Management

Memory management is the act of managing computer memory. In its simpler forms, this involves providing ways to allocate portions of memory to programs at their request, and freeing it for reuse when no longer needed. The management of main memory is critical to the computer system. Main memory is a volatile storage device. It loses its contents in the case of system failure.

The operating system is responsible for the following activities in connections with memory management:
  • Keep track of which parts of memory are currently being used and by whom.
  • Decide which processes to load when memory space becomes available.
  • Allocate and deallocate memory space as needed.

Another strategy for managing memory is virtual memory, which allows programs to run even when they are only partially in main memory. The basic idea behind this strategy is that the combined size of the program, data, and stack may exceed the amount of physical memory available for it. The operating system keeps those parts of the program currently in use in main memory, and the rest on the disk.

Virtual memory can also work in a multiprogramming system, with bits and pieces of many programs in memory at once. While a program is waiting for a part of itself to be brought in, it is waiting for I/O and cannot run, so the CPU can be given to another process, the same way as in any other multiprogramming system. Most virtual memory systems use a technique called paging.
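With paging, a virtual address splits into a page number and an offset within that page. A small sketch, assuming a hypothetical 4 KB page size:

    #include <stdio.h>

    #define PAGE_SIZE 4096UL   /* assumed page size: 4 KB */

    int main(void)
    {
        unsigned long vaddr  = 81930;                 /* example virtual address       */
        unsigned long page   = vaddr / PAGE_SIZE;     /* page number: 81930/4096 = 20  */
        unsigned long offset = vaddr % PAGE_SIZE;     /* offset within the page: 10    */

        /* The OS looks the page number up in the page table; if that page is not
         * in main memory, a page fault brings it in from disk. */
        printf("page %lu, offset %lu\n", page, offset);
        return 0;
    }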

File Management

Also referred to as simply a file system or filesystem. The system that an operating system or program uses to organize and keep track of files. For example, a hierarchical file system is one that uses directories to organize files into a tree structure.

Although the operating system provides its own file management system, you can buy separate file management systems. These systems interact smoothly with the operating system but provide more features, such as improved backup procedures and stricter file protection.

The operating system is responsible for the following activities in connections with file management:
  • File creation and deletion.
  • Directory creation and deletion.
  • Support of primitives for manipulating files and directories.
  • Mapping files onto secondary storage.
  • File backup on stable (nonvolatile) storage media.

I/O System Management

The I/O system consists of:

  • A buffer-caching system
  • A general device-driver interface
  • Drivers for specific hardware devices

Secondary Storage Management


Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.


Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.


The operating system is responsible for the following activities in connection with disk management:

  • Free space management
  • Storage allocation
  • Disk scheduling
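Free-space management is commonly done with a bitmap, one bit per disk block. The sketch below is a hypothetical, minimal illustration of that idea; block counts and names are made up.

    #include <stdint.h>

    #define NUM_BLOCKS 1024                     /* hypothetical disk size in blocks        */
    static uint8_t free_map[NUM_BLOCKS / 8];    /* one bit per block: 0 = free, 1 = in use */

    /* Find a free block, mark it allocated, and return its number (-1 if disk is full). */
    int alloc_block(void)
    {
        for (int b = 0; b < NUM_BLOCKS; b++) {
            if (!(free_map[b / 8] & (1 << (b % 8)))) {
                free_map[b / 8] |= (1 << (b % 8));
                return b;
            }
        }
        return -1;
    }

    /* Return a block to free space by clearing its bit. */
    void free_block(int b)
    {
        free_map[b / 8] &= ~(1 << (b % 8));
    }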

Protection System

Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.

The protection mechanism must:

  • distinguish between authorized and unauthorized usage.
  • specify the controls to be imposed.
  • provide a means of enforcement.

Command Interpreter System

Many commands are given to the operating system by control statements which deal with:

  • process creation and management
  • I/O handling
  • secondary-storage management
  • main-memory management
  • file-system access
  • protection
  • networking

The program that reads and interprets control statements is called variously:

  • command-line interpreter
  • shell (in UNIX)

Its function is to get and execute the next command statement.
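That core loop can be sketched in a few lines of C: read a command, fork, exec the program in the child, and wait for it to finish. This is a bare-bones illustration assuming a UNIX system, with no argument parsing, built-ins, or error handling.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];

        for (;;) {
            printf("> ");                            /* prompt                        */
            if (!fgets(line, sizeof(line), stdin))   /* get the next command          */
                break;
            line[strcspn(line, "\n")] = '\0';
            if (line[0] == '\0')
                continue;

            if (fork() == 0) {                       /* child: execute the command    */
                execlp(line, line, (char *)NULL);    /* (single word, no arguments)   */
                perror("exec");
                exit(1);
            }
            wait(NULL);                              /* parent: wait for it to finish */
        }
        return 0;
    }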


Operating System Services


1.) Program execution – system capability to load a program into memory and to run it.
2.) I/O operations – since user programs cannot execute I/O operations directly, the operating system must provide some means to perform I/O.
3.) File-system manipulation – program capability to read, write, create, and delete files.
4.) Communications – exchange of information between processes executing either on the same computer or on different systems tied together by a network. Implemented via shared memory or message passing.
5.) Error detection – ensure correct computing by detecting errors in the CPU and memory hardware, in I/O devices, or in user programs.

System Calls


System calls provide the interface between a running program and the operating system.

  • Generally available as assembly-language instructions.
  • Languages defined to replace assembly language for systems programming allow system calls to be made directly (e.g., C, C++)

Three general methods are used to pass parameters between a running program and the operating system.

  • Pass parameters in registers.
  • Store the parameters in a table in memory, and the table address is passed as a parameter in a register.
  • The program pushes (stores) the parameters onto the stack, and the operating system pops them off the stack.
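For example, on Linux the C library passes system-call parameters in registers according to the platform ABI. The Linux-specific sketch below invokes the write system call both through its libc wrapper and through the generic syscall() interface, where the call number and parameters are passed explicitly.

    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const char msg[] = "hello\n";

        /* Through the libc wrapper: parameters end up in registers per the ABI. */
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);

        /* Through the generic interface: the system-call number plus its
         * parameters are passed explicitly (Linux-specific). */
        syscall(SYS_write, STDOUT_FILENO, msg, sizeof(msg) - 1);

        return 0;
    }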

Types of System Calls

Process Control
create/terminate a process (including self)

File Management

Device Management
Device Management is a set of technologies, protocols and standards used to allow the remote management of mobile devices, often involving updates of firmware over the air (FOTA). The network operator, handset or in some cases even the end-user (usually via a web portal) can use Device Management, also known as Mobile Device Management, or MDM, to update the handset firmware/OS, install applications and fix bugs, all over the air. Thus, large numbers of devices can be managed with single commands and the end-user is freed from the requirement to take the phone to a shop or service center to refresh or update.

Information Maintenance


System Structure


Simple Structure

MS-DOS – written to provide the most functionality in the least space
not divided into modules

Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated.





UNIX – limited by hardware functionality, the original UNIX operating system had limited structuring. The UNIX OS consists of two separable parts.
  • Systems programs
  • The kernel
    - Consists of everything below the system-call interface and above the physical hardware
    - Provides the file system, CPU scheduling, memory management, and other operating-system functions; a large number of functions for one level.
Layered Approach

The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.

With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.