
Virtual Machine


A virtual machine is a tightly isolated software container that can run its own operating system and applications as if it were a physical computer. A virtual machine behaves exactly like a physical computer and contains its own virtual (i.e., software-based) CPU, RAM, hard disk, and network interface card (NIC). An operating system can’t tell the difference between a virtual machine and a physical machine, nor can applications or other computers on a network. Even the virtual machine thinks it is a “real” computer. Nevertheless, a virtual machine is composed entirely of software and contains no hardware components whatsoever. As a result, virtual machines offer a number of distinct advantages over physical hardware.
Implementation
  • Compact. This is the biggest requirement: the implementation must be compact. How compact depends entirely on what you want to do with it, but in general, the smaller the footprint, the more you can do with it. In the past, I’ve targeted 64K-128K of payload size and considered that pretty good, but the smaller the implementor makes it, the more likely it is to be useful in more contexts, such as first-stage injection. This also extends to the libraries and bytecode generated for the virtual machine, as the virtual machine by itself is useless without logic to drive it. This is why I’m targeting a much smaller payload size in my next implementation: a few kilobytes. (A minimal sketch of such an interpreter core appears after this list.)
  • Portability. This is another huge design goal that I think many people operating in this space miss. The implementation of the virtual machine should be portable between platforms. Behaviors should be as similar as possible and abstracted away so as not to distract the programmer-user from the task at hand: a task such as opening a socket on a Windows machine should work on a Unix machine as well. When we have multiple implementations of a virtual machine, we run the risk of divergent behaviors, defeating the entire purpose of a portable bytecode virtual machine.
  • Dynamic. The virtual machine should be dynamic: we should be able to modify the behavior of the program and language while executing. By being self-modifying, the language and core functionality can start out very small but be extended later. Features that are not needed early in the process, such as garbage collection, can be added afterward. Squeeze a payload in and watch it expand, much like a ship in a bottle.
  • Secure. Using the virtual machine for injection purposes should never introduce a state of insecurity to the target machine or network! While it is all but impossible to make a virtual machine immune to examination by a local user who controls the machine, the virtual machine should not be easily attackable from the outside or use a cleartext control channel.
  • Elegance. This cannot be overstated. While it is easy to write a functional, spartan language for the purpose, it will not be pleasant for the programmer. Programmers are happiest when they have a language that does not get in their way. If the language and virtual machine environment do not have this characteristic, then people are not going to use the tool. Elegant design also lends itself well to other desired traits, such as dynamism and compactness.
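
To make the compactness and portability goals concrete, here is a minimal sketch in C of the dispatch loop at the heart of a stack-based bytecode virtual machine. The opcode names and program format are invented for illustration; a real implementation would add bounds checking, a richer instruction set, and the dynamic extension hooks described above.

    #include <stdio.h>

    /* Hypothetical opcode set for a tiny stack-based VM. */
    enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

    /* Interpret a bytecode program: one dispatch loop with no
       host-OS assumptions, so the same core compiles unchanged
       on Windows and Unix. */
    void run(const unsigned char *code)
    {
        int stack[64];
        int sp = 0;                          /* stack pointer */
        const unsigned char *ip = code;      /* instruction pointer */

        for (;;) {
            switch (*ip++) {
            case OP_PUSH:  stack[sp++] = *ip++;            break;
            case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[sp-1]);    break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* Compute and print 2 + 3. */
        unsigned char prog[] = { OP_PUSH, 2, OP_PUSH, 3,
                                 OP_ADD, OP_PRINT, OP_HALT };
        run(prog);
        return 0;
    }
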
Benefits
In general, VMware virtual machines possess four key characteristics that benefit the user:
  • Compatibility: Virtual machines are compatible with all standard x86 computers
  • Isolation: Virtual machines are isolated from each other as if physically separated
  • Encapsulation: Virtual machines encapsulate a complete computing environment
  • Hardware independence: Virtual machines run independently of underlying hardware
Examples
Language        IR              Implementation(s)
Java            JVM bytecode    interpreter, JIT
C#              MSIL            JIT (but may be pre-compiled)
Prolog          WAM code        compiled, interpreted
Forth           bytecode        interpreted
Smalltalk       bytecode        interpreted
Pascal          p-code          interpreted
Pascal          --              compiled
C, C++          --              compiled (usually)
Perl 6          PVM             interpreted
Perl 6          Parrot          interpreted, JIT
Python          --              interpreted
sh, bash, csh   original text   interpreted

System Generation


An operational system is a combination of the z/TPF system, application programs, and people. People assign purpose to the system and use the system. The making of an operational system depends on three interrelated concepts:

  • System definition: The necessary application and z/TPF system knowledge required to select the hardware configuration and related values used by the z/TPF system software.
  • System initialization: The process of creating the z/TPF system tables and configuration-dependent system software.
  • System restart and switchover: The procedures used by the z/TPF system software to ready the configuration for online use.

The first two items are sometimes collectively called system generation (also called installing or implementing), and system definition is sometimes called design. System restart is the component that uses the results of a system generation to place the system in a condition to process real-time input. The initial startup is a special case of restart, and for this reason system restart is sometimes called initial program load, or IPL. System restart uses values found in tables that are set up during system generation and changed during the online execution of the system. A switchover implies shifting the processing load to a different central processing complex (CPC) and requires some additional procedures on the part of a system operator. A restart or switchover may be necessary because of a detected hardware failure, a detected software failure, or operator choice. In any event, system definition (design), initialization, restart, and switchover are all related to error recovery. This provides the necessary background for using this information, which is the principal reference for installing the z/TPF system.

System Boot


The typical computer system boots over and over again with no problems, starting the computer's operating system (OS) and identifying its hardware and software components that all work together to provide the user with the complete computing experience. But what happens between the time that the user powers up the computer and when the GUI icons appear on the desktop?

In order for a computer to successfully boot, its BIOS, operating system and hardware components must all be working properly; failure of any one of these three elements will likely result in a failed boot sequence.

When the computer's power is first turned on, the CPU initializes itself, which is triggered by a series of clock ticks generated by the system clock. Part of the CPU's initialization is to look to the system's ROM BIOS for its first instruction in the startup program. The ROM BIOS stores the first instruction, which is the instruction to run the power-on self test (POST), in a predetermined memory address. POST begins by checking the BIOS chip and then tests CMOS RAM. If the POST does not detect a battery failure, it then continues to initialize the CPU, checking the inventoried hardware devices (such as the video card), secondary storage devices, such as hard drives and floppy drives, ports and other hardware devices, such as the keyboard and mouse, to ensure they are functioning properly.

Once the POST has determined that all components are functioning properly and the CPU has successfully initialized, the BIOS looks for an OS to load.

The BIOS typically looks to the CMOS chip to tell it where to find the OS, and in most PCs, the OS loads from the C drive on the hard drive even though the BIOS has the capability to load the OS from a floppy disk, CD or ZIP drive. The order of drives that the CMOS looks to in order to locate the OS is called the boot sequence, which can be changed by altering the CMOS setup. Looking to the appropriate boot drive, the BIOS will first encounter the boot record, which tells it where to find the beginning of the OS and the subsequent program file that will initialize the OS.

Once the boot record loads the OS’s initialization files into memory, the OS takes over control of the boot process. Now in control, the OS performs another inventory of the system’s memory and memory availability (which the BIOS already checked) and loads the device drivers that it needs to control the peripheral devices, such as a printer, scanner, optical drive, mouse and keyboard. This is the final stage in the boot process, after which the user can access the system’s applications to perform tasks.

System Components


Process Management


In operating systems, a process is defined as “a program in execution”. A process can be considered an entity that consists of a number of elements, including: identifier, state, priority, program counter, memory pointers, context data, and I/O status information. This information about a process is usually stored in a data structure, typically called a process block (also known as a process control block, or PCB).

Figure 1 shows a simplified process block. Because process management involves scheduling (CPU scheduling, I/O scheduling, and so on), state switching, and resource management, the process block is one of the most frequently accessed data structures in an operating system. Its design directly affects the efficiency of the operating system. As a result, in most operating systems there is a data object that contains information about all the currently active processes. It is called the process controller.


Figure 2 shows the structure of a process controller, which is implemented as a linked list of process blocks.

In order to achieve high efficiency, the process controller is usually implemented as a global variable that can be accessed by both kernel modules and nonkernel modules. For example, any time a new process (task) is created, the module that created it must be able to access the process controller to add the new process. Therefore, the process controller – the data object that tracks the currently active processes – is usually implemented as a category-5 global variable. This means that both kernel modules and nonkernel modules can access the process controller to change its fields, and these changes can affect the uses of the process controller in kernel modules. A sketch of these structures in C follows.
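
Here is a minimal sketch in C of what a process block and process controller might look like. The field names and state encoding are illustrative, not taken from any particular kernel.

    #include <stdlib.h>

    /* One process block: the per-process record described above. */
    struct process_block {
        int   pid;                  /* identifier */
        int   state;                /* e.g., 0 = READY, 1 = RUNNING, 2 = BLOCKED */
        int   priority;
        void *program_counter;      /* saved PC */
        void *memory_base;          /* memory pointer */
        struct process_block *next; /* link to the next active process */
    };

    /* The process controller: a global linked list of all active
       process blocks, reachable from kernel and nonkernel modules. */
    struct process_block *process_controller = NULL;

    /* Add a newly created process to the controller. */
    void add_process(int pid, int priority)
    {
        struct process_block *pb = malloc(sizeof *pb);
        pb->pid             = pid;
        pb->state           = 0;              /* READY */
        pb->priority        = priority;
        pb->program_counter = NULL;
        pb->memory_base     = NULL;
        pb->next            = process_controller;  /* push onto list head */
        process_controller  = pb;
    }
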


Main Memory Management

Memory management is the act of managing computer memory. In its simplest forms, this involves providing ways to allocate portions of memory to programs at their request and freeing them for reuse when no longer needed. The management of main memory is critical to the computer system. Main memory is a volatile storage device: it loses its contents in the case of system failure.

The operating system is responsible for the following activities in connection with memory management:
  • Keep track of which parts of memory are currently being used and by whom.
  • Decide which processes to load when memory space becomes available.
  • Allocate and deallocate memory space as needed.

Another strategy for managing memory is virtual memory, which allows programs to run even when they are only partially in main memory. The basic idea behind this strategy is that the combined size of the program, data, and stack may exceed the amount of physical memory available. The operating system keeps the parts of the program currently in use in main memory, and the rest on the disk.

Virtual memory can also work in a multiprogramming system, with bits and pieces of many programs in memory at once. While a program is waiting for a part of itself to be brought in, it is waiting for I/O and cannot run, so the CPU can be given to another process, just as in any other multiprogramming system. Most virtual memory systems use a technique called paging.
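
To make paging concrete, the sketch below translates a virtual address to a physical one through a single-level page table. The 4 KB page size, table layout, and present-bit encoding are simplifying assumptions.

    #include <stdint.h>

    #define PAGE_SIZE   4096u
    #define NUM_PAGES   256u          /* assume a 1 MB virtual space */
    #define PRESENT_BIT 0x1u

    /* Each entry holds a frame number plus a present bit;
       absent pages live on disk until referenced. */
    static uint32_t page_table[NUM_PAGES];

    /* Translate a virtual address (vaddr < NUM_PAGES * PAGE_SIZE).
       Returns the physical address, or -1 on a page fault, where a
       real OS would bring the missing page in from disk. */
    long translate(uint32_t vaddr)
    {
        uint32_t page   = vaddr / PAGE_SIZE;
        uint32_t offset = vaddr % PAGE_SIZE;
        uint32_t entry  = page_table[page];

        if (!(entry & PRESENT_BIT))
            return -1;                        /* page fault */

        uint32_t frame = entry >> 1;          /* frame number above bit 0 */
        return (long)(frame * PAGE_SIZE + offset);
    }
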

File Management

File management is also referred to as simply a file system or filesystem: the system that an operating system or program uses to organize and keep track of files. For example, a hierarchical file system is one that uses directories to organize files into a tree structure.

Although the operating system provides its own file management system, you can buy separate file management systems. These systems interact smoothly with the operating system but provide more features, such as improved backup procedures and stricter file protection.

The operating system is responsible for the following activities in connection with file management:
  • File creation and deletion.
  • Directory creation and deletion.
  • Support of primitives for manipulating files and directories.
  • Mapping files onto secondary storage.
  • File backup on stable (nonvolatile) storage media.
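
On a POSIX system, the first few of these responsibilities surface to ordinary programs as a handful of calls; a brief sketch (the file and directory names are arbitrary):

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>

    int main(void)
    {
        mkdir("reports", 0755);                       /* directory creation */
        int fd = open("reports/log.txt",
                      O_CREAT | O_WRONLY, 0644);      /* file creation */
        if (fd >= 0) {
            write(fd, "hello\n", 6);                  /* file manipulation */
            close(fd);
        }
        unlink("reports/log.txt");                    /* file deletion */
        rmdir("reports");                             /* directory deletion */
        return 0;
    }
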

I/O System Management

The I/O system consists of:

  • A buffer-caching system
  • A general device-driver interface
  • Drivers for specific hardware devices
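
One common way to realize a general device-driver interface is a table of function pointers that every specific driver fills in; a minimal sketch with illustrative names. The kernel then calls through the pointers without knowing which hardware (or pseudo-device) sits behind them.

    #include <stddef.h>

    /* The general device-driver interface. */
    struct device_driver {
        const char *name;
        int    (*open) (void);
        int    (*close)(void);
        size_t (*read) (void *buf, size_t len);
        size_t (*write)(const void *buf, size_t len);
    };

    /* One specific driver: a null device that discards writes. */
    static size_t null_write(const void *buf, size_t len)
    {
        (void)buf;
        return len;        /* pretend everything was written */
    }

    static struct device_driver null_driver = {
        "null", NULL, NULL, NULL, null_write
    };
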

Secondary Storage Management


Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory.


Most modern computer systems use disks as the principle on-line storage medium, for both programs and data.


The operating system is responsible for the following activities in connection with disk management:

  • Free space management
  • Storage allocation
  • Disk scheduling
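
Disk scheduling lends itself to a small example. Below is a sketch of shortest-seek-time-first (SSTF), one classic policy: among the pending requests, service the cylinder closest to the current head position.

    #include <stdlib.h>

    /* Shortest-seek-time-first: return the index of the pending
       cylinder request closest to the current head position,
       or -1 if no requests are pending. */
    int sstf_next(const int *requests, int n, int head)
    {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {
            int dist = abs(requests[i] - head);
            if (best < 0 || dist < best_dist) {
                best = i;
                best_dist = dist;
            }
        }
        return best;
    }
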

Protection System

Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.

The protection mechanism must:

  • distinguish between authorized and unauthorized usage.
  • specify the controls to be imposed.
  • provide a means of enforcement.
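
As a sketch of what such a mechanism can look like, Unix-style permission bits reduce the authorized/unauthorized decision to a few mask operations. The encoding here is simplified to owner and "other" bits only.

    /* Simplified Unix-style permission check, using an octal mode. */
    #define PERM_READ  04
    #define PERM_WRITE 02
    #define PERM_EXEC  01

    /* Distinguish authorized from unauthorized usage for a resource. */
    int access_allowed(int owner_uid, int mode, int uid, int requested)
    {
        /* The owner consults the high bits, everyone else the low bits. */
        int granted = (uid == owner_uid) ? (mode >> 6) & 07 : mode & 07;
        return (granted & requested) == requested;   /* means of enforcement */
    }
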

Command Interpreter System

Many commands are given to the operating system by control statements which deal with:

  • process creation and management
  • I/O handling
  • secondary-storage management
  • main-memory management
  • file-system access
  • protection
  • networking

The program that reads and interprets control statements is called variously:

  • command-line interpreter
  • shell (in UNIX)

Its function is to get and execute the next command statement.
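
This get-and-execute cycle can be sketched in a few lines of C using fork and exec. The toy shell below handles only single-word commands (a real shell tokenizes arguments) and omits error handling.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        char line[256];

        for (;;) {                            /* get the next command statement */
            printf("$ ");
            fflush(stdout);
            if (!fgets(line, sizeof line, stdin) || strcmp(line, "exit\n") == 0)
                break;
            line[strcspn(line, "\n")] = '\0'; /* strip trailing newline */
            if (line[0] == '\0')
                continue;

            if (fork() == 0) {                /* child executes the command */
                execlp(line, line, (char *)NULL);
                _exit(127);                   /* exec failed */
            }
            wait(NULL);                       /* shell waits, then prompts again */
        }
        return 0;
    }
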


System Calls


System calls provide the interface between a running program and the operating system.

  • Generally available as assembly-language instructions.
  • Languages defined to replace assembly language for systems programming allow system calls to be made directly (e.g., C, C++)

Three general methods are used to pass parameters between a running program and the operating system.

  • Pass parameters in registers.
  • Store the parameters in a table in memory, and the table address is passed as a parameter in a register.
  • The program pushes (stores) the parameters onto the stack, and the operating system pops them off.
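
On Linux, the register-passing convention is visible through the generic syscall(2) wrapper, which places the call number and parameters into registers according to the ABI; a brief Linux-specific sketch:

    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const char msg[] = "hello via syscall\n";
        /* write(fd = 1, buf, count): three parameters passed in registers. */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }
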

Types of System Calls

Process Control
create/terminate a process (including self)
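
On POSIX systems, for example, these operations map onto fork, exit, and wait; a brief sketch:

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t pid = fork();        /* system call: create a new process */
        if (pid == 0)
            exit(0);               /* system call: the child terminates itself */
        waitpid(pid, NULL, 0);     /* parent waits for the termination */
        return 0;
    }
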

File Management
create/delete a file; open, close, read, write, and reposition within a file; get/set file attributes

Device Management
request/release a device; read, write, and reposition; get/set device attributes; logically attach/detach devices

Information Maintenance
get/set the time or date; get/set system data; get/set process, file, or device attributes


System Structure


Simple Structure

MS-DOS – written to provide the most functionality in the least space; it is not divided into modules. Although MS-DOS has some structure, its interfaces and levels of functionality are not well separated.

UNIX – limited by hardware functionality, the original UNIX operating system had limited structuring. The UNIX OS consists of two separable parts:
  • Systems programs
  • The kernel, which consists of everything below the system-call interface and above the physical hardware, and provides the file system, CPU scheduling, memory management, and other operating-system functions; a large number of functions for one level.
Layered Approach

The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.