1.) Bootstrapping Program

|

In computing, bootstrapping (from an old expression, "to pull oneself up by one's bootstraps") is a technique by which a simple computer program activates a more complicated system of programs. In the start-up process of a computer system, a small program such as the BIOS initializes and tests the hardware, peripherals, and external memory devices, then loads a program from one of them and passes control to it, thus allowing larger programs, such as an operating system, to be loaded.

A different use of the term bootstrapping is using a compiler to compile itself: a small part of a compiler for a new programming language is first written in an existing language, and that partial compiler is then used to compile the rest of the compiler, which is written in the new language itself. This resolves the "chicken and egg" causality dilemma.

2.) Trap and Interrupt

|

Trap

In computing and operating systems, a trap is a type of synchronous interrupt typically caused by an exceptional condition (e.g. division by zero or invalid memory access) in a user process. A trap usually results in a switch to kernel mode, wherein the operating system performs some action before returning control to the originating process. In some usages, the term trap refers specifically to an interrupt intended to initiate a context switch to a monitor program or debugger.
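
As a concrete illustration, the sketch below (plain C on an assumed POSIX system; the handler name and message are ours) installs a handler for SIGFPE and then divides by zero. The faulting instruction raises a synchronous trap, the kernel takes over in kernel mode, and control re-enters the process inside the handler:

    /* Catching a synchronous trap in a user process: an integer
     * divide by zero raises SIGFPE, which the kernel delivers to
     * our handler after fielding the trap. */
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static void on_trap(int sig) {
        (void)sig;
        /* Only async-signal-safe calls belong in a handler. */
        write(STDERR_FILENO, "caught SIGFPE trap\n", 19);
        _exit(1);  /* returning would re-execute the faulting divide */
    }

    int main(void) {
        signal(SIGFPE, on_trap);
        volatile int zero = 0;     /* volatile defeats constant folding */
        printf("%d\n", 1 / zero);  /* divide by zero -> trap -> handler */
        return 0;
    }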

Interrupt

A signal informing a program that an event has occurred. When a program receives an interrupt signal, it takes a specified action (which can be to ignore the signal). Interrupt signals can cause a program to suspend itself temporarily to service the interrupt.

Interrupt signals can come from a variety of sources. For example, every keystroke generates an interrupt signal. Interrupts can also be generated by other devices, such as a printer, to indicate that some event has occurred. These are called hardware interrupts. Interrupt signals initiated by programs are called software interrupts.

PCs support 256 types of software interrupts and 15 hardware interrupts. Each type of software interrupt is associated with an interrupt handler -- a routine that takes control when the interrupt occurs. For example, when you press a key on your keyboard, this triggers a specific interrupt handler. The complete list of interrupts and associated interrupt handlers is stored in a table called the interrupt vector table, which resides in the first 1 KB of addressable memory.
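
The vector-table idea can be sketched in ordinary C as an array of function pointers indexed by interrupt number. (This is only the concept; a real-mode PC stores 256 four-byte segment:offset pairs rather than C pointers, and the handler names here are invented.)

    #include <stdio.h>

    typedef void (*isr_t)(void);            /* interrupt service routine */

    static void keyboard_isr(void) { puts("keystroke handled"); }
    static void default_isr(void)  { puts("unhandled interrupt"); }

    static isr_t ivt[256];                  /* one entry per interrupt */

    static void dispatch(unsigned char vector) {
        ivt[vector]();                      /* jump through the table */
    }

    int main(void) {
        for (int i = 0; i < 256; i++) ivt[i] = default_isr;
        ivt[0x09] = keyboard_isr;           /* keyboard IRQ1 -> INT 09h */
        dispatch(0x09);
        return 0;
    }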

3.) Monitor Mode

|

Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.

4.) User Mode

|

User-level (user-mode): On an operating system, there are two fundamental contexts a program can run in. The kernel context is within the core of the operating system, and no checks are performed to see whether accesses to system resources are legal. The other context is user-level, where full access to the system is walled off. Key point: many network services these days run as restricted user-level processes. This means that when a remote hacker breaks into such a service, they do not get full control over the machine. They might be able to deface a web page or cause other havoc, but they do not own the box. At that point, the intruder needs to run some sort of privilege-escalation exploit in order to root the system. (From Hacking-Lexicon)
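
A minimal sketch of how such a service ends up restricted, assuming a POSIX system (the uid/gid value 33 is a hypothetical service account): the daemon starts as root, then permanently drops privileges before handling requests, so a later compromise cannot own the box.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        uid_t service_uid = 33;   /* hypothetical unprivileged account */
        gid_t service_gid = 33;

        if (getuid() == 0) {      /* started as root? */
            /* Drop the group first; after setuid() we no longer could. */
            if (setgid(service_gid) != 0 || setuid(service_uid) != 0) {
                perror("privilege drop failed");
                return 1;
            }
        }
        /* From here on, the kernel denies privileged operations. */
        printf("serving as uid %d\n", (int)getuid());
        return 0;
    }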

5.) Storage Hierarchy

|

To clarify the "guarantees" provided at different settings of the persistence spectrum without binding the application to a specific environment or set of storage devices, MBFS implements the continuum, in part, with a logical storage hierarchy. The hierarchy is defined by N levels:

1. LM (Local Memory storage): very high-speed volatile storage located on the machine creating the file.

2. LCM (Loosely Coupled Memory storage): high-speed volatile storage consisting of the idle memory space available across the system.

3–N. DA (Distributed Archival storage): slower-speed stable storage space located across the system.

Caching
Caching is a well-known concept in computer science: when programs continually access the same set of instructions, a massive performance benefit can be realized by storing those instructions in RAM. Because they can be retrieved quickly from RAM, this spares the program from accessing the disk thousands or even millions of times during execution. Caching on the web is similar in that it avoids a round trip to the origin web server each time a resource is requested, and instead retrieves the file from the local computer's browser cache or a proxy cache closer to the user.

The most commonly encountered caches on the web are the ones found in a user's web browser, such as Internet Explorer, Mozilla, and Netscape. When a web page, image, or JavaScript file is requested through the browser, each of these resources may be accompanied by HTTP header directives that tell the browser how long the object can be considered fresh, that is, for how long the resource can be retrieved directly from the browser cache as opposed to from the origin or proxy server. Since the browser represents the cache closest to the end user, it offers the maximum performance benefit whenever content can be stored there.
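
The freshness test itself is simple. A sketch in C, assuming a simplified cache entry that records the fetch time and a max-age value parsed from the response's Cache-Control header (the struct and field names are ours):

    #include <stdbool.h>
    #include <time.h>

    struct cache_entry {
        time_t fetched_at;   /* when the resource was stored */
        long   max_age;      /* seconds it may be considered fresh */
    };

    /* Fresh entries are served from the cache; stale ones go back
     * to the origin (or proxy) server for revalidation. */
    static bool is_fresh(const struct cache_entry *e, time_t now) {
        return (now - e->fetched_at) < e->max_age;
    }

    int main(void) {
        struct cache_entry e = { time(NULL) - 30, 60 };  /* fetched 30 s ago */
        return is_fresh(&e, time(NULL)) ? 0 : 1;         /* 0: serve cached */
    }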

Coherency and Consistency

Consistency
Instead of attempting to impose some sort of ordering rules between individual memory reference instructions, as with most consistency models, TCC just imposes a sequential ordering between transaction commits. This can drastically reduce the number of latency-sensitive arbitration and synchronization events required by low-level protocols in a typical multiprocessor system. As far as the global memory state and software is concerned, all memory references from a processor that commits earlier happened “before” all memory references from a processor that commits afterwards, even if the references actually executed in an interleaved fashion. A processor that reads data that is subsequently updated by another processor's commit, before it can commit itself, is forced to violate and roll back in order to enforce this model. Interleaving between processors' memory references is only allowed at transaction boundaries, greatly simplifying the process of writing programs that make fine-grained access to shared variables. In fact, by imposing the original sequential program's transaction order on the transaction commits, we can effectively let the TCC system provide an illusion of uniprocessor execution to the sequence of memory references generated by parallel software.

Coherency
Figure 1: A sample 3-node TCC system. Each node pairs a processor core and local cache hierarchy with a write buffer; commits are broadcast to, and snooped from, the other nodes over a broadcast bus or network.

Stores are buffered and kept within the processor node for the duration of the transaction in order to maintain the atomicity of the transaction. No conventional, MESI-style cache protocols are used to maintain lines in “shared” or “exclusive” states at any point in the system, so it is legal for many processor nodes to hold the same line simultaneously in either an unmodified or speculatively modified form. At the end of each transaction, the broadcast notifies all other processors about what state has changed during the completing transaction. During this process, they perform conventional invalidation (if the commit packet only contains addresses) or update (if it contains addresses and data) to keep their cache state coherent. Simultaneously, they must determine if they may have used shared data too early. If they have read any data modified by the committing transaction during their currently executing transaction, they are forced to restart and reload the correct data. This hardware mechanism protects against true data dependencies automatically, without requiring programmers to insert locks or related constructs. At the same time, data antidependencies are handled simply by the fact that later processors will eventually get their own turn to flush out data to memory. Until that point, their “later” results are not seen by transactions that commit earlier (avoiding WAR dependencies), and they are able to freely overwrite previously modified data in a clearly sequenced manner (handling WAW dependencies in a legal way). Effectively, the simple, sequentialized consistency model allows the coherence model to be greatly simplified as well.
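
The commit-time dependence check described above can be sketched in C (an illustrative model, not the paper's hardware): when another node's commit broadcast arrives, its write addresses are compared against the current transaction's read set, and any overlap forces a violation and rollback.

    #include <stdbool.h>
    #include <stddef.h>

    struct txn {
        const unsigned *read_set;  /* addresses read so far */
        size_t          n_reads;
    };

    /* True if the committing transaction wrote something we already
     * read: we consumed shared data too early and must restart. */
    static bool must_violate(const struct txn *t,
                             const unsigned *commit_writes, size_t n) {
        for (size_t i = 0; i < t->n_reads; i++)
            for (size_t j = 0; j < n; j++)
                if (t->read_set[i] == commit_writes[j])
                    return true;
        return false;
    }

    int main(void) {
        unsigned reads[]  = { 0x100, 0x104 };
        unsigned writes[] = { 0x104 };       /* another node commits 0x104 */
        struct txn t = { reads, 2 };
        return must_violate(&t, writes, 1);  /* 1: rollback required */
    }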

6.) DMA

|

Short for direct memory access, a technique for transferring data from main memory to a device without passing it through the CPU. Computers that have DMA channels can transfer data to and from devices much more quickly than computers without a DMA channel can. This is useful for making quick backups and for real-time applications.

Some expansion boards, such as CD-ROM cards, are capable of accessing the computer's DMA channel. When you install the board, you must specify which DMA channel is to be used, which sometimes involves setting a jumper or DIP switch.
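
To make the idea concrete, here is a sketch of how a driver might start a DMA transfer on a hypothetical memory-mapped controller. The register layout and the DMA_CTRL_BASE address are invented for illustration (real controllers define their own); the point is that the CPU only writes a few registers and is then free while the transfer proceeds.

    #include <stdint.h>

    struct dma_regs {
        volatile uint32_t src;    /* physical source address */
        volatile uint32_t dst;    /* physical destination address */
        volatile uint32_t count;  /* bytes to transfer */
        volatile uint32_t ctrl;   /* bit 0: start; the controller raises
                                     an interrupt when the transfer ends */
    };

    #define DMA_CTRL_BASE ((struct dma_regs *)0x40001000u)  /* hypothetical */

    void dma_copy(uint32_t src, uint32_t dst, uint32_t nbytes) {
        struct dma_regs *dma = DMA_CTRL_BASE;
        dma->src   = src;
        dma->dst   = dst;
        dma->count = nbytes;
        dma->ctrl  = 1u;  /* kick off; data never passes through the CPU */
    }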

7.) Difference of RAM and DRAM

|

RAM (Random Access Memory) is a generic name for any sort of read/write memory that can be, well, randomly accessed. All computer memory functions as arrays of stored bits, "0" and "1", kept as some kind of electrical state. Some sorts support random access; others (such as the flash memory used in MP3 players and digital cameras) have a serial nature to them.

A CPU normally runs through a short sequence of memory locations for instructions, then jumps to another routine, jumps around for data, etc. So CPUs depend on dynamic RAM for their primary memory, since there's little or no penalty for jumping all around in such memory.

There are many different kinds of RAM. DRAM is one such sort, Dynamic RAM. This refers to a sort of memory that stores data very efficiently, circuit-wise. A single transistor (an electronic switch) and a capacitor (charge storage device) store each "1" or "0". An alternate sort is called Static RAM, which usually has six transistors used to store each bit.

The advantage of DRAM is that each bit can be very small, physically. The disadvantage is that the stored charge doesn't last very long, so it has to be "refreshed" periodically. All modern DRAM types have on-board electronics that make the refresh process pretty simple and efficient, but it is one additional bit of complexity.

There are various sorts of DRAM around: plain (asynchronous) DRAM, SDRAM (synchronous, meaning all interactions are synchronized by a clock signal), DDR (double data rate... data goes to/from the memory at twice the rate of the clock), etc. These differences are significant to hardware designers, but not usually a big worry for end users... other than ensuring you buy the right kind of DRAM if you plan to upgrade your system.

8.) Hardware Protection

|

Dual-Mode Operation
Sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly.

Provide hardware support to differentiate between at least two modes of operation:
1. User mode – execution done on behalf of a user.
2. Monitor mode (also supervisor mode or system mode) – execution done on behalf of the operating system.



Mode bit added to computer hardware to indicate the current mode: monitor (0) or user (1). When an interrupt or fault occurs, hardware switches to monitor mode. Privileged instructions can be issued only in monitor mode.
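
A toy model of the mode-bit check in C (the function and messages are invented for illustration): a simulated CPU refuses a privileged instruction unless the mode bit says monitor (0), and a violation traps into monitor mode.

    #include <stdio.h>

    enum mode { MONITOR = 0, USER = 1 };   /* the hardware mode bit */
    static enum mode mode_bit = USER;

    static void do_privileged(const char *what) {
        if (mode_bit != MONITOR) {
            puts("trap: privileged instruction in user mode");
            mode_bit = MONITOR;            /* hardware switches modes;
                                              the OS now takes over */
            return;
        }
        printf("executing privileged op: %s\n", what);
    }

    int main(void) {
        do_privileged("set interrupt vector");  /* traps */
        do_privileged("set interrupt vector");  /* now allowed */
        return 0;
    }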


I/O Protection


All I/O instructions are privileged instructions. Must ensure that a user program could never gain control of the computer in monitor mode (i.e., a user program that, as part of its execution, stores a new address in the interrupt vector).







Memory Protection

Must provide memory protection at least for the interrupt vector and the interrupt service routines. In order to have memory protection, add two registers that determine the range of legal addresses a program may access:


– base register – holds the smallest legal physical address.

– limit register – contains the size of the range.


Memory outside the defined range is protected.
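
The hardware check itself is a pair of comparisons on every user-mode reference. A sketch in C (the register values are placeholders):

    #include <stdbool.h>
    #include <stdint.h>

    static uint32_t base_reg  = 0x00300000u;  /* smallest legal address */
    static uint32_t limit_reg = 0x00020000u;  /* size of the legal range */

    /* Out-of-range accesses trap to the operating system. */
    static bool address_ok(uint32_t addr) {
        return addr >= base_reg && addr < base_reg + limit_reg;
    }

    int main(void) {
        return address_ok(0x00310000u) ? 0 : 1;  /* inside the range */
    }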




CPU Protection

Timer – interrupts computer after specified period to ensure operating system maintains control:
  • Timer is decremented every clock tick
  • When timer reaches the value 0, an interrupt occurs

    Timer commonly used to implement time sharing.

Timer also used to compute the current time.

Load-timer is a privileged instruction.
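
A sketch of the mechanism in C (illustrative, with an invented tick loop): the OS loads the timer with a time slice, the timer is decremented on every clock tick, and reaching zero raises an interrupt that returns control to the OS.

    #include <stdio.h>

    static int timer;

    static void load_timer(int ticks) { timer = ticks; }  /* privileged */

    static void clock_tick(void) {
        if (--timer == 0)
            puts("timer interrupt: OS regains control");
    }

    int main(void) {
        load_timer(3);               /* OS sets the time slice */
        for (int i = 0; i < 3; i++)  /* user program runs... */
            clock_tick();            /* ...until the slice expires */
        return 0;
    }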

9.) Storage Structure

|

Main Memory
Refers to physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Another term for main memory is RAM.

The computer can manipulate only data that is in main memory. Therefore, every program you execute and every file you access must be copied from a storage device into main memory. The amount of main memory on a computer is crucial because it determines how many programs can be executed at one time and how much data can be readily available to a program.

Because computers often have too little main memory to hold all the data they need, computer engineers invented a technique called swapping, in which portions of data are copied into main memory as they are needed. Swapping occurs when there is no room in memory for needed data. When one portion of data is copied into memory, an equal-sized portion is copied (swapped) out to make room.

Now, most PCs come with a minimum of 32 megabytes of main memory. You can usually increase the amount of memory by inserting extra memory in the form of chips.

Magnetic Disk
A round plate on which data can be encoded. There are two basic types of disks: magnetic disks and optical disks.

On magnetic disks, data is encoded as microscopic magnetized needles on the disk's surface. You can record and erase data on a magnetic disk any number of times, just as you can with a cassette tape. Magnetic disks come in a number of different forms:

  • floppy disk : A typical 5¼-inch floppy disk can hold 360K or 1.2MB (megabytes). 3½-inch floppies normally store 720K, 1.2MB or 1.44MB of data.

  • hard disk : Hard disks can store anywhere from 20MB to more than 200GB. Hard disks are also from 10 to 100 times faster than floppy disks.

  • removable cartridge : Removable cartridges are hard disks encased in a metal or plastic cartridge, so you can remove them just like a floppy disk. Removable cartridges are very fast, though usually not as fast as fixed hard disks.
Moving Head Disk Mechanism

The machine that spins a disk is called a disk drive. Within each disk drive are one or more heads (often called read/write heads) that actually read and write data.

Accessing data from a disk is not as fast as accessing data from main memory, but disks are much cheaper. And unlike RAM, disks hold on to data even when the computer is turned off.

Rotation Speeds: 60 to 200 rotations per second.
Head Crash: the read/write head makes contact with the disk surface.
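
As a quick worked example of what those rotation speeds mean for access time, the sketch below (illustrative C) computes the time for one revolution and the average rotational latency, which is half a revolution:

    #include <stdio.h>

    int main(void) {
        double speeds[] = { 60.0, 200.0 };  /* rotations per second */
        for (int i = 0; i < 2; i++) {
            double rev_ms = 1000.0 / speeds[i];   /* one full rotation */
            printf("%.0f rot/s: rotation %.2f ms, avg latency %.2f ms\n",
                   speeds[i], rev_ms, rev_ms / 2.0);
        }
        return 0;
    }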

Consequently, disks are the storage medium of choice for most types of data. Another storage medium is magnetic tape. But tapes are used only for backup and archiving because they are sequential-access devices (to access data in the middle of a tape, the tape drive must pass through all the preceding data).

A new disk, called a blank disk, has no data on it. Before you can store data on a blank disk, however, you must format it.

Magnetic Tapes

A magnetically coated strip of plastic on which data can be encoded. Tapes for computers are similar to tapes used to store music.

Storing data on tapes is considerably cheaper than storing data on disks. Tapes also have large storage capacities, ranging from a few hundred kilobytes to several gigabytes. Accessing data on tapes, however, is much slower than accessing data on disks. Tapes are sequential-access media, which means that to get to a particular point on the tape, the tape must go through all the preceding points. In contrast, disks are random-access media because a disk drive can access any point at random without passing through intervening points.

Because tapes are so slow, they are generally used only for long-term storage and backup. Data to be used regularly is almost always kept on a disk. Tapes are also used for transporting large amounts of data.

Tapes come in a variety of sizes and formats. Tapes are sometimes called streamers or streaming tapes.

User's View and Systems View

|

User's View

The user view of the computer varies by the interface being used. Most computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit. Such a system is designed for one user to monopolize its resources, to maximize the work that the user is performing. In this case, the operating system is designed mostly for ease of use, with some attention paid to performance, and none paid to resource utilization.

Some users sit at a terminal connected to a mainframe or minicomputer. Other users are accessing the same computer through other terminals. These users share resources and may exchange information. The operating system in this case is designed to maximize resource utilization.

Other users sit at workstations, connected to networks of other workstations and servers. These users have dedicated resources at their disposal, but they also share resources such as networking and servers.

Recently, many varieties of handheld computers have come into fashion. These devices are mostly standalone, used singly by individual users. Some are connected to networks, either directly by wire or through wireless modems. Due to power and interface limitations they perform relatively few remote operations. These operating systems are designed mostly for individual usability, but performance per amount of battery life is important as well.

Some computers have little or no user view. For example, embedded computers in home devices and automobiles may have numeric keypads, and may turn indicator lights on or off to show status, but mostly they and their operating systems are designed to run without user intervention.

Systems View

We can view an operating system as a resource allocator. A computer system has many resources - hardware and software - that may be required to solve a problem. The operating system acts as the manager of these resources.

An operating system can also be viewed as a control program that manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

We have no universally accepted definition of what is part of the operating system. A simple viewpoint is that it includes everything a vendor ships when you order “the operating system”.

A more common definition is that the operating system is the one program running at all times on the computer (usually called the kernel), with all else being application programs. This is the definition we generally follow.

Goals of OS

|

It is easier to define an operating system by what it does than by what it is, but even this can be tricky. The primary goal of some operating systems is convenience for the user; the primary goal of others is efficient operation of the computer system. Operating systems and computer architecture have influenced each other a great deal. To facilitate the use of the hardware, researchers developed operating systems. Users of the operating systems then proposed changes in hardware design to simplify them. In this short historical review, notice how identification of operating-system problems led to the introduction of new hardware features.

• To hide details of hardware by creating abstractions
  An abstraction is software that hides lower-level details and provides a set of higher-level functions. An operating system transforms the physical world of devices, instructions, memory, and time into a virtual world that is the result of abstractions built by the operating system. There are several reasons for abstraction.
  First, the code needed to control peripheral devices is not standardized. Operating systems provide subroutines called device drivers that perform operations on behalf of programs, for example input/output operations.
  Second, the operating system introduces new functions as it abstracts the hardware. For instance, the operating system introduces the file abstraction so that programs do not have to deal with disks.
  Third, the operating system transforms the computer hardware into multiple virtual computers, each belonging to a different program. Each program that is running is called a process. Each process views the hardware through the lens of abstraction.
  Fourth, the operating system can enforce security through abstraction.

• To allocate resources to processes (manage resources)
  An operating system controls how processes (the active agents) may access resources (passive entities).

• To provide a pleasant and effective user interface
  The user interacts with the operating system through the user interface and is usually interested in the “look and feel” of the operating system. The most important components of the user interface are the command interpreter, the file system, on-line help, and application integration. The recent trend has been toward increasingly integrated graphical user interfaces that encompass the activities of multiple processes on networks of computers.

|

Batch processing - execution of a series of programs ("jobs") on a computer without human interaction.

Batch jobs are set up so they can be run to completion without human interaction, so all input data is preselected through scripts or command-line parameters. This is in contrast to "online" or interactive programs, which prompt the user for such input. A program takes a set of data files as input, processes the data, and produces a set of output data files. This operating environment is termed "batch processing" because the input data are collected into batches of files and are processed in batches by the program.


Multiprogrammed Systems - a number of programs were resident in memory and the CPU could choose which one to run. One way to choose is to just keep executing the current program until an I/O delay is pending; instead of just waiting, the CPU would move on to the next program ready to be run (see the sketch after this list).
• Keep multiple jobs resident in memory
• OS chooses which job to run
• When a job waits for I/O, switch to another resident job
Result: job scheduling policies, memory management and protection; improved throughput and utilization; still not interactive.
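
A toy sketch of that rule in C (the job names are invented): run the current job until it blocks on I/O, then switch to the next resident job that is ready.

    #include <stdbool.h>
    #include <stdio.h>

    struct job { const char *name; bool waiting_for_io; };

    static struct job jobs[] = {
        { "job A", false }, { "job B", true }, { "job C", false },
    };
    enum { NJOBS = 3 };

    /* Pick the next resident job that is not blocked on I/O. */
    static int next_ready(int current) {
        for (int i = 1; i <= NJOBS; i++) {
            int j = (current + i) % NJOBS;
            if (!jobs[j].waiting_for_io)
                return j;
        }
        return current;  /* nothing ready: the CPU idles */
    }

    int main(void) {
        int cur = 0;
        printf("running %s\n", jobs[cur].name);
        jobs[cur].waiting_for_io = true;  /* job A starts an I/O request */
        cur = next_ready(cur);            /* switch instead of waiting */
        printf("switched to %s\n", jobs[cur].name);
        return 0;
    }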


Time-Sharing Systems (Interactive Computing) - while multiprogrammed systems used resources more efficiently, i.e. minimized CPU idle time, a user could not interact with a program. By having the CPU switch between jobs at relatively short intervals, we can obtain an interactive system. That is, a system in which a number of users are sharing the CPU (or other critical resource) with a timing interval small enough not to be noticed, e.g. no more than 1 second. We say that a time-sharing system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.

• The CPU is multiplexed among several jobs in memory and on disk (the CPU is allocated only to jobs in memory). The CPU switches to the next job that can be run whenever the current job enters a wait state or after the current job has used a standard unit of time. When viewed over a relatively long time frame, we obtain the appearance that the CPU is simultaneously running multiple programs.
• Jobs are swapped in and out of memory to disk. If the time-sharing computer does not have enough semiconductor memory installed to hold all of the desired programs, then a backing store must be used to temporarily hold the contents relating to some programs when other programs are present in semiconductor memory. In effect, we are now "memory sharing" between competing users (programs). This idea leads to a mechanism called virtual memory.
• On-line communication between user and system is provided; when the OS finishes execution of a command, it awaits the next "control statement" from the user.

The sensible sharing of resources such as CPU time and memory must be handled by the operating system, which is just another program running on the computer. For this control program to always be in control, we require that it never be blocked from running. The operating system, which might in fact be organized as a small number of cooperating programs, will lock itself into memory and then control CPU allocation priority in order that it never be blocked from running.

Advantages of Parallel System

|


Parallel processing is much faster than sequential processing when it comes to doing repetitive calculations on vast amounts of data. This is because a parallel processor is capable of multithreading on a large scale, and can therefore simultaneously process several streams of data. This makes parallel processors suitable for graphics cards, since the calculations required for generating the millions of pixels per second are all repetitive; GPUs can have over 200 cores to help with this. The CPU of a normal computer is a sequential processor - it is good at processing data one step at a time. This is needed in cases where the calculation the processor is performing depends on the result of the previous calculation, and so on; in parallel processing these kinds of calculations would slow it down, which is why CPUs are generally optimised for sequential operations and have only 1-8 cores. In summary, the main advantage of parallel processing is that it is much faster (about 200 times faster in the best cases) for simple, repetitive calculations on vast amounts of similar data.



Symmetric and Asymmetric Multiprocessing

|

The difference between symmetric and asymmetric multiprocessing:

Asymmetric multiprocessing - In asymmetric multiprocessing (ASMP), the operating system typically sets aside one or more processors for its exclusive use. The remainder of the processors run user applications. As a result, the single processor running the operating system can fall behind the processors running user applications. This forces the applications to wait while the operating system catches up, which reduces the overall throughput of the system. In the ASMP model, if the processor that fails is an operating system processor, the whole computer can go down.

Symmetric multiprocessing - Symmetric multiprocessing (SMP) technology is used to get higher levels of performance. In symmetric multiprocessing, any processor can run any type of thread. The processors communicate with each other through shared memory.

SMP systems provide better load balancing and fault tolerance. Because the operating system threads can run on any processor, the chance of hitting a CPU bottleneck is greatly reduced. All processors are allowed to run a mixture of application and operating system code. A processor failure in the SMP model only reduces the computing capacity of the system.

SMP systems are inherently more complex than ASMP systems. A tremendous amount of coordination must take place within the operating system to keep everything synchronized. For this reason, SMP systems are usually designed and written from the ground up.

All processors in symmetric multiprocessing are peers; the relationship between processors in asymmetric multiprocessing is a master-slave relationship. More specifically, each CPU in symmetric multiprocessing runs the same copy of the OS, while in asymmetric multiprocessing the processors typically split responsibilities, so each may have specialized (different) software and a different role.

Client Server and Peer-to-Peer Systems

|

Client-Server System

Client-server systems are not limited to traditional computers. An example is an automated teller machine (ATM) network. Customers typically use ATMs as clients to interface to a server that manages all of the accounts for a bank. This server may in turn work with servers of other banks (such as when withdrawing money at a bank at which the user does not have an account). The ATMs provide a user interface and the servers provide services, such as checking on account balances and transferring money between accounts.

A server-based network.

To provide access to servers not running on the same machine as the client, middleware is usually used. Middleware serves as the networking between the components of a client-server system; it must run on both the client and the server. It provides everything required to get a request from a client to a server and to get the server's response back to the client. Middleware often facilitates communication between different types of computer systems. This communication provides cross-platform client-server computing and allows many types of clients to access the same data.
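
Middleware hides the details, but underneath, a request/response exchange is typically carried over sockets. A minimal sketch of the client half in C (POSIX sockets assumed; the host, port, and request format are placeholders):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv = { 0 };
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(8080);                    /* placeholder port */
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);  /* placeholder host */

        if (connect(fd, (struct sockaddr *)&srv, sizeof srv) != 0) {
            perror("connect");
            return 1;
        }

        const char *req = "BALANCE account=42\n";     /* hypothetical request */
        write(fd, req, strlen(req));                  /* client -> server */

        char resp[256];
        ssize_t n = read(fd, resp, sizeof resp - 1);  /* server -> client */
        if (n > 0) { resp[n] = '\0'; printf("server said: %s", resp); }
        close(fd);
        return 0;
    }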


Peer-to-Peer System

Peer-to-peer (P2P) networking is a method of delivering computer network services in which the participants share a portion of their own resources, such as processing power, disk storage, network bandwidth, and printing facilities. Such resources are provided directly to other participants without intermediary network hosts or servers. Peer-to-peer network participants are providers and consumers of network services simultaneously, which contrasts with other service models, such as traditional client-server computing.

A peer-to-peer based network.

P2P networks are typically used for connecting nodes via largely ad hoc connections. Such networks are useful for many purposes. Sharing content files containing audio, video, data or anything in digital format is very common, and real-time data, such as telephony traffic, is also passed using P2P technology.

Stand Alone PC and Workstation

|

Stand Alone PC - a stand-alone operating system is one that is independent of another. For example, Windows 3.1, 95, and 98 were all shells based on the MS-DOS operating system; to put it in simpler terms, Windows was a dressed-up version of MS-DOS with all the bells and whistles, made to be more appealing to the eye and more user friendly. Later versions of Windows were independent of the MS-DOS operating system, hence they are known as "stand-alone operating systems".

Workstation - a type of computer used for engineering applications (CAD/CAM), desktop publishing, software development, and other types of applications that require a moderate amount of computing power and relatively high-quality graphics capabilities.

Workstations generally come with a large, high-resolution graphics screen, at least 64 MB (megabytes) of RAM, built-in network support, and a graphical user interface. Most workstations also have a mass storage device such as a disk drive, but a special type of workstation, called a diskless workstation, comes without a disk drive. The most common operating systems for workstations are UNIX and Windows NT.

In terms of computing power, workstations lie between personal computers and minicomputers, although the line is fuzzy on both ends. High-end personal computers are equivalent to low-end workstations, and high-end workstations are equivalent to minicomputers.

Like personal computers, most workstations are single-user computers. However, workstations are typically linked together to form a local-area network, although they can also be used as stand-alone systems.

In networking, workstation refers to any computer connected to a local-area network. It could be a workstation or a personal computer.