
1.) Bootstrapping Program

In computing, bootstrapping (from the old expression "to pull oneself up by one's bootstraps") is a technique by which a simple computer program activates a more complicated system of programs. During the start-up process of a computer system, a small program such as the BIOS initializes and tests the hardware, peripherals, and external memory devices, then loads a program from one of them and passes control to it, thus allowing larger programs, such as an operating system, to be loaded.

A different use of the term bootstrapping is using a compiler to compile itself: a small part of a compiler for a new programming language is first written in an existing language, and it is then used to compile the rest of the compiler, which is written in the new language itself. This solves the "chicken and egg" causality dilemma.

2.) Trap and Interrupt

Trap

In computing and operating systems, a trap is a type of synchronous interrupt typically caused by an exceptional condition (e.g. division by zero or invalid memory access) in a user process. A trap usually results in a switch to kernel mode, wherein the operating system performs some action before returning control to the originating process. In some usages, the term trap refers specifically to an interrupt intended to initiate a context switch to a monitor program or debugger.

Interrupt

A signal informing a program that an event has occurred. When a program receives an interrupt signal, it takes a specified action (which can be to ignore the signal). Interrupt signals can cause a program to suspend itself temporarily to service the interrupt.

Interrupt signals can come from a variety of sources. For example, every keystroke generates an interrupt signal. Interrupts can also be generated by other devices, such as a printer, to indicate that some event has occurred. These are called hardware interrupts. Interrupt signals initiated by programs are called software interrupts.

PCs support 256 types of software interrupts and 15 hardware interrupts. Each type of software interrupt is associated with an interrupt handler -- a routine that takes control when the interrupt occurs. For example, when you press a key on your keyboard, this triggers a specific interrupt handler. The complete list of interrupts and associated interrupt handlers is stored in a table called the interrupt vector table, which resides in the first 1 K of addressable memory.
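To make the idea concrete, an interrupt vector table is essentially an array of function pointers indexed by interrupt number. Below is a minimal sketch in C; the handler names are illustrative, and real PC vectors are 4-byte segment:offset addresses written into that first 1 K of memory rather than C function pointers:

    #include <stdio.h>

    #define NUM_VECTORS 256

    typedef void (*interrupt_handler)(void);

    static void default_handler(void) { }                    /* ignore the interrupt */
    static void keyboard_handler(void) { puts("key pressed"); }

    static interrupt_handler vector_table[NUM_VECTORS];

    void install_handler(int vector, interrupt_handler h) {
        vector_table[vector] = h;
    }

    void dispatch(int vector) {      /* conceptually invoked by hardware */
        vector_table[vector]();
    }

    int main(void) {
        for (int i = 0; i < NUM_VECTORS; i++)
            install_handler(i, default_handler);
        install_handler(9, keyboard_handler);  /* INT 9 is the PC keyboard interrupt */
        dispatch(9);                           /* simulate a keystroke */
        return 0;
    }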

3.) Monitor Mode

Monitor mode, or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.

4.) User Mode

User-level (user-mode): On an operating system, there are two fundamental contexts a program can run in. The kernel context is within the core of the operating system, where no checks are performed to see if accesses to system resources are legal. The other context is user-level, where full access to the system is walled off. Key point: many network services now run as restricted user-level processes. This means that when a remote hacker breaks into such a service, they do not get full control over the machine. They might be able to deface a web page or cause other havoc, but they do not own the box. At that point, the intruder needs to run some sort of privilege-escalation exploit in order to root the system. From Hacking-Lexicon

5.) Storage Hierarchy

To clarify the "guarantees" provided at different settings of the persistence spectrum without binding the application to a specific environment or set of storage devices, MBFS implements the continuum, in part, with a logical storage hierarchy. The hierarchy is defined by N levels:

1. LM (Local Memory storage): very high-speed volatile storage located on the machine creating the file.

2. LCM (Loosely Coupled Memory storage): high-speed volatile storage consisting of the idle memory space available across the system.

3–N. DA (Distributed Archival storage): slower-speed stable storage space located across the system.

Caching
Caching is a well known concept in computer science: when programs continually access the same set of instructions, a massive performance benefit can be realized by storing those instructions in RAM. This prevents the program from having to access the disk thousands or even millions of times during execution by quickly retrieving them from RAM. Caching on the web is similar in that it avoids a roundtrip to the origin web server each time a resource is requested and instead retrieves the file from a local computer's browser cache or a proxy cache closer to the user.

The most commonly encountered caches on the web are the ones found in a user's web browser such as Internet Explorer, Mozilla and Netscape. When a web page, image, or JavaScript file is requested through the browser each one of these resources may be accompanied by HTTP header directives that tell the browser how long the object can be considered fresh, that is for how long the resource can be retrieved directly from the browser cache as opposed to from the origin or proxy server. Since the browser represents the cache closest to the end user it offers the maximum performance benefit whenever content can be stored there.
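The freshness test behind those header directives is simple arithmetic. Here is a minimal sketch in C, assuming a hypothetical cache entry that records when a resource was fetched and a freshness lifetime like the one carried by a Cache-Control: max-age header:

    #include <stdbool.h>
    #include <string.h>
    #include <time.h>

    struct cache_entry {
        char url[256];       /* the resource this entry caches */
        time_t fetched_at;   /* when the resource was stored */
        long max_age;        /* freshness lifetime in seconds */
    };

    /* Fresh entries are served straight from the cache; stale ones
       cost a round trip to the origin or proxy server. */
    bool is_fresh(const struct cache_entry *e, time_t now) {
        return (now - e->fetched_at) < e->max_age;
    }

    int main(void) {
        struct cache_entry e;
        strcpy(e.url, "http://example.com/logo.png");  /* illustrative URL */
        e.fetched_at = time(NULL);
        e.max_age = 3600;                              /* fresh for one hour */
        return is_fresh(&e, time(NULL)) ? 0 : 1;
    }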

Coherency and Consistency

Consistency
Instead of attempting to impose some sort of ordering rules between individual memory reference instructions, as with most consistency models, TCC just imposes a sequential ordering between transaction commits. This can drastically reduce the number of latency-sensitive arbitration and synchronization events required by low-level protocols in a typical multiprocessor system. As far as the global memory state and software are concerned, all memory references from a processor that commits earlier happened “before” all memory references from a processor that commits afterwards, even if the references actually executed in an interleaved fashion. A processor that reads data that is subsequently updated by another processor's commit, before it can commit itself, is forced to violate and roll back in order to enforce this model. Interleaving between processors' memory references is only allowed at transaction boundaries, greatly simplifying the process of writing programs that make fine-grained access to shared variables. In fact, by imposing the original sequential program's transaction order on the transaction commits, we can effectively let the TCC system provide an illusion of uniprocessor execution to the sequence of memory references generated by parallel software.

Coherency
Stores are buffered and kept within the processor node for the duration of the transaction in order to maintain the atomicity of the transaction. No conventional, MESI-style cache protocols are used to maintain lines in “shared” or “exclusive” states at any point in the system, so it is legal for many processor nodes to hold the same line simultaneously in either an unmodified or speculatively modified form. At the end of each transaction, the broadcast notifies all other processors about what state has changed during the completing transaction. During this process, they perform conventional invalidation (if the commit packet only contains addresses) or update (if it contains addresses and data) to keep their cache state coherent. Simultaneously, they must determine if they may have used shared data too early. If they have read any data modified by the committing transaction during their currently executing transaction, they are forced to restart and reload the correct data. This hardware mechanism protects against true data dependencies automatically, without requiring programmers to insert locks or related constructs. At the same time, data antidependencies are handled simply by the fact that later processors will eventually get their own turn to flush out data to memory. Until that point, their “later” results are not seen by transactions that commit earlier (avoiding WAR dependencies) and they are able to freely overwrite previously modified data in a clearly sequenced manner (handling WAW dependencies in a legal way). Effectively, the simple, sequentialized consistency model allows the coherence model to be greatly simplified as well.

[Figure 1: A sample 3-node TCC system — each node contains a processor core, write buffer, and local cache hierarchy, with commits broadcast to the other nodes over a bus or network.]
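As a rough software analogy (this is not the actual TCC hardware, and every name below is illustrative), the commit-time check amounts to intersecting a committing transaction's broadcast write set with each running transaction's read set; any overlap forces a violation and rollback:

    #include <stdbool.h>
    #include <stddef.h>

    #define SET_MAX 64

    /* One in-flight transaction: the addresses it has read and written. */
    struct txn {
        size_t read_set[SET_MAX];   int n_reads;
        size_t write_set[SET_MAX];  int n_writes;
        bool violated;   /* set when another node's commit invalidates our reads */
    };

    /* Called when another node broadcasts its commit: snoop the incoming
       write set against our read set and mark a violation on any overlap. */
    void snoop_commit(struct txn *t, const size_t *writes, int n) {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < t->n_reads; j++)
                if (writes[i] == t->read_set[j])
                    t->violated = true;  /* read data too early: rollback and re-execute */
    }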

6.) DMA

Short for direct memory access, a technique for transferring data from main memory to a device without passing it through the CPU. Computers that have DMA channels can transfer data to and from devices much more quickly than computers without a DMA channel can. This is useful for making quick backups and for real-time applications.

Some expansion boards, such as CD-ROM cards, are capable of accessing the computer's DMA channel. When you install the board, you must specify which DMA channel is to be used, which sometimes involves setting a jumper or DIP switch.
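To illustrate the programming model, here is a sketch of driving a purely hypothetical memory-mapped DMA controller; the SRC/DST/LEN/CTRL register layout and the START/DONE bits are invented for illustration and do not correspond to any real chip:

    #include <stdint.h>

    /* Register layout of a hypothetical memory-mapped DMA controller. */
    struct dma_regs {
        volatile uint32_t src;    /* physical source address */
        volatile uint32_t dst;    /* physical destination address */
        volatile uint32_t len;    /* number of bytes to transfer */
        volatile uint32_t ctrl;   /* bit 0 = START, bit 1 = DONE */
    };

    #define DMA_START (1u << 0)
    #define DMA_DONE  (1u << 1)

    void dma_copy(volatile struct dma_regs *dma,
                  uint32_t src, uint32_t dst, uint32_t len) {
        dma->src  = src;
        dma->dst  = dst;
        dma->len  = len;
        dma->ctrl = DMA_START;            /* kick off the transfer */
        while (!(dma->ctrl & DMA_DONE))   /* the data moves without the CPU; */
            ;                             /* here we simply poll for completion */
    }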

7.) Difference of RAM and DRAM

RAM (Random Access Memory) is a generic name for any sort of read/write memory that can be, well, randomly accessed. All computer memory functions as arrays of stored bits, "0" and "1", kept as some kind of electrical state. Some sorts support random access; others (such as the flash memory used in MP3 players and digital cameras) have a serial nature to them.

A CPU normally runs through a short sequence of memory locations for instructions, then jumps to another routine, jumps around for data, etc. So CPUs depend on dynamic RAM for their primary memory, since there's little or no penalty for jumping all around in such memory.

There are many different kinds of RAM. DRAM is one such sort, Dynamic RAM. This refers to a sort of memory that stores data very efficiently, circuit-wise. A single transistor (an electronic switch) and a capacitor (charge storage device) store each "1" or "0". An alternate sort is called Static RAM, which usually has six transistors used to store each bit.

The advantage of DRAM is that each bit can be very small, physically. The disadvantage is that the stored charge doesn't last very long, so it has to be "refreshed" periodically. All modern DRAM types have on-board electronics that make the refresh process pretty simple and efficient, but it is one additional bit of complexity.

There are various sorts of DRAM around: plain (asynchronous) DRAM, SDRAM (synchronous, meaning all interactions are synchronized by a clock signal), DDR (double data rate: data goes to and from the memory at twice the rate of the clock), etc. These differences are significant to hardware designers, but not usually a big worry for end-users... other than ensuring you buy the right kind of DRAM if you plan to upgrade your system.
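As a rough worked example of what "double data rate" buys you: a DDR module clocked at 200 MHz transfers data on both edges of the clock, giving 400 million transfers per second; over a 64-bit (8-byte) bus, that is 400 × 8 = 3,200 MB/s of peak bandwidth, which is why such a module is sold as DDR-400 or PC-3200.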

8.) Hardware Protection

Dual-Mode Operation
Sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly.

Provide hardware support to differentiate between at least two modes of operation:
1. User mode – execution done on behalf of a user.
2. Monitor mode (also supervisor mode or system mode) – execution done on behalf of the operating system.

A mode bit is added to the computer hardware to indicate the current mode: monitor (0) or user (1). When an interrupt or fault occurs, the hardware switches to monitor mode. Privileged instructions can be issued only in monitor mode.
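A conceptual sketch of that check in C (the names are illustrative; in real hardware this test happens in the instruction decode logic, not in software), using the monitor (0) / user (1) convention above:

    #include <stdbool.h>
    #include <stdio.h>

    static int mode_bit = 1;   /* the machine starts out running user code */

    static void halt_instruction(void) { puts("halting"); }

    /* What the hardware conceptually does before a privileged instruction. */
    bool try_privileged(void (*instr)(void)) {
        if (mode_bit != 0)
            return false;      /* trap: privileged instruction in user mode */
        instr();
        return true;
    }

    int main(void) {
        try_privileged(halt_instruction);  /* refused: we are in user mode */
        mode_bit = 0;                      /* an interrupt switches to monitor mode */
        try_privileged(halt_instruction);  /* now allowed */
        return 0;
    }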


I/O Protection


All I/O instructions are privileged instructions. We must ensure that a user program can never gain control of the computer in monitor mode (i.e., a user program that, as part of its execution, stores a new address in the interrupt vector).

Memory Protection

Must provide memory protection at least for the interrupt vector and the interrupt service routines. In order to have memory protection, add two registers that determine the range of legal addresses a program may access:


– base register – holds the smallest legal physical address.

– limit register – contains the size of the range.


Memory outside the defined range is protected.
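The check itself is a single comparison. A minimal sketch in C, using the base/limit convention above:

    #include <stdbool.h>
    #include <stdint.h>

    /* The hardware test applied to every user-mode access:
       legal only if base <= addr < base + limit. */
    bool address_is_legal(uint32_t addr, uint32_t base, uint32_t limit) {
        return addr >= base && addr < base + limit;
    }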




CPU Protection

Timer – interrupts the computer after a specified period to ensure the operating system maintains control:
  • Timer is decremented every clock tick
  • When timer reaches the value 0, an interrupt occurs

    The timer is commonly used to implement time sharing.

The timer is also used to compute the current time.

Load-timer is a privileged instruction.
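A conceptual sketch of the tick/expiry mechanism in C (names illustrative; in real hardware the countdown and the interrupt are done by the timer circuit itself):

    #include <stdio.h>

    static volatile int timer = 0;

    /* Stub for the OS entry point reached when the timer expires. */
    static void raise_timer_interrupt(void) {
        puts("timer interrupt: the operating system regains control");
    }

    /* Load-timer is privileged: only the OS may set the next quantum. */
    void load_timer(int ticks) {
        timer = ticks;
    }

    /* Conceptually invoked by hardware on every clock tick. */
    void clock_tick(void) {
        if (timer > 0 && --timer == 0)
            raise_timer_interrupt();
    }

    int main(void) {
        load_timer(3);          /* give the next process 3 ticks */
        for (int i = 0; i < 5; i++)
            clock_tick();       /* the interrupt fires on the third tick */
        return 0;
    }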

9.) Storage Structure

Main Memory
Refers to physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Another term for main memory is RAM.

The computer can manipulate only data that is in main memory. Therefore, every program you execute and every file you access must be copied from a storage device into main memory. The amount of main memory on a computer is crucial because it determines how many programs can be executed at one time and how much data can be readily available to a program.

Because computers often have too little main memory to hold all the data they need, computer engineers invented a technique called swapping, in which portions of data are copied into main memory as they are needed. Swapping occurs when there is no room in memory for needed data. When one portion of data is copied into memory, an equal-sized portion is copied (swapped) out to make room.
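A toy sketch of that exchange in C (the frame and page arrays are illustrative stand-ins for real memory and backing store):

    #include <string.h>

    #define PAGE 4096

    static char memory[4][PAGE];   /* toy main memory: 4 frames */
    static char disk[64][PAGE];    /* toy backing store: 64 pages */

    /* Swap disk page `in` into frame `victim`, first saving the
       victim's contents to disk page `out` to make room. */
    void swap(int victim, int out, int in) {
        memcpy(disk[out], memory[victim], PAGE);  /* copy the old portion out */
        memcpy(memory[victim], disk[in], PAGE);   /* copy the needed portion in */
    }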

Now, most PCs come with a minimum of 32 megabytes of main memory. You can usually increase the amount of memory by inserting extra memory in the form of chips.

Magnetic Disk
A round plate on which data can be encoded. There are two basic types of disks: magnetic disks and optical disks.

On magnetic disks, data is encoded as microscopic magnetized needles on the disk's surface. You can record and erase data on a magnetic disk any number of times, just as you can with a cassette tape. Magnetic disks come in a number of different forms:

  • floppy disk : A typical 5¼-inch floppy disk can hold 360K or 1.2MB (megabytes). 3½-inch floppies normally store 720K, 1.2MB or 1.44MB of data.

  • hard disk : Hard disks can store anywhere from 20MB to more than 200GB. Hard disks are also from 10 to 100 times faster than floppy disks.

  • removable cartridge : Removable cartridges are hard disks encased in a metal or plastic cartridge, so you can remove them just like a floppy disk. Removable cartridges are very fast, though usually not as fast as fixed hard disks.
Moving Head Disk Mechanism

The machine that spins a disk is called a disk drive. Within each disk drive is one or more heads (often called read/write heads) that actually read and write data.

Accessing data from a disk is not as fast as accessing data from main memory, but disks are much cheaper. And unlike RAM, disks hold on to data even when the computer is turned off.

Rotation Speeds: 60 to 200 rotations per second
Head Crash: read-write head makes contact with the surface

Consequently, disks are the storage medium of choice for most types of data. Another storage medium is magnetic tape. But tapes are used only for backup and archiving because they are sequential-access devices (to access data in the middle of a tape, the tape drive must pass through all the preceding data).

A new disk, called a blank disk, has no data on it. Before you can store data on a blank disk, however, you must format it.
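As a worked example of what those rotation speeds mean: at 120 rotations per second, one revolution takes 1/120 s ≈ 8.3 ms, so on average the sector you want is half a revolution away, and the drive waits about 4.2 ms of rotational latency before it can even begin transferring data.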

Magnetic Tapes

A magnetically coated strip of plastic on which data can be encoded. Tapes for computers are similar to tapes used to store music.

Storing data on tapes is considerably cheaper than storing data on disks. Tapes also have large storage capacities, ranging from a few hundred kilobytes to several gigabytes. Accessing data on tapes, however, is much slower than accessing data on disks. Tapes are sequential-access media, which means that to get to a particular point on the tape, the tape must go through all the preceding points. In contrast, disks are random-access media because a disk drive can access any point at random without passing through intervening points.

Because tapes are so slow, they are generally used only for long-term storage and backup. Data to be used regularly is almost always kept on a disk. Tapes are also used for transporting large amounts of data.

Tapes come in a variety of sizes and formats. Tapes are sometimes called streamers or streaming tapes.