Thursday, June 25, 2009

2.STORAGE HIERARCHY

Caching:

->In computer science, a cache (pronounced /kæʃ/) is a collection of data duplicating original values stored elsewhere or computed earlier, where the original data is expensive to fetch (owing to longer access time) or to compute, compared to the cost of reading the cache. In other words, a cache is a temporary storage area where frequently accessed data can be stored for rapid access. Once the data is stored in the cache, it can be used in the future by accessing the cached copy rather than re-fetching or recomputing the original data.
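The idea above can be sketched in a few lines of Python (a toy stand-in for any expensive fetch or computation, not how a hardware cache is built):

```python
import time

# A made-up "expensive" computation standing in for slow storage or a long calculation.
def expensive_fetch(key):
    time.sleep(0.01)          # simulate a slow access
    return key * 2

cache = {}

def cached_fetch(key):
    if key in cache:               # cache hit: return the stored copy
        return cache[key]
    value = expensive_fetch(key)   # cache miss: fetch/compute the original
    cache[key] = value             # store it for future rapid access
    return value

print(cached_fetch(21))  # miss: computed and stored
print(cached_fetch(21))  # hit: returned from the cache, no recomputation
```

The second call never touches `expensive_fetch`, which is the whole point: reading the cached copy is cheaper than re-fetching or recomputing the original.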

->Most modern microprocessors contain some form of instruction and data caching where a small but fast bit of memory is used to speed up access to main memory. At its simplest level, a cache can pre-load blocks of memory from main memory so that the processor need not stall when performing a load. This is possible because (a) a processor usually accesses sequential memory locations and (b) loading more than one sequential memory location at the same time is faster than loading each sequential memory location one by one. Hence when the processor accesses an uncached bit of memory, the cache reads a full cache line in the hope it will be used (which it usually is).
Again at its simplest level, a cache can operate as a write-through cache: whenever the processor performs a write to main memory, the relevant cache entry is updated and a write to main memory is issued at the same time. One could say that cache coherency has been maintained, i.e. the cache accurately reflects the contents of main memory.
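The write-through behaviour can be modelled in a toy form (real caches track lines and tags, not single word addresses; this only shows the "update both" rule):

```python
# Minimal sketch of a write-through cache: every write updates both the
# cache entry and the backing "main memory", so the two never disagree.
main_memory = {addr: 0 for addr in range(16)}
cache = {}

def read(addr):
    if addr not in cache:
        cache[addr] = main_memory[addr]   # fill the cache on a miss
    return cache[addr]

def write(addr, value):
    cache[addr] = value          # update the cache entry...
    main_memory[addr] = value    # ...and write through to main memory

write(3, 99)
print(read(3), main_memory[3])   # both report 99: coherency is maintained
```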


Coherency and Consistency:

->(Or "cache consistency") /kash koh-heer'n-see/ The synchronisation of data in multiple caches such that reading a memory location via any cache will return the most recent data written to that location via any (other) cache. Some parallel processors do not cache accesses to shared memory, to avoid the issue of cache coherency. If caches are used with shared memory, then some system is required to detect when data in one processor's cache should be discarded or replaced because another processor has updated that memory location. Several such schemes have been devised. Coherency defines what value is returned on a read.
->Consistency defines when that value is available.

3.HARDWARE PROTECTION

Dual-mode Operation:

Sharing system resources requires the operating system to ensure that an incorrect program cannot cause other programs to execute incorrectly.
• Provide hardware support to differentiate between at least two modes of operation.
1. User mode – execution done on behalf of a user.
2. Monitor mode (also supervisor mode or system mode) – execution done on behalf of the operating system.


A mode bit is added to the computer hardware to indicate the current mode: monitor (0) or user (1).
• When an interrupt or fault occurs, the hardware switches to monitor mode; before control returns to a user program, the mode bit is set back to user mode.


• Privileged instructions can be issued only in monitor mode.
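The mode-bit mechanism can be illustrated with a small simulation (the mode values match the notes above, but the instruction name and fault type are made up for illustration):

```python
MONITOR, USER = 0, 1     # mode-bit values: monitor (0), user (1)
mode = USER

class ProtectionFault(Exception):
    """Raised when a privileged instruction is attempted in user mode."""
    pass

def execute_privileged(name):
    # The hardware traps any privileged instruction attempted in user mode.
    if mode != MONITOR:
        raise ProtectionFault(f"privileged instruction '{name}' in user mode")
    return f"{name} executed"

def handle_interrupt():
    global mode
    mode = MONITOR                         # interrupt/fault: switch to monitor mode
    result = execute_privileged("io_start")  # now privileged work is allowed
    mode = USER                            # set user mode before resuming the program
    return result

print(handle_interrupt())
```

A user program calling `execute_privileged` directly would raise `ProtectionFault`; only the interrupt path, running in monitor mode, may issue it.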


I/O Protection:

->All I/O instructions are privileged instructions.

->Must ensure that a user program can never gain control of the computer in monitor mode (e.g., a user program that, as part of its execution, stores a new address in the interrupt vector).



Memory Protection:

Must provide memory protection at least for the interrupt vector and the interrupt service routines.
• In order to have memory protection, add two registers that determine the range of legal addresses a program may access:
–> base register – holds the smallest legal physical memory address.
–> limit register – contains the size of the range.
• Memory outside the defined range is protected.
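The base/limit check can be sketched as follows (the register values are example numbers, not from any real machine):

```python
BASE = 300040    # smallest legal physical address for this program (example value)
LIMIT = 120900   # size of the legal range (example value)

class AddressingError(Exception):
    """Stands in for the trap the hardware raises on an illegal access."""
    pass

def check_address(addr):
    # Hardware compares every user-mode address against base and base + limit.
    if BASE <= addr < BASE + LIMIT:
        return addr          # legal: the access proceeds
    raise AddressingError(f"address {addr} outside [{BASE}, {BASE + LIMIT})")

print(check_address(300040))     # the base address itself is legal
# check_address(420940) would trap: one past the last legal address
```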



CPU Protection:

->A timer is used to prevent a user program from getting stuck in an infinite loop and never returning control to the OS: after a set period the timer interrupts the computer and transfers control back to the operating system.
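The timer idea can be sketched like this (the quantum size and tick model are illustrative, not real hardware behaviour):

```python
# Sketch of timer-based CPU protection: a countdown register is decremented
# on every "tick"; when it reaches zero the OS regains control, so even an
# infinite loop cannot monopolise the CPU.
QUANTUM = 5
timer = QUANTUM
preempted = False

def tick():
    global timer, preempted
    timer -= 1
    if timer == 0:
        preempted = True     # timer interrupt: control returns to the OS
        timer = QUANTUM      # OS reloads the timer before the next dispatch

for _ in range(100):         # stand-in for a user program that never yields
    tick()
    if preempted:
        break

print(preempted)   # True: the "program" was interrupted after QUANTUM ticks
```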

1.STORAGE STRUCTURE

Main memory:

Refers to physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Another term for main memory is RAM.

Magnetic disk:


A memory device, such as a floppy disk or a hard disk, that is covered with a magnetic coating. Digital information is stored on magnetic disks in the form of microscopically small, magnetized needles, each of which encodes a single bit of information by being polarized in one direction (representing 1) or the other (representing 0).



Moving head disk mechanism:

(Diagram of the moving-head disk mechanism not reproduced here.)


Magnetic tapes:

Magnetic tape is a medium for magnetic recording generally consisting of a thin magnetizable coating on a long and narrow strip of plastic. Nearly all recording tape is of this type, whether used for recording audio or video or for computer data storage. It was originally developed in Germany, based on the concept of magnetic wire recording. Devices that record and playback audio and video using magnetic tape are generally called tape recorders and video tape recorders respectively. A device that stores computer data on magnetic tape can be called a tape drive, a tape unit, or a streamer.
Magnetic tape revolutionized the broadcast and recording industries. In an age when all
radio (and later television) was live, it allowed programming to be prerecorded. In a time when gramophone records were recorded in one take, it allowed recordings to be created in multiple stages and easily mixed and edited with a minimal loss in quality between generations. It is also one of the key enabling technologies in the development of modern computers. Magnetic tape allowed massive amounts of data to be stored in computers for long periods of time and rapidly accessed when needed.








Wednesday, June 24, 2009



1.Bootstrap Program
In computing, booting is a bootstrapping process that starts the operating system when the user turns on a computer system. Most computer systems can only execute code found in memory (ROM or RAM); modern operating systems are mostly stored on hard disk drives, LiveCDs, and USB flash drives. Just after a computer has been turned on, it doesn't have an operating system in memory. The computer's hardware alone cannot perform complicated actions of the operating system, such as loading a program from disk on its own; so a seemingly irresolvable paradox is created: to load the operating system into memory, one appears to need an operating system already installed. The paradox is resolved by a small bootstrap program stored in read-only memory: the hardware runs it at power-on, and it initializes the system and loads the operating system kernel into memory.

2.Difference of interrupt and trap and their use.
A trap is a software-generated interrupt caused either by an error (for example, division by zero or an invalid memory access) or by a specific request from a user program for an operating system service. A trap is sometimes called an exception. Either hardware or software can generate these interrupts. When an interrupt or trap occurs, the hardware transfers control to the operating system, which first preserves the current state of the system by saving the contents of the CPU registers and the program counter's value. After this, the focus shifts to determining which type of interrupt has occurred. For each type of interrupt, separate segments of code in the operating system determine what action should be taken, and thus the system keeps functioning by executing computational instructions, I/O instructions, storage instructions, etc.
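The dispatch step described above — a separate segment of code for each interrupt type — can be sketched as a table of handlers (the handler names, messages, and CPU-state layout are made up for illustration):

```python
# One handler per interrupt/trap type, selected through an interrupt vector.
def handle_divide_by_zero(state):
    return f"terminated program at pc={state['pc']}: division by zero"

def handle_io_complete(state):
    return f"resumed program at pc={state['pc']} after I/O"

interrupt_vector = {
    "divide_by_zero": handle_divide_by_zero,
    "io_complete": handle_io_complete,
}

def raise_interrupt(kind, cpu_state):
    saved = dict(cpu_state)               # preserve registers / program counter
    return interrupt_vector[kind](saved)  # transfer control to the right handler

print(raise_interrupt("divide_by_zero", {"pc": 1024}))
```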

3. Monitor mode,
or RFMON (Radio Frequency Monitor) mode, allows a computer with a wireless network interface card (NIC) to monitor all traffic received from the wireless network. Unlike promiscuous mode, which is also used for packet sniffing, monitor mode allows packets to be captured without having to associate with an access point or ad-hoc network first. Monitor mode only applies to wireless networks, while promiscuous mode can be used on both wired and wireless networks. Monitor mode is one of the six modes that 802.11 wireless cards can operate in: Master (acting as an access point), Managed (client, also known as station), Ad-hoc, Mesh, Repeater, and Monitor mode.

4.User mode
In User mode, the executing code has no ability to directly access hardware or reference memory. Code running in user mode must delegate to system APIs to access hardware or memory. Due to the protection afforded by this sort of isolation, crashes in user mode are always recoverable. Most of the code running on your computer will execute in user mode.

5.Device Status Table
The device-status table contains an entry for each I/O device indicating its type, address, and state.
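Such a table might look like this in toy form (the device names, addresses, and states are made up for illustration):

```python
# A device-status table as a simple dictionary: one entry per I/O device,
# recording its type, address, and current state.
device_status_table = {
    "card_reader_1": {"type": "card reader", "address": 0x178, "state": "idle"},
    "disk_unit_2":   {"type": "disk",        "address": 0x290, "state": "busy"},
    "printer_3":     {"type": "printer",     "address": 0x3F8, "state": "idle"},
}

def start_io(device):
    entry = device_status_table[device]
    if entry["state"] != "idle":
        raise RuntimeError(f"{device} is busy")
    entry["state"] = "busy"     # the OS marks the device busy while the request runs

start_io("printer_3")
print(device_status_table["printer_3"]["state"])   # busy
```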



6.Direct memory access
Direct memory access (DMA) is a feature of modern computers and microprocessors that allows certain hardware subsystems within the computer to access system memory for reading and/or writing independently of the central processing unit. Many hardware systems use DMA, including disk drive controllers, graphics cards, network cards, and sound cards. DMA is also used for intra-chip data transfer in multi-core processors, especially in multiprocessor systems-on-chip, where each processing element is equipped with a local memory (often called scratchpad memory) and DMA is used for transferring data between the local memory and the main memory. Computers that have DMA channels can transfer data to and from devices with much less CPU overhead than computers without a DMA channel. Similarly, a processing element inside a multi-core processor can transfer data to and from its local memory without occupying its processor time, allowing computation and data transfer to proceed concurrently.

7.Difference of RAM and DRAM
Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today, it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). The word random thus refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.[1] By contrast, storage devices such as tapes, magnetic discs and optical discs rely on the physical movement of the recording medium or a reading head. In these devices, the movement takes longer than the data transfer, and the retrieval time varies based on the physical location of the next item. The word RAM is often associated with volatile types of memory (such as DRAM memory modules), where the information is lost after the power is switched off. Many other types of memory are RAM as well, including most types of ROM and a type of flash memory called NOR flash.

Dynamic random access memory (DRAM) is a type of random access memory that stores each bit of data in a separate capacitor within an integrated circuit. Since real capacitors leak charge, the information eventually fades unless the capacitor charge is refreshed periodically. Because of this refresh requirement, it is a dynamic memory as opposed to SRAM and other static memory.
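A toy model of the refresh requirement (the leak rate and threshold are made-up numbers; real DRAM is refreshed every few milliseconds by dedicated circuitry):

```python
# Why DRAM is "dynamic": each cell's capacitor leaks charge and must be
# refreshed periodically, or the stored bit fades.
LEAK_PER_TICK = 0.2
THRESHOLD = 0.5          # below this charge, a stored 1 reads back as 0

def read_bit(charge):
    return 1 if charge >= THRESHOLD else 0

charge = 1.0             # a freshly written 1
for tick in range(4):
    charge -= LEAK_PER_TICK          # no refresh: charge leaks away
print(read_bit(charge))  # 0: the bit faded

charge = 1.0
for tick in range(4):
    charge -= LEAK_PER_TICK
    charge = 1.0                     # refresh: rewrite full charge each tick
print(read_bit(charge))  # 1: refreshing preserves the data
```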

8.Main memory
Refers to physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Another term for main memory is RAM.

9.Magnetic Disk
Magnetic storage and magnetic recording are terms from engineering referring to the storage of data on a magnetized medium. Magnetic storage uses different patterns of magnetization in a magnetizable material to store data and is a form of non-volatile memory. The information is accessed using one or more read/write heads. As of 2009, magnetic storage media, primarily hard disks, are widely used to store computer data as well as audio and video signals. In the field of computing, the term magnetic storage is preferred and in the field of audio and video production, the term magnetic recording is more commonly used. The distinction is less technical and more a matter of preference.

10.Storage Hierarchy
The hierarchical arrangement of storage in current computer architectures is called the memory hierarchy. It is designed to take advantage of memory locality in computer programs. Each level of the hierarchy has the properties of higher bandwidth, smaller size, and lower latency than lower levels. Most modern CPUs are so fast that for most program workloads, the locality of reference of memory accesses and the efficiency of the caching and memory transfer between different levels of the hierarchy are the practical limitation on processing speed. As a result, the CPU spends much of its time idling, waiting for memory I/O to complete. This is sometimes called the space cost, as a larger memory object is more likely to overflow a small/fast level and require use of a larger/slower level.
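The trade-off between levels can be illustrated with a toy hierarchy (the level names are the usual ones, but the latencies are made-up relative costs, not real timings):

```python
# Toy memory hierarchy: each level is smaller and faster than the one below.
# An access searches top-down; the accumulated "cost" shows why locality matters.
levels = [
    {"name": "cache", "latency": 1,     "data": {}},      # small, fast
    {"name": "RAM",   "latency": 100,   "data": {}},
    {"name": "disk",  "latency": 10000,
     "data": {addr: addr for addr in range(1000)}},       # large, slow
]

def access(addr):
    cost = 0
    for i, level in enumerate(levels):
        cost += level["latency"]
        if addr in level["data"]:
            # promote the value into every faster level on the way back up
            for upper in levels[:i]:
                upper["data"][addr] = level["data"][addr]
            return cost
    raise KeyError(addr)

first = access(42)    # must go all the way down to disk
second = access(42)   # now served from the cache
print(first, second)  # 10101 1
```

A program with good locality of reference keeps paying the second price, not the first.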

Monday, June 22, 2009

6.Differentiate client-server systems and peer-to-peer systems.
Client-server computing or networking is a distributed application architecture that partitions tasks or work loads between service providers (servers) and service requesters, called clients.[1] Often clients and servers operate over a computer network on separate hardware. A server is a high-performance host that is a registering unit and shares its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers which await (listen to) incoming requests.
Peer-to-peer (P2P) networking is a method of delivering computer network services in which the participants share a portion of their own resources, such as processing power, disk storage, network bandwidth, printing facilities. Such resources are provided directly to other participants without intermediary network hosts or servers.[1] Peer-to-peer network participants are providers and consumers of network services simultaneously, which contrasts with other service models, such as traditional client-server computing.
5.Differentiate Symmetric Multiprocessing and Asymmetric Multiprocessing.
Symmetric multiprocessing treats all processors as equals, and I/O can be processed on any CPU. Asymmetric multiprocessing has one master CPU and the remainder CPUs are slaves. The master distributes tasks among the slaves, and I/O is usually done by the master only. Multiprocessors can save money, by not duplicating power supplies, housings, and peripherals. They can execute programs more quickly, and can have increased reliability. They are also more complex in both hardware and software than uniprocessor systems.
4.Advantages of Parallel Systems?
Parallel (multiprocessor) systems have three main advantages: increased throughput, since more processors can get more work done in less time; economy of scale, since several processors can share power supplies, housings, storage, and peripherals rather than duplicating them across separate machines; and increased reliability, since the failure of one processor need not halt the whole system but only slows it down (graceful degradation).
3.What's the difference between Batch systems, Multiprgrammed systems, and time-sharing systems?
Batch. A job was originally presented to the machine (and its human operator) in the form of a set of punched cards; these cards held information according to which positions were punched out of the cardboard. The operator grouped all of the jobs into various batches with similar characteristics before running them (all the quick jobs might run first, then the slower ones, etc.). While multiprogrammed systems used resources more efficiently, i.e. minimized CPU idle time, a user could not interact with a program. By having the CPU switch between jobs at relatively short intervals, we can obtain an interactive system; that is, a system in which a number of users share the CPU (or other critical resource) with a timing interval small enough not to be noticed, e.g. no more than 1 second. We say that a time-sharing system uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer. Time-sharing is sharing a computing resource among many users by multitasking. Its introduction in the 1960s, and emergence as the prominent model of computing in the 1970s, represents a major shift in the history of computing. By allowing a large number of users to interact simultaneously with a single computer, time-sharing dramatically lowered the cost of providing computing, while at the same time making the computing experience much more interactive.
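The CPU switching that makes time-sharing work can be sketched as a round-robin scheduler (the quantum and job lengths are illustrative):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Run jobs ({name: remaining_time}) in time slices of at most `quantum`.
    Returns the order of (job, slice_length) runs."""
    ready = deque(jobs.items())
    schedule = []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)     # each job gets at most one quantum
        schedule.append((name, run))
        if remaining > run:
            ready.append((name, remaining - run))  # unfinished: back of the queue
    return schedule

print(round_robin({"A": 3, "B": 2, "C": 1}, quantum=2))
# [('A', 2), ('B', 2), ('C', 1), ('A', 1)]
```

With a small enough quantum, every user sees their job making steady progress, as if each had the machine to themselves.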
1.What is the difference of OS in terms of user's view and system's view?
User's view:
The operating system is designed mostly for ease of use, with some attention paid to performance, and none paid to resource utilization.
System View:
We can view an operating system as a resource allocator. The operating system acts as the manager of these resources. An operating system can also be viewed as a control program that manages the execution of user programs to prevent errors and improper use of the computer.

Sunday, June 21, 2009

f. HANDHELD
A mobile operating system, also known as a mobile OS, a mobile platform, or a handheld operating system, is the operating system that controls a mobile device—similar in principle to an operating system such as Linux or Windows that controls a desktop computer.
However, mobile operating systems are currently somewhat simpler, and deal more with the wireless versions of broadband and local connectivity, mobile multimedia formats, and different input methods. The ongoing shift away from voice-intensive cellular technology to data-intensive mobile broadband is a significant issue for many large industries. Mobile platforms are in a nascent stage, and any projection regarding market growth is hard to make at present. It is noteworthy that Intel is taking the initiative to focus on portable devices other than mobile phones: Mobile Internet Devices (MIDs) and Ultra-Mobile PCs (UMPCs). Meanwhile, Palm abandoned its plan to develop the Foleo, which was to be a companion device for a smartphone.
2.GOALS OF THE OPERATING SYSTEM:

It is easier to define an operating system by what it does than what it is, but even this can be tricky. The primary goal of some operating system is convenience for the user. The primary goal of other operating system is efficient operation of the computer system. Operating systems and computer architecture have influenced each other a great deal. To facilitate the use of the hardware, researchers developed operating systems. Users of the operating systems then proposed changes in hardware design to simplify them. In this short historical review, notice how identification of operating-system problems led to the introduction of new hardware features.

Saturday, June 20, 2009

7. Differentiate the design issues of OS between a stand alone PC and a workstation connected to a network

stand-alone PC
A desktop or laptop computer that is used on its own without requiring a connection to a local area network (LAN) or wide area network (WAN). Although it may be connected to a network, it is still a stand-alone PC as long as the network connection is not mandatory for its general use.

workstation
A significant segment of the desktop market is computers expected to perform as workstations, but using PC operating systems and components. PC component manufacturers will often segment their product line, and market premium components which are functionally similar to the cheaper "consumer" models but feature a higher level of robustness and/or performance. Notable examples of this are the AMD Opteron and Intel Xeon processors, and the ATI FireGL and Nvidia Quadro graphics processors. A workstation-class PC may have some of the following features:
support for ECC memory
a larger number of memory sockets which use registered (buffered) modules
multiple processors
multiple displays
run a "business" or "professional" operating system version

Thursday, June 18, 2009

8.Define the essential properties of the following types of OS:

a.BATCH

Jobs with similar needs are batched together and run through the computer as a group by an operator or automatic job sequencer. Performance is increased by attempting to keep CPU and I/O devices busy at all times through buffering, off-line operation, spooling, and multiprogramming. Batch is good for executing large jobs that need little interaction; it can be submitted and picked up later.

b.TIME SHARING

Uses CPU scheduling and multiprogramming to provide economical interactive use of a system. The CPU switches rapidly from one user to another. Instead of having a job defined by spooled card images, each program reads its next control card from the terminal, and output is normally printed immediately to the screen.

c.REAL TIME

Often used in a dedicated application. The system reads information from sensors and must respond within a fixed amount of time to ensure correct performance.


d.NETWORK

A computer network is a collection of interconnected computers and devices. Networks may be classified according to a wide variety of characteristics. The network allows the computers to communicate with each other and to share resources and information; a network operating system provides the software support for this.


e.DISTRIBUTED

Distributes computation among several physical processors. The processors do not share memory or a clock. Instead, each processor has its own local memory. They communicate with each other through various communication lines, such as a high-speed bus or telephone line.