Thursday, July 30, 2009

INTERPROCESS COMMUNICATION


Direct Communication

Direct communication can be defined as speech that specifically states and directs an action. Most of us grew up hearing direct speech from our parents or teachers: "Get that homework done before you go out to play." From a boss today we might hear: "I need this on my desk by Friday."

When to Use Direct Communication:
Direct communication is often necessary in working environments; there are plenty of situations in which a direct style is the only appropriate option.


Indirect Communication

Unlike direct communication, an indirect style of speech is not typically authoritative; rather, it encourages input from the listener. By using this method, you give the other person the opportunity to speak up. An indirect style makes them feel as if their ideas are important. This style of communication places the listener in the "one-up" position.

When to Use Indirect Communication:
Like direct communication, indirect communication can be very useful in the workplace. This method can make teams run more smoothly and create an environment of friendly respect.


Synchronization

Synchronization refers to one of two distinct but related concepts: synchronization of processes and synchronization of data. Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, so as to reach an agreement or commit to a certain sequence of action. Data synchronization refers to the idea of keeping multiple copies of a dataset coherent with one another, or maintaining data integrity. Process synchronization primitives are commonly used to implement data synchronization.

=>Communication between processes takes place through calls to send and receive primitives

  • Blocking send

A blocking send returns as soon as the send buffer is free for reuse, that is, as soon as the last byte of data has been sent or placed in an internal buffer.

  • Nonblocking send

A non-blocking send returns as soon as possible, that is, as soon as it has posted the send. The buffer might not be free for reuse.

  • Blocking receive

A blocking receive returns as soon as the data is ready in the receive buffer.

  • Nonblocking receive

A non-blocking receive returns as soon as possible, that is, either with a flag that the data has not arrived yet or with the data in the receive buffer.
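
As a concrete illustration of these cases, here is a minimal sketch of blocking and non-blocking receives using POSIX message queues; the queue name "/demo_q", the sizes, and the message text are arbitrary choices for this example, not details taken from any system described above.

/* Minimal sketch: blocking vs. non-blocking receive with POSIX message
 * queues. The queue name "/demo_q" and the sizes are illustrative only.
 * Compile with -lrt on most systems. */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    char buf[64];

    /* Blocking descriptor: mq_receive() waits until data is ready. */
    mqd_t q_block = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);

    /* Non-blocking descriptor: mq_receive() returns immediately with
     * errno == EAGAIN if nothing has arrived yet. */
    mqd_t q_nb = mq_open("/demo_q", O_RDWR | O_NONBLOCK);

    if (mq_receive(q_nb, buf, sizeof buf, NULL) == -1 && errno == EAGAIN)
        printf("non-blocking receive: no message yet\n");

    mq_send(q_block, "hello", 5, 0);                         /* post a message */
    ssize_t n = mq_receive(q_block, buf, sizeof buf, NULL);  /* blocks until data is ready */
    printf("blocking receive got %zd bytes\n", n);

    mq_close(q_nb);
    mq_close(q_block);
    mq_unlink("/demo_q");
    return 0;
}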


Buffering

=>Messages exchanged by communicating processes reside in a temporary queue. Such a queue can be implemented with zero capacity, bounded capacity, or unbounded capacity.

The link may have some capacity that determines the number of messages that can be temporarily queued in it.

  • Zero Capacity

Zero capacity (queue of length 0) – the link cannot hold any messages, so sender and receiver must block; any buffering has to be done explicitly by the processes themselves.

  1. No messages wait in the queue.
  2. The sender must wait until the receiver receives the message — this synchronization to exchange data is called a rendezvous.
  • Bounded Capacity

Bounded capacity (queue of finite length n) – when the queue is not full, the message is copied into the buffer (or a pointer to it is kept).

  1. If the receiver's queue is not full, the new message is put on the queue and the sender can continue executing immediately.
  2. If the queue is full, the sender must block until space is available in the queue.
  • Unbounded Capacity

Unbounded capacity (infinite queue)

  1. The sender never blocks and can always continue executing.


Producer-Consumer Example

  • Producer

A producer generates data items and puts them in the buffer, e.g., by reading them from a file.

  • Consumer

A consumer removes items from the buffer, e.g., to send them to a printer.
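
Below is a minimal sketch of this bounded-buffer producer-consumer pattern using POSIX threads; the buffer size, item count, and the use of plain integers as items are arbitrary choices for illustration.

/* Bounded-buffer producer/consumer sketch with POSIX threads.
 * BUF_SIZE and N_ITEMS are illustrative values. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define BUF_SIZE 8
#define N_ITEMS  32

static int buf[BUF_SIZE];
static int count = 0, in = 0, out = 0;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (count == BUF_SIZE)               /* buffer full: wait */
            pthread_cond_wait(&not_full, &lock);
        buf[in] = i;                            /* e.g. an item read from a file */
        in = (in + 1) % BUF_SIZE;
        count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    for (int i = 0; i < N_ITEMS; i++) {
        pthread_mutex_lock(&lock);
        while (count == 0)                      /* buffer empty: wait */
            pthread_cond_wait(&not_empty, &lock);
        int item = buf[out];                    /* e.g. an item to send to a printer */
        out = (out + 1) % BUF_SIZE;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}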

Thursday, July 16, 2009

Interprocess Communication
  • For communication and synchronization
    –Shared memory
    –OS provided IPC
  • Message system
    –no need for shared variables
    – two operations
    •send(message) – message size fixed or variable
    •receive(message)
  • If P and Q wish to communicate, they need to
    –establish a communication link between them
    –exchange messages via send/receive
  • Implementation of communication link
    –physical (e.g., shared memory, hardware bus)
    –logical (e.g., logical properties)
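
As a small illustration of send/receive over an OS-provided link, here is a hedged sketch in which a parent process P sends a message to its child Q through a pipe; the message text and buffer size are arbitrary.

/* Sketch: process P sends a message to process Q over a pipe. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int link[2];
    pipe(link);                          /* establish the communication link */

    if (fork() == 0) {                   /* child = process Q */
        close(link[1]);                  /* Q only receives */
        char msg[64];
        ssize_t n = read(link[0], msg, sizeof msg - 1);   /* receive(message) */
        msg[n > 0 ? n : 0] = '\0';
        printf("Q received: %s\n", msg);
        _exit(0);
    }

    close(link[0]);                      /* parent = process P, only sends */
    const char *msg = "hello from P";
    write(link[1], msg, strlen(msg));    /* send(message) */
    close(link[1]);
    wait(NULL);
    return 0;
}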
Cooperating Process

  • Advantages of process cooperation
    –Information sharing
    –Computation speed-up
    –Modularity
    –Convenience
  • An independent process cannot affect or be affected by the execution of another process; cooperating processes can
  • Issues
    –Communication
    –Avoid processes getting into each other’s way
    –Ensure proper sequencing when there are dependencies
  • Common paradigm: producer-consumer
    –unbounded-buffer - no practical limit on the size of the buffer
    –bounded-buffer - assumes fixed buffer size
The Concept of Process

a. Process State

In a multitasking computer system, processes may occupy a variety of states. These distinct states may not actually be recognized as such by the operating system kernel; however, they are a useful abstraction for understanding processes.

Primary Process States

The following typical process states are possible on computer systems of all kinds. In most of these states, processes are "stored" in main memory.

  • New (created) – the process is being created.
  • Ready – the process is waiting to be assigned to a processor.
  • Running – the process's instructions are being executed.
  • Waiting (blocked) – the process is waiting for some event to occur, such as I/O completion.
  • Terminated – the process has finished execution.

b. Process Control Block

A Process Control Block (PCB, also called Task Control Block or Task Struct) is a data structure in the operating system kernel containing the information needed to manage a particular process. The PCB is "the manifestation of a process in an operating system".

Included information:


Implementations differ, but in general a PCB will include, directly or indirectly:

  • The identifier of the process (a process identifier, or PID)
  • Register values for the process including, notably, the Program Counter value for the process
  • The address space for the process
  • Priority (a higher-priority process gets first preference; e.g., the nice value on Unix operating systems)
  • Process accounting information, such as when the process was last run, how much CPU time it has accumulated, etc.
  • Pointer to the next PCB, i.e., the PCB of the next process to run

  • I/O information (i.e., I/O devices allocated to this process, list of opened files, etc.)


During a context switch, the running process is stopped and another process is given a chance to run. The kernel must stop the execution of the running process, copy out the values in hardware registers to its PCB, and update the hardware registers with the values from the PCB of the new process.
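
To make this concrete, here is a hypothetical PCB layout modeled on the list above, together with a pseudocode-level view of a context switch; the field names and the save_cpu_state()/load_cpu_state() helpers are illustrative assumptions, not the structures of any particular kernel.

/* Hypothetical PCB, loosely modeled on the fields listed above.
 * Real kernels (e.g. Linux's struct task_struct) differ in detail. */
struct pcb {
    int            pid;              /* process identifier                */
    unsigned long  registers[16];    /* saved general-purpose registers   */
    unsigned long  program_counter;  /* saved program counter             */
    void          *address_space;    /* page tables / memory map          */
    int            priority;         /* e.g. nice value                   */
    unsigned long  cpu_time_used;    /* accounting information            */
    int            open_files[32];   /* I/O information                   */
    struct pcb    *next;             /* PCB of the next process to run    */
};

/* Architecture-specific register save/restore, assumed to exist elsewhere. */
extern void save_cpu_state(struct pcb *p);
extern void load_cpu_state(const struct pcb *p);

/* Pseudocode-level view of what the kernel does on a context switch. */
void context_switch(struct pcb *running, struct pcb *next)
{
    save_cpu_state(running);   /* copy hardware registers into the old PCB   */
    load_cpu_state(next);      /* load registers from the new PCB and resume */
}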

c. Threads

A thread of execution results from a fork of a computer program into two or more concurrently running tasks. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process. Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources.

On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between different threads. This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor or multi-core system, the threads or tasks will generally run at the same time, with each processor or core running a particular thread or task. Support for threads in programming languages varies: a number of languages simply do not support having more than one execution context inside the same program executing at the same time.
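
A minimal sketch of two threads sharing memory inside one process, using POSIX threads; the shared counter and iteration count are arbitrary.

/* Two threads inside one process share the global counter.
 * Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                      /* shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* protect the shared data */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);  /* both threads run worker() */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* 200000: memory was shared */
    return 0;
}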


Process Scheduling

a. Scheduling Queues
  • Job queue – set of all processes in the system.
  • Ready queue – set of all processes residing in main memory, ready and waiting to execute.
  • Device queues – set of processes waiting for an I/O device.
  • Process migration between the various queues.

b. Schedulers

The scheduler is the operating system component that decides which process to run next, according to a scheduling policy.

As of release 10.0, HP-UX implements four schedulers, two time-share and two real-time.
To choose a scheduler, you can use the user command, rtsched(1), which executes processes with your choice of scheduler and enables you to change the real-time priority of currently executing process ID.

rtsched -s scheduler -p priority command [arguments]
rtsched [ -s scheduler ] -p priority -P pid

Likewise, the system call rtsched(2) provides programmatic access to POSIX real-time scheduling operations.
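
rtsched(2) itself is HP-UX specific. As a rough, portable sketch of the same idea, the standard POSIX call sched_setscheduler() selects a real-time policy for a process; the priority value below is arbitrary, and the call normally requires appropriate privileges.

/* Sketch: request a POSIX real-time scheduling policy for the calling
 * process. The priority value 10 is arbitrary; running this normally
 * requires root or equivalent privileges. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param param = { .sched_priority = 10 };

    /* pid 0 means "the calling process"; SCHED_FIFO is a first-in,
     * first-out real-time policy. */
    if (sched_setscheduler(0, SCHED_FIFO, &param) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running under SCHED_FIFO, priority %d\n", param.sched_priority);
    return 0;
}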

c. Context Switch

A context switch is the computing process of storing and restoring the state (context) of a CPU such that multiple processes can share a single CPU resource. The context switch is an essential feature of a multitasking operating system. Context switches are usually computationally intensive and much of the design of operating systems is to optimize the use of context switches. A context switch can mean a register context switch, a task context switch, a thread context switch, or a process context switch. What constitutes the context is determined by the processor and the operating system.



Operation on Process

a. Process Creation

Process 0 is created and initialized at system boot time but all other processes are created by a fork() or vfork() system call.


  • The fork() system call causes the creation of a new process. The new (child) process is an exact copy of the calling (parent) process.
  • vfork() differs from fork() only in that the child process can share code and data with the calling process (parent process). This speeds cloning activity significantly at a risk to the integrity of the parent process if vfork() is misused.
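
A minimal sketch of fork() in use; the printed messages are only for illustration.

/* Sketch: fork() creates a child that is a copy of the parent. */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {                      /* child: fork() returned 0 */
        printf("child:  pid=%d, parent=%d\n", getpid(), getppid());
        _exit(0);
    } else if (pid > 0) {                /* parent: fork() returned the child's PID */
        printf("parent: pid=%d, child=%d\n", getpid(), pid);
        wait(NULL);                      /* wait for the child to finish */
    } else {
        perror("fork");                  /* process creation failed */
    }
    return 0;
}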
b. Process Termination

Processes terminate in one of two ways:

  • Normal Termination occurs by a return from main or when requested by an explicit call to exit or _exit.
  • Abnormal Termination occurs as the default action of a signal or when requested by abort.


When a process finishes executing, HP-UX terminates it using the exit system call.
Circumstances might require a process to synchronize its execution with a child process. This is done with the wait system call, which has several related routines.
During the exit system call, a process enters the zombie state and must dispose of child processes. Releasing process and thread structures no longer needed by the exiting process or thread is handled by three routines
-- freeproc(), freethread(), and kissofdeath().
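
A minimal sketch of normal termination and parent/child synchronization using the standard exit() and waitpid() calls; the exit code 7 is arbitrary. Until the parent collects the status, the terminated child sits in the zombie state.

/* Sketch: child terminates normally with exit(); the parent synchronizes
 * with waitpid() and reaps the zombie. The exit code 7 is arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0)
        exit(7);                          /* child: normal termination */

    int status;
    waitpid(pid, &status, 0);             /* parent waits, reaping the zombie */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}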

Thursday, July 9, 2009

Quiz #3

1.What are the major activities of the OS with regards to process management?

=>Process creation and deletion

=>Process suspension and resumption

=>Provision of mechanisms for:

  • process synchronization
  • process communication
  • deadlock handling

2.What are the major activities of the OS with regards to main-memory management?

=>Keep track of which parts of memory are currently being used and by whom.

=>Decide which processes to load when memory space becomes available.

=>Allocate and deallocate memory space as needed.

3.What are the major activities of the OS with regards to secondary-storage management?

=>Free space management

=>Storage allocation

=>Disk scheduling

4.What are the major activities of the OS with regards to file management?

=>File creation and deletion

=>Directory creation and deletion

=>Support of primitives for manipulating files and directories

=>File backup on stable (nonvolatile) storage media

=>Mapping files onto secondary storage

5.What is the purpose of the command interpreter?

=> It reads commands from the user or from a file of commands and executes them, usually by turning them into one or more system calls. It is usually not part of the kernel since the command interpreter is subject to changes.

Tuesday, July 7, 2009

SYSTEM BOOT



Set The System's Boot Device Sequence




AMI-BIOS: For most versions, managing a PC's boot device sequence appears under the "Advanced BIOS Features" menu
If your PC has a relatively new motherboard, its BIOS will already include the functions necessary to support USB-attached boot media. If so, you need only make the right selections in that BIOS menu to boot from a USB flash drive. Older PCs, on the other hand, won't accept USB drives as valid boot devices. This means a BIOS update that supports USB boot options is necessary. You can find information about where to obtain such updates from your PC's (or motherboard's) user manual, on the driver CD included with the PC (or motherboard) or on the vendor's Website.
Normally, the hard disk precedes the USB flash drive (which falls under the heading of USB-HDD in most BIOS menus) in the boot order. If the hard disk contains a viable boot sector, the PC will start up automatically using the information it contains. Only when the hard disk suffers from a boot sector defect or an operating system can't be found will the PC boot from the USB flash drive instead.
To change this boot order, plug the flash drive in, boot the computer and enter the BIOS setup utility. Normally, this means holding down the DEL or F2 key just as the computer powers up and begins the boot process. If you read the initial startup screen on your PC carefully, it will tell you exactly what you must do to access and alter your BIOS settings.
If your PC uses AMI-BIOS from American Megatrends, there are two possible ways to alter the boot device sequence. Each varies depending on the version of AMI-BIOS that's installed.
For the first variant, there is no menu entry named "Boot." Navigate to the sub-menu named "Advanced BIOS Features." Navigate to the item named "Boot Device Select... " and designate the USB flash drive as the first device in the "Boot Device Priority" list by selecting "1st" as its value. Then, hit the Esc key and set both the "Quick Boot" and "Full Screen LOGO Show" items to "Disabled" (this lets you see the BIOS messages during startup on the monitor). Exit the BIOS Setup utility using the "Save and Exit Setup" item in the main menu.
For the second variant, use the "Boot" menu to select the USB flash drive. It will show up under one of the following headings: "Hard Disk Drive", "Removable Device" or "Removable Storage Device. " Next, select the USB flash drive as "1st Drive" in the first position, then hit the Esc key. That device should appear in the menu named "Boot Device Priority" which might also show up as "Boot Sequence". Inside that menu, designate the USB flash drive as the "1st Boot Device", hit the Esc key and save all changes in the "Exit" menu by selecting "Exit and Save Changes".
The Phoenix BIOS that's so popular in notebook computers also lists the USB flash drive in its "Boot" menu (which might also appear as "Boot Device Priority"). In this case, the flash drive may show up as an entry in the "-HDD" or "-Removable Devices" sub-menu. Select the device class ("-Hard Drive" or "-HDD" for example) and use the F6 key to move the flash drive to the top of that list. Exit the BIOS Setup program by striking the F10 key, followed by the Enter key, to save all settings.
SYSTEM GENERATION

An operational system is a combination of the z/TPF system, application programs, and people. People assign purpose to the system and use the system. The making of an operational system depends on three interrelated concepts:
*System definition: The necessary application and z/TPF system knowledge required to select the hardware configuration and related values used by the z/TPF system software.
*System initialization: The process of creating the z/TPF system tables and configuration-dependent system software.
*System restart and switchover: The procedures used by the z/TPF system software to ready the configuration for online use.


The first two items are sometimes collectively called system generation (also referred to as installing and implementing). System definition is sometimes called design. System restart is the component that uses the results of a system generation to place the system in a condition to process real-time input. The initial startup is a special case of restart, and for this reason system restart is sometimes called initial program load, or IPL. System restart uses values found in tables set up during system generation and changed during the online execution of the system. A switchover implies shifting the processing load to a different central processing complex (CPC), and requires some additional procedures on the part of a system operator. A restart or switchover may be necessary because of a detected hardware failure, a detected software failure, or operator option. In any event, system definition (design), initialization, restart, and switchover are related to error recovery. This provides the necessary background to use this information, which is the principal reference to be used to install the z/TPF system.

Performing a system generation requires a knowledge of the z/TPF system structure, system tables, and system conventions, a knowledge of the applications that will be programmed to run under the system, and a user's knowledge of z/OS. Knowledge of the z/TPF system, Linux, and the application is required to make intelligent decisions to accomplish the system definition of a unique z/TPF system environment. The use of z/OS and Linux is necessary because many programs used to perform system generation run under control of z/OS or Linux. Although this information does not rely on much z/OS or Linux knowledge, when the moment arrives to use the implementation information, the necessary z/OS and Linux knowledge must be acquired. You are assumed to have some knowledge of S/370 assembly language as well as the jargon associated with the z/OS and Linux operating systems. Some knowledge of the C language is also helpful, because some of the programs that are used to generate the system are written in C.

VIRTUAL MACHINE

Implementation

Virtual machines are usually written in “portable” programming languages such as C or C++ (portable in the sense that compilers for most architectures already exist).
For performance-critical components, assembly language can be used.
Some VMs (Lisp, Forth, Smalltalk) are largely written in the language itself.
Many VMs are written specifically for gcc.


Benefits

Partitioning – Multiple application and OS instances in a single machine.
Isolation – Each virtual machine is isolated from the host and other virtual machines.
Encapsulation – Each entire virtual machine state is contained in software; standard virtual hardware guarantees compatibility.


Examples


Examples of Authorizing Virtual Machines (z/VM V5R4.0 Connectivity, SC24-6080-07)

The following examples show how to explicitly authorize server virtual machines, the AVS virtual machine, and requester virtual machines.
Example 1: Figure 92 is an example of an explicitly authorized TSAF collection involving two z/VM systems sharing global resources. The entries within each box represent the CP directory entries for each CMS virtual machine.
Figure 92. TSAF Collection with Authorized Global Resource Managers and User Programs







In Figure 92, users have the following authorization:
USERa on VMSYS1 can connect only to RES2 on VMSYS2.
USERb on VMSYS1 can connect only to RES1 on VMSYS1.
USERc on VMSYS2 can connect to RES1 on VMSYS1 and to RES2 on VMSYS2.
USERd on VMSYS2 can connect only to RES2 on VMSYS2.

Example 2: Figure 93 shows a TSAF collection in which the server and requester virtual machines are explicitly authorized to share local and private resources. The entries within each box represent the CP directory entries of each CMS virtual machine.
Figure 93. TSAF Collection with Authorized Local and Private Resource Managers and User Programs
In this figure, users have the following authorization:



USERa on VMSYS3 can connect only to RMGR4 on VMSYS4 to access a private resource managed by RMGR4.
USERb on VMSYS3 can connect only to RES1 on VMSYS3.
USERc on VMSYS4 can connect only to RMGR4 on VMSYS4 to access a private resource managed by RMGR4.


Example 3: Figure 94 shows an explicitly authorized TSAF collection involving two z/VM systems and one AVS virtual machine. The entries within each box represent the CP directory entries for each CMS virtual machine and the AVS virtual machine.
Figure 94. TSAF Collection with an AVS Virtual Machine


In this figure, users have the following authorization:
USERa on VMSYS5 can only connect out to the SNA network through GAT2 on VMSYS6.
USERb on VMSYS5 can only connect out to the SNA network through GAT1 on VMSYS6.
USERc on VMSYS6 can connect out to the SNA network through any gateway defined on VMSYS6 because it is authorized to connect to any virtual machine, resource, or gateway on the local system.

Thursday, July 2, 2009

SYSTEM COMPONENTS

Process Management

A process is a program in execution. A process needs certain resources, including CPU time, memory, files, and I/O devices, to accomplish its task. The operating system is responsible for the following activities in connection with process management.
*Process creation and deletion.
*Process suspension and resumption.
*Provision of mechanisms for:
=>a. process synchronization
=>b. process communication

Main-Memory Management

Memory is a large array of words or bytes, each with its own address. It is a repository of quickly accessible data shared by the CPU and I/O devices. Main memory is a volatile storage device; it loses its contents in the case of system failure. The operating system is responsible for the following activities in connection with memory management:
*Keep track of which parts of memory are currently being used and by whom.
*Decide which processes to load when memory space becomes available.
*Allocate and deallocate memory space as needed.


File Management

A file is a collection of related information defined by its creator. Commonly, files represent programs (both source and object forms) and data.

The operating system is responsible for the following activities in connection with file management:
*File creation and deletion.
*Directory creation and deletion.
*Support of primitives for manipulating files and directories.
*Mapping files onto secondary storage.
*File backup on stable (nonvolatile) storage media.


I/O System Management

=> I/O system consists of:
*A buffer-caching system
*A general device-driver interface
*Drivers for specific hardware devices

Secondary-Storage Management

Since main memory (primary storage) is volatile and too small to accommodate all data and programs permanently, the computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the principal on-line storage medium, for both programs and data.

The operating system is responsible for the following activities in connection with disk management:

*Free space management
*Storage allocation
*Disk scheduling

Networking (Distributed Systems)

A distributed system is a collection of processors that do not share memory or a clock. Each processor has its own local memory. The processors in the system are connected through a communication network. Communication takes place using a protocol. A distributed system provides user access to various system resources.
Access to a shared resource allows:
*Computation speed-up
*Increased data availability
*Enhanced reliability


Protection System

Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.
The protection mechanism must:
*Distinguish between authorized and unauthorized usage.
*Specify the controls to be imposed.
*Provide a means of enforcement.


Command-Interpreter System

Many commands are given to the operating system by control statements which deal with:
*process creation and management
*I/O handling
*secondary-storage management
*main-memory management
*file-system access
*protection
*networking
The program that reads and interprets control statements is called variously:
*command-line interpreter
*shell (in UNIX)
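
A minimal sketch of such a command-line interpreter: it reads a command and turns it into system calls (fork, execvp, wait). For simplicity this sketch assumes one-word commands with no arguments, pipes, or redirection.

/* Minimal command-interpreter (shell) loop. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];

    for (;;) {
        printf("mysh> ");
        if (!fgets(line, sizeof line, stdin))
            break;                            /* end of input */
        line[strcspn(line, "\n")] = '\0';     /* strip trailing newline */
        if (line[0] == '\0')
            continue;
        if (strcmp(line, "exit") == 0)
            break;

        if (fork() == 0) {                    /* child runs the command */
            char *argv[] = { line, NULL };
            execvp(line, argv);
            perror("execvp");                 /* reached only if exec fails */
            _exit(1);
        }
        wait(NULL);                           /* shell waits for the command */
    }
    return 0;
}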
OPERATING SYSTEM SERVICES



Operating systems are responsible for providing essential services within a computer system:
Initial loading of programs and transfer of programs between secondary storage and main memory

  • Program execution – system capability to load a program into memory and to run it.
  • I/O operations – since user programs cannot execute I/O operations directly, the operating system must provide some means to perform I/O.
  • File-system manipulation – program capability to read, write, create, and delete files.
  • Communications – exchange of information between processes executing either on the same computer or on different systems tied together by a network. Implemented via shared memory or message passing.
  • Error detection – ensure correct computing by detecting errors in the CPU and memory hardware, in I/O devices, or in user programs.
SYSTEM CALLS

Process control

Process control is a statistics and engineering discipline that deals with architectures, mechanisms, and algorithms for controlling the output of a specific process.

For example, heating a room is a process with the specific, desired outcome of reaching and maintaining a defined temperature (e.g. 20°C), kept constant over time. Here, the temperature is the controlled variable. At the same time, it is the input variable, since it is measured by a thermometer and used to decide whether or not to heat. The desired temperature (20°C) is the setpoint. The state of the heater (e.g. the setting of the valve allowing hot water to flow through it) is called the manipulated variable, since it is subject to control actions.
A commonly used control device called a programmable logic controller, or PLC, is used to read a set of digital and analog inputs, apply a set of logic statements, and generate a set of analog and digital outputs. Using the example in the previous paragraph, the room temperature would be an input to the PLC. The logical statements would compare the setpoint to the input temperature and determine whether more or less heating was necessary to keep the temperature constant. A PLC output would then either open or close the hot water valve by an incremental amount, depending on whether more or less hot water was needed. Larger, more complex systems can be controlled by a Distributed Control System (DCS) or SCADA system.
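
A toy sketch of the on/off control logic described above; read_temperature() and set_heater_valve() are hypothetical stand-ins for the PLC's analog input and digital output, and the 20°C setpoint and 0.5°C hysteresis band are illustrative values.

/* Toy on/off (bang-bang) control step. read_temperature() and
 * set_heater_valve() are hypothetical I/O routines. */
#include <stdbool.h>

extern double read_temperature(void);        /* controlled / input variable */
extern void   set_heater_valve(bool open);   /* manipulated variable        */

#define SETPOINT   20.0   /* desired temperature, degrees C          */
#define HYSTERESIS  0.5   /* dead band to avoid rapid on/off cycling */

void control_step(void)
{
    double temp = read_temperature();

    if (temp < SETPOINT - HYSTERESIS)
        set_heater_valve(true);    /* too cold: open the hot-water valve */
    else if (temp > SETPOINT + HYSTERESIS)
        set_heater_valve(false);   /* warm enough: close the valve       */
    /* otherwise leave the valve in its current state */
}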
In practice, process control systems can be characterized as one or more of the following forms:
Discrete – Found in many manufacturing, motion and packaging applications. Robotic assembly, such as that found in automotive production, can be characterized as discrete process control. Most discrete manufacturing involves the production of discrete pieces of product, such as metal stamping.
Batch – Some applications require that specific quantities of raw materials be combined in specific ways for particular durations to produce an intermediate or end result. One example is the production of adhesives and glues, which normally require the mixing of raw materials in a heated vessel for a period of time to form a quantity of end product. Other important examples are the production of food, beverages and medicine. Batch processes are generally used to produce a relatively low to intermediate quantity of product per year (a few pounds to millions of pounds).
Continuous – Often, a physical system is represented through variables that are smooth and uninterrupted in time. The control of the water temperature in a heating jacket, for example, is an example of continuous process control. Some important continuous processes are the production of fuels, chemicals and plastics. Continuous processes in manufacturing are used to produce very large quantities of product per year (millions to billions of pounds).



File Management


Also referred to as simply a file system or filesystem. The system that an operating system or program uses to organize and keep track of files.


Device Management

Device Management is a set of technologies, protocols and standards used to allow the remote management of mobile devices, often involving updates of firmware over the air (FOTA). The network operator, handset OEM or in some cases even the end-user (usually via a web portal) can use Device Management, also known as Mobile Device Management, or MDM, to update the handset firmware/OS, install applications and fix bugs, all over the air. Thus, large numbers of devices can be managed with single commands and the end-user is freed from the requirement to take the phone to a shop or service center to refresh or update.

Information Maintenance

Information maintenance system calls transfer information between the user program and the operating system, for example getting or setting the time and date, system data, or the attributes of processes, files, and devices.



SYSTEM STRUCTURE

Simple Structure

These are structures with a low degree of departmentalisation and a wide span of control. The authority is largely centralised in a single person with very little formalisation. It is also called a 'flat structure'. It usually has only two or three vertical levels, a flexible set of employees, and generally one person in whom the power of decision-making is invested. This simple structure is most widely practiced in small business settings where the manager and owner happen to be the same person. Its advantage lies in its simplicity, which makes it responsive, fast, accountable and easy to maintain. However, it becomes grossly inadequate as and when the organisation grows in size. Such a simple structure is becoming popular because of its flexibility, responsiveness and high degree of adaptability to change.

Layered approach

The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0), is the hardware; the highest (layer N) is the user interface.
With modularity, layers are selected such that each uses functions (operations) and services of only lower-level layers.

MS-DOS Layered Structure