Concurrent Eiffel with SCOOP
SCOOP is Simple Concurrent Object-Oriented Programming. SCOOP allows developers to create object-oriented software systems which will take advantage of multiple, concurrently active execution vehicles. Additionally, SCOOP programming is done at a level of abstraction above the specific details of these implementation vehicles. Read further to get a better idea of what all this means, but for now, the primary message should be: SCOOP is concurrent software development made easy. The basic SCOOP ideas were first published as early as 1993. Since that time, considerable research and development has refined SCOOP into the model that is implemented in EiffelStudio today.
Concurrency in computation is a situation in which we can expect that a running computer system will have multiple computations executing simultaneously in a controlled fashion to achieve the goals of the system. The simultaneous executions can be handled by widely diverse computational vehicles: separate networked computer systems, separate processors in the same CPU, separate processor cores on a single chip, separate processor threads within a process, separate processes on the same CPU, etc.
Concurrent systems would not cause much trouble if the portions of the systems on different processors, processes, or threads were completely independent, that is, they shared no resources. But that would be a rare case indeed. In a concurrent system, simultaneously executing software elements can and do share resources and communicate with each other. This is where the problems can arise; problems in the form of various synchronization issues such as race conditions, atomicity violations, and deadlocks. The issues boil down to two essential problems in allowing access to shared resources:
- Avoid deadlocks: make certain that no two executing threads of control wait perpetually because each needs a resource which is under the control of the other.
- Ensure fairness: make certain that every participating thread of control eventually gets the opportunity to execute.
Concurrency control is a rich research area in computer science. Consequently, many schemes have been designed to control concurrent computation.
SCOOP is one such model for concurrent computation. It differs, however, from many other research efforts in several respects.
First, it is a goal of SCOOP to abstract the notion of concurrency to a level above the tools and techniques that are currently available in the target concurrency environment. What this means is that if you were writing a system with multiple process threads, you could do that without SCOOP, using the tools that are currently used in multi-threaded programming, like semaphores and mutexes. Or you could write it in SCOOP using only the SCOOP mechanisms. Likewise, a system intended to run on multiple processors or multiple processor cores could also be written using only those same SCOOP mechanisms that you used for the multi-threaded system.
Second, the SCOOP model, as it is implemented in Eiffel, depends primarily upon Design by Contract, with slightly changed contract semantics, and a single new keyword, `separate`, added to the Eiffel programming language. As you will see, the semantics of preconditions differ under concurrent execution versus sequential execution. Also, there are other underlying concepts and rules that need to be understood, but the point is that concurrent Eiffel using SCOOP will look a lot like sequential Eiffel.
Third, SCOOP uses the common act of argument passing to identify the necessity for guaranteeing exclusive access.
We will examine the details of how all this fits together and what it means to you as you begin to build concurrent software in Eiffel using SCOOP.
Eiffel’s familiar model for object-oriented computation:

```eiffel
x.f (a)
```

continues to be valid in SCOOP. But the way we understand the model differs slightly. In sequential Eiffel we would refer to this as a feature call, with a client calling feature `f` on a supplier object (the call’s target) currently attached to the entity `x`, and possibly passing arguments represented by `a`. We might alternatively refer to `x.f (a)` as a feature application; specifically, the application of feature `f` to the object associated with `x`. This is fine in sequential Eiffel, but as you will see, in SCOOP we have to make a distinction between feature call and feature application. The distinction will become clear as we discuss the notions of processors and separate calls.
In the context of SCOOP, processor is an abstract notion.
In traditional, sequential Eiffel, although we realize that there is some processor which executes our systems, we don’t usually give it much thought. When we do, we generally regard it as a hardware entity on which our software can run.
The term processor (or, interchangeably, handler) is vital to SCOOP and is thought of in a slightly different way than in traditional Eiffel, i.e., not just as a hardware processor. In a concurrent system, there may be any number of processors. Here the term is used in a more abstract sense than before. In SCOOP we think of a processor as any autonomous thread of control capable of applying features to objects. At the level of the SCOOP model, processors are not restricted to a particular type of hardware or software. So, if you were writing software for a hardware implementation with multiple processors, those real processors might correspond to the processors of SCOOP. But if you were writing a system using multiple process threads, then those threads might correspond to SCOOP processors.
Multiple processors in SCOOP come into play when feature calls on a particular object may actually be applied by a different processor than the one on which the feature call was issued. Of course, this is the important distinction between feature call and feature application that was mentioned above. In SCOOP, the processor which does the feature application may be different from the one that does the feature call. So you can think of feature call as being the logging or queuing of a request to have a feature applied.
Separate types and separate calls
SCOOP introduces the notion of separateness.
The determining factor for the use of multiple processors is the use of separate types and separate calls. In a running system, every object is handled by a processor, but if there are no separate types or separate calls in a system, then only one processor will be used during execution, i.e., all calls will be non-separate ... and, consequently, there is no SCOOP-based concurrent processing present.
If an entity uses the keyword `separate` in its declaration, such as:

```eiffel
my_x: separate X
```

it indicates that the application of features to an object attached to `my_x` may occur on a different processor than the one on which the feature call was made. Such calls, for example `my_x.f`, would be considered separate calls. Additionally, the type of `my_x` is the separate type `separate X`. A feature call with target `my_x` would generally be considered a separate call, simply because it is a feature call on an object of a separate type, and therefore could be applied on a different processor. You will see shortly that separate calls are valid only in certain contexts.
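As a minimal sketch of such a declaration in context (the class names `DEMO` and `X` and the creation procedure are illustrative assumptions, not taken from this article), a client class might look like this:

```eiffel
class
	DEMO

create
	make

feature

	my_x: separate X
			-- Object that may be handled by a different processor.

	make
			-- Create `my_x'; at run time it may be assigned its own
			-- processor, distinct from the one handling `Current'.
		do
			create my_x
		end

end
```

Any feature call with target `my_x` made from within `DEMO` would then be a separate call, subject to the validity rules discussed below.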
Access to shared resources
As mentioned above, the main issue with concurrent systems is the proper control of access to resources that can be shared among simultaneously executing processors.
Traditional solutions to the problem involve the use of “critical sections” of code. These are sections of code in which the shared resource is accessed. Only one processor is allowed to be executing the critical section at a time. So if one process wants to execute the critical section and another is already doing so, then the first must wait. Process synchronization schemes ensure this “mutual exclusion” of access to the critical section.
Rather than using critical sections, SCOOP relies on the mechanism of argument passing to ensure controlled access. As a result, a restriction known as the separate argument rule is placed on separate calls: for a separate call to be valid, the target of the call must be a formal argument of the routine in which the call occurs. The code below contains both an invalid separate call and a valid one.
```eiffel
my_separate_attribute: separate SOME_TYPE

...

calling_routine
		-- One routine
	do
		my_separate_attribute.some_feature
			-- Invalid call: feature call on separate attribute
		enclosing_routine (my_separate_attribute)
			-- Separate attribute passed as argument
	end

enclosing_routine (a_arg: separate SOME_TYPE)
		-- Another routine
	do
		a_arg.some_feature
			-- Valid call: feature call on separate argument
	end
```
In the code above, `my_separate_attribute` is a class attribute declared with a separate type. In the first line of `calling_routine`, a direct feature call is made on `my_separate_attribute`. This is an invalid separate call. The second line calls feature `enclosing_routine`, passing `my_separate_attribute` as an argument. `enclosing_routine` takes an argument of type `separate SOME_TYPE`. Within `enclosing_routine` it is valid to make calls on `a_arg`, because `a_arg` is a formal argument of the routine in which the calls occur. In `calling_routine` above, the call to `enclosing_routine` has a separate actual argument:

```eiffel
enclosing_routine (my_separate_attribute)
		-- Separate attribute passed as argument
```
Because the actual argument `my_separate_attribute` is of a separate type, it may be handled by a processor different from the one on which the call to `enclosing_routine` occurs. As a result, the execution of `enclosing_routine` will be delayed until the processor which handles `my_separate_attribute` is available for exclusive access. This type of delay is described by the Wait rule.
Valid targets for separate calls, like `a_arg` in `enclosing_routine` above, are said to be controlled. A controlled expression is controlled with respect to the processor handling the context in which the expression is used (the current context) ... which means that all objects necessary to the expression are under the control of (available for exclusive access by) the current processor and cannot be modified by other processors.
Synchronous and asynchronous feature calls
As stated above, when we think of the execution of sequential Eiffel, we tend to equate feature call and feature application. That is, for a sequence of two feature calls, it is expected that the feature application associated with the first call, say `x.f`, will complete before the application associated with the second call begins. In concurrent Eiffel with SCOOP, things are different. This is because a particular feature call, `x.f`, may occur on one processor, and the consequent feature application (of feature `f`) may occur on a different processor.
After an asynchronous feature call, the execution of the client proceeds immediately, possibly in parallel with the application of the feature on some other processor. We'll revisit this point after a look at what it takes for a call to be synchronous or asynchronous.
What makes a call synchronous or asynchronous?
First, every feature call is either a synchronous feature call or an asynchronous feature call. For a particular call, the following rules determine which it is:
A feature call is synchronous in the following cases:
- S1 It is a non-separate call.
- S2 It is a separate call:
- S2.1 To a query, or
- S2.2 To a command which has at least one actual argument which is of a reference type and which is either
- S2.2.1 A separate formal argument of the enclosing routine, or
- S2.2.2 The current object, `Current`.
A feature call is asynchronous in the following case:
- A1 It is a separate call to a command with no arguments, or arguments not meeting the criteria of S2.2 above.
Let’s look a little closer at those cases determining synchronous calls.
Case S1 is the case of typical sequential Eiffel, where all calls are non-separate, and therefore synchronous. Of course, even in concurrent Eiffel with SCOOP, plenty of non-separate calls will occur, and these will be synchronous.
Case S2.1 says that if a separate call is a query it must be synchronous. This is because even though the feature application will probably occur on a different processor, the instructions following the query will likely depend upon the result of the query, so they must wait until the feature application completes. This situation is known as wait by necessity.
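Wait by necessity can be sketched as follows (the class `COUNTER` and its query `value` are assumed for illustration, not defined in this article):

```eiffel
show_count (a_counter: separate COUNTER)
		-- Print the current value held by `a_counter'.
	do
		print (a_counter.value)
			-- `value' is a query, so this separate call is
			-- synchronous (case S2.1): the client must wait for the
			-- handler of `a_counter' to return the result before
			-- `print' can proceed.
	end
```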
Case S2.2 describes a situation in which a call provides at least one actual argument that is `Current` or is a separate formal argument of the call’s enclosing routine. In this case the client is calling a procedure and passing arguments which are controlled in the context of the calling routine. That is, the actual arguments are objects to which the client processor has exclusive access in the enclosing routine. In order for the supplier processor to be able to apply the feature (presumably accessing the argument objects in the process), the client must pass its exclusive access to these objects on to the supplier. This is done through a mechanism called access passing. Because the client has passed its exclusive access to the supplier processor, it cannot continue execution until the called feature has been applied by the supplier processor, and the supplier processor has returned exclusive access to the client. Therefore, this type of call must be synchronous.
Now consider the only case, Case A1, determining asynchronous calls.
Separate calls to commands are asynchronous (except as in case S2.2). This means that when a client executes an asynchronous feature call, it “logs” the need for its associated feature application. But then rather than waiting for the feature application to complete, the client routine continues execution of instructions beyond the asynchronous call.
It is in this case that concurrent computation is achieved. The processor of the client object is free to continue processing while the processor handling the target of the asynchronous feature call applies that feature.
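An asynchronous command call of case A1 can be sketched like this (the class `LOGGER` and its command `put` are assumed for illustration):

```eiffel
log_progress (a_logger: separate LOGGER)
		-- Log two messages without waiting for them to be written.
	do
		a_logger.put ("step 1 done")
			-- Asynchronous (case A1): the call is logged with
			-- `a_logger''s handler and the client continues at once.
		a_logger.put ("step 2 done")
			-- Also asynchronous; the handler applies the two
			-- features in the order they were logged, possibly in
			-- parallel with the client's continuing execution.
	end
```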
Design by Contract and SCOOP
The backbone of the Eiffel Method is Design by Contract. Preconditions, postconditions, and class invariants are used in Eiffel for extending software interfaces into software specification. This is essentially the same in concurrent Eiffel with SCOOP as it is in traditional, sequential Eiffel. However, because of the concurrent nature of processing under SCOOP, the runtime semantics of the elements of Design by Contract are different for concurrent systems.
The role of the precondition is somewhat different in SCOOP than in sequential Eiffel. In non-concurrent Eiffel we view the precondition of a routine as defining a set of obligations on potential callers of the routine. That is, the set of conditions that must be true before correct execution of the routine can be expected. So, we could look at the precondition clauses in sequential Eiffel as correctness conditions. A typical example might be a square root routine that returns the square root of a passed argument value. A precondition clause, i.e., a correctness condition, for this routine will be that the argument must be non-negative. It is the responsibility of the caller to ensure that this property of the argument holds at the time of the feature call.
In concurrent Eiffel, the same correctness conditions are still valid, but there are cases in which we must view the client’s role a little differently. In the case of a precondition clause that depends upon an uncontrolled object, even if the client tests the condition ahead of the call, there is no assurance that some other concurrent processor has not invalidated the clause between the time the check was made and the time the feature application takes place. So, the client cannot be held responsible for establishing that this clause holds. This type of precondition clause is called an uncontrolled precondition clause.
So, the determination of whether a particular precondition or postcondition clause is controlled or uncontrolled depends upon the context of the calling routine. That means that a particular clause on feature `f` might be considered controlled when `f` is called by one caller, but uncontrolled when called by a different caller.
Uncontrolled precondition clauses demand an adaptation of precondition semantics. The client’s responsibility is limited to those precondition clauses that are controlled; uncontrolled precondition clauses become wait conditions, meaning that the feature application simply waits until they hold.
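The classic illustration of a wait condition is a consumer routine on a shared buffer (the class `BUFFER` and its features `is_empty` and `remove` are assumed for illustration):

```eiffel
consume (a_buffer: separate BUFFER)
		-- Remove one item from `a_buffer'.
	require
		not a_buffer.is_empty
			-- For a caller that does not control the buffer, this
			-- clause is uncontrolled: rather than causing a
			-- precondition violation, it acts as a wait condition,
			-- so the feature is applied only once the buffer is
			-- non-empty.
	do
		a_buffer.remove
	end
```

A client holding the buffer only as a separate attribute can thus simply call `consume` and let the SCOOP run time do the waiting, with no explicit locks or condition variables.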
As with preconditions, the effect of concurrent execution makes a difference in how postconditions are viewed.
If a routine has executed correctly, then the postcondition of the routine will hold at the time that it terminates ... this is true whether or not concurrency is involved. However, when a postcondition involves separate calls or entities, clients must be cautious about how they depend upon the state guaranteed by postconditions.
The separate argument rule above tells us that separate calls are valid only on targets which are formal arguments of their enclosing routines. Because class invariants are not routines and therefore have no arguments, separate calls are not allowed in class invariants.
The semantics of class invariants will be the same as in sequential Eiffel, precisely because invariants must include only non-separate calls. To put it in the terms of SCOOP, the class invariant ensuring the validity of any particular object will be evaluated entirely by the processor handling that object.