Table of Contents
- Programming Guide Overview
- Getting Started with Panini
- Capsule-oriented Design
- Panini Language
- Implicit Parallelism
- Installing and Running the Panini compiler
- Profiling Panini Programs
- Technical Publications
The Panini programming language is designed to enable implicit concurrency as a direct result of modularizing a system into capsules, and to preserve modular reasoning in the presence of concurrency.
There is no escape: all programmers will soon be forced to consider concurrency decisions in software design. Most modern software systems tend to be distributed, event-driven, and asynchronous, often requiring components to maintain multiple threads for message and event handling. There is also increasing pressure on developers to introduce concurrency into applications in order to take advantage of multicore processors to improve performance.
Yet concurrent programming stubbornly remains difficult and error-prone. First, a programmer must partition the overall system workload into tasks. Second, tasks must be associated with threads of execution in a manner that improves utilization while minimizing overhead; note that this set of decisions is highly dependent on characteristics of the platform, such as the number of available cores. Finally, the programmer must manage the dependence, interaction, and potential interleaving between tasks to maintain the intended semantics and avoid concurrency hazards, often by using low-level primitives for synchronization. Addressing these issues requires the invention and refinement of better abstractions that hide the details of concurrency from programmers and allow them to focus on the program logic.
The significance of better abstractions for concurrency is not lost on the research community. However, we believe that a major gap remains. There is an impedance mismatch between sequential and implicitly concurrent code written using existing abstractions that is hard for a sequentially trained programmer to overcome. These programmers typically rely upon the sequence of operations to reason about their programs.
To illustrate the challenges of concurrent program design, consider a simplified navigation system. The system consists of four components: a route calculator, a maneuver generator, an interface to a GPS unit, and a UI. The UI requests a new route by invoking a calculate operation on the route calculator, assumed to be computationally intensive. When finished, the route is passed to the maneuver generator via method setNewRoute. The GPS interface continually parses the data stream from the hardware and updates the maneuver generator with the current position via method updatePosition. The maneuver generator checks the position against the current route and generates a new turn instruction for the UI if needed (not computationally intensive).
The modular structure of the system is clear from the description above, and it is not difficult to define four Java classes with appropriate methods corresponding to this design.
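Such a sequential skeleton might look as follows. This sketch follows the method names given in the text (calculate, setNewRoute, updatePosition); the Route type, the fields, and the sample coordinate values are illustrative assumptions, not code from the guide.

```java
import java.util.List;

class Route {
    final List<String> waypoints;
    Route(List<String> waypoints) { this.waypoints = waypoints; }
}

class RouteCalc {
    // Computationally intensive in the real system; trivial here.
    Route calculate(String from, String to) {
        return new Route(List.of(from, to));
    }
}

class ManeuverGen {
    private Route route;
    private double lat, lon;

    void setNewRoute(Route r) { route = r; }

    void updatePosition(double lat, double lon) {
        this.lat = lat;
        this.lon = lon;
        // Compare the position against the current route and, if needed,
        // generate a new turn instruction for the UI (omitted).
    }

    Route currentRoute() { return route; }
}

class GpsInterface {
    private final ManeuverGen gen;
    GpsInterface(ManeuverGen gen) { this.gen = gen; }

    // The real implementation loops over the hardware data stream;
    // a single update stands in for that loop here.
    void poll() { gen.updatePosition(42.03, -93.62); }
}

class Ui {
    private final RouteCalc calc;
    private final ManeuverGen gen;
    Ui(RouteCalc calc, ManeuverGen gen) { this.calc = calc; this.gen = gen; }

    void requestRoute(String from, String to) {
        gen.setNewRoute(calc.calculate(from, to));
    }
}
```

Note that nothing here says which of these calls may run concurrently; that is exactly the gap discussed next.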
However, the system will not yet work. The programmer is faced with a number of nontrivial decisions: Which of these components needs to be associated with an execution thread of its own? Which operations must be executed asynchronously? Where is synchronization going to be needed?
A human expert might reach the following conclusions.
- A thread is needed to read the GPS data.
- The UI, as usual, has its own event-handling thread. Calls on the UI need to pass their data to the event-handling thread via the UI event queue.
- The route calculation needs to run in a separate thread; otherwise, calls to calculateRoute will steal the UI event thread and cause the UI to become unresponsive.
- The maneuver generator class does not need a dedicated thread; however, its methods need to be synchronized, since its data is accessed by both the GPS thread and the thread doing route calculation.
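One possible Java realization of the conclusions above is sketched below: a dedicated GPS reader thread, a worker thread for route calculation so that the UI event thread is never blocked, and synchronized methods on the maneuver generator. The class names and the String stand-in for a route are illustrative assumptions.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class SyncManeuverGen {
    private String route;
    private double lat, lon;

    // Synchronized: this state is touched by both the GPS thread
    // and the route-calculation thread.
    synchronized void setNewRoute(String r) { route = r; }
    synchronized void updatePosition(double lat, double lon) {
        this.lat = lat;
        this.lon = lon;
    }
    synchronized String currentRoute() { return route; }
}

class GpsReader {
    private final Thread thread;

    GpsReader(SyncManeuverGen gen) {
        // Dedicated thread; the real version loops over the data stream.
        thread = new Thread(() -> gen.updatePosition(42.03, -93.62));
    }
    void start() { thread.start(); }
    void awaitDone() {
        try { thread.join(); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}

class NavUi {
    private final ExecutorService routeWorker = Executors.newSingleThreadExecutor();
    private final SyncManeuverGen gen;

    NavUi(SyncManeuverGen gen) { this.gen = gen; }

    // Run the expensive calculation off the event thread so the UI stays
    // responsive; the worker hands the result to the maneuver generator.
    void requestRoute(String from, String to) {
        routeWorker.submit(() -> gen.setNewRoute(from + " -> " + to));
    }

    void shutdown() {
        routeWorker.shutdown();
        try { routeWorker.awaitTermination(5, TimeUnit.SECONDS); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

Each individual piece is routine Java; the difficulty lies in deciding that this is the structure needed in the first place.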
None of the conclusions above, in itself, is difficult to implement in Java. Rather, in practice it is the process of visualizing the interactions between the components, in order to reach those conclusions, that is extremely challenging for programmers [2, 3]. For example, we recently gave the navigation system example to a class of 29 talented seniors. The students had just been exposed to roughly four weeks of instruction in concurrent programming, including detailed examples of all the issues raised in the bullet points above. Almost all students had previously taken an operating systems course, and roughly one-third of the students had had some prior experience using threads. Students were given the Java code in Figure 1, but with details of threading and synchronization removed except the thread for the GPS. Of the 29 students, 17 were able to identify the issue with the UI potentially blocking during a route calculation, 9 were able to recognize that there was data shared between threads in the ManeuverGen class, and only 7 were able to recognize the data sharing in the UI class. All in all, only two of the students described a fully correct solution. The outcome is consistent with the experience of one of the Panini designers when using a similar example in more than a dozen one-week workshops on concurrent programming for groups of telecom engineers.
Capsule-oriented programming is a programming paradigm that aims to ease the development of concurrent software systems by allowing abstraction from concurrency-related concerns. Capsule-oriented programming entails breaking down program logic into distinct parts called capsule declarations and composing these parts to form the complete program using a system declaration.
Compared to the explicitly concurrent Java program on the left in Figure 1, the Panini program on the right is an implicitly concurrent capsule-oriented program. Owing to the declarative nature of capsule-oriented programming features, this program is shorter than the explicitly concurrent program. Most importantly, this example illustrates some of the key advantages of capsule-oriented programming for programmers. These are:
- They don't need to specify whether a given capsule in a system needs, or could benefit from, its own thread of execution.
- They work within a familiar method-call style interface with a reasonable expectation of sequential consistency.
- All concurrency-related details are abstracted away and are fully transparent to them.
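In this style, the navigation system might be expressed along the following lines. This is an illustrative sketch of capsule and system declarations, not exact code from the guide; signatures are simplified and the Route, Position, and Request types are assumptions.

```
/* Illustrative sketch only; consult the language reference for exact syntax. */
capsule ManeuverGen(Ui ui) {
    void setNewRoute(Route r) { /* store the new route */ }
    void updatePosition(Position p) {
        /* check p against the route; send ui a turn instruction if needed */
    }
}

capsule RouteCalc(ManeuverGen m) {
    void calculate(Request req) {
        /* computationally intensive; the runtime decides how it executes */
    }
}

capsule GpsInterface(ManeuverGen m) {
    /* parses the GPS data stream and calls m.updatePosition */
}

system Navigation {
    Ui ui; RouteCalc rc; ManeuverGen mg; GpsInterface gps;
    ui(rc); rc(mg); gps(mg); mg(ui);
}
```

The programmer only declares the capsules and wires them together in the system declaration; decisions about threads and synchronization are left to the Panini compiler and runtime.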
1. Hridesh Rajan, Steven M. Kautz, Eric Lin, Sarah Kabala, Ganesha Upadhyaya, Yuheng Long, Rex Fernando, and Loránd Szakács. Capsule-oriented Programming. Tech. Report #13-01, Computer Science, Iowa State University, Feb 28, 2013.
2. ACM/IEEE-CS Joint Task Force. Computer science curricula 2013 (CS2013). Technical report, ACM/IEEE, 2012.
3. D. Meder, V. Pankratius, and W. F. Tichy. Parallelism in curricula: an international survey. Technical report, University of Karlsruhe, 2008.
Page last modified on 2013/08/03 14:04:23