


Following is a complete list of doctoral graduates of the Department of Computer Science, with their dissertation titles. Graduates of other departments or schools, whose primary adviser was a member of the Department of Computer Science, are also listed.


Advisors’ names are given in parentheses. Advisor affiliations are given if the advisor was not on the UNC Department of Computer Science faculty.


Clicking on the author’s name will take you to a separate page containing dissertation abstracts, where available.


UNC Libraries: Except where noted, all of the dissertations listed here are available from the libraries at UNC-Chapel Hill. Those from the most recent year will not immediately be available, however.

  • ProQuest database
  • Microfiche copies: Microforms Section of Davis Library (basement)
  • Bound copies: Kenan Science Library Annex
  • Non-circulating copies: North Carolina Collection, Wilson Library


Ph.D. Graduates


Abram, Gregory D. (1986)

“Parallel Image Generation with Anti-Aliasing and Texturing”
Under the direction of Henry Fuchs
Department of Computer Science Technical Report TR87-005
Electronic copy available

This dissertation explores the use of an analytic visibility algorithm for the high-speed generation of anti-aliased images with textures. When visibility is known analytically before any sampling takes place, low-pass filtering for anti-aliasing can be done to arbitrary accuracy at a cost proportional to the output resolution. Furthermore, since the filtering and sampling processes can be expressed as a set of integrations over the image plane, the filtering process can be decomposed into a set of sums of integrations over each visible surface in the image plane, allowing the rendering of each visible surface to be done in parallel using an image buffer to accumulate the results. Thus, analytic visibility can serve as the basis for high-speed, high-quality image synthesis. In this dissertation, algorithms for computing analytic visibility and for producing filtered renderings from the resulting visible surfaces are presented. In order to provide real-time performance, these algorithms have been designed for parallel execution on simple concurrent structures. Architectures based on these algorithms are presented and simulated to produce expected execution times on a set of test images.
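Because low-pass filtering is linear, the per-surface integrals described above can be computed independently and simply summed into a shared image buffer. A minimal sketch of that accumulation step (the array layout and function name are illustrative, not from the dissertation):

```python
def accumulate_surfaces(contributions, width, height):
    """Sum independently rendered per-surface contributions into one
    image buffer.  Linearity of the filter means the order of
    accumulation does not matter, which is what permits each visible
    surface to be rendered in parallel."""
    buf = [[0.0] * width for _ in range(height)]
    for surface in contributions:      # one 2-D array per visible surface
        for y in range(height):
            for x in range(width):
                buf[y][x] += surface[y][x]
    return buf
```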

Ackermann Jr., Arthur F. (1972)

“Toward A Programming Language for Writing and Checking Mathematical Discourses”
Under the direction of Donald F. Stanat

In section 1.1, I begin by hypothesizing some desirable features for an interactive computer system for mathematicians. The requirements for one such feature, proof-checking, are discussed in section 1.2. It is argued that the traditional systems of mathematical logic must be modified before they can provide a suitable basis for writing and checking mathematical discourses [Newsom] in a “natural” language. In chapter 2 an initial candidate for a suitable logical language is discussed and its syntax and computer processing requirements are specified. In appendix 3 a complete discourse on a finite geometry is developed in this language to illustrate its strengths and weaknesses. In addition to providing for higher-order variables with restricted domains, the language utilizes a generalization of the natural deduction notion of tautological consequence. This generalization permits the proof writer to make any inference which can be validated by considering the meaning of the terms used. Such a rule is called a “semantic inference” rule. Operationally, in a computer proof-checker an automatic theorem-prover would be used to validate a semantic inference by attempting to prove a corresponding theorem from stated axioms and implied definitions. A first-order resolution theorem-proving program, PROV, was obtained from John K. Dixon at the NIH Heuristics Laboratory and attempts were made to check a few of the simpler inferences made in the proofs in the finite geometry discourse. These results are reported in chapter 4, and a complete, annotated copy of PROV is given in appendix 4. The text of this dissertation was prepared using the text-editing program FORMAT [Ferns] in combination with SNOBOL4 [Griswold] programs for creating special graphics by overprinting [Ackermann1].

Ackerman, Jeremy D. (2002)

“Application of Augmented Reality to Laparoscopic Surgery”
Under the direction of Henry Fuchs
Electronic copy available

The usefulness and feasibility of an augmented reality visualization system for laparoscopic surgery are examined. This technology could enable physicians to see data, such as medical images, as well as the patient’s 3-D anatomy, in the context of the patient rather than viewing a 2-D video monitor. Three new results will be presented:

  1. A study whose chief findings were that augmented reality improves performance for medical students and that exposure to augmented reality may improve later performance under laparoscopic visualization for both medical students and practicing surgeons.
  2. An image processing method that can acquire three-dimensional structure inside the highly specular environment of the abdomen.
  3. A design for a high-speed depth-extracting laparoscope that can be miniaturized and made operating-room compatible.

Laparoscopic surgery is a minimally invasive abdominal surgery that is performed through small incisions. Surgeons view the procedure on a video monitor that shows a camera’s view from inside the abdomen through a small telescope inserted through an incision. Laparoscopy is associated with a loss of natural hand-eye coordination because the work is viewed indirectly on a video screen. Augmented reality may improve performance by restoring natural hand-eye coordination during this type of procedure. A study of 40 medical students and 8 surgeons was conducted to compare performance of a surgically relevant task under laparoscopic and simulated augmented reality visualization. The study uses simulated augmented reality, a technique in which an idealized augmented reality system is created using physical mock-ups, to explore the utility of future augmented reality systems. A proof-of-concept real-time range-acquiring laparoscope has been constructed. The system uses a high-speed implementation of structured light depth extraction with specialized hardware and algorithms. Results from preliminary studies on biological tissues show the effectiveness of this approach.
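Structured light depth extraction works by projecting known stripe patterns and decoding, per pixel, which projector column illuminated it; depth then follows by triangulation against the camera. A toy sketch of the decoding step, assuming a Gray-code stripe sequence (a common choice, but an assumption here; the high-speed pipeline above is far more involved):

```python
def gray_decode(bits):
    """Convert a per-pixel sequence of Gray-code stripe observations
    (most significant bit first, each 0 or 1) into a projector column
    index.  Standard Gray-to-binary: b_i = b_{i-1} XOR g_i."""
    binary = 0
    prev = 0
    for g in bits:
        prev ^= g
        binary = (binary << 1) | prev
    return binary
```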

Ahn, Ilsoo (1986)

“Performance Modeling and Access Methods for Temporal Database Management Systems”
Under the direction of Richard T. Snodgrass
Department of Computer Science Technical Report TR86-018
Electronic copy available

Conventional databases storing only the latest snapshot lack the capability to record and process time-varying aspects of the real world. The need for temporal support has been recognized for over ten years, and recent progress in secondary storage technology is making such support practically feasible. There are three distinct kinds of time in databases: transaction time, valid time, and user-defined time. Depending on the capability to support either or both of transaction time and valid time, databases are classified into four types: snapshot, rollback, historical, and temporal. Each of the four types has different semantics and different implementation issues. Database systems with temporal support maintain history data on-line together with current data, which causes problems in terms of both space and performance. This research investigates the temporally partitioned store to provide fast response for various temporal queries without penalizing conventional non-temporal queries. The current store holds current data and possibly some history data, while the history store contains the rest. The two stores can utilize different storage formats, and even different storage media, depending on the individual data characteristics. Various issues concerning the temporally partitioned store were studied, and several formats for the history store were investigated. To analyze the performance of TQuel queries on various access methods, four models forming a hierarchy were developed: one each for algebraic expressions, database/relations, access paths, and storage devices. The model of algebraic expressions maps the algebraic expression to the file primitive expression, based on information represented by the model of database/relations. The model of access paths maps the file primitive expression to the access path expression, which is converted to the access path cost by the model of storage devices.
As a test-bed to evaluate the access methods and the models, a prototype of a temporal DBMS was built by modifying a snapshot DBMS. The prototype was used to identify problems with conventional access methods and also to provide performance data to check the analysis results from the models. Reverse chaining, among the temporally partitioned storage structures, was incorporated in the prototype to enhance its performance.
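The division of labor between the current and history stores can be sketched as follows. This is a minimal illustration of the partitioning idea only (the class and method names are hypothetical, and the real design involves distinct storage formats and reverse chaining rather than a linear scan):

```python
class TemporallyPartitionedStore:
    """Current versions live in a fast in-memory map; superseded
    versions migrate to a history store tagged with their valid
    interval.  Non-temporal queries never touch the history store."""

    def __init__(self):
        self.current = {}   # key -> (value, start_time)
        self.history = []   # (key, value, start_time, end_time)

    def put(self, key, value, t):
        if key in self.current:
            old_val, start = self.current[key]
            self.history.append((key, old_val, start, t))
        self.current[key] = (value, t)

    def snapshot(self, key):
        return self.current[key][0]        # conventional query: one lookup

    def as_of(self, key, t):
        val, start = self.current.get(key, (None, None))
        if start is not None and start <= t:
            return val
        for k, v, s, e in reversed(self.history):   # temporal query
            if k == key and s <= t < e:
                return v
        return None
```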

Ahuja, Vijay (1976)

“Exposure of Routed Networks to Deadlock”
Under the direction of Victor L. Wallace

The deadlock problem is addressed for a class of systems called ‘routed networks’. A routed network is a store-and-forward communication network where all message transmissions follow predefined routes through the network. An approach is developed which permits the subgraphs of the network graph to be checked for the presence of a deadlock. The approach rests on the fact that a deadlock in a subgraph exists if and only if a complete weighted matching exists in an appropriately defined bipartite graph. An algorithm for determining whether a deadlock can occur is presented, and it is shown to be computationally attractive for reasonably sized networks. The algorithm is applied to resolve this question for some examples of routed networks adapted from the ARPA network.
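The matching test at the heart of this reduction can be sketched with a standard augmenting-path algorithm. This ignores the edge weights and the construction of the bipartite graph from the subgraph, both specific to the dissertation, and simply checks whether every left-side vertex can be matched:

```python
def has_complete_matching(adj, n_left, n_right):
    """Kuhn's augmenting-path algorithm: True iff every left vertex of
    the bipartite graph can be matched.  adj maps each left vertex to
    the list of right vertices it is connected to."""
    match_right = [-1] * n_right   # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its partner can be rematched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return all(try_augment(u, [False] * n_right) for u in range(n_left))
```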

Aikat, Jay (2010)

“An Investigation of the Effects of Modeling Application Workloads and Path Characteristics on Network Performance”
Under the direction of Kevin Jeffay
Electronic copy available

Network testbeds and simulators remain the dominant platforms for evaluating networking technologies today. Central to the problem of network emulation or simulation is the problem of modeling and generating realistic, synthetic Internet traffic, as the results of such experiments are valid only to the extent that the traffic generated to drive them accurately represents the traffic carried in real production networks. Modeling and generating realistic Internet traffic remains a complex and not well-understood problem in empirical networking research. When modeling production network traffic, researchers lack a clear understanding about which characteristics of the traffic must be modeled, and how these traffic characteristics affect the results of their experiments. In this dissertation, we developed and analyzed a spectrum of empirically-derived traffic models with varying degrees of realism. For TCP traffic, we examined several choices for modeling the internal structure of TCP connections (the pattern of request/response exchanges), and the round trip times of connections. Using measurements from two different production networks, we constructed nine different traffic models, each embodying different choices in the modeling space, and conducted extensive experiments to evaluate these choices on a 10Gbps laboratory testbed. As a result of this study, we demonstrate that the old adage of “garbage-in-garbage-out” applies to empirical networking research. We conclude that the structure of traffic driving an experiment significantly affects the results of the experiment, and we demonstrate this by showing the effects on four key network performance metrics: connection durations, response times, router queue lengths, and number of active connections in the network.

Airey, John M. (1990)

“Increasing Update Rates in the Building Walkthrough System with Automatic Model-Space Subdivision and Potentially Visible Set Calculations”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR90-027
Electronic copy available

Pre-processing some building models can radically reduce the number of polygons processed during interactive building walkthroughs. New model-space subdivision and potentially visible set (PVS) calculation techniques, used in combination, reduce the number of polygons processed in a real building model by an average factor of 30, and a worst case factor of at least 3.25. A method of recursive model-space subdivision using binary space partitioning is presented. Heuristics are developed to guide the choice of splitting planes. The spatial subdivisions resulting from binary space partitioning are called cells. Cells correspond roughly to rooms. An observer placed in a cell may see features exterior to the cell through transparent portions of the cell boundary called portals. Computing the polygonal definitions of the portals is cast as a set operation problem; an algorithm for computing set operations (union, intersection, and difference) on co-planar sets of polygons is presented with an emphasis on handling real-world data. Two different approaches to computing the PVS for a cell are explored. The first uses point sampling and has the advantage that it is easy to trade time for results, but has the disadvantage of under-estimating the PVS. The second approach is to analytically compute a conservative over-estimation of the PVS using techniques similar to analytical shadow computation. An implementation of the Radiosity lighting model is described along with the issues involved in combining it with the algorithms described in this dissertation.
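The recursive subdivision into cells can be illustrated in miniature. The sketch below splits a 2-D floor plan of wall segments at median midpoints along alternating axes, a deliberately simple stand-in for the splitting-plane heuristics the dissertation actually develops:

```python
def build_cells(walls, max_walls=4, axis=0):
    """Recursively partition 2-D wall segments ((x1,y1),(x2,y2)) into
    cells of at most max_walls segments, splitting at the median wall
    midpoint and alternating between the x- and y-axis."""
    if len(walls) <= max_walls:
        return [walls]
    mids = sorted((w[0][axis] + w[1][axis]) / 2 for w in walls)
    split = mids[len(mids) // 2]
    left = [w for w in walls if (w[0][axis] + w[1][axis]) / 2 < split]
    right = [w for w in walls if (w[0][axis] + w[1][axis]) / 2 >= split]
    if not left or not right:          # degenerate split: stop recursing
        return [walls]
    nxt = 1 - axis
    return build_cells(left, max_walls, nxt) + build_cells(right, max_walls, nxt)
```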

Alexander, Geoffrey D. (1995)

“Proving First-Order Equality Theorems with Hyper-Linking”
Under the direction of David A. Plaisted
Department of Computer Science Technical Report TR96-019
Electronic copy available

Lee and Plaisted recently developed a new automated theorem proving strategy called hyper-linking. As part of his dissertation, Lee developed a round-by-round implementation of the hyper-linking strategy, which competes well with other automated theorem provers on a wide range of theorem proving problems. However, Lee’s round-by-round implementation of hyper-linking is not particularly well suited for the addition of special methods in support of equality. In this dissertation, we describe, as an alternative to the round-by-round hyper-linking implementation of Lee, a smallest instance first implementation of hyper-linking which addresses many of the inefficiencies of round-by-round hyper-linking encountered when adding special methods in support of equality. Smallest instance first hyper-linking is based on the formalization of generating smallest clauses first, a heuristic widely used in other automated theorem provers. We prove both the soundness and logical completeness of smallest instance first hyper-linking and show that it always generates smallest clauses first under a large class of orderings on clauses. We discuss which of the refinements used by Lee in round-by-round hyper-linking are also applicable to smallest instance first hyper-linking. We add equality support to smallest instance first hyper-linking using unfailing completion, narrowing, and Brand’s equality transformation in a method we call unfailing completion equality support. We show the soundness and logical completeness of smallest instance first hyper-linking with unfailing completion equality support. We also show that Brand’s equality transformation is unnecessary for the case where all positive equational literals occur in equational Horn clauses. In particular, this means that for pure equational problems, smallest instance first hyper-linking with unfailing completion equality support simply reduces to an application of unfailing completion.
We conclude the dissertation by describing our preliminary implementation of smallest instance first hyper-linking. We present a comparison of our implementation with that of Lee’s on the 2,655 problems in the TPTP Problem Library of Sutcliffe and Suttner.
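The smallest-clauses-first heuristic underlying the implementation can be sketched with a priority queue keyed on clause size. Here `expand` is a hypothetical stand-in for the hyper-linking inference step, and clause "size" is just string length; both are illustrative assumptions:

```python
import heapq

def smallest_first(initial, expand, max_steps=100):
    """Process clause instances smallest-first: a min-heap keyed on
    clause size guarantees smaller instances are drawn before larger
    ones, the heuristic the dissertation formalizes."""
    heap = [(len(c), c) for c in initial]
    heapq.heapify(heap)
    processed = []
    while heap and len(processed) < max_steps:
        size, clause = heapq.heappop(heap)
        processed.append(clause)
        for new in expand(clause):       # inferences feed back into the queue
            heapq.heappush(heap, (len(new), new))
    return processed
```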

Aliaga, Daniel G. (1999)

“Automatically Reducing and Bounding Geometric Complexity by Using Images”
Under the direction of Anselmo A. Lastra
Electronic copy available

Large and complex 3D models are required for computer-aided design, architectural visualizations, flight simulation, and many types of virtual environments. Often, it is not possible to render all the geometry of these complex scenes at interactive rates, even with high-end computer graphics systems. This has led to extensive work in 3D model simplification methods. We have been investigating dynamically replacing portions of 3D models with images. This approach is similar to the old stage trick of draping cloth backgrounds in order to generate the illusion of presence in a scene too complex to actually construct on stage. An image (or backdrop), once created, can be rendered in time independent of the geometric complexity it represents. Consequently, significant frame rate increases can be achieved at the expense of storage space and visual quality. Properly handling these last two tradeoffs is crucial to a fast rendering system. In this dissertation, we present a preprocessing and run-time algorithm for creating a constant-frame-rate rendering system that replaces selected geometry with images enhanced with depth at each pixel. First, we describe the preprocessing algorithm to automatically bound the geometric complexity of arbitrary 3D models by using images. Second, we explore two general approaches to displaying images, surrounded by geometry, at run time. Third, we present a system tailored to architectural models.

Allen, B. Danette (2007)

“Hardware Design Optimization for Human Motion Tracking Systems”
Under the direction of Greg Welch
Electronic copy available

A key component of any interactive computer graphics application is the system for tracking user or input device motion. An accurate estimate of the position and/or orientation of the virtual world tracking targets is critical to effectively creating a convincing virtual experience. Tracking is one of the pillars upon which a virtual reality environment is built and it imposes a fundamental limit on how real the “reality” of Virtual Reality can be. Whether working on a new or modified tracking system, designers typically begin the design process with requirements for the working volume, the expected user motion, and the infrastructure. Considering these requirements they develop a candidate design that includes one or more tracking mediums (optical, acoustic, etc.), associated source/sensor devices (hardware), and an algorithm (software) for combining the information from the devices. They then simulate the candidate system to estimate the performance for some specific motion paths. Thus the predictions of such traditional simulations typically include the combined effect of hardware and algorithm choices, but only for the chosen motion paths. Before tracker algorithm selection, and irrespective of the motion paths, it is the choice and configuration of the source/sensor devices that are critical to performance. The global limitations imposed by these hardware design choices set a limit on the quantity and quality of the available information (signal) for a given system configuration, and they do so in complex and sometimes unexpected ways. This complexity often makes it difficult for designers to predict or develop intuition about the expected performance impact of adding, removing, or moving source/sensor devices, changing the device parameters, etc. This research introduces a stochastic framework for evaluating and comparing the expected performance of sensing systems for interactive computer graphics. 
Incorporating models of the sensor devices and expected user motion dynamics, this framework enables complementary system- and measurement-level hardware information optimization, independent of algorithm and motion paths. The approach for system-level optimization is to estimate the asymptotic position and/or orientation uncertainty at many points throughout a desired working volume or surface, and to visualize the results graphically. This global performance estimation can provide both a quantitative assessment of the expected performance and intuition about how to improve the type and arrangement of sources and sensors, in the context of the desired working volume and expected scene dynamics. Using the same model components required for this system-level optimization, the optimal sensor sampling time can be determined with respect to the expected scene dynamics for measurement-level optimization. Also presented is an experimental evaluation to support the verification of asymptotic analysis of tracking system hardware design, along with theoretical analysis aimed at supporting the validity of both the system- and measurement-level optimization methods. In addition, a case study in which both the system- and measurement-level optimization methods are applied to a working tracking system is presented. Finally, Artemis, a software tool for amplifying human intuition and experience in tracking hardware design, is introduced. Artemis implements the system-level optimization framework with a visualization component for insight into hardware design choices. Like fluid flow dynamics, Artemis examines and visualizes the “information flow” of the source and sensor devices in a tracking system, affording interaction with the modeled devices and the resulting performance uncertainty estimate.

Amburn, Elton P. (1994)

“Development and Evaluation of an Air-to-Air Combat Debriefing System Using a Head-Mounted Display”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR94-068
Electronic copy available

The United States Air Force Red Flag exercise is the premier training experience for fighter pilots. An instrumented range to the north of Nellis AFB, Nevada provides information about aircraft in a Red Flag exercise. The Red Flag Measurement and Debriefing System transmits messages on the high-activity aircraft at a rate of 10 messages per second. These messages contain data such as position, orientation, pilot actions, and aerodynamic variables. This research created and evaluated a computer system for replay and evaluation of Red Flag air-to-air combat training data. This system can display the air combat data either on a workstation console (a 19-inch CRT) or in a head-mounted display (HMD). A human-performance experiment compared the effectiveness of replaying air combat data using these two display options. Experienced fighter pilots from the 422nd Test and Evaluation Squadron, 57th Test Group, Nellis Air Force Base, Nevada served as subjects for the experiment. Using a computer system to replay and evaluate this type of data is not new; however, using a head-mounted display and evaluating its effectiveness is new. Quantitative and qualitative data were collected in the human-performance experiment. The quantitative data were collected from probe questions asked during mission replay. The answers to probe questions were used to compute sensitivity as a measure of the effectiveness of the display types. There was no statistically significant difference between the sensitivity of a subject when using the HMD or the 19-inch CRT. However, there was a trend in the subject’s performance which favored the HMD. The qualitative data were quite clear and showed a statistically significant difference in the preference of the 19-inch CRT over the HMD.

Antani, Lakulish (2013)

“Interactive Sound Propagation Using Precomputation and Statistical Approximations”
Under the direction of Dinesh Manocha

Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques, Ambient Reverberance and Aural Proxies, to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of the space of techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.
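The precompute-then-apply structure of the first approach can be illustrated in one line of linear algebra: the expensive inter-object interactions are baked into a transfer matrix offline, so the per-frame cost reduces to a matrix-vector product. A deliberately simplified sketch (the actual Precomputed Acoustic Radiance Transfer operator is far richer than this dense matrix):

```python
def apply_transfer(T, source):
    """Run-time step: map per-source energy to per-listener-sample
    energy through the precomputed transfer matrix T.  Cost per frame
    is O(rows * cols), independent of scene complexity."""
    return [sum(t * s for t, s in zip(row, source)) for row in T]
```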

Arthur, Kevin (2000)

“Effects of Field of View on Performance with Head-Mounted Displays”
Under the direction of Frederick P. Brooks Jr.
Department of Computer Science Technical Report TR00-019
Electronic copy available

The field of view (FOV) in most head-mounted displays (HMDs) is no more than 60 degrees wide–far narrower than our normal FOV of about 200 degrees wide. This mismatch arises mostly from the difficulty and expense of building wide-FOV HMDs. Restricting a person’s FOV, however, has been shown in real environments to affect people’s behavior and degrade task performance. Previous work in virtual reality too has shown that restricting FOV to 50 degrees or less in an HMD can degrade performance. I conducted experiments with a custom, wide-FOV HMD and found that performance is degraded even at the relatively high FOV of 112 degrees, and further at 48 degrees. The experiments used a prototype tiled wide-FOV HMD to measure performance in VR at up to 176 degrees total horizontal FOV, and a custom large-area tracking system to establish new findings on performance while walking about a large virtual environment. FOV was significant in predicting performance of two tasks: searching for and locating a target by turning one’s head, and walking through a simple maze-like environment while avoiding walls. Wide FOV (112 degrees or greater) was especially important for the walking task; for it, performance at 112 degrees was 23% less than at 176 degrees. At 48 degrees, performance was 31% less than at 176 degrees. For the search task, performance at 112 degrees was 12% less than at 176 degrees. At 48 degrees, performance was 24% less than at 176 degrees. Additional analyses of the data show trends that suggest future investigation. Restricting FOV appears to decrease the user’s sense of presence, as measured by a questionnaire. VR sickness, also measured by questionnaire, increased with successive exposures to our system within an hour-long session, but stayed at relatively low levels. FOV appears to alter the occurrence of some sickness symptoms, but the data are inconclusive on whether FOV predicts total sickness. 
I performed additional measures and analyses, including tests of postural instability, distance memory, spatial memory, head-movement behavior, and comparisons with other HMDs and with real-world performance.

Austin Jr., Joseph H. (1973)

“Formal Models of Binding Processes in Control Programs”
Under the direction of Victor L. Wallace

Formal models of control programs are developed for two purposes: (1) to provide a rigorous definition of the concept of a control program, and (2) to formalize the semantics of control programs as a basis for control program programming languages. It is shown that operating system semantics can be modeled as a rewriting system over graphs. The states of the system, understood as configurations of resources, are modeled by graphs, and the operations or events of the system, understood as changes of the configuration, are modeled by rewriting rules on the graphs. Applications of the model to major areas of concern to control program design are then discussed:

  1. Hierarchic system design is expressed as (a) elaboration and abstraction of a model and (b) as a relation between two models whereby one is said to be the realization of the other.
  2. Synchronization, sharing, and multiplexing are modeled in terms of the macro-concept of a generalized queue (ordered set) and queuing discipline. Thus the concept of macro-definitions within the model is shown to greatly simplify the description of typical control programs.
  3. Protection and “interrupt” handling are shown to be easily described by the model.

Finally, the model is related to the Petri-net model of asynchronous systems. It is suggested by informal illustration that extension of the Petri-net model to include flow of resources can provide an intuitive overview of the interrelationships of the configuration transitions. It is concluded that the models provide: (a) a rationally-derived definition of the control program that distinguishes it from other types of programs, (b) a syntactic representation of control program semantics suitable as a basis for a table-driven interpreter, and (c) a methodology for control program design, based on hierarchic elaboration and the resource-flow view of interrelated concurrent processes.

Aylward, Stephen R. (1997)

“Continuous Mixture Modeling via Goodness-of-Fit Cores”
Under the direction of James M. Coggins
Electronic copy available

This dissertation introduces a new technique for the automated specification of continuous Gaussian mixture models (CGMMs). This approach is ideal for representing distributions that arise in a variety of problems including the driving problem of this dissertation, representing the distribution of the intensities associated with tissues in medical images. Gaussian mixture models are population density representations formed by the weighted linear combination of multiple, multivariate Gaussian component distributions. Finite Gaussian mixture models are formed when a finite number of discrete component distributions are used. CGMMs are formed when multiple continua of component distributions are used. The approach to CGMM specification developed here is based on an original structure, the Gaussian goodness-of-fit (GGoF) core. GGoF functions quantify how well a Gaussian having a particular mean and covariance represents the local distribution of samples. GGoF cores capture the continua of Gaussians which well represent a sampled population; they define a CGMM. In this dissertation, Monte Carlo simulations are used to evaluate the robustness of a variety of GGoF functions and binning methods for the specification of GGoF cores. The log likelihood ratio GGoF function is shown to produce the most accurate and consistent GGoF cores. In generalized projective Gaussian distributions, similar feature vectors, when projected onto a subset of basis feature vectors, have a Gaussian distribution. In this dissertation, Monte Carlo simulations and ROC analysis are used to demonstrate that for such distributions CGMMs defined using GGoF cores produce accurate and consistent labelings compared to K-means and finite Gaussian mixture models. Additionally, CGMMs do not suffer from the problems associated with determining an appropriate number of components, initializing the component parameters, or iteratively converging to a solution.
Generalized projective Gaussian distributions exist in a variety of real-world data. The applicability of CGMM via GGoF cores to real-world data is demonstrated through the accurate modeling of tissues in an inhomogeneous magnetic resonance image. The extension of CGMM via GGoF cores to high-dimensional data is demonstrated through the accurate modeling of a sampled trivariate anisotropic generalized projective Gaussian distribution.

Azuma, Ronald T. (1995)

“Predictive Tracking for Augmented Reality”
Under the direction of Gary Bishop
Department of Computer Science Technical Report TR95-007
Electronic copy available

In Augmented Reality systems, see-through Head-Mounted Displays (HMDs) superimpose virtual three-dimensional objects on the real world. This technology has the potential to enhance a user’s perception of and interaction with the real world. However, many Augmented Reality applications will not be accepted unless virtual objects are accurately registered with their real counterparts. Good registration is difficult, because of the high resolution of the human visual system and its sensitivity to small differences. Registration errors fall into two categories: static errors, which occur even when the user remains still, and dynamic errors caused by system delays when the user moves. Dynamic errors are usually the largest errors. This dissertation demonstrates that predicting future head locations is an effective approach for significantly reducing dynamic errors. This demonstration is performed in real time with an operational Augmented Reality system. First, evaluating the effect of prediction requires robust static registration. Therefore, this system uses a custom optoelectronic head-tracking system and three calibration procedures developed to measure the viewing parameters. Second, the system predicts future head positions and orientations with the aid of inertial sensors. Effective use of these sensors requires accurate estimation of the varying prediction intervals, optimization techniques for determining parameters, and a system built to support real-time processes. On average, prediction with inertial sensors is 2 to 3 times more accurate than prediction without inertial sensors and 5 to 10 times more accurate than not doing any prediction at all. Prediction is most effective at short prediction intervals, empirically determined to be about 80 milliseconds or less. An analysis of the predictor in the frequency domain shows the predictor magnifies the signal by roughly the square of the product of the angular frequency and the prediction interval.
For specified head-motion sequences and prediction intervals, this analytical framework can also estimate the maximum possible time-domain error and the maximum tolerable system delay given a specified maximum time-domain error. Future steps that may further improve registration are discussed.
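The benefit of inertial-aided prediction described above can be illustrated with a small sketch. This is a hypothetical one-dimensional dead-reckoning predictor, not the dissertation's actual estimator; the sinusoidal motion parameters and 80 ms interval are chosen only for illustration.

```python
import math

def predict_orientation(theta, omega, alpha, dt):
    """Dead-reckon a 1-D head angle dt seconds ahead using inertial
    measurements: angular rate omega and angular acceleration alpha."""
    return theta + omega * dt + 0.5 * alpha * dt * dt

# Sinusoidal head motion theta(t) = A*sin(w*t): 1 Hz, 80 ms interval.
# For such motion the residual of a non-inertial predictor grows roughly
# as (angular frequency x prediction interval)^2, the behavior noted in
# the frequency-domain analysis above.
A, w, dt = 1.0, 2 * math.pi, 0.08
t = 0.3
theta = A * math.sin(w * t)
omega = A * w * math.cos(w * t)
alpha = -A * w * w * math.sin(w * t)

actual = A * math.sin(w * (t + dt))
pred_inertial = predict_orientation(theta, omega, alpha, dt)
pred_none = theta                      # "not doing any prediction at all"
print(abs(actual - pred_inertial), abs(actual - pred_none))
```

On this toy motion the inertial predictor's error is well over an order of magnitude smaller than using the current pose unchanged, consistent with the 5-to-10-times improvement the abstract reports for real head motion.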


Babich, Wayne A. (1977)

“High Level Data Flow Analysis Using a Parse Tree Representation of the Program”
Under the direction of Mehdi Jazayeri

Data flow analysis is a technique for collecting information about the flow of data within computer programs. It is used by optimizing compilers in order to ensure the correctness of optimizations as well as by certain program debugging aids. This dissertation introduces an iterative technique, called the method of attributes, for performing global data flow analysis using a parse tree representation of the source program. The technique is based on a variant of attribute grammars, and permits most control flow structures found in modern algorithmic languages, including unconstrained GOTO’s. It is applied to analysis of live variables and to analysis of available expressions; in both cases, time required for analysis of GOTO-less programs is linear in program size (with a small constant of linearity), regardless of the nesting depth of program loops. This dissertation also describes the importance of producing data flow analysis information “on demand.” The demand problem is the problem of supplying a single piece of data flow information upon request from an optimizer or other processor. The method of attributes is applied to demand analysis of live variables. KEY PHRASES: Data flow analysis; Live variables; Dead variables; Available expressions; Attribute grammars; Demand analysis; Method of attributes; Program optimization; Program annotation. Computing Reviews Categories 4.12, 5.24
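For readers unfamiliar with live-variable analysis, the following sketch shows the classical iterative formulation over a control-flow graph. It is a generic textbook fixed-point computation, not Babich's parse-tree method of attributes (which avoids general CFG iteration for GOTO-less programs); the three-block example program is invented for illustration.

```python
def live_variables(blocks, succ):
    """blocks: name -> (use set, def set); succ: name -> successor list.
    Iterates live-in/live-out sets to a fixed point and returns live-in."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, (use, defs) in blocks.items():
            # A variable is live out of b if live into any successor.
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            # Live into b: used in b, or live out and not defined in b.
            new_in = use | (out - defs)
            if new_in != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = new_in, out
                changed = True
    return live_in

# b1: x = input();  b2: y = x + 1;  b3: print(y)
blocks = {"b1": (set(), {"x"}), "b2": ({"x"}, {"y"}), "b3": ({"y"}, set())}
succ = {"b1": ["b2"], "b2": ["b3"], "b3": []}
result = live_variables(blocks, succ)
print(result)
```

Here `x` is live entering b2 and `y` live entering b3, while nothing is live entering b1; the method of attributes computes the same information by propagating attributes over the parse tree instead.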

Bajura, Michael A. (1997)

“Merging Real and Virtual Environments with Video See-Through Head-Mounted Displays”
Under the direction of Henry Fuchs
Department of Computer Science Technical Report TR98-036
Electronic copy available

This dissertation explores the problem of inserting virtual (computer generated) objects into natural scenes in video-based augmented-reality (AR) systems. The augmented-reality system problems addressed are the shorter-term goal of making synthetic objects appear to be more stable and registered and the longer-term goal of presenting proper occlusion and interaction cues between real and synthetic objects. These results can be achieved in video-based AR systems by directly processing video images of the natural scene in real-time. No additional hardware or additional tracking mechanisms are needed above those currently used in augmented-reality systems. The implications of this work suggest a different approach to the design of AR tracking systems in which the user’s world view is used as part of the tracking process. This work shows that tracking system errors which affect the user’s world view can be corrected by measuring these errors dynamically and correcting for them in a way which is not readily apparent or objectionable to the user. This means that cheaper tracking systems could be used, or possibly even no separate tracking system at all in certain situations. Furthermore, the user’s world view can be used to construct a 3-D model of his environment which can be used to model object interaction and occlusion relations. This is the first step toward further work of properly shading and lighting synthetic objects inserted into natural scenes. A real-time (30 Hz) AR system is demonstrated which uses simple image feature tracking to improve registration and tracking accuracy. This is extended to a more general system which illustrates how reconstruction of natural scene geometry, correct synthetic and real object occlusion, and collision detection could be achieved in real-time. 3-D error analysis is included. A camera calibration appendix contains accuracy measurements.

Banks, David C. (1993)

“Interacting with Surfaces in Four Dimensions Using Computer Graphics”
Under the direction of Stephen M. Pizer
Department of Computer Science Technical Report TR93-011
Electronic copy available

High-speed, high-quality computer graphics enables a user to interactively manipulate surfaces in four dimensions and see them on a computer screen. Surfaces in 4-space exhibit properties that are prohibited in 3-space. For example, nonorientable surfaces may be free of self-intersections in 4-space. Can a user actually make sense of the shapes of surfaces in a larger-dimensional space than the familiar 3D world? Experiments show that they can. A prototype system called Fourphront, running on the graphics engine Pixel-Planes 5 (developed at UNC-Chapel Hill), allows the user to perform interactive algorithms in order to determine some of the properties of a surface in 4-space. This dissertation describes solutions to several problems associated with manipulating surfaces in 4-space. It shows how the user in 3-space can control a surface in 4-space in an intuitive way. It describes how to extend the common illumination models to large numbers of dimensions. And it presents visualization techniques for conveying 4D depth information, for calculating intersections, and for calculating silhouettes.

Bartel, Jacob (2015)

“Predictions to Ease User’s Effort in Scalable Sharing”
Under the direction of Prasun Dewan

Significant user effort is required to choose recipients of shared information, and this effort grows with the number of potential or target recipients. It is our thesis that it is possible to develop new approaches to predict persistent named groups, ephemeral groups, and response times that will reduce user effort.

We predict persistent named groups using the insight that implicit social graphs inferred from messages can be composed with existing prediction techniques designed for explicit social graphs, thereby demonstrating similar grouping patterns in email and communities. However, this approach still requires that users know when to generate such predictions. We predict group creation times based on the intuition that bursts of change in the social graph likely signal named group creation.  While these recommendations can help create new groups, they do not update existing ones. We predict how existing named groups should evolve based on the insight that the growth rates of named groups and the underlying social graph will match.

When appropriate named groups do not exist, it is useful to predict ephemeral groups of information recipients. We have developed an approach to make hierarchical recipient recommendations that groups the elements in a flat list of recommended recipients, and thus is composable with existing flat recipient-recommendation techniques. It is based on the insight that groups of recipients in past messages can be organized in a tree.

To help users select among alternative sets of recipients, we have made predictions about the scale of response time of shared information, based on the insights that messages addressed to similar recipients or containing similar titles will yield similar response times.

Our prediction approaches have been applied to three specific systems – email, Usenet and Stack Overflow – based on the insight that email recipients correspond to Stack Overflow tags and Usenet newsgroups.

We evaluated these approaches with actual user data using new metrics for measuring the differences in scale between predicted and actual response times and measuring the costs of eliminating spurious named-group predictions, editing named-group recommendations for use in future messages, scanning and selecting hierarchical ephemeral group-recommendations, and manually entering recipients.

Bastos, Rui (1999)

“Superposition Rendering: Increased Realism for Interactive Walkthrough”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR00-021
Electronic copy available

The light transport equation, conventionally known as the rendering equation in a slightly different form, is an implicit integral equation, which represents the interactions of light with matter and the distribution of light in a scene. This research describes a signals-and-systems approach to light transport and casts the light transport equation in terms of convolution. Additionally, the light transport problem is linearly decomposed into simpler problems with simpler solutions, which are then recombined to approximate the full solution. The central goal is to provide interactive photorealistic rendering of virtual environments. We show how the light transport problem can be cast in terms of signals-and-systems. The light is the signal and the materials are the systems. The outgoing light from a light transfer at a surface point is given by convolving the incoming light with the material’s impulse response (the material’s BRDF/BTDF). Even though the theoretical approach is presented in directional-space, we present an approximation in screen-space, which enables the exploitation of graphics hardware convolution for approximating the light transport equation. The convolution approach to light transport is not enough to fully solve the light transport problem at interactive rates with current machines. We decompose the light transport problem into simpler problems. The decomposition of the light transport problem is based on distinct characteristics of different parts of the problem: the ideally diffuse, the ideally specular, and the glossy transfers. A technique for interactive rendering of each of these components is presented as well as a technique for superposing the independent components in a multipass manner in real time. Given the extensive use of the superposition principle in this research, we name our approach superposition rendering to distinguish it from other standard hardware-aided multipass rendering approaches.
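The signals-and-systems view above can be sketched in one dimension: outgoing light is the incoming light convolved with a material "impulse response" standing in for the BRDF. The kernels below are invented for illustration (and assumed symmetric, so correlation equals convolution); the dissertation's actual approximation operates in screen-space with hardware convolution.

```python
def convolve(signal, kernel):
    """Discrete 1-D convolution with a symmetric, centered kernel;
    samples outside the signal are treated as zero."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

incoming = [0, 0, 1, 0, 0]        # light from a single bright direction
glossy = [0.25, 0.5, 0.25]        # blurring response: a glossy transfer
mirror = [0, 1, 0]                # identity response: ideal specular transfer
print(convolve(incoming, glossy))
print(convolve(incoming, mirror))
```

The glossy kernel spreads the incoming impulse into a blurred lobe, while the identity kernel passes it through unchanged, mirroring the decomposition into glossy and ideally specular transfers described above.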

Bellovin, Steven M. (1982)

“Verifiably Correct Code Generation Using Predicate Transformers”
Under the direction of David L. Parnas

For about fifteen years, computer scientists have known how to prove that programs meet mathematical specifications. However, the methodology has satisfied only theoreticians; “practical” programmers have tended to reject it for a variety of reasons. Among these are the difficulty of rigorously defining the function of a program, and the fact that the proofs of correctness tend to be much longer and much more complex than the programs themselves. We have tackled this problem from two sides. First, we deal with proving compilers correct, a special class of programs for which there exists a rather simple specification of purpose. Second, although we would prefer to have programs which are guaranteed to be correct, we will accept a program where we can easily determine if the output of any given run is correct. People will generally be satisfied with a program that usually runs properly, but will always produce a message if it has not. That way, one can have a great deal of confidence in its output. Working with a compiler for Dijkstra’s language, as defined in A Discipline of Programming, and using predicate transformers for our semantic definitions, we have presented a methodology for showing that individual compilations are correct. This involves verifying formally the basic code templates and many library subroutines, and presenting a set of rules that allows one to perform a pattern match between the generated code and the standard templates. This match, which is fairly easy to perform and could in fact be done by a separate program, is our guarantee that the output of the compiler is correct. The set of rules we present includes a few restrictions on the compiler, most notably that the object code for a given statement must be uncoupled from the code for any other statement. In general, non-optimizing compilers are not affected by any of our restrictions; our DPL compiler, which was developed independently, was not changed in any way to meet our requirements.

Bennett, Eric P. (2007)

“Computational Video Enhancement”
Under the direction of Leonard McMillan
Electronic copy available

During a video, each scene element is often imaged many times by the sensor. I propose that by combining information from each captured frame throughout the video it is possible to enhance the entire video. This concept is the basis of computational video enhancement. In this dissertation, the viability of computational video processing is explored in addition to presenting applications where this processing method can be leveraged. Spatio-temporal volumes are employed as a framework for efficient computational video processing, and I extend them by introducing sheared volumes. Shearing provides spatial frame warping for alignment between frames, allowing temporally-adjacent samples to be processed using traditional editing and filtering approaches. An efficient filter-graph framework is presented to support this processing along with a prototype video editing and manipulation tool utilizing that framework. To demonstrate the integration of samples from multiple frames, I present methods for improving poorly exposed low-light videos. This integration is guided by a tone-mapping process to determine spatially-varying optimal exposures and an adaptive spatio-temporal filter to integrate the samples. Low-light video enhancement is also addressed in the multispectral domain by combining visible and infrared samples. This is facilitated by the use of a novel multispectral edge-preserving filter to enhance only the visible spectrum video. Finally, the temporal characteristics of videos are altered by a computational video recompiling process. By resampling the video-rate footage, novel time-lapse sequences are found that optimize for user-specified characteristics. The resulting shorter video is a more faithful summary of the original source than a traditional time-lapse video. Simultaneously, new synthetic exposures are generated to alter the output video’s aliasing characteristics.

Bentley, Jon L. (1976)

“Divide and Conquer Algorithms for Closest Point Problems in Multidimensional Space”
Under the direction of Donald F. Stanat

The contributions contained in this dissertation can be broadly classified as falling into three areas: multidimensional algorithms, the “divide and conquer” strategy, and principles of algorithm design. Contributions to multidimensional algorithms are twofold: basic results and basic methods. The results deal with algorithms for determining properties of sets of N points in k dimensional space. Among other results it is shown that the closest pair among the N points can be found in time proportional to N log N and that the nearest neighbor to each point among the N can be found in time proportional to N(log N)^(k-1) (for k > 2). The basic methods include an algorithm schema applicable to many multidimensional problems and fundamental concepts for dealing with such problems. The algorithms in this dissertation demonstrate the power of the divide and conquer strategy. The strategy is shown to be applicable to multidimensional problems. The basic technique is modified in many interesting ways to create faster algorithms. The final area to which this dissertation contributes is algorithm design. Instead of merely presenting the results herein as finished products, they are arrived at through a detailed development process. This development is one of the few written records of the development of an asymptotically fast algorithm; as such it is a suitable basis for teaching algorithm design. The general principles of algorithm design employed are enumerated.
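The N log N closest-pair result mentioned above is now a textbook example of the divide and conquer strategy; the following sketch shows the planar (k = 2) case. It is the standard formulation, not the dissertation's own code, and the sample points are invented for illustration.

```python
import math

def closest_pair(points):
    """Divide-and-conquer closest pair in the plane, O(N log N) overall."""
    pts = sorted(points)                      # sort once by x-coordinate

    def solve(p):
        if len(p) <= 3:                       # brute-force small subproblems
            return min(math.dist(a, b)
                       for i, a in enumerate(p) for b in p[i + 1:])
        mid = len(p) // 2
        midx = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))
        # Combine step: only points within d of the dividing line can
        # form a closer pair; in the strip, sorted by y, each point needs
        # only a constant number of comparisons.
        strip = sorted((q for q in p if abs(q[0] - midx) < d),
                       key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:i + 8]:
                d = min(d, math.dist(a, b))
        return d

    return solve(pts)

print(closest_pair([(0, 0), (3, 4), (1, 1), (7, 2)]))
```

The recurrence T(N) = 2T(N/2) + O(N) from the split and strip check gives the N log N bound; the dissertation's schema generalizes this combine step to k dimensions.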

Bergman, Lawrence D. (1993)

“VIEW–A System for Prototyping Scientific Visualizations”
Under the direction of Frederick P. Brooks, Jr.
Department of Computer Science Technical Report TR93-065
Electronic copy available

Different visual representations emphasize different aspects of scientific information. As a scientist searches for representations that yield the most profound insights, she wants to try a variety of representations quickly and easily. Current visualization systems emphasize the rapid application of visual representations to new datasets, rather than the rapid development of new visual representations. The dissertation addresses the question, “How can we facilitate exploratory visualization, particularly creation of new representations?” A design-process-based model is proposed. Based on the principles suggested by this model, VIEW, an interactive molecular visualization system, was developed. The VIEW system provides a two-pronged solution to the proposed problem. First, the system supplies a library of drawing tools. These tools allow the user to sketch geometric representations, specifying database parameters by interactively selecting geometric elements on-screen. Besides generating geometry, drawing tools modify color, change geometric parameters, reposition geometry, and animate groups. Second, a facility for developing new drawing tools or modifying existing ones allows the user to extend the library. Tools are specified in a C-like script language. With this language, the user defines new representations and customizes the interaction style of the tools. The system is designed to facilitate a rapid code-try-recode cycle. In addition to the interpreted language, a development environment includes a Macintosh-like editor and an interactive debugger that shows in real time the visual effects of changes to tool script. The dissertation presents the design model, describes the VIEW system, and then demonstrates the utility of this design-process-based visualization system through a series of case-studies.

Biagioni, Edoardo S. (1992)

“Scan Directed Load Balancing”
Under the direction of Gyula A. Mago and Jan F. Prins
Department of Computer Science Technical Report TR91-045
Electronic copy available

Scan Directed Load Balancing is an algorithm for dynamic load balancing on k-dimensional mesh-connected computers. In one dimension, a scan computes the load to the left of each processor and the total load. The difference between the load to the left and the average load times the number of processors to the left is the number of active data points to be shifted to balance the load. In two dimensions, scans are used to compute the load of each column of data and the load to the left of each column of processors. The difference between the load to the left and the average load times the number of processor columns to the left is the load of the data columns to be shifted by this column of processors; the computation is repeated for the rows. In more than two dimensions, the load is computed for (hyper-) planes of data and is balanced by shifting data (hyper-) planes among processor (hyper-) planes; the basic one-dimensional step of the algorithm is repeated k times. The algorithm targets applications that can be expressed as local computations on a grid and whose dynamic activity is unevenly distributed. In one such computation, Variable Conductance Diffusion, different pixels diffuse for different numbers of iterations, with those that require many iterations unevenly distributed and, in the absence of load balancing, often located on a small subset of the processors. Scan Directed Load Balancing is effective in speeding up Variable Conductance Diffusion for a class of images (512 X 512 pixel), and very effective for selected images in which only a small fraction of the pixels requires many iterations. This implementation, and that for a parallel interpreter for the functional language FFP that uses Scan Directed Load Balancing to both manage storage and balance load, are for UNC’s MasPar MP-1, with 4096 processors embedded in a 2-dimensional mesh. 
Scan Directed Load Balancing does not guarantee optimal load balance, but always assigns neighboring data to the same or to neighboring processors, so communication between neighboring data is local and fast.
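The one-dimensional balancing step described above reduces to a prefix-sum computation. The sketch below follows the stated rule (load to the left minus average load times processors to the left); the function name and the four-processor example are illustrative, not from the dissertation.

```python
def scan_balance_1d(loads):
    """For each processor i, compute the number of data points that must
    cross the boundary to its left (positive = rightward flow) so that
    every processor ends up with the average load."""
    total = sum(loads)
    n = len(loads)
    prefix = 0                    # load to the left of processor i (a scan)
    shifts = []
    for i, load in enumerate(loads):
        shifts.append(prefix - (total / n) * i)
        prefix += load
    return shifts

# Four processors with uneven load (average 4): the excess on processor 0
# flows right across the first two boundaries until all loads equal 4.
print(scan_balance_1d([10, 2, 0, 4]))
```

Because each processor exchanges data only across its own boundaries, neighboring data stays on the same or neighboring processors, which is the locality property noted above; the two-dimensional algorithm applies this same step to columns and then rows.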

Bishop, T. Gary (1984)

“Self-Tracker: A Smart Optical Sensor on Silicon”
Under the direction of Henry Fuchs
Department of Computer Science Technical Report TR84-002
Electronic copy available

A new system for real-time, three-dimensional computer input is described. The system will use a cluster of identical custom integrated circuits with outward looking lenses as an optical sensing device. Each custom integrated sensor chip measures and reports the shift in its one-dimensional image of the stationary room environment. These shifts will be processed by a separate general-purpose computer to extract the three-dimensional motion of the cluster. The expected advantages of this new approach are unrestricted user motion, large operating environments, capability for simultaneous tracking of several users, passive tracking with no moving parts, and freedom from electromagnetic interference. The fundamental concept for the design of the sensor chip relies on a cyclic relationship between speed and simplicity. If the frame rate is fast, the changes from image to image will be small. Small changes can be tracked with a simple algorithm. This simple algorithm can be implemented with small circuitry. The small circuitry lets a single chip hold the complete sensor, both imaging and image processing. Such implementation allows each sensor to be fast because all high-bandwidth communication is done on-chip. This cyclic relationship can spiral up as the design is iterated, with faster and simpler operation, or down, with slower and more complex operation. The system design sequence described here has been spiraling up. System, sensor, algorithm, and processor designs have each had several iterations. Most recently, a prototype sensor chip has been designed, fabricated, and tested. The prototype includes a one-dimensional camera and circuitry for image tracking that operates at 1000 to 4000 frames per second in ordinary room light. As part of this research, photosensors that operate at millisecond rates in ordinary room light with modest lenses have been designed, tested and fabricated on standard digital nMOS lines. 
They may be useful for designers of other integrated optical sensors.
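The sensor chip's core task, measuring the shift of a one-dimensional image between frames, can be sketched as a small correlation search. This is a toy software analogue of the on-chip tracking, with invented image data; the chip's actual circuitry exploits the fact that inter-frame shifts are small at high frame rates.

```python
def estimate_shift(prev, curr, max_shift=3):
    """Estimate the offset between two 1-D images by maximizing the
    correlation score over a small window of candidate shifts."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(prev[i] * curr[i + s]
                    for i in range(len(prev))
                    if 0 <= i + s < len(curr))
        if score > best_score:
            best, best_score = s, score
    return best

frame1 = [0, 1, 5, 1, 0, 0, 0]
frame2 = [0, 0, 0, 1, 5, 1, 0]    # the same scene, shifted right by 2
print(estimate_shift(frame1, frame2))
```

Keeping `max_shift` small is exactly the simplicity the cyclic argument above relies on: fast frames mean small shifts, small shifts mean a small search, and a small search fits on the chip.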

Block, Aaron (2008)

“Adaptive Multiprocessor Real-Time Systems”
Under the direction of James Anderson
Electronic copy available

Over the past few years, as multicore technology has become cost-effective, multiprocessor systems have become increasingly prevalent. The growing availability of such systems has spurred the development of computationally-intensive applications for which single-processor designs are insufficient. Many such applications have timing constraints; such timing constraints are often not static, but may change in response to both external and internal stimuli. Examples of such applications include tracking systems and many multimedia applications. Motivated by these observations, this dissertation proposes several different adaptive scheduling algorithms that are capable of guaranteeing flexible timing constraints on multiprocessor platforms. Under traditional task models (e.g., periodic, sporadic, etc.), the schedulability of a system is based on each task’s worst-case execution time (WCET), which defines the maximum amount of time that each of its jobs can execute. The disadvantage of using WCETs is that systems may be deemed unschedulable even if they would function correctly most of the time when deployed. Adaptive real-time scheduling algorithms allow the timing constraints of applications to be adjusted based upon runtime conditions, instead of always using fixed timing constraints based upon WCETs. While there is a substantial body of prior work on scheduling applications with static timing constraints on multiprocessor systems, prior to this dissertation, no adaptive multiprocessor scheduling approach existed that is capable of ensuring bounded “error” (where error is measured by comparison to an ideal allocation). In this dissertation, this limitation is addressed by proposing five different multiprocessor scheduling algorithms that allow a task’s timing constraints to change at runtime. 
The five proposed adaptive algorithms are based on different non-adaptive multiprocessor scheduling algorithms that place different restrictions on task migrations and preemptions. The relative advantages of these algorithms are compared by simulating both the Whisper human tracking system and the Virtual Exposure Camera (VEC), both of which were developed at The University of North Carolina at Chapel Hill. In addition, a feedback-based adaptive framework is proposed that not only allows timing constraints to adapt at runtime, but also detects which adaptations are needed. An implementation of this adaptive framework on a real-time multiprocessor testbed is discussed and its performance is evaluated by using the core operations of both Whisper and VEC. From this dissertation, it can be concluded that feedback and optimization techniques can be used to determine at runtime which adaptations are needed. Moreover, the accuracy of an adaptive algorithm can be improved by allowing more frequent task migrations and preemptions; however, this accuracy comes at the expense of higher migration and preemption costs, which impacts average-case performance. Thus, there is a tradeoff between accuracy and average-case performance that depends on the frequency of task migrations/preemptions and their cost.

Bokinsky, Alexandra (2003)

“Multivariate Data Visualization with Data-Driven Spots”
Under the direction of Frederick Brooks
Electronic copy available

This dissertation addresses an important problem in the visualization of scientific data: Given a family of single-valued functions Fk(x,y) in two dimensions, all sampled on a regular, finite, two-dimensional grid i,j, devise visualizations that allow each function to be visually discriminated and studied separately, and also compared and correlated with other functions in the family. I developed a technique called Data-Driven Spots (DDS), where each function Fk(x,y) is sampled by a random collection of circular Gaussians with a uniform standard deviation, rather than presented in full. Human visual perception estimates the function’s values where it is not represented in the sample. These sampled functions, called layers, are presented overlaid. Each function is sampled and displayed in some regions where the others are not, thus each can be discriminated separately; since all are shown simultaneously, correlations can also be estimated. Layers are displayed with alpha-blending, such that each layer is distinguished by hue and/or the standard deviation of the Gaussians. Data values are multiplied by the Gaussian at each i,j grid point; the result determines the opacity of that layer at that point. Blended with a neutral gray background, each layer has color saturation at i,j proportional to the modulated data value. Since opacities are displayed, lower layers are mostly visible between the spots on upper layers. I built a DDS Toolkit that enables users to construct DDS visualizations of function families. Construction of effective DDS visualizations requires interactive exploration of visualization parameters, which the toolkit facilitates. I used the toolkit to prepare visualizations of many kinds of function families; a collection of images is presented. To evaluate the effectiveness of the DDS technique, I performed user studies.
These studies showed that performance on a spatial correlation task was better for overlaid DDS images than side-by-side DDS images, that alpha-blended layers are visually discriminable in the presence of up to seven distractors, and that animation of layers with respect to each other dramatically increases their visual salience. Animation permits a Fk(x,y) to be seen at every i,j over time; human visual perception integrates non-simultaneously displayed values into a coherent understanding.
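The per-layer opacity computation described above (data values modulating Gaussian spots, alpha-blended over neutral gray) can be sketched as follows. The function names, spot centers, and sigma are illustrative assumptions, not the DDS Toolkit's actual interface.

```python
import math

def dds_layer_opacity(value, point, centers, sigma):
    """Opacity of one DDS layer at a grid point: the normalized data
    value times the strongest Gaussian spot weight at that point."""
    spot = max(math.exp(-((point[0] - cx) ** 2 + (point[1] - cy) ** 2)
                        / (2 * sigma ** 2))
               for cx, cy in centers)
    return value * spot          # in [0, 1] when value is normalized

def blend_over_gray(layer_color, opacity, gray=0.5):
    """Standard alpha blend of the layer's saturated color over gray."""
    return opacity * layer_color + (1 - opacity) * gray

# A data value of 0.8 sitting directly on a spot center keeps most of
# its saturation; far from every spot, the gray background shows through.
op = dds_layer_opacity(0.8, (1.0, 1.0), [(1.0, 1.0), (5.0, 5.0)], 1.5)
print(blend_over_gray(1.0, op))
```

Between spots the opacity falls toward zero, so lower layers remain visible there, which is what lets several overlaid layers be discriminated at once.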

Bollella, Gregory (1997)

“Slotted Priorities: Supporting Real-Time Computing Within General-Purpose Operating Systems.”
Under the direction of Kevin Jeffay
Electronic copy available

Recent advances in network technologies, processor capabilities, and microcomputer system hardware, coupled with the explosive growth of the Internet and on-line data access, have created new demands on personal computer operating systems and hardware. In large part, these demands are for the ability to acquire, manipulate, display, and store multimedia data. The computational processes that successfully acquire and display multimedia data necessarily have deadlines. That is, the computation must be complete before a specified point in time. Currently, no general-purpose operating systems support such real-time processes. We have developed a software architecture, called slotted priorities, that defines a way to add support for real-time computation to existing general-purpose operating systems for uniprocessor machine architectures. The slotted priorities architecture shares the resources of a computing system between a general-purpose operating system and a real-time kernel (RTK). Software components called executives manage how an instance of a resource is shared. Executives ensure that the RTK can gain access to the resource at precise times. The resulting operating system will be able to guarantee that certain computations will always complete before their stated deadline. The modifications to the general-purpose operating system are modest. The architecture comprises a resource model, an execution model, and a programming model. The resource model is a classification of resources according to characteristics relevant to the sharing of the resources between the real-time kernel and the general-purpose operating system. The execution model defines how real-time tasks acquire the processor. The programming model defines how programmers write and think about real-time programs for an implementation of the slotted priorities architecture.
Finally, we develop a feasibility test which can determine if a set of periodic real-time threads will all meet their deadlines when executed on a system implementing this architecture. We describe an implementation of the architecture and a set of experiments that validate the implementation. Two real-time demonstration applications were built and executed on the test implementation. Results and analysis of those applications are also presented.
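The dissertation's actual feasibility test is specific to the slotted priorities architecture, but its flavor can be conveyed with a simplified utilization-style check: the real-time kernel only runs inside recurring slots, so the threads' combined processor demand must fit within the reserved fraction. This is an illustrative sketch only; `feasible`, `slot_len`, and `slot_period` are hypothetical names, and the real analysis must also account for slot placement and overheads.

```python
# Illustrative sketch of a slot-based feasibility check (not the
# dissertation's exact test). The real-time kernel runs only inside
# slots of length slot_len recurring every slot_period time units.

def feasible(threads, slot_len, slot_period):
    """threads: list of (cost, period) pairs for periodic real-time threads.
    Returns True if total thread utilization fits in the reserved share."""
    rt_fraction = slot_len / slot_period       # CPU share reserved for the RT kernel
    demand = sum(c / p for (c, p) in threads)  # total utilization of the threads
    return demand <= rt_fraction

# Example: threads needing 10% and 25% of the CPU fit in a 40% reservation.
print(feasible([(1, 10), (5, 20)], slot_len=4, slot_period=10))  # True
```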

Borland, David M. (2007)

“Flexible Occlusion Rendering for Improved Views of Three-Dimensional Medical Images”
Under the direction of Russell M. Taylor
Electronic copy available

The goal of this work is to enable more rapid and accurate diagnosis of pathology from three-dimensional (3D) medical images by augmenting standard volume rendering techniques to display otherwise-occluded features within the volume. When displaying such data sets with volume rendering, appropriate selection of the transfer function is critical for determining which features of the data will be displayed. In many cases, however, no transfer function is able to produce the most useful views for diagnosis of pathology. Flexible occlusion rendering (FOR) is an addition to standard ray-cast volume rendering that modulates accumulated color and opacity along each ray upon detecting features indicating the separation between objects of the same intensity range. For contrast-enhanced MRI and CT data, these separation features are intensity peaks. To detect these peaks, a hysteresis-based method is used to reduce sensitivity to noise. To further reduce noise and enable control over the spatial scale of the features detected, a smoothed version of the original data set is used for feature detection, while the original data is rendered at high resolution. Separating the occlusion feature detection from the volume rendering transfer function enables robust occlusion determination and seamless transition from occluded views to non-occluded views of surfaces during virtual flythroughs. FOR has been applied to virtual arthroscopy of joints from MRI data. For example, when evaluating a shoulder joint, a view showing the entire shoulder socket is the most useful view for rapid evaluation. However, such views are impossible using standard volume rendering, as the material in the humeral head will occlude the shoulder socket from this viewpoint. FOR automatically culls the occluding material of the humeral head, enabling display of the entire shoulder socket. FOR has also been successfully applied to virtual ureteroscopy of the renal collecting system from CT data, and to knee fracture visualization from CT data.
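As a rough illustration of the idea (not the dissertation's implementation), the following sketch composites samples front-to-back along one ray and culls the accumulated color when a hysteresis detector registers an intensity peak. `composite_with_for` and the `rise`/`fall` thresholds are hypothetical names chosen for this sketch.

```python
# Hypothetical sketch of the core FOR idea: front-to-back compositing
# that discards accumulated color/opacity when an intensity peak is
# crossed, using hysteresis to ignore small noise peaks.

def composite_with_for(samples, transfer, rise=0.15, fall=0.15):
    """samples: scalar intensities along one ray (front to back).
    transfer: maps an intensity to (r, g, b, alpha).
    A peak is registered only after the value rises by `rise` above a
    running minimum and then falls by `fall` below a running maximum."""
    color = [0.0, 0.0, 0.0]
    alpha = 0.0
    lo = hi = samples[0]
    armed = False                      # True once a sufficient rise was seen
    for v in samples:
        lo, hi = min(lo, v), max(hi, v)
        if not armed and v - lo >= rise:
            armed, hi = True, v
        elif armed and hi - v >= fall:
            # Peak detected: cull everything accumulated so far so the
            # far side of the separation becomes visible.
            color, alpha = [0.0, 0.0, 0.0], 0.0
            armed, lo = False, v
        r, g, b, a = transfer(v)
        w = (1.0 - alpha) * a          # standard front-to-back compositing
        color = [c + w * ch for c, ch in zip(color, (r, g, b))]
        alpha += w
    return color, alpha
```

With a rising-then-falling intensity profile, the accumulated opacity is reset at the peak, so only material behind the separation contributes to the final pixel.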

Brandenburg, Bjoern B. (2011)

“Scheduling and Locking in Multiprocessor Real-Time Operating Systems”
Under the direction of James H. Anderson
Electronic copy available

With the widespread adoption of multicore architectures, multiprocessors are now a standard deployment platform for (soft) real-time applications. This dissertation addresses two questions fundamental to the design of multicore-ready real-time operating systems: (1) Which scheduling policies offer the greatest flexibility in satisfying temporal constraints, and (2) which locking algorithms should be used to avoid unpredictable delays? With regard to Question 1, LITMUSRT, a real-time extension of the Linux kernel, is presented and its design is discussed in detail. Notably, LITMUSRT implements link-based scheduling, a novel approach to controlling blocking due to non-preemptive sections. Each implemented scheduler (22 configurations in total) is evaluated under consideration of overheads on a 24-core Intel Xeon platform. The experiments show that partitioned earliest-deadline-first (EDF) scheduling is generally preferable in a hard real-time setting, whereas global and clustered EDF scheduling are effective in a soft real-time setting. With regard to Question 2, real-time locking protocols are required to ensure that the maximum delay due to priority inversion can be bounded a priori. Several spinlock- and semaphore-based multiprocessor real-time locking protocols for mutual exclusion (mutex), reader-writer (RW) exclusion, and k-exclusion are proposed and analyzed. A new category of RW locks suited to worst-case analysis, termed phase-fair locks, is proposed and three efficient phase-fair spinlock implementations are provided (one with few atomic operations, one with low space requirements, and one with constant RMR complexity). Maximum priority-inversion blocking is proposed as a natural complexity measure for semaphore protocols. It is shown that there are two classes of schedulability analysis, namely suspension-oblivious and suspension-aware analysis, that yield two different lower bounds on blocking.
Five asymptotically optimal locking protocols are designed and analyzed: a family of mutex, RW, and k-exclusion protocols for global, partitioned, and clustered scheduling that are asymptotically optimal in the suspension-oblivious case, and a mutex protocol for partitioned scheduling that is asymptotically optimal in the suspension-aware case. A LITMUSRT-based empirical evaluation is presented that shows these protocols to be practical.
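As a small illustration of the partitioned EDF approach that the experiments favor for hard real-time settings, the following sketch assigns tasks to cores first-fit under the per-core EDF utilization bound. This is a standard textbook heuristic, not code from the dissertation; all names are illustrative.

```python
# Illustrative first-fit partitioning of periodic tasks onto cores,
# using the uniprocessor EDF bound (total utilization <= 1 per core).

def partition_first_fit(tasks, num_cores):
    """tasks: list of (cost, period) pairs.
    Returns a per-core assignment, or None if partitioning fails."""
    cores = [[] for _ in range(num_cores)]
    load = [0.0] * num_cores
    # Placing heavier tasks first tends to improve first-fit packing.
    for c, p in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        u = c / p
        for i in range(num_cores):
            if load[i] + u <= 1.0:
                cores[i].append((c, p))
                load[i] += u
                break
        else:
            return None        # task set could not be partitioned
    return cores
```

Three tasks of utilization 0.5 each fit on two cores but not on one, so `partition_first_fit` returns an assignment in the first case and `None` in the second.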

Britton, Edward G. (1977)

“A Methodology for the Ergonomic Design of Interactive Computer Graphic Systems, and its Application to Crystallography”

Under the direction of Frederick P. Brooks, Jr.

This dissertation presents a methodology for the ergonomic design, or human engineering, of interactive computer graphic systems. It proposes human-factors principles based on linguistic and psychological considerations, and proposes an order for designing the features of a system. As a vehicle for developing this methodology, we designed and built a system, described herein, with which biochemists can determine the structures of macromolecules. Crystallographers have used this system for over one thousand hours and published chemical results based on their work. We assert that the ease with which crystallographers use this system indicates that our methodology for ergonomic design is useful.

Broadhurst, Robert Elijah (2008)

“Compact Appearance in Object Populations Using Quantile Function Based Distribution Families”
Under the direction of Stephen Pizer
Electronic copy available

Statistical measurements of the variability of probability distributions are important in many image analysis applications. For instance, let the appearance of a material in a picture be represented by the distribution of its pixel values. It is necessary to model the variability of these distributions to understand how the appearance of the material is affected by viewpoint, lighting, or scale changes. In medical imaging, an organ’s appearance varies not only due to the parameters of the imaging device but also due to changes in the organ, either within a patient day to day or between patients. Classical statistical techniques can be used to study distribution variability given a distribution representation for which the variation is linear. For many distributions relevant to image analysis, standard representations are either too constrained or have nonlinear variation, in which case classical linear multivariate statistics are not applicable. This dissertation presents general, non-parametric representations of a variety of distribution types, based on the quantile function, for which a useful class of variability is linear. A key result is that principal component analysis can be used to efficiently parameterize their variability, i.e., construct a distribution family. The quantile function framework is applied to two driving problems in this dissertation: (1) the statistical characterization of the texture properties of materials for classification, and (2) the statistical characterization of the appearance of objects in images for deformable model based segmentation. It is shown that in both applications the observed variability is appropriately linear, allowing efficient modeling. State-of-the-art results are achieved for both the classification of materials in the Columbia-Utrecht database and the segmentation of the kidney, bladder, and prostate in 3D CT images.
While the applications presented in this dissertation use image-based appearance observations in the field of image analysis, the methods and theory should be widely applicable to the variety of observations found in many scientific fields and, more specifically, to shape observations in the field of computer vision.
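The core construction can be sketched as follows (an illustrative stand-in, not the dissertation's code): each sample set is reduced to a fixed-length vector of evenly spaced quantiles, and ordinary PCA on those vectors yields a distribution family. `quantile_vector` and `distribution_pca` are hypothetical names; the sketch assumes NumPy.

```python
# Minimal sketch of the quantile-function representation: distributions
# become fixed-length quantile vectors, on which linear statistics apply.
import numpy as np

def quantile_vector(samples, n_quantiles=64):
    """Evaluate the empirical quantile function at evenly spaced points,
    giving a fixed-length vector in which averaging and PCA are linear."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return np.quantile(np.asarray(samples), qs)

def distribution_pca(sample_sets, n_quantiles=64):
    """Stack quantile vectors and return their mean and principal modes."""
    X = np.stack([quantile_vector(s, n_quantiles) for s in sample_sets])
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt            # rows of vt are modes of distribution variation
```

Averaging two quantile vectors yields the quantile vector of an intermediate distribution, which is the linearity property the dissertation exploits.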

Brown, Peter H. (2002)

“Multiscale Evaluation of 3D Shape Perception in Computer Graphics”
Under the direction of Christina Burbeck
Electronic copy available

This dissertation describes a tool and associated analysis techniques (collectively called the Multiscale Test of Perceived Shape, or MTOPS) for measuring observers’ perceptions of 3D surface orientation in computer-graphics visualizations at multiple spatial scales. Using this tool and techniques, I demonstrated that such multiscale measurements are both possible and fruitful. Specifically, I showed the following:

  • Observers consistently made different surface-orientation settings at different scales, indicating both that perceived orientation changed across scale and that MTOPS was able to measure that change.
  • By comparing observers’ settings to the stimulus geometry, I could measure the across-scale effects of adding a given depth cue to a visualization. Specifically, I found the following:
    • Adding stereo to a non-stereo presentation caused the accuracy of observers’ settings to improve more at large scale than at small.
    • Adding head-motion parallax to a stereo presentation caused a modest improvement in observers’ accuracy, mostly at small scale.
    • Adding automatic object motion to a non-stereo presentation improved observers’ accuracy substantially, at many scales. Adding user-controlled object motion to a non-stereo presentation gave a less consistent, and for most users smaller, accuracy benefit.
  • The MTOPS tool and techniques were sufficiently sensitive to capture differences in those multiscale depth-cue effects due to surface complexity and observer knowledge.

Brownlee, Jr., Edward H. (1975)

“Lossiness in Tessellation Automata”
Under the direction of Stephen F. Weiss

A tessellation automaton is an infinite array of finite state automata interacting in a prescribed manner. The state of the entire array at any instant is called its configuration. A tessellation automaton is said to be lossy if its global transition function is not one-to-one, that is, if some configuration has more than one predecessor. Lossiness is shown to be equivalent to three other properties of tessellation automata. One of these leads to a sufficient test for lossiness which is easily applied to the local transition function. A necessary condition is also given. Finally, it is shown that certain kinds of computation cannot be carried out in lossless spaces.
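The one-to-one condition can be illustrated by brute force on finite cyclic configurations (a simplification of the infinite-array setting the dissertation treats). All names below are hypothetical; the XOR-of-neighbours rule is just a convenient example of a many-to-one global map.

```python
# Illustrative only: testing injectivity of the global transition
# function of a 1D binary automaton (radius-1 rule) on cyclic
# configurations of length n.
from itertools import product

def step(config, rule):
    """Apply the local rule to every cell of a cyclic configuration."""
    n = len(config)
    return tuple(rule(config[i - 1], config[i], config[(i + 1) % n])
                 for i in range(n))

def is_lossy_on_cycle(rule, n):
    """Lossy iff the global map is not one-to-one, i.e. some
    configuration has more than one predecessor."""
    images = [step(c, rule) for c in product((0, 1), repeat=n)]
    return len(set(images)) < len(images)

xor_rule = lambda l, c, r: l ^ r       # ignores the centre cell
print(is_lossy_on_cycle(xor_rule, 4))  # True: (1,0,1,0) and (0,0,0,0) share an image
```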

Burns, Eric (2007)

“MACBETH: Management of Avatar Conflict By Employment of a Technique Hybrid”
Under the direction of Frederick P. Brooks, Jr.
Electronic copy available

Report Submission Schedule

Timeline for BComp Dissertation (Final Year Project) Report Submission



1. Continuous Assessment: CA Report

a) Deadline to submit Project Update Form: Friday, week 3

b) Deadline for submission of CA report to supervisor and main evaluator: Wednesday, week 12

c) Presentation to supervisor and main evaluator: Reading week

2. Final Assessment (FYP Presentation)

Deadline for submission of Final Report: Wednesday, week 12 of the next semester. Students are to submit ring-bound hardcopies of the report to the supervisor, main evaluator and moderator directly.

3. Presentation: Reading week

4. a) Submission of Feedback of Final Year Project Guidance and Evaluation: first Monday after the exam

b) Submission of e-copy of Final Report: first Monday after the exam

Please click here for the actual schedule.

Continual Assessment

Continual Assessment (CA): At the end of semester 1 (in week 12), each FYP student has to submit an interim project report (see the CA Report Guidelines), which is evaluated by the supervisor and one examiner (called the main evaluator). The main evaluator will also be the evaluator for the same project in the final presentation.

The supervisor and the main evaluator will have the opportunity to comment on the student's competence and attitude towards the project, and on the student's progress so far.

After submitting the interim report, students are required to arrange a meeting with both the supervisor and the main evaluator during the reading week to report on their progress.

CA accounts for 30% of the final grade of the project: the supervisor's assessment counts for 20% and the main evaluator's for 10%.

The CA Report will be used as a basis for your supervisor and main evaluator to decide whether or not you have made sufficient progress to continue with CP4101 (please see the CA evaluation form at the FYP website for the criteria by which your supervisor will evaluate your progress).

If your supervisor and main evaluator agree that you can continue the project, you will be automatically registered for CP4101 next semester and will receive an 'IP' grade for CP4101 this semester. If, however, your supervisor and main evaluator decide that you have not made sufficient progress and recommend that you discontinue the project, you will receive an 'IC' grade for CP4101 and will have to take 3 modules instead of doing the FYP.

Final Report format

The physical layout and formatting of the report is also important, and yet is very often neglected. A tidy, well laid out and consistently formatted document makes for easier reading and is suggestive of a careful and professional attitude towards its preparation.

Remember that quantity does not automatically guarantee quality. Conciseness, clarity and elegance are invaluable qualities in report writing, just as they are in programming, and will be rewarded appropriately.

Please refer to Final Report Guidelines for the report format.

Some Guidelines

The BComp Dissertation is an extremely important aspect of the module. It serves to show what you have achieved and should demonstrate:

  • The approach you have taken compared to existing research/methodology/investigation.
  • The theoretical and/or practical techniques and problem-solving skills applied in the project.
  • That you can objectively criticise your own work and make constructive suggestions for improvements or further work based on your experiences so far.

Evaluators will only have a short time to listen to your presentation during the seminar (and in some cases see a demonstration). For this reason they will rely heavily on the report to judge the project, and the final outcome can be influenced significantly by the quality of your project report. Don't make the mistake of leaving the write-up to the last minute. Ideally, starting from the CA report, you should produce the bulk of the report as you go along and use the last week or two to bring it together into a coherent document.

What else to include in the report?

If you wrote a paper describing your FYP results for possible conference publication (up to 10 pages), you can include the paper in the FYP Report Appendix. In that case, the page limit for your FYP report is 55 pages plus 10 pages for the paper. Note that it is not obligatory to include a paper in your FYP Report.

Report Submission

The final report (hard copy) is due on Wednesday, week 12, 5.00 pm.

Students are to submit ring-bound hardcopies of the report ON or BEFORE the due date directly to their respective supervisor, main evaluator and moderator.

Students should also have uploaded their report in PDF format (the report only, not the appendix, source code, or any other material).

  • All submissions of reports will be recorded by your supervisor. It is important to note that THE DEPARTMENT MAY REJECT ANY REPORT THAT DOES NOT CONFORM WITH THE FORMAT SPECIFIED IN THE "FINAL YEAR PROJECT REPORT FORMAT".
  • A student will have to amend any major errors detected during the examination. Copies of the project reports with examiners' comments will be returned to the student immediately after the presentation for amendments.
  • A revised soft copy of the Final Report in PDF format is to be submitted electronically by one week after the final examination.

The detailed instructions for electronic submission of FYP Reports through the web site https://dl.comp.nus.edu.sg are available here. Failure to submit may affect your CP4101 grade.

Confidentiality of the report

Students who have worked on a project of a confidential nature, or who intend to apply for patents for the work, should inform the Office of Undergraduate Studies through their project supervisor to request non-submission of the soft copy of the report to the SoC Digital Library.

It is mandatory for students to include a covering note stating that the contents are copy-protected and should not be photocopied or scanned, and to display the word "Confidential" prominently on the cover note and back page of the report.

After reviewing the reports, the evaluators should return the hard copies to the project supervisor(s), undertake a declaration to maintain the confidentiality of the contents of such work, and comply with all steps required by the supervisor to ensure confidentiality.

Project Update Form

If the title, key words, or nature of a project is changed, then students/supervisors must complete a PROJECT UPDATE FORM and return it to the SoC Office of Undergraduate Studies (Attn: Mrs Kwek). Alternatively, supervisors can make the changes through the Project Administration System. The deadline for any updates is Week 3 of the semester in which the project started, i.e., the first semester of the project.
