Professor Buhr’s primary research area is programming languages, in which he studies concurrency, polymorphism, visualization/debugging, and persistence.
Virtually all computers support multiple simultaneous threads of execution in the form of multi-threading, multi-core, and multi-processor hardware; programming with multiple threads is more complex than programming with a single thread, and is called concurrent programming. Professor Buhr’s initial work in concurrency provided concurrency for the C language in a system called the μSystem. While much was learned about designing and constructing thread libraries during this project, the work was abandoned because it is impossible to construct statically type-safe direct communication among threads in C.
Subsequently, the work shifted to C++ because its object-oriented features allow concurrent communication to be statically type-safe. However, concurrency still cannot be added without extending C++; as a result, a new concurrent dialect of C++ was created, called µC++.
The ability to write generic reusable programs, called polymorphism, is fundamental to advanced software engineering.
Professor Buhr is interested in static type systems providing polymorphism without the use of nominal inheritance (object-oriented programming). The problem with nominal inheritance is the restriction it imposes on reuse because of the rigid use of a hierarchy to express relationships among types. Instead, he adopts a more flexible approach using duck typing, implemented by parametric polymorphism and extensive overloading, to provide a more general type system. This work has resulted in a new dialect of the C language, called C∀ (C-for-all).
Monitoring, Visualization and Debugging
Getting a concurrent program to work correctly, efficiently and with maximal parallelism can be very difficult. Professor Buhr develops techniques and tools to help understand the dynamic behaviour of a concurrent program with the goal of increasing program performance. This work has produced a toolkit for monitoring, visualizing, and debugging μC++ programs, called MVD.
Data structures containing pointers (versus values) cannot be stored directly to/from disk, because pointers represent absolute locations in memory. Professor Buhr is also interested in memory-mapped single-level stores, which allow data, including pointers, to be transparently transferred to and retrieved from disk storage implicitly via virtual memory. To handle the address consistency problem, i.e., pointers to addresses that have changed location, exact positioning of data is used so no relocation or adjusting of pointers is necessary. This work has produced a toolkit, called μDatabase, for building persistent data structures using the exact-positioning approach to memory-mapped single-level stores.
BSc, MSc, PhD (Manitoba)
Industrial and sabbatical experience
Professor Buhr worked at Sun Microsystems for his 1993/4 sabbatical on programming language design, which ultimately became Java. He worked on the HP Gelato project from 2003 to 2008 developing advanced Linux software for the Intel Itanium processor. He worked at Google for his 2013/4 sabbatical on the Go programming-language team.
Publications

Thierry Delisle and Peter A. Buhr. (2020). Advanced Control-flow and Concurrency in C∀. Software: Practice and Experience.
Peter A. Buhr, David Dice and Wim H. Hesselink. (2018). High-Contention Mutual Exclusion by Elevator Algorithms. Concurrency and Computation: Practice and Experience. 30(18)
Aaron Moss, Robert Schluntz and Peter A. Buhr. (2018). C∀: Adding Modern Programming Language Features to C. Software: Practice and Experience. 48(12): 2111-2146.
Wim H. Hesselink, Peter A. Buhr and David Dice. (2018). Fast Mutual Exclusion by the Triangle Algorithm. Concurrency and Computation: Practice and Experience. 30(4)
Peter A. Buhr, David Dice and Wim H. Hesselink. (2016). Dekker’s Mutual Exclusion Algorithm Made RW-Safe. Concurrency and Computation: Practice and Experience. 28(1): 144-165.
Peter A. Buhr, David Dice and Wim H. Hesselink. (2015). High-Performance N-Thread Software Solutions for Mutual Exclusion. Concurrency and Computation: Practice and Experience. 27(3): 651-701.