Professors Shai Ben-David and Ian Goldberg have been named 2023 ACM Fellows. The Association for Computing Machinery is recognizing Professor Ben-David for his contributions to and research leadership in machine learning theory, and Professor Goldberg for his contributions to the development and deployment of privacy enhancing technologies.
The Association for Computing Machinery is the world's largest educational and scientific computing society, uniting computing educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM Fellowships are conferred on the top one percent of the association's members. This prestigious recognition indicates outstanding accomplishments in computing and information technology and exemplary service to ACM and the computing community.
This year the Association for Computing Machinery named 68 members across the globe as ACM Fellows for their wide-ranging and fundamental contributions in disciplines running the gamut of the computing field, including algorithm design, computer graphics, cybersecurity, energy-efficient computing, mobile computing, software analytics, and web search, to name a few. Fellows are nominated by their peers, with nominations reviewed by a distinguished selection committee.
“Congratulations to Shai and Ian on their being named ACM Fellows,” said Raouf Boutaba, Professor and Director of the Cheriton School of Computer Science. “Their recognitions are well deserved. Both are exceptional researchers whose pioneering and sustained contributions, mentorship of students, and service to the community have shaped and informed the direction of their respective fields within computer science.”
With Professors Ben-David and Goldberg's recognition, 10 faculty members at the Cheriton School of Computer Science have now been named ACM Fellows. Previous recipients of this prestigious honour are Professors N. Asokan, Srinivasan Keshav, Ihab Ilyas, Ming Li, Jimmy Lin, J. Ian Munro, M. Tamer Özsu and Frank Tompa.
About Professor Shai Ben-David
Professor Ben-David holds a PhD in mathematics from the Hebrew University of Jerusalem and held postdoctoral positions at the University of Toronto in both the mathematics and computer science departments. He was a professor of computer science at the Technion in Haifa, Israel, before joining the Cheriton School of Computer Science.
Professor Ben-David's research spans learning theory, computational complexity and mathematical logic. A prolific researcher, he has published more than 130 papers that have been cited collectively more than 23,000 times, with an h-index of 51 as of January 2024 according to Google Scholar. His publications have received best paper awards at five conferences — at ICASSP 2005, COLT 2006, COLT 2011, NeurIPS 2018 and ALT 2023. With Shai Shalev-Shwartz, he co-authored Understanding Machine Learning: From Theory to Algorithms, a textbook published in 2014 by Cambridge University Press that introduces machine learning and the mathematical foundations on which it is based. This publication is both a standard textbook and a research reference.
His YouTube lectures on machine learning, logic and computation have been viewed thousands of times by students and colleagues alike, with his introductory lecture on machine learning garnering more than 100,000 views to date.
RESEARCH CONTRIBUTIONS
Foundational Research in Machine Learning
Professor Ben-David began investigating machine learning in the early 1990s, well before it reached its current ubiquity. His early works uncovered key issues that later became central to machine learning.
Computational hardness of machine learning tasks
Professor Ben-David established some of the first results on computational hardness of learning. These include the first hardness of approximation result for learning neural networks in "Hardness results for neural network approximation problems," a EuroCOLT 1999 paper with Peter Bartlett, as well as NP-hardness of learning for natural concept classes in "On the difficulty of approximately maximizing agreements," a paper with Nadav Eiron and Philip Long presented at COLT 2000.
Change detection in data streams
Professor Ben-David was among the first to recognize the importance of addressing non-stationary data generation in machine learning. His key publications in this area are "Learning changing concepts by exploiting the structure of change," a COLT 1996 paper with Peter Bartlett and Sanjeev Kulkarni, "Detecting change in data streams," a VLDB 2004 paper with Daniel Kifer and Johannes Gehrke, and "Nonparametric change detection in 2D random sensor field," a paper with Ting He and Lang Tong that also received a best paper award at ICASSP 2005.
Multi-class and real-valued learning
Professor Ben-David played a key role in developing combinatorial dimensions that characterize learnability beyond binary classification. His key publications include "Scale-sensitive dimensions, uniform convergence and learnability," a 1997 paper with Noga Alon, Nicolò Cesa-Bianchi and David Haussler, "Characterizations of learnability for classes of {0, …, n}-valued functions," a COLT 1992 paper with Nicolò Cesa-Bianchi and Philip Long, and "Multiclass learnability and the ERM principle," a paper with Amit Daniely, Sivan Sabato and Shai Shalev-Shwartz that also received the best paper award at COLT 2011.
Initiating Novel Research Directions
Domain adaptation
Professor Ben-David's publication "Analysis of representations for domain adaptation," a NIPS 2006 paper with John Blitzer, Koby Crammer and Fernando Pereira, propelled research on a problem now recognized as key to machine learning applications. This paper continues to inspire domain adaptation algorithms and is a major theoretical benchmark.
Methodical analysis of clustering paradigms
Professor Ben-David's NIPS 2008 paper "Measures of clustering quality: A working set of axioms for clustering" with Margareta Ackerman initiated a long line of research, developing a theory for taxonomizing clustering methods according to their input-output behaviour. This paper alerts users to the crucial need for thoughtfully choosing the clustering method for a given task. Follow-up papers by Professor Ben-David and his coauthors introduce tools for addressing that important and under-investigated issue.
Learning with computable learners
Professor Ben-David's ALT 2020 paper "On learnability with computable learners," with Sushant Agarwal, Nivasini Ananthakrishnan, Tosca Lechner and Ruth Urner, initiated research on learnability under computability requirements. Surprisingly, much of common learnability theory collapses under this natural condition. His follow-up paper "On computable online learning," with recent master's graduate Niki Hasrati, which expanded on these results, won the ALT 2023 best paper award.
Independence of set theory results in computer science
Professor Ben-David's 1991 paper "On the independence of P versus NP" with S. Halevi proved an equivalence between the independence of P vs NP from a strong axiomatic theory and the existence of almost-polynomial-time SAT solvers.
And his invited STOC 2021 paper “Learnability can be independent of set theory” with Pavel Hrubes, Shay Moran, Amir Shpilka and Amir Yehudayoff showed that learnability of some basic tasks cannot be decided by common mathematics. This is the first result in machine learning establishing independence from Zermelo–Fraenkel set theory with the axiom of choice.
Proving Unexpected Limitations of Popular Algorithmic Paradigms
Randomization in online learning
Professor Ben-David's 1994 paper "On the power of randomization in on-line algorithms" with Allan Borodin, Richard Karp, Gábor Tardos and Avi Wigderson showed that the then-commonly held belief that randomization can improve the competitive-ratio performance of online algorithms is true only in a severely limited sense, if at all.
Limitations of learning via embeddings in Euclidean half spaces
Professor Ben-David's COLT 2001 paper "Limitations of learning via embeddings in Euclidean half-spaces" with Nadav Eiron and Hans Ulrich Simon proved inherent shortcomings of the popular support vector machines paradigm.
Theoretical analysis of the stability method in clustering
Professor Ben-David's award-winning COLT 2006 paper, "A sober look at clustering stability" with Ulrike von Luxburg and Dávid Pál, exposed fallacies in the common usage of stability as a tool for choosing clustering parameters.
SERVICE CONTRIBUTIONS
Professor Ben-David has also served the computer science and machine learning community in a variety of administrative capacities. He served as president of the Association for Computational Learning from 2009 to 2012, as an action editor for the journal Machine Learning, on various National Science Foundation and European Union funding agency panels, and as program committee chair for each of the theoretical machine learning conferences — the European Conference on Computational Learning Theory in 1997, the Conference on Learning Theory in 1999, and the International Conference on Algorithmic Learning Theory in 2004. He also serves regularly as a senior area chair for the largest annual machine learning conferences — the Conference on Neural Information Processing Systems, the International Conference on Machine Learning, and the Conference on Artificial Intelligence and Statistics.
Professor Ben-David has also organized workshops on emerging research directions. These include the 2016 Dagstuhl seminar on Foundations of Unsupervised Learning in Germany, the 2016 Lorentz workshop on the Theoretical Foundations of Learning from Easy Data in The Netherlands, the Simons Institute's cluster on Interpretable Machine Learning in 2020 and 2022 in Berkeley, California, and the 2023 IDEAL workshop on Machine Learning Interpretability and Logic in Chicago.
About Professor Ian Goldberg
Professor Goldberg’s ACM Fellowship is among his growing list of professional recognitions, awards and accolades. These include being named an IEEE Senior Member in 2023, receiving the USENIX Security Test of Time Award in 2019 and the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2018. In addition, he was named an ACM Distinguished Member in 2017, received a Pioneer Award from the Electronic Frontier Foundation in 2011, an Outstanding Young Computer Science Researcher Award from the Canadian Association of Computer Science in 2011, and an Early Researcher Award from CS-Can | Info-Can in 2010.
Professor Goldberg is the only person to date whose papers have been recognized three times — in 2013, 2018, and 2023 — with the Andreas Pfitzmann Best Student Paper Award at the Privacy Enhancing Technologies Symposium. This award recognizes the best paper at each year's symposium that was co-authored and presented by a student.
RESEARCH CONTRIBUTIONS
Professor Goldberg is a leader in privacy enhancing technologies — what are known as PETs — a field in which he has been working for more than a quarter of a century. He has created, developed and deployed many innovations that have substantially and meaningfully improved privacy as well as access online against a backdrop of increased censorship, surveillance by governments, and data misuse and abuse by corporations.
His work appears regularly not only in PoPETs — the Proceedings on Privacy Enhancing Technologies known previously as PETS or the Privacy Enhancing Technologies Symposium — the top research venue dedicated to this field (22 papers over 2007–2024), but also at top-tier venues in the broader computer security and privacy community. These include the ACM Conference on Computer and Communications Security (12 papers over 2009–2023), the USENIX Security Symposium (10 papers over 1996–2023), the IEEE Symposium on Security and Privacy (five papers over 2007–2015), and the Network and Distributed System Security Symposium (two papers over 2013–2018).
All told, he has published 111 peer-reviewed papers to date, of which 37 have been cited more than a hundred times and two more than a thousand. Recently, he was ranked the top researcher in Canada by number of papers published at the top security and privacy venues, and the top researcher in the world by the same measure for PoPETs/PETS.
Professor Goldberg is known internationally for his research on a variety of privacy enhancing technologies, including secure and private messaging, privacy-preserving communications networks, private computation, censorship resistance, and more. His contributions range from mathematical foundations to protocol and system design to implementation and deployment. The privacy enhancing technologies he and his students have developed and deployed allow for personal autonomy online for citizens of restrictive regimes and in Western democracies alike.
His work on Off-the-Record Messaging brought widespread encrypted messaging to Internet users. His work on privacy-preserving communications networks, such as his contributions to the widely used, open-source Tor network, helps millions of people around the world protect themselves and their online communications from both censorship and surveillance. As many people live in countries where the Internet is censored, Professor Goldberg's research on censorship resistance technologies allows them to access the free and open Internet. In this area, his research on Telex (2011) and Slitheen (2016–2021) allows people in censored countries to access websites with the help of a cooperating Internet service provider outside the censoring country. To the overseeing authority, it appears the user is accessing an innocuous website — videos of cats — while the Internet service provider secretly redirects the user's traffic to the desired censored website. Even his team's proof-of-concept deployment of Telex saw real censored users being helped by the technology. More recently, his open-source Lox software (2023), whose paper with his recent master's student Lindsey Tulloch earned his third Andreas Pfitzmann Best Student Paper Award at PETS 2023, similarly allows people in censored countries to access the Tor network and, from there, the open Internet. This work is being enhanced by The Tor Project for deployment to Tor's millions of daily users.
Professor Goldberg is also well known for his work on private computation, where a computation involving private data can be performed without the parties performing that computation being able to see that private data. Private computation can prevent the kinds of large-scale data breaches seen almost daily, as companies would be able to perform the computations they desire without being able to access the underlying unencrypted data. If they do not have access to the data themselves, it cannot then be stolen from them. Professor Goldberg has worked on a variety of aspects of private computation, including private information retrieval, trusted execution environments, and multiparty computation, among them the multiparty digital signature scheme FROST. FROST is being standardized in the Crypto Forum Research Group of the Internet Research Task Force and incorporated into a number of blockchain products.
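To illustrate the core idea behind private computation, the sketch below uses additive secret sharing over a prime field. It is a minimal, generic textbook construction in Python, assuming a hypothetical scenario with two hospitals and three compute parties; it is not code from Professor Goldberg's systems and is not the FROST protocol.

```python
# A minimal, illustrative sketch of multiparty private computation using
# additive secret sharing. This is a standard textbook construction, not
# FROST or any system described in this article.
import secrets

P = 2**61 - 1  # prime modulus; all arithmetic is done mod P

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n_parties additive shares; any subset of fewer
    than n_parties shares reveals nothing about the value."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine all shares to recover the shared value."""
    return sum(shares) % P

# Hypothetical scenario: two hospitals want the total number of cases
# without revealing their individual counts to anyone.
count_a, count_b = 1200, 950
shares_a = share(count_a, 3)  # each input is split among 3 compute parties
shares_b = share(count_b, 3)

# Because the sharing is linear, each compute party can add the shares it
# holds locally; no party ever sees a raw count.
sum_shares = [(sa + sb) % P for sa, sb in zip(shares_a, shares_b)]

print(reconstruct(sum_shares))  # prints 2150; only the aggregate is revealed
```

Because additive sharing is linear, sums and other linear computations can be carried out entirely on shares; threshold protocols such as FROST build richer functionality, including multiparty signatures, on related ideas.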
Professor Goldberg’s success is attributable largely to building systems that solve the whole problem for real-world users, not just an academically interesting subset. This guiding principle enables his work to be adopted in a way that scales to serve everyone.
Contributions of Record
A number of Professor Goldberg's innovations have seen wide impact and deployment, some of which are detailed below.
From 2004 to 2016, he was the lead researcher and developer of Off-the-Record Messaging, a protocol to provide secure and private messaging to users of existing instant messaging networks. Hundreds of thousands of people around the world used OTR to protect their online communications. His desktop-focused OTR protocol was later adapted to mobile environments as the Signal protocol. More than a billion people worldwide now use the Signal protocol through the Signal, WhatsApp, and Facebook Messenger mobile apps.
With his work on privacy-preserving and censorship-resistant communications networks — from his PhD thesis to his award-winning PoPETs paper in 2023 — he has seen many of his ideas deployed, first in the commercial Freedom Network in the early 2000s, where he was Chief Scientist, and then in the open-source Tor network, where he was Chair of the Board of Directors for 10 years. Millions of people every day use Tor to protect their privacy and directly benefit from Professor Goldberg's research.
Professor Goldberg’s work with University of London’s Alex Davidson and others in a paper titled “Privacy Pass: Bypassing Internet Challenges Anonymously” aims to reduce the prevalence of CAPTCHAs, an online test to differentiate people from bots. It does so by having one party that can verify a user that then issues the user with tokens, which can then be redeemed at websites to bypass CAPTCHAs. The tokens are issued blindly, using a cryptographic technique that ensures that even the issuing party cannot recognize the issued tokens when they are subsequently redeemed. This work is being standardized at the Crypto Forum Research Group, and is being rolled out as Apple’s Private Access Token feature in iOS 16 and macOS Ventura.
SERVICE CONTRIBUTIONS
Professor Goldberg has held many leadership positions in the privacy enhancing technologies and broader computer security and privacy research communities. He was Program Co-Chair for PETS (2008, 2009), Program Chair for the USENIX Security Symposium (2010), and Program Co-Chair for Financial Cryptography and Data Security (2019). He was the General Chair for PETS 2011.
He serves on the Privacy Enhancing Technologies Advisory Board (2007 to present), where he is responsible for monitoring the organization's financial position and for maintaining the article submission system and the website software infrastructure. He also implemented the move to self-publishing for the 2022 Proceedings on Privacy Enhancing Technologies. He has served on 36 program committees, including those of PETS/PoPETs, ACM CCS, the USENIX Security Symposium, the IEEE Symposium on Security and Privacy, ACM AsiaCCS, and ACM WiSec.
Professor Goldberg served as the Chair of the Board of Directors of The Tor Project for the first 10 years of the organization’s existence (2007–2016), and currently serves on the Board of Waterloo’s Cybersecurity and Privacy Institute (2017–present), a cross-campus interdisciplinary institute bringing together researchers studying all aspects of cybersecurity and privacy, from cryptography and data science to human and societal aspects of security and privacy to law and policy.