We have a great program lined up for 2025. Let's welcome the following leading experts on digital trust.
In an era of digital transformation, where artificial intelligence (AI) is revolutionizing industries and people's daily lives, AI safety is of paramount importance to a trusted digital economy. This talk will discuss our approach to digital trust, the three pillars of trust technologies, and the underlying techniques for addressing key challenges such as data privacy, transparency, and accountability. It will also discuss current trends in AI safety and their interaction with data protection. Through case studies and emerging best practices, the talk will further present some of our work on privacy-preserving AI.
Professor Lam is the Associate Vice President (Strategy and Partnerships) and a Professor in the College of Computing and Data Science at Nanyang Technological University (NTU), Singapore. He is currently also the Executive Director of the Digital Trust Centre Singapore (DTC) and the Singapore AI Safety Institute, Director of the Strategic Centre for Research in Privacy-Preserving Technologies and Systems (SCRiPTS), and Director of NTU's SPIRIT Smart Nation Research Centre. From August 2020, Professor Lam was also on part-time secondment to INTERPOL as a Consultant in Cyber and New Technology Innovation. He served as Director of the Nanyang Technopreneurship Center from 2019 to 2022 and as Program Chair (Secure Community) of the Graduate College at NTU from 2017 to 2019. Professor Lam was a Professor at Tsinghua University, PR China (2002-2010), and has been a faculty member at the National University of Singapore and the University of London since 1990. He was a visiting scientist at the Isaac Newton Institute, University of Cambridge, and a visiting professor at the European Institute for Systems Security. In 2018, Professor Lam founded TAU Express Pte Ltd, an NTU start-up specializing in AI and data analytics technologies for smart-city applications. TAU is a spin-off of the Intelligent Case Retrieval System project, a collaboration between NTU and the Singapore Supreme Court. In 1997, he founded PrivyLink International Ltd, a spin-off company of the National University of Singapore specializing in e-security technologies for homeland security and financial systems. In 2012, he co-founded Soda Pte Ltd, which won the Most Innovative Start-Up Award at the RSA Conference 2015. In 1998, he received the Singapore Foundation Award from the Japanese Chamber of Commerce and Industry in recognition of his R&D achievements in information security in Singapore. Professor Lam received his B.Sc. (First Class Honours) from the University of London in 1987 and his Ph.D. from the University of Cambridge in 1990. His research interests include distributed systems, IoT security infrastructure and cyber-physical system security, distributed protocols for blockchain, biometric cryptography, quantum computing, homeland security, and cybersecurity. In 2020, he authored a technical report on the "Application of Quantum Computers for Law Enforcement and Security Communications" for a Singapore Government ministry. Professor Lam is the recipient of the 2022 Singapore Cybersecurity Hall of Fame Award.

Prof Elisa Bertino
Professor at Purdue University, USA
Website: https://www.cs.purdue.edu/people/faculty/bertino.html
Data today is more critical than ever: it not only supports the operational functions of organizations but is also a gold mine for strategic decisions and predictions via advanced machine learning techniques. Data is thus immensely valuable. However, despite more than 40 years of research on data security and privacy, data breaches keep increasing. Malicious actors use sophisticated attack strategies based on multiple steps and tools, such as infostealers and Trojans, and on detailed intelligence about organizations' applications, key individuals, and defenses. Recent attacks show that attackers now target software supply chains so as to easily gain access to large numbers of organizations. It is important to mention that current database technology, e.g., relational database management systems (DBMSs), provides excellent protection techniques for data stored in databases, including fine-grained content- and context-based access control, protection from database administrators, full database encryption, and always-encrypted databases, also leveraging hardware and software secure enclaves. And yet, we still see a lot of data breaches. One reason for this situation is that, even if the data is well protected by the DBMS, once it leaves the database to be handed to applications, which are often not secure, it is no longer protected. An example of such insecure applications is those prone to SQL injection. One would have expected applications to be free of such a long-known vulnerability by now. However, the number of SQL injection vulnerabilities registered as Common Vulnerabilities and Exposures (CVEs) remains high even in recent years. In the talk, after discussing relevant examples of data breaches to show some of the tactics, techniques, and procedures (TTPs) used by malicious actors to steal data, I will discuss the problem of insecure database applications and approaches to enhancing their security. I will then provide my own broader perspective on challenges and approaches to comprehensive data security, including the role of recent artificial intelligence techniques.
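To make the long-known vulnerability concrete, here is a minimal, self-contained sketch in Python using the standard library's sqlite3 module; the table, column, and function names are hypothetical and are not taken from the talk. The first query builds SQL by string concatenation and can be subverted by crafted input, while the parameterized version binds the input as data rather than SQL syntax.

```python
import sqlite3

# Hypothetical in-memory table used only for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_vulnerable(name: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text.
    # Passing name = "x' OR '1'='1" returns every row instead of none.
    query = "SELECT name, role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Safe: the driver binds the value as data, never as SQL syntax.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_vulnerable("x' OR '1'='1"))  # leaks all rows
print(find_user_safe("x' OR '1'='1"))        # returns []
```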
Elisa Bertino is a Samuel Conte Distinguished Professor of Computer Science at Purdue University. Prior to joining Purdue, she was a professor and department head at the Department of Computer Science and Communication of the University of Milan. She has been a visiting researcher at the IBM Research Laboratory in San Jose (now Almaden) and at Rutgers University, and has held visiting professor positions at the National University of Singapore and the Singapore Management University. Her recent research focuses on the security and privacy of cellular networks and IoT systems, and on edge analytics for security. Elisa Bertino is a Fellow of the IEEE, the ACM, and the AAAS. She received the 2002 IEEE Computer Society Technical Achievement Award for "outstanding contributions to database systems and database security and advanced data management systems", the 2005 IEEE Computer Society Tsutomu Kanai Award for "pioneering and innovative research contributions to secure distributed systems", the 2019-2020 ACM Athena Lecturer Award, and the 2021 IEEE Innovation in Societal Infrastructure Award. She is currently serving as ACM Vice President.
Touching Points of AI and Cryptography
AI is becoming (again) a central theme in computing, and, like any other new computing technology, its deployment raises security and privacy issues. The nature of AI is that it moves fast and changes the computing landscape in major ways, and industry is committed to using it in the future. AI certainly needs security to assure its success, and cryptography should be part of that security. Further, beyond technological deployment, AI raises some fundamental modeling questions, and understanding it is, at times, a challenge as well.
In this survey talk, I will cover some points where AI and cryptography touch, in practice and in theoretical work, discuss how the two areas influence each other, and examine where they can help or hurt each other.
Dr. Moti Yung is a Distinguished Research Scientist at Google and an adjunct senior research faculty member at Columbia University. He received his PhD from Columbia University in 1988. Previously, he was with IBM Research, Certco, RSA Laboratories, and Snap. Yung is a Fellow of the IEEE, the ACM, the International Association for Cryptologic Research (IACR), and the European Association for Theoretical Computer Science (EATCS). Among his awards are the IEEE-CS W. Wallace McDowell Award, the IEEE-CS Computer Pioneer Award, and the ACM SIGSAC Outstanding Innovation Award, as well as a number of test-of-time and industry awards. Yung is a member of the American Academy of Arts and Sciences.
Building Trust in AI: A Journey from True to Perceived Trustworthiness
We are currently experiencing a significant and widespread digital transformation across all sectors of our economy. This transformation is driven by the rapid emergence and adoption of critical technologies, particularly Artificial Intelligence (AI) and Machine Learning (ML). Research has shown that AI and ML systems can be vulnerable to various attacks, which raises significant concerns when these systems are used in critical services and infrastructures, posing threats to the national economy.
It is essential to develop AI and ML systems that can be trusted. Trustworthiness has two dimensions: true trustworthiness, which reflects a system's reliability, and perceived trustworthiness, which indicates how reliable a system appears to users. This presentation will explore the journey from true to perceived trustworthiness of AI and ML systems, highlighting the research challenges and initiatives undertaken at CSIRO’s Data61.
Dr Surya Nepal is a senior principal research scientist at CSIRO's Data61. He has been with CSIRO since 2000 and currently leads the cybersecurity and quantum systems research group, comprising over 100 staff and PhD students. His primary research focuses on developing and implementing technologies in distributed systems, with a specific emphasis on security, privacy, and trust. Dr Nepal has over 250 peer-reviewed publications to his credit. He currently serves as the interim Editor-in-Chief of IEEE Transactions on Services Computing and is a member of the ACM Transactions on Internet Technology editorial board. Additionally, Dr Nepal is a conjoint professor at UNSW. He is a Fellow of the IEEE.

Prof Carsten Maple
Professor at University of Warwick
Website: https://warwick.ac.uk/fac/sci/wmg/about/our-people/profile?wmgid=1102
Digital systems are integral to every part of modern life, supporting our work, travel, entertainment, and interaction with government and industry. These systems are woven tightly into national critical infrastructure, so building and assuring their trustworthiness is vital. In this talk we will explore the concept of trustworthiness and present some of the latest advances in building and assuring trustworthiness in digital systems, drawing on projects at the University of Warwick and the Alan Turing Institute.
Professor Carsten Maple is the Director of the NCSC-EPSRC Academic Centre of Excellence in Cyber Security Research and Professor of Cyber Systems Engineering at the University of Warwick. He is also Director for Research Innovation at EDGE-AI, the National Edge Artificial Intelligence Hub, a co-investigator of the PETRAS National Centre of Excellence for IoT Systems Cybersecurity, and a Fellow and Professor of the Alan Turing Institute, where he is a principal investigator on a $9 million project developing trustworthy digital infrastructure. Carsten is a co-investigator on the Framework for Responsible AI in Finance project, leading on security and privacy. He has an international research reputation, having published over 450 peer-reviewed papers. His research has attracted millions of pounds in funding and has been widely reported in the media. He has given evidence to government committees on issues of anonymity and child safety online. Additionally, he has advised the boards of public-sector and multibillion-pound private-sector organisations and is a member of two Royal Society working groups, including the working group on Privacy Enhancing Technologies.

Dr Shaanan Cohney
Senior Lecturer at University of Melbourne
Website: https://findanexpert.unimelb.edu.au/profile/640004-shaanan-cohney
Like many research communities, security researchers often focus inward, aiming to advance the state of the art in technology. However, the academic field has recently experienced a surge of interest in helping a broader swath of society. In this talk, I'll present some early results on trends in the community, along with some early thoughts on what security research could look like if it were built to shape public policy and help society at large.
Dr. Shaanan Cohney is a Senior Lecturer and Deputy Head of School (Academic) at the University of Melbourne's School of Computing and Information Systems, where he works at the nexus of computer systems, cryptography, and law. His accolades include Best Paper Awards at ACM CCS and ACM/IEEE ICSE, along with teaching awards including the CORE Award for Teaching (Early Career), the Edward Brown Award, and the Kelvin Medal. More can be found at https://cohney.info
Exploring Non-Intrusive Side Channels to Uncover Hidden Data Leakages from Mobile and IoT Devices
Over the past decade, mobile and IoT devices have undergone rapid development, introducing innovative technologies that make daily life more convenient. However, these emerging technologies also present novel attack surfaces, leading to covert data leakage. In this talk, I will discuss my team's recent work on exploring non-intrusive side channels to uncover and understand hidden data leakage from these attack surfaces on mobile and IoT devices. Specifically, we will demonstrate the ability to infer user activities through wireless and power side channels, and to reconstruct user fingerprints from in-display fingerprint sensors in smartphones via the electromagnetic (EM) side channel, without compromising either the hardware or software of the smartphones. Additionally, I will briefly outline potential mitigation strategies.
Dr. Qingchuan Zhao is an assistant professor in the Department of Computer Science at the City University of Hong Kong. Prior to joining the department in 2021, he completed his Ph.D. at Ohio State University in the same year. His research focuses on the security and privacy practices in the Android appified ecosystem. He employs both static and dynamic data flow analysis on mobile apps and delves into hardware side channels to uncover a variety of vulnerabilities, including privacy leakage, privilege escalation, and vulnerable access controls. His work has been granted bug bounties from industry-leading companies and has garnered significant media attention.
Safeguarding Privacy, Robustness and Intellectual Property of Machine Learning
The growing complexity of deep neural network models in modern application domains (e.g., vision and language) necessitates a complex training process that involves extensive data, sophisticated design, and substantial computation. These inherently encapsulate the intellectual property (IP) of data and model owners, highlighting the urgent need to protect privacy, ensure model robustness, and safeguard proprietary rights of model owners during development, deployment, and post-deployment stages. In this talk, we will present our recent research surrounding holistic strategies for privacy preservation, model robustness verification, and model usage control, addressing challenges across the entire model lifecycle. Our approaches aim to advance responsible AI practice by ensuring secure and ethical utilization of AI systems.
Guangdong Bai is an Associate Professor in the School of Electrical Engineering and Computer Science at the University of Queensland, Australia. His research spans responsible machine learning, security, and privacy. He is an Associate Editor of IEEE Transactions on Dependable and Secure Computing.
Meta Concerns in ML Security/Privacy
The success of deep learning in many application domains has been nothing short of dramatic. This has brought the spotlight onto security and privacy concerns with machine learning (ML), generating tremendous interest among researchers. In this talk, I will discuss two meta issues in ML security and privacy that merit greater attention from the research community. The first is the question of whether we are using the right adversary models. As a case study, I will use model ownership resolution (MOR) schemes, which are intended to deter "model theft." The second is the issue of conflicts that arise when protection mechanisms for multiple different threats need to be applied simultaneously to a given ML model.
N. Asokan is a professor of computer science and a David R. Cheriton Chair at the University of Waterloo, where he also serves as the executive director of the Cybersecurity and Privacy Institute (https://cpi.uwaterloo.ca). His research focuses on systems security. Asokan is a Fellow of the ACM, the IEEE, and the Royal Society of Canada. More information about his work is available on his website at https://asokan.org/asokan/ or via Bluesky and X/Twitter @nasokan
Encrypted Databases: Retrospective and Way Forward
The necessity of safeguarding important and sensitive data has been globally recognized, and there is an urgent call to keep sensitive data always encrypted so as to protect it at rest, in transit, and in use. Satisfying this demand is not easy, especially in the context of modern databases. The difficulty lies in how to perform database query processing over encrypted data while meeting the requirements of security, performance, and complex query functionality. In this talk, we will take a retrospective look at encrypted database research. The area has seen tremendous advances over the past decade, with proposals based on cryptographic designs and hardware enclaves. We will overview these latest advances and the remaining challenges, e.g., leakage-abuse attacks, and discuss a possible roadmap toward encrypted databases that are more secure, efficient, and functional in practice.
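As a toy illustration of one classic design point in this space (not of any specific system covered in the talk), the Python sketch below uses deterministic keyed tags so a server can answer equality queries over values it never sees in the clear; the key handling, index structure, and function names are illustrative assumptions. It also hints at why such schemes leak access and frequency patterns, which is exactly what leakage-abuse attacks exploit.

```python
import hmac
import hashlib
import secrets
from typing import Dict, List

KEY = secrets.token_bytes(32)  # held by the data owner, never by the server

def tag(value: str) -> bytes:
    # Deterministic keyed tag: equal plaintexts produce equal tags.
    return hmac.new(KEY, value.encode(), hashlib.sha256).digest()

# "Server-side" index mapping tags to record identifiers.
index: Dict[bytes, List[str]] = {}

def insert(record_id: str, value: str) -> None:
    index.setdefault(tag(value), []).append(record_id)

def query_equal(value: str) -> List[str]:
    # The client sends only tag(value); the server never learns the plaintext,
    # but it does learn which records match (access-pattern leakage).
    return index.get(tag(value), [])

insert("r1", "alice")
insert("r2", "bob")
insert("r3", "alice")
print(query_equal("alice"))  # ['r1', 'r3']
```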
Cong Wang is a Professor and Head of the Department of Computer Science at the College of Computing, City University of Hong Kong. His research encompasses data security and privacy, AI systems and security, and blockchain with decentralized applications. He has made prolific contributions to these fields, witnessed by 30,000+ citations on Google Scholar and multiple best paper awards, including the 2020 IEEE INFOCOM Test of Time Paper Award. He is an IEEE Fellow, an HK RGC Research Fellow, and a Founding Member of the Young Academy of Sciences of Hong Kong. He has served as the Editor-in-Chief for the IEEE Transactions on Dependable and Secure Computing, a premier security journal within the IEEE Computer Society. Additionally, he is a senior scientist at The Laboratory for AI-Powered Financial Technologies Limited (AIFT) and has been appointed by the Hong Kong Monetary Authority as a member of the Central Bank Digital Currency (CBDC) Expert Group.
Pseudorandom Correlation Generators: Secure Computation with Silent Preprocessing
Protocols for secure multi-party computation (MPC), dating back to the 1980s, allow multiple parties to jointly evaluate any efficiently computable function on their private inputs without revealing anything beyond the function's output. However, generic MPC protocols often incur significant computational and communication overhead, making them impractical for real-world use. To address this challenge, pseudorandom correlation generators (PCGs) enable MPC with silent preprocessing, where a communication-efficient, input-independent preprocessing phase is followed by a lightweight online phase. In this talk, I will introduce PCGs, discuss their applications, and provide an overview of recent advancements in the field.
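To give a flavor of what correlated preprocessing buys, here is a toy two-party multiplication over a prime field using a Beaver multiplication triple, written in Python. The modulus, variable names, and the trusted dealer that samples the triple are illustrative assumptions; a PCG's role (not shown here) would be to let the parties silently expand short seeds into many such triples with almost no communication.

```python
import secrets

P = 2**61 - 1  # a prime modulus (hypothetical choice for this sketch)

def share(v):
    """Additively secret-share v between two parties."""
    r = secrets.randbelow(P)
    return r, (v - r) % P

# --- preprocessing (input-independent): one multiplication triple a*b = c ---
a, b = secrets.randbelow(P), secrets.randbelow(P)
c = (a * b) % P
a0, a1 = share(a)
b0, b1 = share(b)
c0, c1 = share(c)

# --- online phase: parties hold shares of their private inputs x and y ---
x, y = 123456789, 987654321
x0, x1 = share(x)
y0, y1 = share(y)

# Each party reveals its masked input share; the opened values are safe
# to publish because a and b are uniformly random.
d = (x0 - a0 + x1 - a1) % P  # d = x - a
e = (y0 - b0 + y1 - b1) % P  # e = y - b

# Each party computes its share of x*y using only cheap local operations.
z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0 adds the public d*e term
z1 = (c1 + d * b1 + e * a1) % P

assert (z0 + z1) % P == (x * y) % P
print("shared product reconstructs correctly")
```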
Lisa Kohl is a tenured researcher in the CWI Cryptology group, Amsterdam. A special focus of her work lies in exploring new directions in secure computation with the goal of developing practical post-quantum secure protocols. Before coming to CWI, she worked as a postdoctoral researcher with Yuval Ishai at Technion. In 2019, she completed her PhD at Karlsruhe Institute of Technology under the supervision of Dennis Hofheinz. During her PhD, she spent eight months in the FACT center at Reichman University (IDC Herzliya) for a research visit with Elette Boyle.
Blockchain for Trust and Transparency: Transforming Agriculture and Education
Blockchain technology is redefining the foundations of trust and transparency across diverse sectors, including agriculture and education. In this talk, I will explore how blockchain solutions are addressing critical challenges in these fields, fostering integrity, efficiency, and accountability. Drawing from key Australian Government initiatives, this talk will delve into three transformative projects: (1) Agriculture Supply Chain: leveraging blockchain to enhance traceability and authenticity in agricultural supply chains, ensuring product integrity from farm to table. (2) Greenhouse Gas Monitoring: employing blockchain to track and verify agricultural greenhouse gas emissions, supporting sustainability and compliance with environmental standards. (3) Education Credentials: implementing blockchain to authenticate academic records and prevent credential fraud, safeguarding the value of education and empowering institutions globally.
By sharing insights from real-world deployments, this talk aims to inspire further innovation and collaboration, showcasing blockchain’s pivotal role in shaping trustworthy digital ecosystems for the future.
Joseph Liu is a Full Professor in the Faculty of Information Technology, Monash University, Melbourne, Australia. He received his PhD from the Chinese University of Hong Kong in 2004. His research areas include cybersecurity, blockchain, and applied cryptography. He has received more than 15,000 citations and has an H-index of 70, with more than 200 publications in top venues such as CRYPTO, ACM CCS, IEEE S&P, and NDSS. He has attracted more than US$10M in funding and is currently the Director of the Monash Blockchain Technology Centre. He received the prestigious ICT Researcher of the Year 2018 Award from the Australian Computer Society (ACS) and the 2021 IEEE Technical Achievement Award from the IEEE Technology and Engineering Management Society for his achievements in the blockchain and cybersecurity domains. Several patents and international standards arising from his research contributions have been adopted by industry.