AI Is Undermining Our Privacy. What Can We Do About It?
08-17, 17:00–17:50 (US/Eastern), Little Theatre

We’ve been grappling with evolving issues around online privacy for years now, but the recent surge in the use of AI, particularly LLMs (large language models), has quickly introduced new and sometimes alarming privacy concerns for both users and the people building AI experiences. This talk will look at six specific areas where AI is undermining privacy and discuss what, if anything, we can do about them.
The six areas to be discussed are: lack of transparency with data sharing, accidental exposure of personal data, reversing data anonymization, deceptive design patterns, AI listening in everywhere, and malicious misuse of AI.
To end on a constructive note, the talk will close with eight guidelines designers and developers can follow to keep privacy front and center when working with AI.

Robert Stribley is a user experience design professional with over 20 years of experience. He worked with dozens of brands at Razorfish and Publicis Sapient before recently starting his own small company, Technique. He teaches UX design at the School of Visual Arts and speaks regularly on user experience design and privacy by design. He has just completed a book for Rosenfeld Media on designing for privacy.

Past HOPEs: i ii iii iv vi vii viii ix xi xii xiii xiv xv