07-12, 21:00–21:50 (US/Eastern), Tobin 201/202
Large language models are only as good as the data we feed into them. Unfortunately, we haven't quite dismantled racism, sexism, and all the other -isms just yet. AI isn't going away, so let's apply a harm reduction lens. Given the imperfect tools that we have, how can we write LLM prompts that are less likely to reflect our own biases? In this session, Tilde will review current literature about LLM prompting and social justice. They'll compare how different models perform in this context, since they're trained on different datasets. You'll leave with some ideas that you can apply as both users and builders of LLM applications, to iterate towards a more equitable world.
Tilde Thurium is a San Francisco-based artist, activist, and engineer. By day, they are a free and open source software advocate at deepset. They can probably deadlift more than you. Ask them how to paint an algorithm, about the intersections between mutual aid and biology, or which coast has the best vegan croissants.