Thomas Krendl Gilbert

AI Ethicist and co-host of The Retort.

Appears in 36 Episodes

The Retort's biggest AI stories of 2024

We're back! Tom and Nate catch up after the Thanksgiving holiday. Our main question was -- what were the biggest AI stories of the year? We touch on the core themes of...

The Nobel Albatross

Tom and Nate catch up on the happenings in AI. Of course, we're focused on the biggest awards available to us as esteemed scientists (or something close enough) -- the...

Claude Needs a Constitutional Convention

Tom and Nate catch up on recent events (before the OpenAI o1 release) and opportunities in transparency/policy. We recap the legendary scam of Matt from the IT department,...

Avoiding the AI burnout

Tom and Nate catch up on core themes of AI after a somewhat unintended summer break. We discuss the moral groundings and philosophy of what we're building, our travels...

What we are getting wrong about AI regulation

Tom and Nate catch up on the rapidly evolving (and political) space of AI regulation. We cover CA SB 1047, recent policing of data scraping, presidential appointees, a...

AI, feedback, and population public health

Tom and Nate revisit one of their old ideas -- AI through the lens of public health infrastructure, and especially alignment. Sorry about Tom's glitchy audio, I figure...

Apple sends a memo to the AGI faithful

Tom and Nate caught up last week (sorry for the editing delay) on the big two views of the AI future: Apple Intelligence and Situational Awareness (Nationalistic AI do...

Murky waters in AI policy

Tom and Nate catch up on many recent AI policy happenings. California's "anti open source" 1047 bill, the senate AI roadmap, Google's search snafu, OpenAI's normal ...

ChatGPT talks: diamond of the season or quite the scandal?

Tom and Nate discuss two major OpenAI happenings in the last week. The popular one, the chat assistant, and what it reveals about OpenAI's worldview. We pair this with...

Three pillars of AI power

Tom and Nate discuss the shifting power landscape in AI. They try to discern what is special about Silicon Valley's grasp on the ecosystem and what other types of powe...

Llama 3: Can't Compete with a Capuchin

Tom and Nate cover the state of the industry after Llama 3. Is Zuck the best storyteller in AI? Is he the best CEO? Are CEOs doing anything other than buying compute? ...

Into the AI Trough of Disillusionment

Tom and Nate catch up after a few weeks off the pod. We discuss what it means for the pace and size of open models to get bigger and bigger. In some ways, this disillu...

AI's Eras Tour: Performance, Trust, and Legitimacy

Tom and Nate catch up on the ridiculousness of Nvidia GTC, the lack of trust in AI, and some important taxonomies and politics around governing AI. Safety institutes, rewa...

Claude 3: Is Nathan too bought into the hype?

Tom and Nate sit down to discuss Claude 3 and some updates on what it means to be open. Not surprisingly, we get into debating some different views. We cover Dune 2's ...

Model release therapy session #1

This week Tom and Nate cover all the big topics from the big picture lens. Sora, Gemini 1.5's context length, Gemini's bias backlash, Gemma open models, it was a busy ...

Waymo vs. the time honored human experiences, vandalism and defacement

A metaphor episode! We are trying to figure out how much the Waymo incident is or is not about AI. We bring back our Berkeley roots and talk about traditions in the Bay ar...

We believe in the metaverse

... and you should too. We catch up this week on all things Apple Vision Pro and how these devices will intersect with AI. It really turned more into a commentary on t...

How to OLMost find a Waifu

Wow, one of our favorites. This week Tom and Nate have a lot to cover. We cover AI2's new OPEN large language models (OLMo) and all that means, the alchemical model me...

Tom's Story: to get through grad school, become a sperm whale

We recovered this episode from the depths of lost podcast recordings! We carry on and Tom tells the story of his wonderful sociology-turned-AI Ph.D. at Berkeley. This c...

Non-profits need to be businesses too

This week Tom and Nate catch up on two everlasting themes of ML: compute and evaluation. We chat about AI2, Zuck's GPUs, evaluation as procurement, NIST comments, negl...

How the US could lose (and win!) against China in AI, with Jordan Schneider of ChinaTalk

We're excited to bring you something special today! Our first cross over episode brings some fresh energy to the podcast. Tom and Nate are joined by Jordan Schneider o...

AI is literally the culture war, figuratively speaking

Tom and Nate are ready to kick off the year, but not too ready! There's a ton to be excited about this year, but we're already worried for some parts of it. In this ep...

What I wish someone had told me

The end of the year is upon us! Tom and Nate bring a reflective mood to the podcast along with some surprises that may be a delight. Here are some links for the loyal f...

Everyone wants fair benchmarks, but do you even lift?

No stone is left unturned on this episode. As the end of the year approaches, Tom and Nate check in on all the vibes of the machine learning world: torrents, faked dem...

Cybernetics, Feedback, and Reinventionism in CS

In this episode, Tom gives us a lesson on all things feedback, mostly where our scientific framings of it came from. Together, we link this to RLHF, our previous work ...

Q* and OpenAI's Strange Loop: We Pecan't Even

We break down all the recent events of AI, and live react to some of the news about OpenAI's new super-method, codenamed Q*. From CEOs to rogue AI's, no one can be tru...

OpenAI: Developers, Hegemons, and Origins

We cover all things OpenAI as they embrace their role as a consumer technology company with their first developer keynote. Lots of links: Dev. day keynote https://www.yo...

Executive Orders, Safety Summits, and Open Letters, Oh My!

We discuss all the big regulation steps in AI this week, from the Biden Administration's Executive Order to the UK AI Safety Summit. Links: Link the Executive Order, Link...

Transparency (Tom and Nate's Version)

This week, we dunk on the Foundation Model Transparency Index from Stanford's Center for Research on Foundation Models. Yes, the title is inspired by Taylor. Some links: The...

Techno-optimism: Safety, Ethics, or Fascism?

Tom and Nate sit down to discuss Marc Andreessen's Techno-Optimist Manifesto. A third wave of AI mindsets that squarely takes on both AI Safety and AI Ethics communiti...
