Pod-Alization: Second Season Of "Stuck With Damon Young"; "Hard Fork" Examines Google's AI Disaster

Stuck With Damon Young releases trailer ahead of second season premiere

There are so many people talking on podcasts these days. Almost all the time, these conversations are characterized or marketed as being culturally relevant and intellectually stimulating. All too often, this dialogue is feckless and frivolous babbling about future tattoos, "a personal search for meaning," and going green by purchasing a Tesla Roadster.

Thankfully for listeners and for the culture, Damon Young doesn't do superficial in his podcast, Stuck With Damon Young.

Last week, Spotify and Gimlet announced that their podcast, Stuck With Damon Young, will return for its second season on February 16, 2023.

The trailer for season two is available, and you can check it out HERE.

If you don't know, Damon Young is a writer, critic, and satirist whose debut memoir What Doesn’t Kill You Makes You Blacker: A Memoir In Essays won the 2020 Thurber Prize for American Humor and Barnes & Noble's 2019 Discover Award.

In season two, Young returns with more off-the-cuff conversations and roundups of Damon-approved, listener-submitted questions. He will be joined by special guests Kiese Laymon, Roy Wood Jr., Elaine Welteroth, Nikole Hannah-Jones, and others. With new episodes dropping each Thursday, listeners can explore the uncomfortable, hideous, and hilarious absurdity of human behavior.

NYT's Hard Fork podcast examines why Google's response to Bing was such a disaster

In the most recent episode of The New York Times's podcast, Hard Fork, hosts Kevin Roose and Casey Newton speak with OpenAI CEO Sam Altman and Microsoft CTO Kevin Scott about why Microsoft's release of a ChatGPT-powered Bing signifies a new era in search.

Then, they discuss how a disastrous preview of Bard — Google's answer to ChatGPT — caused the company's stock to slide seven percent.

The full transcript of the episode is available here, with highlights below.

Kevin Roose: Sam, there are people, including some at OpenAI, who are worried about the pace of all of this deployment of AI into tools that are used by billions of people, people who worry that maybe it's going too fast, that corners may be getting cut, that some of the safety work is not as robust as maybe it should be. So what do you say to those people who worry that this is all going too fast for society to adjust or for the necessary guardrails to be put in?

Sam Altman: I also share a concern about the speed of this and the pace. You know, we make a lot of decisions to hold things back, slow them down. You know, you can believe whatever you want or not believe about rumors, but, you know, maybe we've had some powerful models ready for a long time that, for these reasons, we have not yet released. But I feel two things very strongly.

Number one, everyone has got to have the opportunity to understand these tools. The pluses and minuses, the upsides and downsides, how they're going to be used, decide what this future looks like, co-create it together. And the idea that this technology should be kept in like a narrow slice of the tech industry because those are the people who we can trust, and the other people just aren't ready for it — you hear different versions of this in corners of the community, but I like I deeply reject that. That is like not a world that I think we should be excited about. And given how strongly I believe this is going to change many, maybe the great majority of aspects of society, people need to be included early and they need to see it, you know, imperfections and all as we get better, and participate in the conversation about where it should go, what we should change, what we should improve, what we shouldn't do. And keeping it hidden in a lab bench and only showing it to like, you know, the people that like, we think are ready for it or whatever, that feels wrong.

The second is, in all the history of technology I have seen, you cannot predict all the wonderful things that will happen and the misuse without contact with reality. And so by deploying these systems and by learning and by, you know, getting the human feedback to improve, we have made models that are much, much better. And what I hope is that everything we deploy gets to a higher and higher level of alignment. We are not — at Microsoft and OpenAI — we are not the companies that are rushing these things out. We've been working on this and studying this for years and years, and we have, I think, a very responsible approach. But we do believe society has got to be brought into the tent early. [...]

Casey Newton: We've been hitting you pretty hard on the safety and responsibility questions, but I wonder if you want to sketch out a little bit more of a utopian vision here for once you get this stuff into the hands of hundreds of millions of people and this does become part of their everyday life. What is the brighter future that you're hoping to see this stuff create?

Sam Altman: I think Kevin and I both very deeply believe that if you give people better tools, if you make them more creative, if you help them think better, faster, be able to do more, like build technology that extends human will, people will change the world in unbelievably positive ways, and there will be a big handful of advanced A.I. efforts in the world. [...] And that tool, [...] will be as big of a deal as any of the great technological revolutions that have come before it in terms of what it means for enabling human potential and the economic empowerment, the kind of creative and fulfillment empowerment that will happen. I think it's going to be jaw-droppingly positive. We could hit a wall in the technology, you know, don't want to promise we've got everything figured out. We certainly don't. But the trajectory looks really good.

Kevin Scott: And the trajectory is towards more accessibility. Like the thing that I come to over and over again is the first machine learning code that I wrote 20 years ago, took, you know, a graduate degree and a bunch of grad textbooks and a bunch of research papers and six months worth of work. And like that same effect that I produced back then, a motivated high school student could do in a few hours on a weekend. And so, like, the tools are putting more power in the hands of people. 

Check out Hard Fork here.

FYI: This article was written by a real person, not an AI.
