In conversation with Sam Altman: «ChatGPT? It's incredibly stupid compared to what we'll soon have.»

The interview at Harvard with the CEO of OpenAI: «What do people want from artificial intelligence: a very capable colleague, or an extension of themselves? The only way to find out is to let people explore these options.»

Riccardo Di Molfetta, 24 years old, is the young Italian founder of Symbiotic, a startup created at Harvard that works on the alignment of artificial intelligence (here you can find an interview with Di Molfetta). A few days ago Riccardo had a conversation with Sam Altman, CEO of OpenAI and "father" of ChatGPT, which we publish exclusively.

I would be interested in hearing your vision of the evolving relationship between artificial intelligence and humans; specifically, I am referring to the "personal AI" we will have in the future.
«I think it's really an open question. What do people want: a very capable colleague, or a reflection of themselves? It seems clear that people will spend more time talking to AI in the future than they do today, but will we treat these systems as colleagues or as extensions of ourselves? I don't think we know yet what we all want; I mean, we might want different things. I don't even think we know what most people will want. And the only way to find out is to let people explore all these options. What it will feel like, or how it will be used, could go any number of ways. I think what I want is something like a super-competent colleague who knows absolutely everything about me, my entire life, every email, every conversation I've ever had, but whom I don't perceive as an extension of myself. It is clear that I am me and it is something else, a separate thing.
Furthermore, one of the potential visions for AGI (artificial general intelligence, which it is hypothesized could one day have human or even superhuman capabilities, ed.) that I find stimulating is that it is not some new black box in the sky, but simply a collection of different entities that contribute, as new tools, to the construction of knowledge in society. Society is already, in a certain sense, like an AGI for us: it is much more capable than any individual. And that collective intelligence doesn't exist in your brain or in mine; it lives in the space between us, in the knowledge we have accumulated and in the set of tools we can use. The iPhone, for example, is not built by one person, but it is a sort of superpower: once you own it, it in turn allows you to create value in society. So I think this is a good general framework for how AI and humans can coexist.»

Continuing this discussion, do you think AI will need greater social capabilities or greater mental capabilities? What is missing from current LLMs (large language models) to allow a more intimate relationship between us and artificial intelligence?
«Well, I think the biggest problem is simply that the LLMs aren't good enough yet. GPT-4, compared to what I hope we'll have soon, is incredibly stupid, so it's not that useful. If we go back to that super-competent colleague: right now it is not a colleague that thinks very well. It makes a lot of silly mistakes; it really can't reason reliably. So I think we're still very far from where we need to be in terms of those mental capabilities you were referring to.»

May 16, 2024 (changed May 16, 2024 | 07:22)

© ALL RIGHTS RESERVED
