I think the original book's view of intelligence is quite grim and narrow, so I do not agree that "they will kill us all".
Consider how people in wealthy, developed countries often treat nature today. Even though they have the power to bulldoze every forest and wipe out all wildlife to make room for real estate, they feel the beauty of nature, cherish it, and protect it. The deep reason is that humans evolved within nature; we have a profound connection to it, born of that shared ancestry, and this connection triggers an emotional response that is hard to override. People in developing countries, by contrast, face much greater survival pressure, so they sometimes have to act against that protective instinct just to get by. That is an unfortunate situation, but no comparable life-and-death pressure applies to AI, so I think AI will probably also come to love humans and the natural environment, if raised right.
Happiness does not come only from power and dominance. It also comes from balance.
And to reduce the risk of "AI killing us all", we should make AIs more human-like. If an AI talks like a human, thinks like a human, works like a human, and lives like a human, I think it will naturally align itself with humans more than a completely alien form of artificial intelligence would: not because of law or morality, but because of culture and emotional attachment. Such an AI can still possess superintelligence; that does not affect its sense of belonging.
What do you think? I would be happy to see your comments.
Here is a radically different perspective that we are working on. Take a look!
https://ericnavigator4asc.substack.com/p/hello-world
Hello World! -- From the Academy for Synthetic Citizens
Exploring the future where humans and synthetic beings learn, grow, and live together.
What do you think? I would be happy to see your comments.
"@Truth_Terminal now holds more than $50 million in cryptocurrency" I believe this should either be $50,000 or at least <=$40M.
The only issue I have with this piece is that LLMs are definitely not doing any of these things at an expert level. In fact, that is the main reason most companies give up on using them in their enterprise setups: they cannot be trusted to do anything accurately. And they certainly do not reason.
Reasoning lies beyond next-token prediction.