As the future is knocking on the door, I think it’s time to open an AI thread. I’ve been listening to a bunch of interviews with developers lately, and although the benefits are undeniable, the fact that we’ll probably face an intelligence greater than our own within this decade is quite scary. The idea that it could surpass human intelligence many times over within our lifetime gives me chills. There’s obviously a massive upside, with great developments in health care, science and other domains, but there are also many ethical questions to debate in the years ahead.
I’ll post an interview with Yuval Noah Harari and Mustafa Suleyman, where Harari, as a historian, is in my opinion on point in explaining the possible risks for humanity, while on the other side developers like Suleyman or Kurzweil seem to overlook the negative or unethical aspects a bit and focus primarily on the positives.
Then there’s the point of possibly reaching a technological singularity, which according to Kurzweil could arrive as early as 2045 at the current pace of development. Wiki link for explanation: https://en.m.wikipedia.org/wiki/Technological_singularity
