One of the major breakthroughs wasn’t just compute hardware, it was things like the “Attention Is All You Need” paper that spawned all the latest LLMs and multi-modal models (video generation, music generation, classification, sentiment analysis, etc.). So there has been an insane amount of improvement in the neural network architectures themselves (recurrent neural nets, LSTMs, convolutional neural nets, transformers). RNNs date back to 1972, and LSTMs only came out in 1997, come to find out.
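For context, the core idea in that paper, scaled dot-product attention, is only a few lines of math: every token scores its similarity against every other token, and those scores become weights over the values. Here’s a rough numpy sketch of it (toy shapes, single head, no masking, just to show the mechanism):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention from "Attention Is All You Need" (2017).

    Q, K, V: arrays of shape (seq_len, d_k) / (seq_len, d_v).
    Returns a weighted combination of the values V, where every
    position attends to every other position in one matrix multiply.
    """
    d_k = Q.shape[-1]
    # Similarity of each query with each key, scaled to keep softmax stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension turns raw scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings, attending to themselves.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (4, 8)
```

The real transformer stacks many of these (multi-head, with masking and feed-forward layers), but the key shift from RNNs is visible even here: nothing is sequential, it’s all parallel matrix multiplies, which is exactly what GPUs are good at.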
Around 2011-2012 is when we got good image recognition, with GPU-trained convolutional nets culminating in AlexNet winning ImageNet in 2012. Transformers started with the Attention paper in 2017. Now models are being used to improve the models themselves, so the singularity is heading our way pretty quickly.
What does that mean exactly? What does a post-singularity world actually look like? Every depiction of one I’ve ever seen assumes it’ll happen hundreds of years in the future, after all sorts of other technology has been invented.