Sir, this is a Wendy’s. You personally attacking me doesn’t change the fact that AI is still not inevitable. The bubble is already deflating, and the public has started to grow indifferent, even annoyed by it. Some places are already banning AI for a myriad of reasons, one of them being how insecure it is to feed sensitive data to a black box. I’ve used AI heavily and have read the papers. LLMs are cool tech; machine learning is cool tech. They are not the brain-rotted marketing that capitalists have been spewing like madmen. My workplace experimented with LLMs, and management decided to ban them: they are insecure, they are awfully expensive and resource-intensive, and they were making people less efficient at their work. If it works for you, cool, keep doing your thing. But that doesn’t mean it works for everyone; no tech is inevitable.
I’m also annoyed by how in-your-face it has been, but that’s just how marketing teams have ridden the hype train. I sure do hope it wanes, because I’m just as sick of the “ASI” psychos. It’s just a tool. A novel one, but a tool nonetheless.
What do you mean “black box”? If you mean [INSERT CLOUD LLM PROVIDER HERE], then yes. So don’t feed sensitive data into it. It shouldn’t be in your codebase anyway.
Or run your own LLMs
Or run a proxy to sanitize the data locally on its way to a cloud provider
There are options, but it’s really cutting edge so I don’t blame most orgs for not having the appetite. The industry and surrounding markets need to mature still, but it’s starting.
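To make the "sanitize locally" option concrete, here's a minimal sketch of the idea: scrub anything secret-looking out of a prompt before it ever leaves your network for a cloud provider. The patterns and placeholders are illustrative assumptions, not a vetted DLP ruleset; a real deployment would sit in a proxy in front of the provider's API and use far more robust detection.

```python
import re

# Illustrative redaction patterns (assumptions, not a complete ruleset):
# anything matching gets replaced with a placeholder before the prompt
# is forwarded to a cloud LLM provider.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),               # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),                 # card-like digit runs
    (re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b"), "[KEY]"),  # API-key-ish tokens
]

def sanitize(prompt: str) -> str:
    """Replace sensitive-looking substrings before forwarding upstream."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@corp.example and use sk-abcdef0123456789abcd to auth."
    print(sanitize(raw))
```

The design choice here is the same one a proxy makes: redaction happens on hardware you control, so the cloud provider only ever sees placeholders, and the mapping from placeholder back to the real value (if you need one) never leaves your side.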
Models are getting smaller and more capable, to the point of running on consumer CPUs in some cases. They aren’t the genius chatbots the marketing dept wants to sell you. They won’t mop your floors or take your kid to soccer practice, but applications can be built on top of them that produce impressive results. And we’re still so, so early in this new tech. It exploded out of nowhere, but the climb has been slower since, and AI companies are starting to shift toward using the tool within new products instead of just dumping it into a chat window.
I’m not saying jump in with both feet, but don’t bury your head in the sand either. So many people are reflexively against AI without bothering to be curious. I’m not saying it’ll be existential, but it’s not going away. I’m going to make sure my family and I are prepared for it, which means keeping myself informed and keeping my skillset relevant.
We had a custom-made model running in a data center behind a proxy and encrypted connections. It was atrocious: no one ever knew what it was going to do, it spewed hallucinations like crazy, it was awfully expensive, it produced nothing of use, it refused to answer shit it was trained to do, and it randomly leaked sensitive data to the wrong users. It was not going to assist, much less replace, any of us, not even in the next decade. Instead of falling for the sunk-cost fallacy like most big corpos, we shut it down, told the vendor to erase the whole thing, wrote off the costs as R&D, and kept doing our thing. Due to the nature of our sector, we are the biggest player, and no competitor, no matter how advanced the AI they use, will ever get close to touching us. But then again, due to our sector, it doesn’t matter. Turns out AI is a hindrance, not an asset, to us; such is life.