

Stoic desire to be informed and to be a force of good for others with like intentions
No problem, can do
Thank you
None of this was true of Copilot for years, but I stand corrected regarding the current state of affairs
GitHub Copilot is not ChatGPT
I stand corrected, thank you for sharing
I was commenting based on anecdotal experience, and I didn’t know there was a test specifically for this
I do notice that o3 is more overconfident and tends to find a source from some online forum and treat it as gospel
Which, while not correct, I would not treat as hallucination
That could be true, please enlighten me
This isn’t really true as far as I’m aware
I’m not familiar with the term
🆗
What makes this a bullshit take? Focusing attention on actual problems is a great way to make progress
Please articulate why the premise of my argument is fundamentally flawed
No, that is not how reasoned debate works: you have to articulate your argument, or else you’re just sloppily babbling talking points
I think I’m on board with arguing against how LLMs are being owned and managed, so I don’t really have much to say
Basically no. What you’re calling tailored AI is actually low-cost AI. You’ll be hard-pressed, on the other hand, to get ChatGPT o3 to hallucinate at all
What exactly is the argument?
I don’t understand the nature of your question
I don’t think this answers the question
What would Altman gain from overstating the environmental impact of his own company?
You should consider the possibility that CEOs of big companies essentially always think very hard about how they talk about everything, so that it always benefits them
I can see the benefits, I can try to explain if you’re actually interested
I actually think that (presently) self-hosted LLMs are much worse for hallucination
Ahh yes, the random Rolling Stone article that refutes the point
Let’s revisit the list, shall we?