Sister was messing around with a UV light and noticed this on her phone screen under it. My phone does not have this (all I get is a grid of dots I’m pretty sure are to do with the touch screen).
All glowing unknowns under a UV light are considered Cum unless proven otherwise
Can confirm this is cum
Is making sure the internal Cum is perfectly shaped a desirable job at the factory, I wonder?
Not sure what it might be but shape and position match those of the wireless charging coil.
Wouldn’t that be on the back side of the phone though? I suspect the battery would be between the coil and the display, making it impossible to see from this side under any kind of light.
That said, I dunno WTF we’re seeing.
That wouldn’t be behind the screen nor rectangular
Charging coils can absolutely be rectangular.
Google Gemini says:
The oval shape you’re seeing on your Pixel 8’s screen when exposed to UV light is likely due to the adhesive used within the phone’s display assembly. Here’s why:
- UV-reactive adhesive: Many modern phones use adhesives that contain fluorescent materials. These materials glow when exposed to UV light.
- Display layers: The adhesive is typically used to bond the various layers of the display (such as the glass, touch sensor, and LCD/OLED panel) together.
- Visible pattern: The pattern of the adhesive can sometimes be seen as an oval shape or other pattern when viewed under UV light.

Important Note: While this is a common occurrence and generally harmless, it’s always a good idea to avoid prolonged exposure of your phone to UV light. Excessive UV exposure can potentially damage the phone’s screen or other components over time. Let me know if you have any other questions.
Why are you getting downvoted? Because it’s an LLM response? There are others here that have suggested some of the same as this LLM response and they’re not being downvoted, so is it just the “AI is bad!!1!!!1” reaction?
(Serious question)
You can never trust a factual response from an LLM. Plain and simple. It’ll answer with confidence whether the information it comes up with is true or false.
Commenters presenting its answer as fact is not helping a discussion based on finding the answer.
No, you shouldn’t blindly trust whatever a chat bot outputs. You have to set your expectations correctly with an LLM. You have to learn and practice how to prompt to make the best of the utility of an LLM.
Understanding that an LLM is best at sorting data is the first step. A simple example is my use case from the other day: I was making a table for my company’s 2025 holiday schedule. We base our holidays on our local union holiday schedule. Currently, the union has the 2024 schedule posted on its webpage. I took a screenshot of the schedule which was listed as
Holiday     Date         Day
Christmas   December 25  Wednesday
And so on for the 10 or so days.
I uploaded the screenshot JPG and asked ChatGPT to format the list in the JPG as a table. It quickly gave me a nicely formatted text table of the 2024 holiday schedule from the image’s data. I then asked it to update the table data with the 2025 dates and days, and it did so easily. I verified the days were correct - they were - then copied the table onto my Word letterhead and posted it to our SharePoint site. It was very useful - a simple example.
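That verification step (checking that the LLM’s claimed weekdays actually match the 2025 calendar) is the part you can do mechanically instead of by eye. A minimal sketch, using Python’s standard `datetime` module and a couple of assumed holiday rows for illustration:

```python
from datetime import date

# Hypothetical subset of an LLM-generated 2025 schedule: (holiday, date, weekday it claimed)
holidays_2025 = [
    ("New Year's Day", date(2025, 1, 1), "Wednesday"),
    ("Christmas", date(2025, 12, 25), "Thursday"),
]

for name, d, claimed_day in holidays_2025:
    actual_day = d.strftime("%A")  # weekday from the real calendar
    # Flag any row where the LLM's claimed weekday doesn't match
    status = "OK" if actual_day == claimed_day else f"MISMATCH (actually {actual_day})"
    print(f"{name}: {d.isoformat()} {claimed_day} -> {status}")
```

The point isn’t the ten lines of code; it’s that “trust but verify” is cheap when the LLM’s output is structured data you can check against a ground truth.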
You need to take everything with a grain of salt when it comes to LLMs and really understand what the LLM is and how it works. Set your expectations correctly and it can be a very powerful utility.
It’s unfortunate that folks just rage out at the sight of LLMs, maybe because they had a bad experience themselves. I think people want it to be a Jarvis and it’s just not that. It feels like you can just talk to it and it’ll understand and give you the right answer, but it won’t. It has to reply with whatever it computes as the most likely answer: “which words should I output that are most likely what the user wants to see?” This is why most output sounds like it’s “fact”. But it doesn’t know fact, only how to sort data.
So, yes, you should never blindly trust an LLM output, but you can practice how to prompt, and really ask yourself what do I need from my unsorted data that I’m feeding this chat bot? Am I giving it enough data to sort through? Because if you don’t prompt with enough data it will fill in the blanks as best it can and that may result in something totally different than what you expected.
tl;dr
FWIW, I never said LLMs were useless. I just said you can’t trust their output. Go ahead and use one to narrow down your search for the facts, but if you cite it as fact I’m going to downvote you.
For me it’s not “AI bad” it’s a matter of coming here to talk to other people and to read what other people are saying.
If I wanted to hear what ChatGPT or Claude thinks, I would go ask them.
I’m fine with “I checked with an LLM and it suggested screen adhesive,” I guess - things describing one’s own experience. But just dumping raw output… sigh.