Latest Posts (2 found)

Gemini 2.5 Pro system prompt

After a disillusioning exchange with Gemini yesterday, and after user "spijdar" on Hacker News provided some insight into the system prompt, I got curious and dumped it myself. I'm not sure whether this is well-known info, but anyway, here it goes: I think it explains how something like my aforementioned conversation can happen very easily. (The block isn't even labeled explicitly; the poor model has to figure out on its own what it refers to. Did an engineer just do ? Or maybe those are some invisible tokens that the model knows but just can't regurgitate back?)


I caught Google Gemini using my data—and then covering it up

I asked Google Gemini a pretty basic developer question. The answer was unremarkable, apart from the conclusion mentioning that it knows I previously used a tool called Alembic. Cool, it's starting to remember things about me. Let's confirm: Ok, maybe not yet. However, clicking "Show thinking" for the above response is absolutely wild. I know about the "Personal Context" feature now — it's great. But why is Gemini instructed not to divulge its existence? And why does it decide to lie to cover up violating its own privacy policies? I'm starting to believe that "maximally truth-seeking" might indeed be the right north star for AI.
