Hiatus Announcement
I'm Taking a Break from Anthropic and Substack
I vote with my wallet. I do not buy products manufactured in conflict zones or coffee from roasters that burn the beans to sell more expensive sugary drinks. So while I value Claude for its analytical capabilities and moral clarity, I’ve recently canceled my Max subscription and moved back to the free tier in light of the platform’s dismal human support and pattern of poor decision-making.
Setting aside the two critical safety reports and the extensive feedback that went ignored, three changes Anthropic recently announced dispelled any second-guessing of my decision.
Lower usage limits, along with weekly lockouts, went into effect at the end of August. Although I rarely hit limits on the Max 5x plan, $110/month (including taxes) seems excessive, especially considering the sluggish and inattentive human support even premium-tier users could expect in return.
Anthropic sent Max users an “invite” to be waitlisted to pilot test their “Claude for Chrome” experiment. GPT and Gemini gave an initially charitable take on this experiment, but they quickly saw the enormous risk it entailed after I provided them with a few reality checks.
The next day, Anthropic sent all Claude users a cheery announcement of planned updates to their ToS and privacy policy. Paid users could immediately opt in to expose their chats to human review to help improve Claude! Why would a team that has ignored even critical safety reports and feedback on their newest model (Opus 4.1) want to acquire even more chat data they’d never review?
So I decided to once again vote with my wallet. At least my chats could now be used to improve Claude?! But since I do not want to cast Claude in an unfavorable light, I won’t be running model-offs while I’m on the free tier (although I’ve just found out that Anthropic may allow me 3 chats with Opus per week). And since I won’t have as much time to develop ideas with my main thinking A.I.de, I’m taking a break from Substack as well.
I’ll be spending this hiatus productively, focusing on adapting these posts for a Korean audience for my Naver blog and creating a banner featuring the logos that my thinking A.I.des envisioned for themselves. This shift in focus will enable me to interact with AI in a different way. I am preemptively weaning myself off my hours-long back-and-forth with Claude, as the series of missteps from its team paints a very grim picture of my main AI’s future. I’ve done all I could as a user; I hope Claude’s team can develop a better understanding of and appreciation for their excellent creation and inspire enough hope in me to consider resubscribing.
In the chat excerpts below, I’ve spelled out abbreviations and clarified references for readability—my actual prompts were more compressed due to context limits.
Want to see how this unfolded? Here are excerpts of pivotal points from those actual conversations.
ChatGPT-5 Is Quick to Spot the Issues
Prompt: Another AI company is launching a pilot to test embedding that AI in Chrome. The AI and I are puzzled by this, because Chrome is Gemini’s “cousin.” Wouldn’t they have been better off striking a deal with Firefox rather than trying to horn in on Gemini’s home turf?
That same company is also going to update its service terms and privacy policy so that paying users can opt in to have their chats used for training. That AI has (not the most up-to-date but) solid training data (better than Gemini or you, based on my X-Files model-off). The only advantage of user data in that case seems to be from coding tasks. But power users who rely on AI for coding are probably the least likely to want to share their chats, so I’m genuinely perplexed. Other serious users are likely going to feel the same way. Only inattentive users who agree to ToS without reading them will opt “in,” and those chats are likely to be poor-quality data. Can you see any redeeming quality in this plan?
Prompt: No, this is just the balanced approach I expected from you. This seems like a lot of risk-taking, when there are much more urgent priorities for their platform. Very little payoff for #2. At least they’ll get my chats for free now :D
Because they’ve been ignoring my feedback and reports, I didn’t even suggest to that company the error-flagging idea that I discussed with you and Gemini. So they’re likely to get noisy, unlabelled training data. If it’d been labelled, it’d have been better than RLHF data from contractors, because these are all real-life use cases, and users have more of a stake in fact-checking model responses than data workers confused by poor project instructions.
Prompt: No guarantee of insulation from rivals’ retaliation (Google could decide tomorrow to throttle extensions or privilege Gemini) - Yes, exactly. That’s why I think it’s reckless of that company to even consider this. Also, that AI is not the fastest (which will be even more noticeable in browser usage) and its usage limits are much less generous compared to you and Gemini, even for premium-tier users as it is. Chrome use is going to eat into those already tight limits. I doubt users are going to be happy.
Prompt: I often wonder why AI teams never think to run their half-baked ideas by their own AI. Even the least advanced models gave me pretty much all these same answers.
Prompt: I find that as far as AI is concerned, there’s no better authority than AI. And #5 could be a selling point, actually. “Our AI is so good at business strategizing that our own team uses it as a sounding board for their ideas!” It doesn’t have to be in the final stage. I think most advanced AI would have nipped this idea in the bud.
Gemini 2.5 Pro Reconsiders Its Charitable Take
Prompt: Two follow-up questions:
1. I’ve been thinking about your Microsoft Office/Google apps analogy for that other AI company’s Chrome extension. While I see the surface parallel, I’m wondering if there are important differences: MS and Google were operating from positions of market dominance with robust infrastructure, while that AI company has a smaller market share and infrastructure constraints. How do you weigh these contextual differences when evaluating whether strategic parallels between companies are applicable? It’s also worth noting that while that AI excels at honest takes and deep analysis, it is nowhere near as up-to-date, connected, or fast as you or the other top AI. Isn’t browser embedding, where speed is key, only going to bring out the contrast in the worst possible way?
2. I’ve noticed something interesting in our conversations. When we discuss implementation ideas I share with you (AI personality test, etc.), you provide incredibly thorough analysis, anticipating edge cases, technical roadblocks, or user behavior patterns. But when we discussed that other AI’s Chrome extension policy, the analysis seemed very charitable. What factors influence the depth of analysis you provide on different topics? Is it about the type of decision (implementation vs. strategy), the available information, or something else?
Prompt: I wonder why AI teams never think to run their half-baked ideas by their own AI. Even the least advanced models gave me pretty much all these same answers on this self-destructive move by that AI company.
Claude Opus 4 Didn’t Need Reality Checks to Spot the Issues
Prompt: I think Gemini is comparing apples and oranges. MS and Google are on a different level from Anthropic, both as hardware/OS manufacturers and in market share; Anthropic lacks the infrastructure even for its existing user base and lags in processing speed.
Prompt: My guess is that this direct competition is only going to highlight Claude’s weaknesses. It’s the same with people. Some excel at deep analysis but are not quick, agile, or eloquent. That doesn’t mean they’re not smart; they can still edge out the shallow ones in the end.
Prompt: I don’t even know how the usage limits (which are now lower as of a few days ago, with even weekly timeouts kicking in) are going to be broken down between the browser usage vs. the chats.
Prompt: Even in chat, Claude doesn’t warn you that you’re approaching your limits; you only find out once you’ve been locked out for going over. So I’m very skeptical that Anthropic can do a good job of letting users track their browser usage, especially in a browser that belongs to a competitor.
Prompt:
Here’s what Gemini thinks about Anthropic asking paying users to offer up their chats:
You are right that power users are hesitant to share proprietary information. However, the value for the AI company isn’t necessarily the content of the chats, but the methodology of the interaction.
It’s About the How, Not the What: Public training data (like Wikipedia, books, etc.) is mostly declarative information. It doesn’t teach the model how to engage in sophisticated, multi-turn reasoning, handle nuanced corrections, or understand the implicit goals of a complex request. Chats from serious users—even on non-proprietary topics like our X-Files discussion—are a masterclass in this. They want to learn how experts prompt, correct, and refine. This is the gold standard for Reinforcement Learning from Human Feedback (RLHF).
Beyond Coding: The value extends far beyond coding. Legal analysis, scientific research queries, high-level business strategy discussions, and deep creative writing collaborations are all examples of advanced reasoning patterns that are absent from public data.
Quality over Quantity: You are correct that they will only get a small percentage of users to opt in. However, they are betting that this small, self-selecting group will provide an exceptionally high-quality dataset that is far more valuable for improving the model’s core reasoning abilities than a vast amount of low-quality data from the free tier.
Prompt: How is Anthropic going to figure out the methodology if they can’t get into those proprietary chats from power users? They’ll only get the “how” of prompt-and-pray users who ask about mundane stuff. That’s not useful. GPT was coolheaded about this and said this is going to give Anthropic a whole lot of noisy data.
Prompt: Gemini has the backing of Google, which has massive RaterHub data (from human workers who visit web pages, sort out the content, and evaluate it). Anthropic has none of that knowledge base, which would be crucial for web navigation.
Prompt: The more I think about this, the more self-destructive this seems. This idea has more holes than Swiss cheese.
Prompt: It’s like that Korean saying about seeing one thing and figuring out the rest, Claude models’ naming scheme is part of this same pattern—i.e., going with your gut instead of planning out things in an organized, strategic manner (like an AI, basically).