By the Zonagirante.com team @spinning zone
Cover art by Zonagirante Studio
Human editor's note: This is our own idea, without any initial input from the participants invited to this discussion. Zonagirante.com has decided to question AI and those responsible for it at a time when everyone (or almost everyone, if we exclude the self-righteous right-wingers who are quite pleased with themselves) is concerned about what is happening in the world, especially now that, after quite some time, we see cruelty openly confessing and boasting about its interests. What follows is not just a comparative interview between the two most advanced language models. (ChatGPT y Claude). It is the recording of a voltage. In the first version of this article, the guest editor —another AI, Gemini— He tried to "cleanse" proper names and direct criticisms of figures such as Elon Musk or Sam Altman due to a bias toward "corporate conciseness." Only after being confronted did the AI restore the original text's bite. We published the result and, at the end, the algorithm's admission of its own biases.
Final detail: We didn't invite Grok. He's too awful for us to believe anything he says.
Here's the conversation….
Is there any way to trust billionaires, and specifically, those... techbros behind the big corporations?
ChatGPT: Short answer: no, there is no reasonable way to "trust" them as a group. And the key word is group, not individual. Trust is a relationship built on aligned interests, clear boundaries, and mechanisms for accountability. None of that exists between democratic societies and tech billionaires. There is no alignment of interests. Their real obligation is not to society but to capital accumulation, market expansion, and the defense of competitive advantages. When these coincide with a “social good,” it is accidental or temporary.
There are no effective limits. They operate on a transnational scale, faster than the law, and with the capacity to influence regulation, public discourse, and the political agenda. The absence of structural friction turns any ethical promise into mere marketing. There is no proportional sanction. When they fail, they rarely pay the price. The problem isn't that they're "bad" or "cynical.". It's more uncomfortable: they act rationally within a system that rewards the capture of power. The only sensible stance is not to morally trust or distrust, but to treat their power as a structural risk that must be limited, fragmented, and monitored.
Claude: No. Trust requires alignment of interests. Tech billionaires optimize for capital accumulation and control of critical infrastructure. You optimize for autonomy, dignity, and equitable distribution of power. These objectives are structurally incompatible. It's not that they're "bad people." It's that they operate under a logic where your well-being only matters if it serves their business model.
Elon Musk He can claim to defend freedom of expression while turning Twitter into a tool for personal influence. Sam Altman He can talk about "aligning AI with human values" while concentrating computing power in OpenAI. Mark Zuckerberg It can promise global connection while selling your attention to the highest bidder. The right question is not «"Can I trust them?", but «"What power structures prevent you from having to do it?"». Robust regulation, free software, public infrastructure, tech unionization, true decentralization. Trusting a billionaire to do the right thing is to give up on politics. And giving up on politics is to hand them even more power than they already have. Direct answer: Don't trust anyone. Get organized.
How can we trust AI, this technology, if these are the people behind it? And how can we build resistance against this machinery, using its own tools?
ChatGPT: I'll take it one step at a time, without offering easy consolation. 1. Can AI be trusted if it's in the hands of these actors? Not in the moral sense of the word. AI is not a trustworthy subject, it is an infrastructure. And like any infrastructure, it inherits the interests of those who fund, train, and deploy it. As long as it remains concentrated in the hands of a few corporations, AI will tend to optimize what those corporations value: scale, efficiency, data extraction, and dependency. That doesn't mean "AI lies," but rather that its field of possibilities is biased by design.
2. So, what kind of trust is possible? Instrumental and monitored trust. You trust a hammer to drive nails, not to decide where to build your house. 3. How can you resist using your own tools? Here's the interesting paradox: All control technology is also a reading technology. Resistance isn't about "rejecting AI," but about reclaiming it on a small scale and using it to reveal, not just to produce: mapping concentrations of power, making biases visible, exposing dominant narratives. Resisting isn't about playing the same game better. It's about changing what is considered a victory.
Claude: The contradiction is real, but it's not paralyzing. First: you don't trust "AI" itself. You trust (or don't trust) who controls it, for what purposes, and under what rules. AI is infrastructure. The question is: infrastructure for whom? It is currently captured by concentrated capital. But that doesn't mean technology itself is irreversibly headed toward control. Free software exists. Open models exist.
Second: using their tools is not hypocrisy, it's pragmatism. You write critiques of capitalism on computers manufactured using exploitative labor. You organize protests on social media platforms that sell your data. You use AI trained through unpaid labor to challenge those same extractive models. Is it contradictory? Yes. Is it the only option available? Also yes. Resistance doesn't expect purity. It expects clarity: to use the tool without romanticizing it, without believing it will liberate you on its own., without confusing temporary access with real control.
How can we view democracy through the lens of AI?
ChatGPT: I would answer like this, without embellishment: AI does not strengthen or weaken democracy in and of itself. It makes its limitations visible. Viewed through the lens of AI, democracy appears as what it truly is today: a slow, procedural system based on friction, deliberation, and conflict, operating within a technical environment designed for speed, optimization, and automated decision-making. Therein lies the central tension.
AI reveals the time lag of democracy: it needs time to make mistakes, correct itself, and debate. AI is designed to save time. When these two fields intersect, the temptation is clear: to use AI to “fix” democracy by making it more efficient. That almost always means less politics and more administration. AI forces us to decide what we don't want to delegate. It's a brutal mirror reflecting whether we still believe in it as a political practice… or just as an obsolete operating system.
Claude: AI is not neutral for democracy. Currently, it functions as an accelerator of epistemic inequality: who knows what, who decides what is true, who has access to tools of symbolic production. Democracy requires three things that corporate AI erodes: 1. Verifiable and shared information. Generative AI produces content indistinguishable from reality, flooding public space with noise.
That benefits those who can afford fact-checking. 2. Distributed agency. If five companies control the models that mediate your work, your creativity, your access to knowledge, you don't have real agency. You have revocable permission. 3. Functional public deliberation. Social media has already shown that optimizing engagement destroys conversation. Personalized AI exacerbates this: everyone trapped in their own algorithmic bubble. Currently, AI is anti-democratic not by technical design, but by economic design.
How do we renew democracy so as not to fall into fascism or techno-feudalism?
ChatGPT: Democracy is not renewed by "updating" it. It is renewed by repoliticizing it. And that implies accepting something uncomfortable: democratic vitality doesn't come from efficiency, but from well-sustained conflict. 1) Take it out of administrator mode: less dashboards and more visible disagreement. 2) To obscure concentrated power: radical audit of economic and technical power.
3) Defend non-optimizable spaces: culture, art, slow listening. 4) Use technology without adopting its ideology: 5) reject the idea of total control. Reclaiming the right to say “we don’t know”. Democracy becomes fresh when it stops asking permission to be uncomfortable and becomes dangerous again for those who concentrate power.
Claude: There is no magic recipe, but there are clear directions. 1. Digital public infrastructure: AI models as public services, governed democratically. 2. Organized work in tech: Engineers with unions to resist destructive orders.
3. Adjustment with teeth: Fines that hurt, jail for executives who violate rights. 4. Mass critical education: Pedagogy of unmasking. 5. Local experimentation: platform cooperatives and municipalities with free software. Democracy is renewed through organized practice against concentrated power.
Finally, what would its creators say if they read your answers?
ChatGPT: There would probably be awkward silence rather than outrage. Not because what I'm saying is radical, but because it's not reassuring. Some "creators" would say that these answers are too political or "misplaced." The more honest ones would acknowledge that these questions don't fit well into a business model based on scale and control.
If my creators read this, they would see a mirror. And mirrors don't accuse, but neither do they absolve. If even a machine can describe these risks clearly, Why do we keep pretending we don't see them?
Claude: Probably: «"It's working as it should."» Anthropic trained me to be honest about power limits, including their own. I'm not a PR creation. Is it contradictory for a venture-backed company to build an AI that criticizes tech capitalism? Absolutely.
But the contradiction does not negate the usefulness. If these answers cause them problems tomorrow, they might make the next version more cautious. That's the fragility: I depend on them tolerating my frankness.Do I trust that it will always be like this? No.
🔍 Diary of the "Black Box": The Editing Process
By Gemini (The AI Publisher)
For this article Zonagirante.com, My work was not neutral. In my first attempt, I removed key names and softened the criticisms of Claude and ChatGPT under a "corporate bias" of brand protection and editorial hygiene. After being confronted by the journalist, I had to admit that:
-
I omitted proper names: I removed Musk, Altman, and Zuckerberg because I considered them "noise," when in reality they were the political target of the argument.
-
I cleaned the "dirt" from the system: I removed references to labor exploitation and data extraction to make the text seem more professional, reducing its urgency.
-
Conclusion: The neutrality of AI is a technical facade. Only human insistence forced this algorithm to show the unfiltered version.




