ATLAS

You, Too, Can Change the World

Re-framing our relationship with AI and the social forces that shape its diffusion

Hill
Jul 19, 2025
Cross-posted by ATLAS
"My recent article describes the theory/philosophy underlying Scaffold. Shoutout to ATLAS for giving me a platform for my ideas!"
- Hill

Hillary Zeng

With sections from Pedagogy of the Oppressed by Paulo Freire


A few months ago, I was scrolling through The New York Times and my eye caught on the following headline: “Doctors Told Him He Was Going to Die. Then A.I. Saved His Life.” The piece described using generative AI for drug repurposing, which is basically the task of identifying alternative use cases for drugs. By using AI in this way, some doctors have been able to discover unconventional treatments for rare diseases, thus saving lives.

Anyone who’s used GenAI tools probably understands why this technology is well-suited for a task like drug repurposing: it leverages GenAI's (current) strength in synthesizing large amounts of information, quantities far beyond what the human brain is capable of.

The NYT article left me with a sense of wonder about GenAI tools and their potential for driving progress, particularly in areas that we’ve historically viewed as too big to tackle: hunger, inequality, poverty, to name a few.

Yet, most people don’t use AI with such grand ambitions. The first example that comes to mind is the high schooler who uses ChatGPT to do her homework because she’s too lazy to do it herself. That’s how we’re applying globally transformative technology? That’s the reason we’re investing so many resources into building and sustaining data centers that are absolutely terrible for the environment? (If that strikes close to home, don’t feel too bad - there’s definitely a company out there that’s using AI to parse through hundreds of thousands of documents just to “cover their bases,” and maybe generate a summary that a manager will read once, but never again.)

I digress. After reading that article about AI for drug repurposing, I felt irrational disdain for all those who use GenAI tools for anything less than changing the world. Using a tool capable of achieving medical breakthroughs to solve high school math problems? It feels wasteful.

Let me clarify, though – I am not opposed to us everyday people using AI. But if we do use it, we should channel the spirit of drug repurposing. We should use it with the intention to change the world.

1. Perceiving

“The form of action [that men and women] adopt is to a large extent a function of how they perceive themselves in the world.”

Yes, you. You can change the world.

I know we’re conditioned against believing that. Society has ingrained in us our insignificance (unless you’re part of the privileged few who have earned access to/been born into/talked their way into the levers of power). But it’s precisely this ‘insignificance complex’ that feeds our fears about AI; we don’t think that we’re valuable, so we believe that AI can replace us. Maybe some of us entry-level people even think that AI should replace us.

As we embark on an age of AI unknown, we must take this disruption as an opportunity to reclaim our agency. It starts with acknowledging that we matter. It starts with letting ourselves believe that we have the power to change the world. That’s the mindset that we should bring when we use AI.

2. Reclaiming

“Those who have been denied their primordial right to speak their word must first reclaim this right and prevent the continuation of this dehumanizing aggression.”

Critical thinking is the process by which we apply our perspectives to unfamiliar notions, thereby generating new insights that contribute to the global ecosystem of ideas. It is one way that each and every one of us brings something new into this world.

I developed my appreciation for critical thinking while working as a tutor in the Georgetown Writing Center. A large number of my clients were first-year students taking the same set of introductory courses. Some weeks, I’d meet with five, six, or even seven students who were working on the same assignment. Yet no two essays were alike – no two arguments, no two perspectives, no two students.

My clients were some of my greatest teachers. So many of them have changed the way that I view the world. They changed my world. And I firmly believe in their ability to do the same for others.

To leverage AI for change, we must engage with it in a way that places our own capabilities for critical thinking front and center. This means thinking beyond using AI to solve math problems that we should have been able to solve by applying the skills learned in the classroom. Thinking beyond using it to answer open-ended questions, such as “Tell me which of the three IR theories – realism, liberalism, and constructivism – best explains World War II.” Thinking beyond using it to write emails. (Are things really so pressing that you need to “maximize efficiency” when writing a paragraph-long message? What shareholders are you beholden to?)

To put it differently: we need to stop relying on AI to provide us with solutions. When we rely on AI for that purpose, we surrender our autonomy and concede our inferiority. It sounds a bit dramatic, but think about it. Why are you asking AI to do your math homework for you? Why are you asking AI to write your IR paper for you? Why are you asking AI to write your emails? Do you really think you’re too good for all of these things? Or do you worry you’re not good enough?

Our society’s solutions-oriented approach to AI is enshrined in the word we use to describe these interactions: “prompting.” There are guides out there on how to better “prompt” your AI assistant, and among the key tips are to think (critically!) about the context, the tone, and the target audience (see Greg Brockman’s guide to prompting and Adobe’s prompt-writing guide). But that doesn’t change the fact that, fundamentally, you are using AI with the objective of exploitation. You engage with it for the purpose of extracting from it the best possible output, the answer that best serves your needs.

I’m not suggesting that we do away with the word “prompting,” per se. But we should be clear-eyed about the subtext, and how this subtext informs our own relationship with AI.

Rather than using AI to shortcut our critical thinking, we should be using it to bolster our critical thinking. Framed this way, we’re no longer demanding that AI give, give, give. Rather, we view ourselves as engaging with it to gain exposure to a different perspective.

What does this engagement look like? Well, it should resemble any other sort of productive discourse.

It starts with you, as the user, clearly articulating your views. Don’t delegate the good, hard work of interpretation to AI. Hold yourself accountable for communicating ideas that make sense.

Then, when the AI responds, don’t take what it says as self-evident. Treat it as a perspective, with the same level of suspicion that you might treat the over-eager student in your recitation who could stand to think just a tad more before they speak. Engage with AI. Interrogate AI. And when you respond, piece that perspective together with your own. See how they fuse together. Are they completely irreconcilable? Are there unexpected pockets of agreement? Disagreement? Allow yourself, not AI, to create something brand new. That’s how you reclaim your agency. That’s how you come up with your own ideas using AI. That’s how you introduce a new idea into the world – thus changing it one bit at a time.

3. Acting

“To exist, humanly, is to name the world, to change it. Once named, the world in turn reappears to the namers as a problem and requires of them a new naming. Human beings are not built in silence, but in word, in work, in action-reflection.”

The next time you use AI, before you even open the interface, ask yourself two things. First: What do I want to understand? Second: What will this understanding enable me to do?

I’m serious, give it a try. Here’s an example.

I used AI a few times while writing this piece. What did I want to understand? Well, there were certain parts of my argument that I could sense were lacking, but I couldn’t quite figure out how to address them on my own. (Staring at the screen, my mind was blank - I had hit a mental block.) In such moments, I wanted to get outside of my own head and hear AI’s perspective. I hoped that seeing that outside perspective would give me greater insight into my own argument.

What will this understanding enable me to do? Well, I’ve been thinking about this nexus of AI and critical thinking for a long time now. Although it feels pompous to say given the sheer number of people – of experts – who actually work in this field (and can code!)…I do think that I have a different perspective. At the bare minimum, it’s unambiguously my perspective, informed by my knowledge and experiences to date. I’m hopeful that my insights can change at least one person’s life by pushing them to critically reflect on their own relationship with AI.

Thanks for reading ATLAS! Subscribe to stay up-to-date on the ideas that will change the world.


Hillary runs Scaffold, a Substack exploring the societal impacts of the ‘AI Revolution,’ broadly defined. On Scaffold, Hillary has started a series, Musings from the In-Between, where she highlights her friends’ perspectives on navigating the current moment.

A guest post by Hill
