
designing for ai visibility

a cross-campaign research initiative at Tech for Campaigns examining how AI chatbots shape political discovery—and how design decisions can improve clarity, accuracy, and representation in AI-mediated systems

Note: This case study reflects high-level process and personal learnings. All sensitive details, data, and materials have been anonymized or abstracted in accordance with confidentiality agreements.

organization
Tech for Campaigns

role
market research & design strategy

timeline
july 2025 – jan 2026

focus
ai systems, information design,
platform-mediated perception

“Technology is neither good nor bad; nor is it neutral.”
- Melvin Kranzberg

↦ the "why"

how do political candidates get understood when AI systems become the first point of contact?

As AI chatbots increasingly become the first place people go for information, political candidates are no longer discovered solely through websites, news articles, or social media. More often, the first interaction happens through an AI‑generated summary—before a voter ever clicks a campaign‑owned page.

Tools like ChatGPT, Gemini, Perplexity, and Claude now sit in the middle of the discovery process. They interpret questions, decide what information matters, and shape how public content is framed long before users reach the original source.

At Tech for Campaigns, I worked on a cross‑campaign research initiative focused on understanding how political candidates are represented inside these AI systems—and how design and content decisions can meaningfully influence those representations.

Rather than asking “what performs best?”, this research asked a more foundational question:

How do AI systems read, prioritize, and synthesize publicly available information in the first place?

This reframing positioned the work squarely within design strategy—mapping opaque systems, identifying leverage points, and designing for legibility inside platforms we don’t directly control.

the problem we needed to address

AI‑generated responses are becoming the first impression

Before someone ever visits a website or reads a news article, an AI system may already be summarizing who a candidate is, what they stand for, and why they matter. That framing happens quickly—and often invisibly.

Campaigns typically lack:

  • Clear visibility into how AI systems form those impressions

  • Control over which information is emphasized, simplified, or omitted

  • Established frameworks for improving representation across AI platforms

The problem we prioritized wasn’t persuasion—it was interpretability.

How do we make candidates understandable to systems that synthesize information on their own terms, using signals we can influence but not fully see?


the goal & hypothesis

Goal

To better understand how AI systems construct representations of political candidates—and where design and content decisions can meaningfully improve clarity and accuracy.

We focused on AI chatbots including ChatGPT, Gemini, and Claude, examining how each interpreted the same publicly available information in subtly (and sometimes surprisingly) different ways.

While this research supported broader program goals around messaging, fundraising, and press readiness, the core focus remained at the system level: understanding AI‑mediated discovery, not optimizing for any single campaign outcome.

Hypothesis

Making strategic changes to a candidate’s website, online presence, and content structure may influence how that candidate is represented in AI‑generated responses.

Rather than treating optimization as a checklist, we reframed content as infrastructure—something designed not only for human readers, but for AI systems tasked with interpreting, condensing, and presenting information at scale.

my role

I contributed to the design, execution, and synthesis of a multi‑phase study evaluating how AI systems represent political candidates.

My work included:

  • Designing and applying structured evaluation frameworks for AI‑generated outputs

  • Participating in human grading across multiple AI platforms

  • Analyzing qualitative and quantitative patterns across candidates and systems

  • Supporting synthesis discussions that informed downstream strategy

I wasn’t designing screens or interfaces. I was designing strategy for systems that don’t have a UI, but still deeply shape user experience, perception, and trust.

↦ research as
design strategy

phase 1: baseline mapping & live review

(july – august)

The first phase focused on understanding how candidates were actually being represented across AI platforms—before assumptions or idealized fixes got in the way.

We defined queries to mirror how real people search: who a candidate is, where they’re running, what they believe, and how visible they appear overall. Baseline analysis combined machine‑assisted review with human evaluation, widening coverage without unnecessary complexity.

Using a standardized evaluation framework, we reviewed AI‑generated responses to consistent natural‑language prompts. The emphasis wasn’t on performance scores or “winning” outputs, but on interpretability—how clearly, consistently, and coherently candidates were represented across systems.
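The grading workflow described above can be sketched in miniature. The criteria, platform names, and scoring math below are illustrative placeholders only, assumed for the sake of the sketch; they are not the actual framework or data used in the study:

```python
from dataclasses import dataclass

# Hypothetical rubric criteria; the study's real dimensions are not shown here.
CRITERIA = ["names_office_correctly", "states_top_issues", "cites_campaign_site"]

@dataclass
class Evaluation:
    platform: str
    prompt: str
    answers: dict  # criterion -> bool (a human grader's yes/no judgment)

    @property
    def score(self) -> float:
        # Fraction of rubric criteria this platform's response satisfied.
        return sum(self.answers.values()) / len(self.answers)

def consistency(evals: list) -> float:
    """Share of criteria graded 'yes' on every platform for the same prompt."""
    agreed = sum(1 for c in CRITERIA if all(e.answers[c] for e in evals))
    return agreed / len(CRITERIA)

# Example grading pass: one baseline prompt, two (placeholder) platforms.
prompt = "Who is the candidate running for [office]?"
evals = [
    Evaluation("chatbot_a", prompt,
               {"names_office_correctly": True,
                "states_top_issues": True,
                "cites_campaign_site": False}),
    Evaluation("chatbot_b", prompt,
               {"names_office_correctly": True,
                "states_top_issues": False,
                "cites_campaign_site": False}),
]

print(round(evals[0].score, 2))       # per-platform clarity score
print(round(consistency(evals), 2))  # cross-platform agreement
```

The design choice here mirrors the study's emphasis: simple yes/no grades per criterion keep human evaluation consistent, and a cross-platform agreement score surfaces where systems diverge rather than who "wins."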

Importantly, this wasn’t a clean lab experiment. Content updates were happening in parallel. The information environment was live and messy—and that messiness became a feature, not a flaw. It allowed us to observe how AI systems respond to evolving signals in real time.

This phase functioned like a system map, revealing where responses broke down, where information gaps existed, and which structural signals carried disproportionate weight.

crawl pause: letting the system respond

(august → mid-september)

After Phase One, we intentionally paused.

We stopped evaluations to give AI systems time to re‑crawl and reprocess updated content—a decision that proved as important as the analysis itself.

This reinforced a core insight: designing for AI systems requires patience, sequencing, and restraint. Feedback loops are delayed, opaque, and non‑linear. You don’t ship a change and immediately see results. You wait. You observe. You resist over‑correction.

phase 2: re-review & targeted interventions
(late september)

Phase Two revisited candidates after the crawl window to understand how representations shifted across time and platforms.

Here, the work shifted from what AI systems were saying to how and why those shifts occurred. We examined where evaluative nuance added clarity and where it added noise, refining future research approaches accordingly.

Human testing questions were simplified when nuance wasn’t useful, favoring clear yes/no evaluations. Tone‑based analysis was reserved for contexts where narrative framing or controversy meaningfully shaped perception.

Rather than cataloging every possible source, we focused on unexpected or unusual sources surfaced by AI systems—often more revealing than exhaustive lists.

phase 3: evaluation & learning
(late october)

In later phases, I completed additional evaluations using the same framework to maintain consistency.

This period included a rapid mini‑sprint for a single candidate, where evaluation and updates occurred within a tight window—a useful stress test for how quickly AI systems respond to new signals.

Across evaluations, one pattern became clear: responsiveness varies widely across platforms. No two systems update, stabilize, or prioritize signals in the same way.

↦ implications

answer engine optimization (AEO) as a design problem

As synthesis progressed, we began framing the work through Answer Engine Optimization (AEO)—designing content so it can be surfaced directly as answers by AI systems.

Unlike traditional SEO, AEO prioritizes clarity, authority, and structure. The goal isn’t clicks—it’s comprehension.

For campaigns, this shift introduces both opportunity and risk: greater visibility where voters already ask questions, paired with less control over framing and nuance.

From a design perspective, AEO isn’t a checklist. It’s infrastructural. It requires disciplined messaging, clear issue articulation, and ethical care in how information is structured for systems increasingly summarizing reality on users’ behalf.
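One concrete tactic consistent with this framing is publishing FAQ content with schema.org FAQPage structured data, which answer engines can parse directly. The helper below is a minimal sketch; the questions and answers are placeholder examples, not campaign content from this study:

```python
import json

def faq_jsonld(qa_pairs):
    """Build a schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qa_pairs
        ],
    }

block = faq_jsonld([
    ("What office is the candidate running for?",
     "The candidate is running for [office] in [district]."),
    ("What are the candidate's top priorities?",
     "Housing affordability, public schools, and transit access."),
])

# Embed in the page as: <script type="application/ld+json"> ... </script>
print(json.dumps(block, indent=2))
```

Question-first structure like this serves both audiences at once: human readers scan it easily, and AI systems can lift a self-contained answer without reconstructing context.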

synthesis & outcomes

from measurement to meaning

In early November, the work shifted from evaluation to collective synthesis.

As a group, we surfaced patterns and tensions, including:

  • What shifted meaningfully—and what didn’t

  • Platform‑specific behaviors and quirks

  • Recurring challenges around accuracy, bias, and location‑based information

  • The role of media, forums, and human‑generated content

  • Which approaches felt design‑leveraged versus content‑heavy

The project concluded with knowledge‑sharing and reflection, emphasizing organizational learning over campaign‑specific outcomes.

individual insights

As I mentioned earlier, this phase of the project was less about metrics and more about meaning. It was a chance to step back and reflect on what the research revealed: not just technically, but personally.

What this work changed for me

  1. local government is personal (and powerful)

  2. ai chatbots are already decision-making tools

  3. policy categorization is hard, for both humans and ai

  4. structure matters more than ideology

emerging signals

Several high‑level signals became clearer:

  • AI‑generated answers vary over time, even without major content changes

  • Human‑generated sources (news, forums, long‑form content) consistently influence outputs

  • FAQ‑style, question‑driven structure improves legibility

  • Timing is a design constraint; feedback loops are delayed and opaque

Taken together, these insights reframed the work from studying AI outputs to asking a broader design question:

How do we create content that remains accurate, legible, and trustworthy when systems act as intermediaries between people and information?

When AI systems summarize reality on behalf of users, structure becomes ethics—and clarity becomes equity.

↦ reflection

designing before the interface exists

This project reshaped how I think about elections, technology, and design responsibility. It reinforced that invisible design decisions carry real consequences.

Structure shapes understanding. Clarity shapes trust. And systems we don’t see still deeply influence how people form opinions.

This work strengthened my interest in systems‑level design, transparency, and dignity in information access—and continues to shape how I approach design strategy in complex, high‑stakes contexts.

January 2026 update: This case study reflects work completed through initial research and synthesis. Ongoing discussions continue to extend these learnings into broader guidance around AI‑mediated discovery and AEO.

NEXT WORK

joystick

a game discovery platform built for players who want to feel seen, safe, and supported.

don't be a stranger . . . !

© 2025 by summer chaves
