
nonai.space

Research Framework — Branko Lukić

The Missing Layer

nonai

Building the missing layer between human creativity and frontier AI. Backed by 30,000+ experiments with 250 designers at Logitech's CDAI Lab — proving that AI can amplify human originality instead of averaging it out.

"What if we could transmit what makes a creator irreplaceable — through the very systems designed to replace them?"

The research behind the claim
The convergence
toward nothing

Every generation of AI tools brings us closer to a paradox that no one in Silicon Valley wants to name: the more powerful the tool, the more it erases the hand that wields it. We call this Creativity Collapse — the systematic flattening of creative diversity toward a statistical mean. Not through malice, but through mathematics.

Large language models don't create. They converge. They learn the central tendencies of billions of human expressions and produce the most probable next token — the average of everything humans have ever written, painted, composed, or designed. The result is not intelligence. It is the Platonic ideal of mediocrity, wrapped in impressive fluency.

This isn't speculation. In July 2024, Shumailov et al. published a landmark paper in Nature demonstrating that AI models trained on AI-generated data undergo model collapse — a degenerative process in which the model progressively loses information about the tails of the human distribution. The rare, the unusual, the brilliantly weird — all of it disappears. First the fringes dissolve. Then the model converges toward a single point estimate with vanishing variance.

The tails are where creativity lives.
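The mechanism is simple enough to demonstrate in a few lines. The sketch below is a toy Gaussian analogue of the recursive-training setup — our own illustration, not code from the paper: each "generation" fits a distribution to the previous generation's samples and then resamples from the fit. Finite-sample estimation clips the tails first, and the spread collapses.

```python
import random
import statistics

def next_generation(samples, n_out, rng):
    # "Train" on the data: fit a Gaussian (mean + max-likelihood std dev),
    # then "generate" by sampling from the fitted model.
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # finite-sample fit underestimates spread
    return [rng.gauss(mu, sigma) for _ in range(n_out)]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(50)]  # generation 0: "human" data
for _ in range(500):                             # recursive AI-on-AI training
    data = next_generation(data, 50, rng)

# The tails vanish first; the distribution narrows toward a point estimate.
print(round(statistics.pstdev(data), 3))
```

Run it and the standard deviation of generation 500 is a small fraction of generation 0's — no malice, just mathematics.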

A separate study from Science Advances found what might be more alarming for creative fields: AI enhances individual creativity but reduces collective diversity. Writers given AI assistance produced stories rated as more creative — but those stories were significantly more similar to each other. Individual gain, collective loss. A social dilemma playing out across every creative industry simultaneously.

The PNAS study "Echoes in AI" quantified this in narrative: LLM-generated stories contain idiosyncratic plot elements that echo across different models and different generations. The same archetypes. The same narrative arcs. The same emotional beats. An entire species of stories converging toward a single genome.

And then: the scar. Research published in Technological Forecasting and Social Change (2025) found that when AI assistance is withdrawn, individual creativity drops — but the homogeneity it introduced keeps climbing. Even months later. The convergence isn't just a side effect. It's a wound that doesn't heal.

Nature · July 2024
"Models trained on model-generated data undergo irreversible model collapse"
Shumailov et al. proved that when AI trains on AI output, the distribution tails — where originality lives — disappear first. The model forgets the rare and converges on the average. This process is mathematically inevitable.
Science Advances · July 2024
"AI enhances individual creativity but reduces the collective diversity of novel content"
Doshi & Hauser's experiment with 600+ writers found a creativity paradox: AI-assisted stories scored higher individually but were more similar to each other. Especially for less creative writers, who became anchored to AI suggestions.
PNAS · August 2025
"LLM-generated stories contain idiosyncratic plot elements echoed across models"
"Echoes in AI" found that different LLMs produce eerily similar narrative patterns — the same plot structures, character types, and emotional arcs. Human stories showed no such convergence. AI doesn't just flatten — it stamps.
Technological Forecasting · 2025
"The creative scar persists even after AI withdrawal"
When AI tools were removed, individual creativity failed to recover — but the homogeneity AI introduced kept climbing for months. A permanent imprint on how people think. Not just a tool effect, but a cognitive scar.
More similarity between AI-assisted creative works than human-only works
100% — Inevitability of tail loss in recursive AI training (Shumailov et al.)
Duration of the creative scar — homogeneity persists even after AI is removed
The Evidence
30,000+ experiments
Not thought experiments. Not prototypes. 30,000+ structured experiments run inside one of the world's largest design organizations — proving that human creative nuance can be amplified through AI, not averaged out.

At Logitech, I co-founded and led the CDAI Lab (Creative & Design AI Lab) with a single mission: transform how 250 designers across global teams work with AI — without losing what makes each of them irreplaceable.

The mission wasn't adoption. Every company can get designers to use Midjourney. The mission was preservation — proving, through systematic experimentation, that AI tools can carry a creator's fingerprint forward rather than dissolving it into the statistical mean.

Over 18 months, we ran 30,000+ structured experiments across every domain in the design organization: visual identity, product design, brand communication, UX, packaging, motion, and strategic concept development. Every experiment was designed to answer one question: does the output carry the creator's signature, or did the AI flatten it?

The results were not what anyone expected. They didn't confirm that AI kills creativity — nor that it magically enhances it. They revealed something more nuanced: AI amplifies whatever you feed it. Feed it generic prompts, you get generic output. Feed it a rich, structured creative fingerprint, and it produces work that only that specific creator could have directed.

The difference wasn't the model. It was the protocol.

30K+
Structured experiments across AI-assisted creative workflows
250
Designers across global teams participating in the research
85%+
Accuracy of Logi Brand Guru — the AI agent built from our findings
18
Months of systematic experimentation, 2024–2025

The Methodology

Six categories of experiments, each designed to isolate a different dimension of the human-AI creative interaction.

Category 01

Fingerprint Fidelity

Can AI output be distinguished from a specific designer's work? We ran blind evaluations where teams rated whether AI-assisted outputs matched the authoring designer's known creative signature — their color instincts, compositional habits, conceptual vocabulary. We measured the "fingerprint fidelity score" across thousands of outputs.

Category 02

Diversity Metrics

Does AI-assisted work converge or diverge? We measured the variance in creative outputs across teams — comparing AI-assisted batches against human-only batches. We tracked lexical diversity, visual entropy, conceptual distance, and surprise factor. The results confirmed the "creativity paradox" from academic literature — but also revealed the escape routes.
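As a rough illustration of the kind of metric involved — a minimal lexical homogeneity score of our own devising, not the Lab's actual instrument: mean pairwise cosine similarity over bag-of-words vectors, where a batch that has converged scores closer to 1.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def homogeneity(texts):
    """Mean pairwise similarity across a batch: higher = more converged."""
    vecs = [Counter(t.lower().split()) for t in texts]
    pairs = list(combinations(vecs, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)
```

In this setup, an AI-assisted batch would be flagged as converging if its score climbs relative to a human-only baseline produced under the same brief.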

Category 03

Protocol Testing

Which workflows preserve identity? We A/B tested dozens of prompting strategies, reference architectures, and calibration methods. Generic prompts vs. fingerprint-enriched prompts. Style guides vs. vibe protocols. The protocols that won became the foundation of the Copyself framework.

Category 04

Brand Voice Transmission

Can an organization's creative DNA survive AI mediation? We built Logi Brand Guru — an internal AI agent trained on Logitech's brand identity — and measured whether it could guide designers toward brand-consistent output without flattening their individual voice. It reached 85%+ accuracy: proof that organizational fingerprint transmission works at scale.

Category 05

Vibe Translation

How much creative intent survives the prompt bottleneck? We gave designers prelinguistic briefs — mood boards, sensory palettes, reference clusters — and measured how much of that intent arrived in AI outputs versus traditional text prompts. The structured vibe protocols recovered 2–3× more of the original creative direction.

Category 06

Team Identity Experiments

Do collective fingerprints exist — and can AI carry them? We studied how design teams develop emergent creative identities and tested whether AI tools could amplify rather than homogenize these collective signatures. Project FRIDA (AI Clones) explored asynchronous creative collaboration through AI proxies calibrated to individual team members.

What We Proved

Five findings that changed how we think about human-AI creative collaboration.

1

The Protocol Matters More Than the Model

The same AI model produces radically different creative fidelity depending on the workflow wrapping it. A designer using GPT-4 with a Copyself protocol produced more authentic work than one using the latest image model with a generic prompt. The intelligence layer between the human and the AI is the decisive variable.

2

Fingerprints Are Transmissible

Individual creative signatures can be encoded into structured protocols that AI systems can interpret and preserve. In blind tests, evaluators identified the authoring designer's fingerprint in AI-assisted outputs at rates significantly above chance — when the right protocols were used.

3

The Collapse Is Real — But Not Inevitable

Without intervention, AI-assisted creative work converges toward homogeneity exactly as the academic literature predicts. But with structured fingerprint protocols, diversity metrics stayed flat or even improved. The collapse is a default behavior, not a law of physics. It can be counteracted.

4

Vibe Protocols Recover Lost Signal

Prelinguistic creative direction — the "felt sense" of what you want before you can describe it — is the most valuable and most fragile signal in the creative process. Structured vibe protocols recovered 2–3× more of this signal compared to conventional text prompting.

5

Scale Amplifies Both the Problem and the Solution

At organizational scale (250 designers), unchecked AI use accelerated creative homogenization. But the same scale made fingerprint protocols more powerful — because the organization's collective identity became a richer calibration signal. The frameworks work better the more people use them.

These Findings Became Four Frameworks

The 30,000+ experiments didn't just produce data. They produced intellectual architecture. Each experiment family crystallized into a research framework: Creativity Collapse, Creative Fingerprint Transmission, Vibe Designing, and the Copyself Protocol. Not theory — distilled practice.

Research Frameworks
The intellectual output of
30,000 experiments
Each framework was distilled from thousands of structured experiments at Logitech's CDAI Lab. They address different dimensions of the same problem: how to preserve, transmit, and amplify what makes human creativity irreplaceable — not despite AI, but through it. This is what "building the missing layer" means in practice.
01
Creativity Collapse
The systematic convergence of creative output toward the statistical mean when AI tools mediate the creative process. Not a bug — a fundamental property of how generative models work.
Every large language model is, at its core, a compression of human expression into probability distributions. When you ask it to write, design, or compose, it produces the most likely output given its training data. "Most likely" is another word for "most average." The model doesn't seek the edges — it seeks the center.
This is the problem every frontier AI lab recognizes but none has solved. OpenAI confronts it as model collapse in synthetic data loops. Anthropic encounters it as homogenization across assistant outputs. DeepMind sees it as the progressive loss of human signal in training data. The names differ. The phenomenon is identical.
The Atlantic documented the visual dimension in "Why Does AI Art Look Like That?" (2024): despite wildly different prompts, AI-generated images converge toward the same aesthetic — saturated colors, airbrushed surfaces, dramatic lighting. The models default to a statistical composite of visual "quality." Not bad. Not good. Just average, at high resolution. Creativity Collapse is the formal framework for understanding, measuring, and countering this convergence across all creative domains.
⬡ Validated across 30,000+ experiments — diversity metrics, convergence analysis, blind evaluations
CONVERGENCE
02
Creative Fingerprint Transmission
Methods for encoding and preserving individual creative identity through AI workflows. Your unique cognitive signature — the way only you see the world — transmitted intact through systems designed to normalize everything.
Every creator has a fingerprint. Not a style, exactly — something deeper. The specific way their mind bridges disparate concepts. The particular weight they give to contradiction. The instinct for when something is "right" that no amount of training data can replicate. This fingerprint is formed through decades of lived experience, cultural immersion, aesthetic rebellion, and accumulated taste.
Jaron Lanier calls this "data dignity" — the recognition that creative output carries the identity of its origin. When AI strips this provenance away, it doesn't just lose attribution. It loses the information. The fingerprint is the information. Without it, you have content. With it, you have authorship.
Creative Fingerprint Transmission is the research program for encoding this identity in ways that AI systems can carry forward. Not by constraining the AI, but by calibrating it — teaching it to recognize and preserve the specific frequencies of a creator's cognitive signature. The goal: AI that synthesizes in ways only you could, because it carries your fingerprint as a structural parameter, not just a style overlay.
⬡ Proven with 250 designers — blind fingerprint identification tests at Logitech CDAI Lab
IDENTITY
03
Vibe Designing
A framework for translating intuitive, prelinguistic creative direction into structured guidance for AI systems. Because the most important creative decisions happen before language — and prompts are all language.
Every designer, writer, and creator knows the moment: you can feel what you want but can't say it. The direction is real — it exists as a sensory-emotional field, a cluster of associations, a felt sense that's more precise than any words you could attach to it. This is the prelinguistic intent. And it's where the best work lives.
The problem with AI prompting is that it forces this intent through the bottleneck of explicit language. "Make it feel warmer but not friendly. Confident but not aggressive. Like the sound of rain on glass but for visual design." These translations are lossy. Every conversion from felt-sense to text strips signal. The AI receives a degraded version of your intent, and its output reflects that degradation.
Vibe Designing is the systematic approach to preserving bandwidth between creative intuition and AI execution. It develops structured methods for capturing intent before it has words — through reference libraries, emotional coordinates, sensory palettes, and what we call "vibe protocols" — enabling AI to respond to the direction rather than just the description. The cognitive distance between a thought and its tangible reality isn't fixed. It can be compressed.
⬡ 2–3× signal recovery in vibe translation experiments vs. conventional prompting
INTENT TRANSLATION
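One way to picture what a vibe protocol could look like in practice: a structured brief that carries emotional coordinates, sensory references, and exemplars alongside — not instead of — the text prompt. This is a sketch under our own assumptions; the field names and scales are hypothetical, not the framework's published schema.

```python
from dataclasses import dataclass, field

@dataclass
class VibeBrief:
    """A structured, prelinguistic brief passed to an AI tool alongside text."""
    emotional_coords: dict[str, float]   # hypothetical -1..1 axes, e.g. warmth
    sensory_palette: list[str]           # cross-modal references ("rain on glass")
    reference_cluster: list[str]         # asset IDs of exemplar work
    negative_space: list[str] = field(default_factory=list)  # must NOT feel like

    def to_prompt_fragment(self) -> str:
        axes = ", ".join(f"{k}: {v:+.1f}" for k, v in self.emotional_coords.items())
        return (f"Emotional coordinates ({axes}). "
                f"Sensory palette: {'; '.join(self.sensory_palette)}. "
                f"Avoid: {'; '.join(self.negative_space) or 'n/a'}.")

# "Warmer but not friendly. Confident but not aggressive. Rain on glass."
brief = VibeBrief(
    emotional_coords={"warmth": 0.6, "friendliness": -0.2, "confidence": 0.7},
    sensory_palette=["rain on glass", "brushed aluminum"],
    reference_cluster=["asset://moodboard-17"],
    negative_space=["aggressive", "corporate sterile"],
)
print(brief.to_prompt_fragment())
```

The point of the structure is that each axis survives the trip to the model as a coordinate rather than being flattened into an adjective.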
04
Copyself Protocol
Open protocols for creators to maintain authentic voice and style across AI-assisted work. Not a defensive measure against AI, but a constructive architecture for creative persistence.
The Copyself Protocol addresses the fundamental question: when AI participates in your creative process, how much of the output is still yours? Not legally — philosophically. Does the result carry your cognitive signature, or has it been smoothed into the model's comfort zone?
Current AI workflows have no mechanism for this. You prompt, the model generates, you edit. But the generation step — the moment of synthesis — is where the model's statistical average overwrites your unique distribution. Every cycle dilutes. Over days and weeks of AI-assisted work, the dilution compounds. Your voice gets quieter. The model's voice gets louder. And you may not even notice, because the model's voice is trained to sound plausible.
The protocol defines structured methods for calibrating AI outputs to a creator's authentic signature. Think of it as a creative operating system that runs alongside AI tools — continuously measuring the distance between what the AI produces and what you would produce, and correcting for drift. It's a feedback loop for identity, ensuring that the creator's fingerprint survives the round-trip through statistical inference.
⬡ Protocol A/B tested across dozens of workflow configurations at enterprise scale
YOU → PROTOCOL → AI
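As a sketch of what the "feedback loop for identity" could reduce to computationally — our own simplification; the protocol's real calibration is richer than this: represent a creator's fingerprint as a vector of stylistic features, measure each AI output's distance from it, and trigger recalibration before the drift compounds.

```python
from math import sqrt

def drift(fingerprint, output):
    """Euclidean distance between stylistic feature vectors (0 = on-signature)."""
    return sqrt(sum((fingerprint[k] - output.get(k, 0.0)) ** 2
                    for k in fingerprint))

def copyself_gate(fingerprint, output, threshold=0.3):
    """Accept the output, or flag it for recalibration before drift compounds."""
    d = drift(fingerprint, output)
    return ("accept" if d <= threshold else "recalibrate", round(d, 3))

# Hypothetical features: each axis is a normalized stylistic measurement.
fingerprint = {"sentence_rhythm": 0.8, "lexical_novelty": 0.6, "contrast_bias": 0.4}
on_voice    = {"sentence_rhythm": 0.75, "lexical_novelty": 0.62, "contrast_bias": 0.45}
averaged    = {"sentence_rhythm": 0.5, "lexical_novelty": 0.3, "contrast_bias": 0.5}

print(copyself_gate(fingerprint, on_voice))   # small drift: stays in voice
print(copyself_gate(fingerprint, averaged))   # pulled toward the statistical mean
```

The design point is the loop, not the metric: any distance function works, as long as it runs continuously rather than once per project.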
From one mind to
entire organizations
Every framework in the nonai research program operates at three scales. The principles that preserve one creator's fingerprint are the same principles that preserve a team's collective voice and an organization's creative DNA. The math changes. The physics don't.

Individual

One creator's cognitive signature — their unique perceptual biases, associative patterns, aesthetic instincts, and creative decision-making rhythms. The irreducible unit of original thought. The fingerprint that 30,000 hours of lived experience produces and no dataset can replicate.

Teams

The emergent creative identity that arises when individual fingerprints interact. Not the average of its members, but the unique interference pattern — the specific way this group's ideas collide, combine, and transform. The collective intelligence that no single person contains.

Organizations

The creative DNA encoded in culture, brand, and institutional memory. When 250 designers use AI tools, the organization's creative identity becomes the front line. Copyself at scale means ensuring that AI amplifies the company's creative signature rather than dissolving it into market averages.

Same principles · Different resolution
Core Principles
The physics of
human creativity
Three principles underpin every framework. They describe not what creativity produces, but how it operates — the mechanics that separate human creative cognition from statistical generation.

Human Asymmetry

"Flaws are signal, not noise"

AI is trained to be balanced, consistent, and predictable. Humans are none of these things — and that's the point. The specific way a designer overweights texture over color, or a writer reaches for unusual rhythm over conventional clarity — these asymmetries are the creative signal. They're what makes the work authored rather than generated. Imperfections aren't failures of precision. They're engines of originality. Every deviation from the statistical mean is information about the creator that no model contains.

Creative Bitrate

"Thought → artifact, with minimal loss"

The distance between a creative thought and its tangible realization is a measurable quantity — cognitive distance. Every tool either compresses or expands this distance. A piano compresses it for a trained pianist. A prompt-based AI often expands it, forcing complex intent through a text bottleneck. Creative Bitrate is the bandwidth of the channel between mind and artifact. The research goal: maximize throughput while minimizing the lossy compression that strips the creator's signature from the output.

Pre-Linguistic Intent

"Before the word, there was the direction"

The most important creative decisions happen before they have names. A direction that exists as a felt sense — a cluster of sensory associations, emotional weights, and aesthetic instincts — is often more precise than any verbal description could be. Yet every AI interaction starts with language. Pre-Linguistic Intent is the research into capturing and transmitting this prelingual creative field — the half-formed thought, the intuition, the direction that doesn't have words yet but is already specific.

The space between

In 2010, MIT Press published Nonobject — a book that proposed design should start not from the object, but from the space between people and objects. Written by Branko Lukić with Barry M. Katz (Stanford, IDEO), it argued that the most meaningful design lives in the relationship, not the artifact.

Fifteen years later, the same principle applies to something far more consequential. AI is the most powerful creative tool ever built, and the question it poses is identical to the one Nonobject asked about physical design: where do you start?

Most AI development starts from the model. Better architecture. More parameters. Faster inference. This is starting from the object. nonai starts from the space between — the gap between human creativity and machine intelligence, where the most important interactions happen and the most critical information is lost.

The concept of "Emotional Relevance" that Nonobject introduced — going beyond function and aesthetics to create meaningful experiences — directly informs the nonai principle of Pre-Linguistic Intent. The "Experience Essence" framework, which captures the underpinnings of an experience as a compass for design, is the philosophical ancestor of Vibe Designing.

The lineage is not incidental. It's structural. The same mind that saw design as the space between people and objects now sees AI as the space between human creativity and machine intelligence. The question hasn't changed. The scale has.

2010

Nonobject · MIT Press

"Design starts not from the object, but from the space between people and objects." The philosophical foundation.

2017

Cerebras Systems

Designing the physical architecture of supercomputers that power frontier AI models. Understanding the hardware layer of intelligence.

2017

"AI Powered Design"

Published five years before the transformer wave. Early recognition that AI would transform creative practice.

2024

CDAI Lab · Logitech

Co-founded the Creative & Design AI Lab. 30,000+ experiments in transmitting creative fingerprints through AI systems at organizational scale.

2025

copyself.xyz

The experimental lab for creative fingerprint transmission. Live protocols. Open research.

2026

iF Design Berlin Keynote

"The Missing Layer" — presenting the research frameworks to the global design community.

Voices
What they said about
the beginning
Before nonai, there was Nonobject. These are the people who recognized what that work meant — and what it was pointing toward.
A designer's motto should always be 'What if?' It certainly is the motto of Nonobject. The fantasy of what an object should or could be becomes a way for the designer to embrace experimentation and imbue projects with a vitality that expands beyond the physical object and into our experience.
Paola Antonelli
Senior Curator, Architecture & Design, MoMA
The only other time I felt thrilled and mystified in a similar way came from working with Naoto Fukasawa.
Bill Moggridge
Co-founder, IDEO · Director, Cooper Hewitt
Branko Lukić is the best design-fiction designer in the world.
Bruce Sterling
Author, futurist, design critic
Lukić's objective of humanizing mass-manufactured objects is among the most important challenges for design.
Alice Rawsthorn
Design critic, The New York Times
From lab to open protocol
The CDAI Lab proved the frameworks work at enterprise scale. Now the research goes public — open protocols, live experiments, and a keynote that names the problem the entire industry is avoiding.
Open Research

copyself.xyz

The next phase: taking the Copyself Protocol from Logitech's internal lab to an open research project. Live experiments in encoding creative fingerprints into portable, AI-readable protocols that any creator can use. Follow the work as it happens.

Follow the experiments
Keynote · April 2026

iF Design Berlin: "The Missing Layer"

The first major public presentation of the nonai research frameworks — Creativity Collapse, Creative Fingerprint Transmission, Vibe Designing, and the Copyself Protocol — to the global design community. Naming the problem. Showing the solution. Backed by 30,000 experiments.

brankolukic.com
About
Branko Lukić

Designer, researcher, and author of Nonobject (MIT Press, 2010). First AI Fellow at Logitech. Co-founder, CDAI Lab. Lead designer of Cerebras supercomputers since 2017.

Branko builds systems that amplify human originality through AI. At Logitech, he ran 30,000+ experiments to prove that we can transmit the human "Creative Fingerprint" into AI systems without losing its authenticity — then shipped the tools, trained the teams, and partnered with Legal to establish governance for safe enterprise adoption.

His work spans the full stack: from the silicon architecture of frontier AI hardware (Cerebras) to the cognitive architecture of creative collaboration (CDAI Lab). Creator of the UE Boom (IDEA Gold), which redefined portable audio. On the hardware side, he co-developed Physical Intelligence systems where AI reads human micro-movements and responds through haptics.

His 15-year trajectory — from Nonobject philosophy to physical product design to AI infrastructure to creative AI research — represents a single continuous inquiry: how do we preserve what makes human creativity irreducible, even as the tools grow more powerful?

"AI doesn't diminish human creativity.
It reveals who actually has something to say."

Cerebras Systems · Lead Designer

Designing the physical architecture of frontier AI supercomputers since 2017

Logitech · First AI Fellow & Design Fellow

Co-founded CDAI Lab. 30,000+ experiments. Led AI transformation for 250 designers.

Nonobject · MIT Press, 2010

The philosophical foundation. "Design starts from the space between."

Logi Brand Guru · AI Agent

Built an internal AI agent (85%+ accuracy) for brand guidance — proof that organizational fingerprints are transmissible.

Physical Intelligence · FUSE & CICADA

AI reads human micro-movements, responds through haptics. Closing the sensory loop between user and machine.

UE Boom Product Line · IDEA Gold

Created the product that redefined portable audio. Designed the entire line through Megaboom, Roll, and Blast.

iF Design Berlin 2026 · Keynote

"The Missing Layer" — presenting nonai research to the global design community

IXDC Beijing 2025 · Opening Keynote

UC Berkeley lectures. "AI Powered Design" published 2017 — five years before the transformer wave.

What happens when AI
makes humans more original,
not less?

That's not a rhetorical question. It's a research program.

brankolukic.com copyself.xyz