Crisis of Trust: Media Ethics in the Age of Deepfakes

Research
HSE SPB
2025
Bondar Julia
Vinogradova Mariana
Eliseeva Olga
Okhray Victoria
Shipelsky Martin

Vol 1: What Are Deepfakes?

Interview Context and Research Perspective

This project is grounded not only in media ethics theory, but also in an expert interview conducted specifically for this research. We spoke with Rastyam Tuktarovich Aliev, PhD in History and Research Fellow at the Laboratory of Critical Theory of Culture at HSE University (Saint Petersburg). His academic work focuses on media theory, cultural memory, and the transformation of evidence in contemporary media environments. This perspective sets the analytical framework for the discussion that follows.

From the very beginning of the interview, Aliev emphasized that deepfakes should not be treated as a marginal or purely technical phenomenon. Instead, he framed them as a symptom of deeper changes in how the media constructs reality and trust. According to him, deepfakes force us to rethink what we consider authenticity and documentary truth in the digital age.
FROM MANIPULATION TO SYNTHETIC PRESENCE

The term deepfake is widely used in contemporary media discourse, yet its meaning often remains vague and overstretched. In everyday language, deepfakes are frequently described as any AI-generated image, video, or audio. From the perspective of media ethics, however, deepfakes are something more. As Aliev explained in the interview, deepfakes differ from ordinary fakes because “they imitate a person’s presence itself, which changes how we trust images and sounds.”

He further noted that in everyday digital culture the term deepfake is often used too loosely, almost as a synonym for any AI-generated fake, which obscures its ethical specificity. What truly matters is that deepfakes create “a feeling of authenticity and documentary truth where none actually exists.”

A deepfake is a form of synthetic media (that is, media content generated or altered by algorithms rather than directly recorded from physical reality) created with the help of artificial intelligence, most commonly deep learning technologies. Its defining feature is not merely the falsification of information, but the simulation of a real human personality, including appearance, voice, facial expressions, and behavior. This imitation creates a powerful illusion of authenticity, making audiences believe that someone actually said or did something that never happened.
Deepfakes and the Crisis of Documentary Evidence
This illusion of authenticity distinguishes deepfakes from conventional media manipulation techniques such as editing, framing, or staged imagery. While these practices shape representation, they usually do not challenge the existence
of the event. Deepfakes, by contrast, fabricate a convincing trace of a person in the media space where no such trace ever existed.

In the interview, Aliev described this shift as a fundamental break with traditional forms of falsification. According to him, deepfakes “change
how we trust images and sounds,” because they undermine the idea that visual
and audio records can function as reliable evidence. Journalism has historically relied on such records as proof that something actually happened. Deepfakes erode precisely this belief.

From an ethical standpoint, this represents a fundamental transformation in media culture. When documentary evidence loses its authority, the boundary between fact
and fabrication becomes increasingly unstable. This instability lies at the core of the ethical problem posed by deepfakes.


Technology and Ethical Risk
It is important to emphasize that deepfake technologies are not inherently unethical. They rely on the same artificial intelligence tools widely
used in cinema, advertising, digital art, and accessibility technologies.

Synthetic media can serve:
/01 EDUCATIONAL PURPOSES
/02 ARTISTIC PURPOSES
/03 RESTORATIVE PURPOSES
when used transparently and responsibly.

Because deepfakes imitate the visual and auditory markers traditionally associated with authenticity, they exploit trust
rather than simply violate rules. Such technologies enter journalism and public communication without clear ethical
boundaries. There are currently almost no legal restrictions on the use of deepfakes, and regulation is unlikely to emerge as quickly as the technology develops. Taken together, these factors suggest that deepfakes threaten the credibility of media institutions as a whole. Deepfakes should therefore be understood not merely as a technological innovation, but as a fundamental ethical challenge. They force media professionals and audiences alike to reconsider
how authenticity, evidence, and trust are constructed in the digital age — and why these concepts remain essential
for journalism as a social institution.


Vol 2: Why Do We Care?
THE COLLAPSE OF EVIDENCE
The central ethical problem posed by deepfakes lies in their systemic impact on trust. Journalism as a social institution
relies on the expectation that media representations correspond, at least in principle, to reality. Even when interpretations differ, audiences traditionally assume that images, sounds, and recordings refer to events or actions that actually took place.

Deepfakes fundamentally disrupt this expectation. By producing highly convincing synthetic representations, they weaken
the connection between media content and lived reality. As trust erodes, journalism’s ability to function as a reliable
mediator between reality and society is called into question.

As Aliev noted in the interview, deepfakes erode trust not only in individual news stories, but in the entire system
of verification. According to him, even verified materials are now often met with suspicion, as audiences increasingly
ask whether what they see or hear can be trusted at all. In this environment, proof no longer guarantees belief.
In the interview conducted for this project, Rastyam Tuktarovich Aliev emphasized that the current crisis is not simply about false information,
but about a deeper transformation of perception. He described the situation
as a form of “media schizophrenia”, which is “a split perception where
hyper-trust in the spectacular coexists with doubt toward the authentic”.
MEDIA SCHIZOPHRENIA
As Aliev put it in the interview: “The trust crisis in journalism is not only about verification but reflects cultural tensions: a hunger for sensation, fatigue with institutions, and a need for emotionally charged explanations. A colleague of mine calls this the ‘fakt’, a fake that works like a fact because it mirrors society’s fears, hopes, and dreams. For cultural theorists, the ‘fakt’ is a tool to read media, revealing the social wounds and expectations that make manipulation persuasive.”
The Liar’s Dividend and the Normalization of Denial
The collapse of evidence enables what is commonly referred to as the liar’s dividend. Public figures confronted with damaging recordings or documents
can deny their authenticity by claiming that they are fake or AI-generated. While denial has always existed in political communication, deepfakes provide
a technologically plausible justification for rejecting even genuine evidence.

However, in the interview, Aliev stressed that the most dangerous consequence
of deepfakes is not the production of more false content, but the normalization of disbelief. Deepfakes provide a socially accepted excuse to deny even real evidence: because people know deepfakes exist, the claim “this recording is fake” becomes believable. As he explained, “the real threat is not just more fakes, but
a willingness to reject genuine evidence”. When denial becomes socially acceptable, journalism’s watchdog role is significantly weakened, and public accountability becomes increasingly difficult to enforce.
Disinformation, Cancel Culture, and Emotional Residue
Deepfakes also operate as powerful tools of disinformation and opinion manipulation. By imitating familiar faces and voices, they trigger strong emotional reactions that often precede critical reflection. In polarized media environments, such emotional impact makes deepfakes particularly effective.

Within the context of the woke agenda and cancel culture, deepfakes pose
an especially serious ethical risk. A fabricated video or audio recording can provoke public outrage and lead to the rapid “cancellation” of a celebrity, journalist, or politician. Even if the deepfake is later exposed and the person manages to defend their reputation, the emotional damage often persists.

Such content leaves what can be described as an emotional residue:
a lingering sense of distrust or negative association that factual correction cannot fully erase.
A Normative Gap

From an ethical perspective, deepfakes reveal a significant normative gap between the rapid development of media technologies and the slower evolution of ethical norms and institutional responses. Journalists are increasingly required to operate in an environment where traditional verification practices are no longer sufficient, while new standards of responsibility are still emerging.

When trust in evidence collapses, journalism risks losing its ability to inform the public, hold power accountable, and maintain a shared information space. For this reason, we believe it is crucial to foster a comprehensive public understanding of the deepfake phenomenon.
Vol 3: Can Deepfakes Ever Be Ethical?
At first glance, deepfakes appear fundamentally
incompatible with the ethical foundations of journalism. Journalism is built on principles such as truthfulness, transparency, accountability, and the minimization
of harm. Deepfakes, by simulating reality, seem
to violate these principles by design.
However, ethical evaluation requires
a more nuanced approach.

From the perspective of media ethics, technology itself
is morally neutral. Ethical responsibility lies with human
actors and institutions that decide how technologies are
applied. This brings us to the question: can deepfakes ever be ethical?

YES THEY CAN


01 — Historical Reconstruction and Education
Deepfakes offer a powerful tool for "bringing history to life." Ethically produced synthetic media can recreate historical figures or events where no footage exists, making complex history more accessible to younger audiences, for example. When used
in museums or educational documentaries, these deepfakes serve
the principle of truth-seeking by providing a clearer understanding of the past through immersive visualization.

However, it's worth noting the risk of factual inaccuracy when representing and reproducing images of the past. In such cases, a deepfake only multiplies false images, reinforcing distorted ideas about the past in people's minds.

This is why accuracy and scientific precision are crucial
factors to consider when using deepfakes for scientific
and educational purposes.
02 — Accessibility Technologies and Human Dignity
One of the strongest ethical arguments in favor of synthetic media emerges in the context of accessibility technologies.
AI-generated voices are increasingly used by people who
have lost the ability to speak due to illness or injury.
In some cases, synthetic voices are reconstructed based
on earlier recordings, allowing individuals to communicate
using a voice that resembles their original one.

From an ethical standpoint, these applications challenge
the assumption that synthetic media is inherently manipulative. Here, AI serves autonomy, dignity, and inclusion rather than deception.
03 — Entertainment Purposes
For example, museums around the world are beginning to implement AR guides created using neural networks. Visitors point their smartphone at an exhibit, and a historical figure — for example, Salvador Dalí or Abraham Lincoln — appears before them, engaging in a dialogue while maintaining the facial expressions, gestures, and unique voice of their real-life counterpart.

One of the most striking cases is the Salvador Dalí Museum project in Florida, "Dalí Lives". Using AI, the museum recreated the artist, who interacts with visitors, talks about his paintings, and even takes selfies with them. In this context, deepfakes don't deceive the viewer, but serve as a tool for empathy and deep immersion.

It is incredibly important to remember that any use of deepfakes, whatever its purpose, must be transparent and clearly disclosed.

Ethical uses are possible, but usually only when three conditions are met: consent (or legitimate representation), clear labeling, and efforts to minimize harm — including the risk that
the content will be taken out of context.
Vol 4: Examples That Leave Hope
Abstract ethical discussions around deepfakes become clearer when examined through concrete cases. Real-world examples demonstrate that synthetic media
can operate in ethically defensible contexts.

Below, we analyze two such cases.


Ethical judgment becomes possible only when technology is placed within
a specific social and cultural situation. The same technological tool can function either as manipulation or as care, depending on how and why
it is used.
EXAMPLE 1
One frequently discussed example of ethically sensitive synthetic media use is Fast & Furious 7 (2015), in which digital technologies were employed to reconstruct the appearance of actor Paul Walker after his death in order
to complete the film. Although this case is often described as a deepfake, it is more accurately understood
as a digitally reconstructed performance.

Crucially, the reconstruction reportedly took place with the consent of the actor’s family and within a clearly fictional cinematic context. Audiences were aware that digital technologies had been used, and the goal was
to complete an already existing narrative rather than deceive viewers.

In the interview, Aliev emphasized that consent plays a decisive role in such cases. He argued that when
synthetic media is used “with permission and without pretending to be documentary truth,” its ethical status
changes significantly. This example illustrates how transparency and context can mitigate ethical risks.
FAST & FURIOUS 7
EXAMPLE 2
Another illustrative case is the 2023 release of The Beatles’ song “Now and Then.” Artificial intelligence was used
to isolate and restore John Lennon’s voice from archival recordings, allowing the song to be completed decades after it was originally recorded.

In this case, AI functioned as a tool of restoration rather than fabrication. The process was openly explained
to the public, and the use of AI was framed as a technical means of preserving cultural heritage. The song
did not present a new or fabricated message attributed to Lennon, but rather clarified and completed
an existing archival trace.

Aliev described such uses of synthetic media as ethically significant because they “do not create false presence,
but work with what already exists”. This case demonstrates that synthetic media can support cultural memory without undermining trust, provided that transparency and historical accuracy are maintained.
The Beatles, “Now and Then”
CONCLUSION: DEEPFAKES, YES OR NO?
The question of whether deepfakes can be ethical ultimately points to a broader normative challenge. Journalism must develop ethical frameworks capable of responding to new technologies without abandoning its core values.

At this point, normative uncertainty sets the stage for deeper philosophical questions about truth, presence, and reality — questions that go beyond professional ethics and touch on the foundations
of contemporary media culture.
As Aliev noted in the interview,
In a broader sense, deepfakes force us to rethink many basic ideas: what it means to be “present”
in the media, how evidence works, how trust is built, and how institutions should respond.
This requires a reassembly of familiar frameworks, as it is not only an ethical issue but also an ontological one (what we consider “presence” and a person’s “trace” in media),
an epistemological one (how evidence and trust are structured), and, often overlooked,
a political-institutional one.
Synthetic media technologies
are unlikely to be “undone,” as they
are already embedded in the tools
of image and sound production. The real question is under what conditions they can exist without causing harm
or destroying trust.
Team

Bondar Julia: Interviewer, Co-author
Eliseeva Olga: Editor, Co-author
Vinogradova Mariana: Designer, Co-author
Okhray Victoria: Interviewer, Co-author
Shipelsky Martin: Editor, Co-author