Deepfakes: When Reality Can Be Manufactured

Not all crimes rely on physical force; some rely on perception.

Deepfakes—AI-generated images, videos, and audio designed to convincingly imitate real people—represent a growing form of harm at the intersection of technology, consent, and identity. While often framed as a future problem, deepfakes are already being used to harass, exploit, and manipulate individuals in ways that existing legal systems are poorly equipped to address.

The danger of deepfakes is not simply that they are false, but that they are believable, widely shareable, and capable of causing lasting damage even after they are exposed as fabrications.

What Are Deepfakes?

A deepfake is synthetic media created using artificial intelligence trained on real images, videos, or audio recordings. These systems analyze thousands of data points—facial movements, vocal inflection, speech cadence, lighting, and expression—to generate new media that imitates a real person.

What distinguishes deepfakes from earlier forms of digital manipulation is their fluidity. They are not static edits or isolated alterations. Deepfakes move, speak, and respond in ways that mirror authentic human behavior. As a result, they often evade the immediate visual cues people rely on to detect manipulation, especially when viewed on small screens, shared rapidly, or stripped of context.
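To make the mechanics concrete, the sketch below shows the shared-encoder, two-decoder autoencoder design popularized by early consumer face-swap tools: one encoder learns a compact representation of any face, each identity gets its own decoder, and the "swap" comes from decoding one person's expression with the other person's decoder. It is a deliberately simplified illustration in Python (PyTorch); the layer sizes, toy training loop, and random stand-in data are assumptions chosen for readability, not a working deepfake pipeline.

```python
# Conceptual sketch of the classic face-swap architecture: a shared
# encoder plus one decoder per identity. Shapes and data are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face image into a compact latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # identity A, identity B
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)

# Random stand-in batches; a real system trains on thousands of aligned crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for _ in range(1):  # real training runs for many epochs
    opt.zero_grad()
    # Each decoder learns to reconstruct only its own identity.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode a face of A, decode with B's decoder, yielding
# B's identity with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

Real systems add face detection and alignment, adversarial or perceptual losses, and frame-by-frame blending back into video, which is why convincing results once required expertise and now increasingly do not.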

As this technology becomes more accessible, the barrier to creating convincing synthetic media continues to drop. What once required specialized expertise is now available through consumer-facing tools, placing the ability to fabricate reality into far more hands than traditional editing ever did. This shift has significant implications not only for individual victims but for how evidence, credibility, and trust are evaluated more broadly.

Historical Context: Manipulated Media Is Not New

Manipulated media did not begin with artificial intelligence. Long before deepfakes, photographs were altered, audio was selectively edited, and propaganda relied on framing and omission to manufacture credibility. What has changed is not the existence of deception but its speed, scale, and plausibility.

Deepfakes can be produced rapidly, tailored to specific targets, and distributed widely before verification occurs. They transform what was once a labor-intensive process into a repeatable, scalable one.

This matters because modern audiences are conditioned to treat audio and video as proof. Deepfakes exploit that expectation, creating the impression that someone said or did something they never did, in a way that feels authentic, even when it is entirely fabricated.

How Deepfakes Cause Harm

Deepfake-related harm is often framed as hypothetical or futuristic, but for many victims, the impact is immediate and deeply personal.

The most prevalent documented harm involves non-consensual sexual deepfakes, where a person’s likeness is inserted into explicit material without their knowledge or consent. These cases are not isolated incidents of embarrassment; they frequently result in sustained harassment, reputational damage, and emotional distress. Victims may experience anxiety, fear of professional consequences, and a persistent sense of vulnerability knowing that the content can be copied, altered, or redistributed indefinitely.

Deepfakes are also used as tools of coercion and intimidation. Fabricated audio recordings or videos may be deployed to threaten exposure, force compliance, or undermine credibility in personal or professional settings. In these cases, the mere existence of the content—regardless of whether everyone believes it—creates leverage. The harm lies in the uncertainty it introduces and the pressure it places on the victim to respond, explain, or defend themselves.

Importantly, deepfake harm does not require universal belief. It only requires that the content be plausible enough, for long enough, to cause doubt. Once doubt is introduced, the damage to trust and reputation can persist even after the content is disproven.

Case Study: Non-Consensual Synthetic Media

Scarlett Johansson and Deepfake Pornography

Context: Actor Scarlett Johansson has been one of the most frequently cited public figures targeted by deepfake pornography. Beginning in the late 2010s, sexually explicit videos were circulated online in which her face was digitally superimposed onto other individuals’ bodies without her consent.

The content was entirely fabricated, yet highly realistic. Despite being demonstrably false, the videos spread rapidly across online platforms and were difficult to remove. Johansson publicly acknowledged the existence of the deepfakes, noting that legal remedies were limited and that the internet allowed such material to persist even after being debunked.

Why it matters: This case illustrates a core failure in how the law addresses deepfake abuse. The harm did not depend on authenticity, but on the non-consensual use of Johansson’s likeness in sexualized content. Even with public visibility and resources, control over distribution and meaningful accountability proved elusive—highlighting the heightened vulnerability faced by private individuals subjected to similar abuse.

Common Misconceptions About Deepfakes

Several assumptions often minimize or obscure the harms of deepfakes.

One of the most persistent is the belief that falsity negates impact. In reality, being fake does not prevent damage. Victims are often forced to repeatedly disprove content they never created, and the burden of constant explanation becomes its own form of harm.

Another misconception is that deepfakes only affect celebrities. While public figures are visible targets, synthetic media abuse increasingly affects private individuals—particularly in cases involving harassment, coercion, workplace sabotage, or relationship-based abuse. Lower visibility often means fewer resources and fewer avenues for correction.

There is also a widespread belief that platforms will remove deepfakes once reported. In practice, removal is inconsistent. Content may be reuploaded, mirrored, archived, or shared privately, leaving victims to manage persistence rather than resolve it.

Finally, many assume deepfakes are easy to spot. Most people cannot reliably identify convincing synthetic media, especially when it appears briefly or out of context. Deepfake harm depends on plausibility, not perfection.

The Legal System Lags Behind Technology

🔎 Legal Explainer

Are Deepfakes Illegal?

Short answer: sometimes—but not always. In many places, the law focuses less on the technology and more on how it is used.

Deepfakes may be prosecuted under existing laws such as:

  • Harassment or stalking statutes
  • Fraud or impersonation laws
  • Defamation (civil in many cases)
  • Identity theft / false personation
  • Non-consensual intimate image (NCII) or “revenge porn” laws (where applicable)

Because many statutes predate synthetic media, cases often come down to proving intent, harm, and who created or shared it.

Deepfakes highlight a recurring challenge within the legal system: laws tend to evolve in response to harm, rather than in anticipation of it.

Most criminal and civil statutes currently used in deepfake cases were written long before synthetic media was technologically feasible. As a result, victims and prosecutors are often forced to rely on legal theories that only partially capture the nature of the harm.

This mismatch places a significant burden on victims. They may be required to demonstrate intent, financial loss, or specific categories of damage that do not reflect the lived reality of deepfake abuse. In many cases, the law treats synthetic media as a speech issue rather than an identity violation, minimizing the personal and psychological impact experienced by those targeted.

The result is a system that recognizes harm in theory, but struggles to respond to it in practice.

Why Prosecution Is So Difficult

Even when deepfake harm is acknowledged, accountability is often elusive.

Identifying who created or distributed a deepfake can be difficult, particularly when content is shared through anonymous accounts, encrypted platforms, or servers located outside the victim’s jurisdiction. By the time investigators become involved, the original source may be obscured or the content widely replicated across multiple platforms.

Legal standards further complicate prosecution. Many statutes prioritize tangible harm—such as financial loss or demonstrable professional damage—while treating emotional distress or reputational injury as secondary concerns. This framework fails to account for the anticipatory nature of deepfake harm: the fear of future circulation, the erosion of credibility, and the ongoing need to manage a false narrative.

Timing is another critical issue. Deepfakes can spread rapidly, gaining traction within hours, while legal processes unfold slowly. For victims, this imbalance reinforces a sense of powerlessness, as meaningful intervention often comes only after the harm has already occurred.

Defamation, Consent, and the Limits of Existing Law

Deepfakes expose a fundamental tension in how the law understands harm. Traditional legal frameworks, particularly defamation law, are built around questions of truth and falsity—an approach that often fails to capture the lived reality of synthetic media abuse.

Defamation claims typically require proof that a false statement was presented as fact, published to others, and caused reputational harm. Deepfakes complicate this analysis by blurring the line between assertion and fabrication. Even when content is experienced as credible, creators may argue that it is parody, satire, or otherwise obviously artificial, placing victims in a difficult evidentiary position.

Consent offers a clearer ethical and legal lens. A person’s face, voice, and likeness are extensions of identity, not neutral assets. When synthetic media is created or shared without permission, the harm arises from the unauthorized use of that identity—regardless of whether the content is real or fabricated.

Many existing statutes, however, continue to focus on authentic images or recordings. This leaves victims of synthetic abuse navigating legal gray areas, forced to rely on frameworks that were never designed to address identity-based digital harm.

Where the Law Stands Today

The legal response to deepfakes in the United States remains fragmented and uneven, shaped more by reactive legislation than by a comprehensive national framework. While awareness of the harms of synthetic media has increased, the protections available to victims still depend heavily on geography, context, and the specific use of a deepfake.

At the federal level, there is no single law that broadly criminalizes the creation or distribution of deepfakes. Instead, federal enforcement relies on existing statutes—such as those related to fraud, extortion, identity theft, or interstate harassment—when deepfakes intersect with other criminal conduct. This means that many forms of deepfake abuse, particularly those involving reputational or psychological harm without apparent financial loss, may fall outside federal jurisdiction entirely.

In response to these gaps, several states have enacted laws specifically addressing deepfakes, though their scope is limited and inconsistent. Most state legislation has focused on two narrow areas: non-consensual sexual imagery and election interference. Some states have updated revenge-porn statutes to explicitly include AI-generated or synthetic images, recognizing that consent—not authenticity—is the central harm. Others have passed laws restricting deceptive political deepfakes during defined pre-election windows to prevent last-minute voter manipulation.

Even where these laws exist, enforcement remains challenging. Many statutes require proof of intent to deceive or harm, high evidentiary thresholds, or rapid reporting timelines that do not reflect how quickly victims may become aware of the content. Civil remedies are sometimes available, but they place the burden on victims to initiate legal action, bear the costs, and relive the harm repeatedly through litigation.

By contrast, some international frameworks have begun approaching deepfakes through data protection, privacy, and digital safety laws rather than narrowly through speech or fraud laws. In parts of Europe, misuse of a person’s likeness may fall under broader protections related to personal data, image rights, or dignity. While these approaches are not without their own limitations, they reflect a growing recognition that synthetic media abuse is fundamentally about identity misuse, not merely deception.

What remains consistent across jurisdictions is the absence of a unified standard. Victims’ access to protection often depends less on the severity of harm and more on whether their experience fits neatly into an existing legal category. Until laws explicitly address synthetic media as a form of identity-based abuse, deepfake cases will continue to be handled unevenly—leaving many victims without clear paths to accountability.

Deepfakes and Public Trust

Beyond individual cases, deepfakes pose a broader threat to public trust and collective understanding of reality.

As synthetic media becomes more common, skepticism toward visual evidence increases. Legitimate recordings may be dismissed as fake, while fabricated content may be accepted as authentic. This erosion of shared reality benefits those seeking to evade accountability, allowing real misconduct to be denied under the guise of technological doubt.

In this environment, deepfakes do not merely distort individual reputations—they undermine the credibility of evidence itself. Once trust in visual documentation is weakened, institutions that rely on proof, testimony, and verification face new challenges in maintaining legitimacy.

Media Literacy in an Era of Uncertainty

Deepfakes do not simply create false media—they create false certainty. They exploit familiarity, authority, and emotional response in environments where verification rarely precedes reaction.

Media literacy, in this context, is not about confidently spotting deepfakes. Detection is unreliable for most people. A more realistic standard emphasizes context: checking sources, seeking independent confirmation, and resisting the impulse to share content that provokes immediate outrage or fear before it has been validated.

A Victim-Centered Reality

A recurring theme in deepfake cases is the minimization of harm. Victims are often told that the content “isn’t real,” as though falsity negates impact. In reality, the distress, fear, and reputational consequences are no less severe simply because an algorithm generated the media.

These legal limitations are not accidental. They reflect how the legal system has historically defined harm—favoring tangible losses and traditional categories of wrongdoing—while struggling to address identity-based digital abuse that spreads quickly and persists indefinitely. Deepfakes force the question the law often avoids: what protections exist when a person’s likeness can be replicated without consent, and the damage occurs before institutions can respond?

That gap is why reform cannot be limited to a single statute or a single election cycle. Any meaningful legal response has to recognize that deepfake abuse is not simply deception—it is the weaponization of identity at scale.

Where the Law Needs to Go

Deepfakes expose a fundamental mismatch between modern forms of harm and the legal frameworks used to address them. While existing laws can sometimes be stretched to cover synthetic media abuse, they were not designed to confront a reality in which a person’s identity can be replicated, manipulated, and redistributed without their knowledge or consent. As a result, accountability remains inconsistent, and protection for victims remains uneven.

For meaningful progress to occur, legal systems must move beyond treating deepfakes as isolated instances of deception or speech. At their core, deepfake abuses are identity-based harms. They exploit a person’s likeness, voice, and perceived credibility in ways that can permanently alter how others see that individual. The law must begin to recognize this category of harm explicitly, rather than forcing cases into ill-fitting frameworks developed for an earlier technological era.

One critical shift involves centering consent rather than authenticity. Many current statutes hinge on whether an image or recording is “real,” overlooking the reality that harm can exist even when the media is fabricated. A consent-based approach acknowledges that the unauthorized use of a person’s identity—especially in sexualized, defamatory, or coercive contexts—is harmful regardless of how the content was created. This shift would align deepfake regulation with broader principles of bodily autonomy and personal dignity.

Another necessary development is clearer criminalization of malicious synthetic media, paired with well-defined intent standards. Laws must distinguish between benign or artistic uses of AI and conduct intended to harass, exploit, manipulate, or coerce. Without this clarity, victims are left navigating vague statutes while bad actors exploit ambiguity. Well-drafted laws can protect free expression while still drawing firm boundaries around abuse.

Equally important is relieving the evidentiary burden placed on victims. Current legal processes often require those harmed to prove not only that content is false, but that it caused quantifiable damage. In the context of deepfakes, harm is frequently cumulative and anticipatory: reputations are undermined, trust is eroded, and victims live with the ongoing fear of re-circulation. Legal standards must evolve to recognize psychological harm, reputational injury, and loss of personal security as legitimate and actionable damages.

Platform accountability also plays a critical role. Deepfake content spreads primarily through digital platforms that profit from engagement but are often slow to respond to abuse. Legal reforms should establish clearer obligations for platforms to act promptly when synthetic media is reported, including preservation of evidence, transparent moderation processes, and meaningful consequences for failure to respond. Without structural accountability, victims are forced to fight both perpetrators and the systems that amplify them.

Finally, the law must adopt a forward-looking posture. Deepfake technology will continue to advance, becoming more accessible and more convincing over time. Reactive legislation will always lag behind innovation unless lawmakers adopt adaptable standards that focus on harm, consent, and misuse rather than on specific technologies. Future-proof legal frameworks should be flexible enough to address emerging forms of synthetic media without requiring constant statutory overhaul.

Until these changes occur, deepfake victims will remain caught between rapidly evolving technology and a legal system struggling to keep pace. Addressing deepfake abuse is not merely about regulating AI—it is about reaffirming the principle that a person’s identity cannot be weaponized without consequence.

Policy Reform: Legal Frameworks

What Meaningful Deepfake Reform Requires

Addressing deepfake abuse requires more than adapting existing laws. Without clear, forward-looking standards, victims remain dependent on legal frameworks that were never designed to address synthetic identity misuse.

  • Consent-centered statutes that recognize identity misuse as harm, regardless of whether media is authentic.
  • Explicit deepfake provisions addressing malicious creation, distribution, and threats to distribute synthetic media.
  • Updated evidentiary standards that account for reputational, psychological, and anticipatory harm.
  • Platform accountability measures requiring timely removal, evidence preservation, and transparent reporting processes.
  • Technology-neutral legislation focused on misuse and harm rather than specific tools.

Why this matters: Without reform, deepfake abuse remains legally fragmented, leaving victims without consistent protections or clear avenues for accountability.

Resource: Practical Guidance

If You’re Targeted by a Deepfake

Preserve what you can. Save links, timestamps, usernames, and screenshots of where the content appears. If possible, capture the surrounding page context, not only the media itself.
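For readers comfortable running a short script, this capture step can be made repeatable and verifiable. The minimal Python sketch below uses only the standard library; the URL and filenames are placeholders to adapt. It saves a copy of a page and logs the UTC timestamp and a SHA-256 hash of the saved bytes, which can help demonstrate later that the file has not been altered. It supplements ordinary screenshots rather than replacing them.

```python
# Minimal evidence-preservation sketch: fetch a page, save it locally,
# and log a UTC timestamp plus a SHA-256 hash of the saved bytes.
# URL and filenames are placeholders.
import hashlib
import json
import urllib.request
from datetime import datetime, timezone

url = "https://example.com/page-where-content-appears"  # placeholder

with urllib.request.urlopen(url) as response:
    page_bytes = response.read()

captured_at = datetime.now(timezone.utc).isoformat()
digest = hashlib.sha256(page_bytes).hexdigest()

# Save the raw page exactly as received.
with open("capture.html", "wb") as f:
    f.write(page_bytes)

# Append a log entry so each capture is timestamped and verifiable.
with open("evidence_log.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps({
        "url": url,
        "captured_at_utc": captured_at,
        "sha256": digest,
        "saved_as": "capture.html",
    }) + "\n")

print(f"Saved {len(page_bytes)} bytes; SHA-256 {digest}")
```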

Document escalation. Note any threats, demands, coordinated harassment, or repeated uploads. Patterns often matter for reporting, workplace safety, and any future legal options.

Report strategically. Use platform reporting pathways that match the harm (impersonation, non-consensual imagery, harassment). Keep confirmation emails or ticket numbers when available.

Get support. Identity-based digital harm can be isolating. If you feel unsafe or overwhelmed, reach out to a trusted person, an advocacy organization, or qualified professional support.

Closing Reflection

Deepfakes force a reckoning with how harm is understood in a digital age. They challenge long-standing assumptions about evidence, credibility, and the boundaries of personal autonomy, exposing gaps in systems that were built for a world where seeing was believing.

For victims, the consequences are not abstract. Deepfake abuse is experienced as an erosion of identity—of control over one’s face, voice, and public presence. The knowledge that a likeness can be replicated and weaponized without consent creates an ongoing vulnerability that persists even after content is debunked or removed. Harm persists not because the media is real, but because the impact is.

Institutionally, deepfakes reveal how slowly law and policy respond to technological change. Existing legal frameworks continue to prioritize truth over consent, tangible loss over psychological harm, and reactive enforcement over preventative safeguards. In doing so, they leave many victims navigating systems that recognize injury only when it fits familiar categories.

At the same time, deepfakes undermine public trust more broadly. As fabricated media becomes easier to produce and harder to detect, skepticism toward all visual evidence increases. This erosion benefits those who seek to deny accountability while placing greater burdens on victims to prove authenticity, innocence, or credibility.

Deepfakes are not simply a problem of artificial intelligence. They are a problem of how society defines harm, responsibility, and protection in a digital environment. Addressing them requires more than technical solutions or isolated legal fixes. It requires a shift toward frameworks that recognize identity misuse, center consent, and acknowledge that damage can occur long before courts or platforms intervene.

Until those shifts occur, deepfakes will continue to exist in the space between innovation and accountability—where harm is real, but remedies remain uncertain.

References

Byman, D. L., Gao, C., Meserole, C., & Subrahmanian, V. S. (2023). Deepfakes and international conflict (Vol. 8). Washington, DC: Brookings Institution.

Chadha, A., Kumar, V., Kashyap, S., & Gupta, M. (2021, May). Deepfake: An overview. In Proceedings of the Second International Conference on Computing, Communications, and Cyber-Security: IC4S 2020 (pp. 557–566). Springer Singapore.

Citron, D. K. (2018). Sexual privacy. Yale Law Journal, 128, 1870.

Groh, M. (2023). Detect deepfakes: How to counteract misinformation created by AI. MIT Media Lab. https://www.media.mit.edu/projects/detect-fakes/overview/

Mihov, D. (2025). Scarlett Johansson calls viral AI deepfake ad “terrifying.” Forbes. https://www.forbes.com/sites/dimitarmixmihov/2025/02/12/terrifying-scarlett-johansson-denounces-viral-ai-ad-calls-for-deepfake-ban/

Rana, M. S., Nobi, M. N., Murali, B., & Sung, A. H. (2022). Deepfake detection: A systematic literature review. IEEE Access, 10, 25494–25513.

Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 2056305120903408.

Wahab, A. (2025). Futures of deepfake and society: Myths, metaphors, and future implications for a trustworthy digital future. Futures, 173, 103672. https://doi.org/10.1016/j.futures.2025.103672

Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11).
