"Generative AI, Copyright, and Ethics: How Are TV Stations Coping?" – Ryo Imabeppu (President, Formulation ITS)

As AI enters video production, taking on roles such as planning, scriptwriting, CG, and editing assistance, the next issues inevitably arise: copyright and ethics. This isn’t just a legal issue; it’s a fundamental theme for the public-facing media of television stations.
Generative AI learns from vast amounts of existing data and produces output based on it. So whose work is an AI-generated video or script? The company that developed the AI? The creator who wrote the prompts? Or the countless creators whose works the AI learned from?
At present, there is no clear answer to this question. In Japan, the prevailing view is that purely AI-generated output is unlikely to qualify as a copyrighted work, but the determination depends on the extent of human creative involvement. In other words, the more a work is left entirely to AI, the murkier the ownership of its rights becomes. That is why, even as many TV stations have begun using AI as a convenient tool, they remain cautious about the final broadcast product.
In practice, even when AI is used to draft program structure proposals, narration scripts, and subtitle candidates, stations strictly require humans to check every piece of footage and text before it airs.
In news, information programs, and documentaries, many stations avoid using AI-generated recreations or CG images as is, instead using them only as “reference material.”
The reason is clear: the authenticity and provenance of AI-generated footage can never be fully explained to viewers. If an AI-generated reenactment closely resembles the composition or staging of a past film or drama, the program's producers and the broadcaster will be held responsible, even if the resemblance was unintentional.
AI learns from the past, and broadcasters end up bearing responsibility for that past.
Even more troubling is that problems arise even when no one acts with malicious intent. Plagiarism by a human is clearly unlawful. With AI, however, the explanation that "it simply emerged from the training data" can plausibly be offered. This gray area is what makes AI hardest for TV stations to handle.
Riskier still than TV stations is AI use by individuals. TV stations have legal departments and guidelines; individual creators have neither.
Even when AI-generated video or music closely resembles an existing work or a real person, it can spread on social media under the excuse that "I didn't know" or "no harm was intended."
On social media, speed trumps legality. By the time a viral video becomes an issue, it has already been viewed millions of times and its origins can no longer be traced. Expression has been democratized, but responsibility remains diffuse.
AI does not make judgments. It is humans who make judgments and assume responsibility. Copyright in the age of AI is not simply about protecting rights. It is the question of “who will accept responsibility for releasing this expression to the world?”
Before regulations and laws can catch up, the judgment and ethics of those on the ground are being tested first. The accumulation of these judgments will determine the future of the industry.
※This article was translated from Japanese into English with the assistance of AI.
