Sora: An AI App for Deepfakes from OpenAI



OpenAI has released Sora 2, a new version of its AI video generator, along with an accompanying app that lets users create, share, and watch generated clips. Legally and technically, the company is acting both naively and downright brazenly.

OpenAI app: What is Sora?

  • With the text-to-video model Sora 2, users can generate their own AI videos. The model is part of the iOS app Sora, a kind of social platform on which you can create, share, and watch generated clips. A video in which an AI clone of Sam Altman walks through a virtual world is meant to demonstrate the possibilities. The Sora app is initially only available in the USA and Canada; OpenAI wants to make it available worldwide.
  • With the Sora app, OpenAI is also introducing a new and absurd way of handling copyrighted content. Rights holders are expected to explicitly object to the use of their material; otherwise it will be used for Sora. This approach has not only triggered fierce debate but is probably not legally tenable in many countries.
  • A function called Cameo enables users to insert themselves into generated scenes after recording a verification clip. The verification is meant to prevent the video AI from reproducing public figures without their consent; OpenAI apparently wants to prevent political fake videos. The tool is also supposed to reject sexual depictions and mark videos as AI-generated.

Classification: An attack on TikTok and Co.?

With Sora 2, OpenAI has undoubtedly released an impressive video AI that opens up completely new creative possibilities. The model not only appears to be better than its predecessor, but for the first time also generates synchronized speech. So far, only a few competing models such as Google's Veo 3 have been able to do this.


At the same time, with the app, Sora represents an attack on platforms such as TikTok, Instagram, and YouTube. Concerns are also growing about so-called AI slop, a flood of inferior AI content that the competition already knows all too well.

The heart of the app is a function called "Cameo", which allows users to stage themselves in generated scenes using their own recordings – with almost endless possibilities. But despite its safety mechanisms, OpenAI seems to massively underestimate the potential for abuse in the form of deepfakes.

The boundaries between entertaining clips and manipulation are becoming blurred, because even the best mechanisms cannot guarantee security. Fake news and fraud are therefore likely inevitable, while rights holders may already feel cheated. Because silence should always mean protection, not consent.

Voices

  • OpenAI CEO Sam Altman cheers Sora 2 on X (formerly Twitter): "For many of us, this feels like the 'ChatGPT for creativity' moment, and it is fun and new. It is great that you can get from idea to result really quickly, and that a new social dynamic will develop from this."
  • Jose Marichal, a professor at California Lutheran University who studies how AI is reshaping society: "I think what captivates you is that they are somehow unbelievable, yet look realistic. We are either the manipulators or the manipulated. And that leads us to things that are something other than liberal democracy, something other than representative democracy."
  • Music producer Ed Newton-Rex works with AI himself, but is horrified: "OpenAI is signaling that it will use copyrighted characters in Sora videos unless the rights holder objects. If they get away with this, what is copyright still worth? It is being completely undermined by lobbying."

Outlook: An AI app for deepfakes?

With Sora, a viral lightning bolt could strike the social media industry. Cameo clips, memes, and bizarre moonwalks will initially generate euphoria and smiles. But Sora is a double-edged sword.


Over time, inferior clips are likely not only to water down the user experience but also to attract fraudsters and Machiavellians. OpenAI may still be able to counteract this.

However, the absurd legal stance of wanting to shamelessly exploit other people's content is likely to end in a legal shambles. The company's goals, though, are clearly recognizable.

On the one hand, OpenAI wants to make a technological statement. On the other, it is testing legal limits. Ultimately, Sora is meant to become a platform the company can monetize. Because what many quickly forget amid the AI hype: OpenAI is currently operating at a loss.

Also interesting:

  • Tilly Norwood: AI actress shocks Hollywood
  • Pulse: ChatGPT now also answers without being asked
  • The Wolfram Weimer experiment: The first German minister with an AI avatar
  • OpenAI for Germany: With AI bureaucracy against bureaucracy?

The article "Sora: An AI App for Deepfakes from OpenAI" first appeared on Basic Thinking.


As a tech industry expert, I have mixed feelings about Sora, the AI app for deepfakes developed by OpenAI. On one hand, the technology behind Sora is undeniably impressive and showcases the potential of AI to create incredibly realistic and convincing deepfake videos. This could have a wide range of applications, from entertainment to digital marketing.

However, I also have concerns about the ethical implications of such technology. Deepfakes have the potential to be misused for malicious purposes, such as spreading misinformation, creating fake news, or even manipulating individuals into believing something that is not true. This could have serious consequences for society as a whole.

It will be crucial for OpenAI to implement robust safeguards and regulations to prevent the misuse of Sora and similar AI technologies. Additionally, there needs to be a greater emphasis on educating the public about the existence of deepfakes and how to spot them.


Overall, while Sora represents a significant advancement in AI technology, it also highlights the need for responsible and ethical development and usage of such tools in order to prevent harm and protect society from potential negative consequences.
