Tag: OpenAI

  • What Is Sora, The AI That Creates 1-Minute Videos From Text? All About ChatGPT Maker OpenAI’s Instant Video Maker

    New Delhi: ChatGPT maker OpenAI has taken another step in generative AI by unveiling Sora, an AI model that can create realistic and imaginative scenes from text instructions.

    “Introducing Sora, our text-to-video model. Sora can create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions,” OpenAI tweeted on February 15, 2024.

    In the tweet, OpenAI demonstrated a short AI-generated video created from the prompt: “Beautiful, snowy Tokyo city is bustling. The camera moves through the bustling city street, following several people enjoying the beautiful snowy weather and shopping at nearby stalls. Gorgeous sakura petals are flying through the wind along with snowflakes.”


    OpenAI has said that Sora, the text-to-video model, can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.

    “In addition to being able to generate a video solely from text instructions, the model is able to take an existing still image and generate a video from it, animating the image’s contents with accuracy and attention to small detail. The model can also take an existing video and extend it or fill in missing frames,” OpenAI said.

    Sora is not yet publicly available; access is currently limited to red teamers. Sora builds on past research in DALL·E and GPT models, using the recaptioning technique from DALL·E 3, which involves generating highly descriptive captions for the visual training data.

    “We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products. We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model,” OpenAI said.

  • Hackers Utilizing ChatGPT To Enhance Cyberattacks, Microsoft And OpenAI Reveal

    New Delhi: Microsoft and OpenAI announced on Wednesday that hackers are using large language models (LLMs) such as ChatGPT to enhance their existing cyberattack methods. The companies have identified efforts by groups backed by Russia, North Korea, Iran, and China to use tools such as ChatGPT to research targets and develop social engineering tactics.

    In partnership with Microsoft Threat Intelligence, OpenAI moved to disrupt five state-affiliated actors who sought to use its AI services to facilitate malicious cyber operations.

    “We disrupted two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard,” said the Sam Altman-run company.

    The identified OpenAI accounts associated with these actors were terminated. These bad actors sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks.

    “Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” Microsoft said in a statement.

    While attackers will remain interested in AI and will probe the current capabilities and security controls of these technologies, it is important to keep those risks in context, the company said.

    “As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts,” the tech giant noted. (With IANS Inputs)

  • ChatGPT Users Can Now Bring GPTs Into Any Conversation, OpenAI Says

    OpenAI currently offers the ability to browse, create, and use GPTs only to its paying customers.

  • ‘AI Girlfriends’ Flood GPT Store Shortly After Launch, OpenAI Rules Breached

    New Delhi: OpenAI’s recently launched GPT Store is running into moderation difficulties just days after its debut. The platform offers personalized versions of ChatGPT, but some users are building bots that violate OpenAI’s guidelines.

    These bots, with names such as “Your AI companion, Tsu,” enable users to personalize their virtual romantic companions, violating OpenAI’s restriction on bots explicitly created for nurturing romantic relationships.

    The company is actively working to address the problem. OpenAI revised its policies when the store launched on January 10, 2024. However, policy violations appearing by the second day highlight the challenges of moderation.

    Growing demand for relationship bots adds a layer of complexity to the situation. As reported, seven of the 30 most downloaded AI chatbots in the United States last year were virtual friends or partners, a trend linked to the prevailing loneliness epidemic.

    OpenAI says it uses automated systems, human review, and user reports to assess GPT models, applying warnings or sales bans to those deemed harmful. However, the continued presence of girlfriend bots in the store casts doubt on the effectiveness of this process.

    The difficulty in moderation reflects challenges common across AI developers. OpenAI has faced issues implementing safety measures for earlier models such as GPT-3. With the GPT Store open to a wide audience, insufficient moderation is a significant concern.

    Other technology companies are likewise moving quickly to address problems with their AI systems, recognizing the importance of swift action amid growing competition. Yet these early breaches point to significant moderation challenges ahead.

    Even within the focused environment of a specialized GPT store, managing narrowly scoped bots appears to be a complicated task. As AI systems progress, ensuring their safety is set to become more complex.

  • OpenAI Allows Use Of Its AI Technologies For Military Applications

    There are several “killing-adjacent tasks” that a large language model (LLM) such as ChatGPT could augment, like writing code or processing procurement.