With the holiday season upon us, many companies are marking the occasion with deals, promotions, and other campaigns. OpenAI is joining in with its "12 days of OpenAI" event series.
On Wednesday, OpenAI announced in a post on X that, starting Dec. 5, it would host 12 days of live streams and release "a bunch of new things, big and small."
Here's everything you need to know about the campaign, as well as a round-up of every day's drops.
What are the '12 days of OpenAI'?
OpenAI CEO Sam Altman shared a few more details about the event, which kicked off at 10 a.m. PT on Dec. 5 and runs daily for 12 weekdays, each day featuring a live stream with a launch or demo. The launches will be a mix of "big ones" and "stocking stuffers," according to Altman.
🎄🎅starting tomorrow at 10 am pacific, we are doing 12 days of openai. each weekday, we will have a livestream with a launch or demo, some big ones and some stocking stuffers. we’ve got some great stuff to share, hope you enjoy! merry christmas.
Friday, December 13
One of ChatGPT's most requested features has been a way to better organize and keep track of your conversations. Today, OpenAI delivered with a new feature called "Projects."
Projects is a new way to organize and customize your chats in ChatGPT, part of the company's ongoing effort to optimize the core ChatGPT experience.
When creating a Project, you can include a title, a custom folder color, relevant project files, instructions for ChatGPT on how it can best help you with the project, and more, all in one place.
Within a Project, you can start new chats or add previous chats from the sidebar. ChatGPT can also answer questions using the Project's context in a regular chat format. Chats are saved in the Project, making it easier to pick up your conversations later and know exactly where to find them.
Projects is rolling out to Plus, Pro, and Team users starting today. OpenAI says it's coming to free users as soon as possible, while Enterprise and Edu users will get it early next year.
Thursday, December 12
When the live stream started, OpenAI addressed the elephant in the room -- the outage that took its services, including ChatGPT, down the day before. OpenAI apologized for the inconvenience and said its team is working on a post-mortem to be published later today.
Then the stream got straight to the news -- another highly anticipated announcement:
Advanced Voice Mode now has screen-sharing and visual capabilities, meaning it can assist based on what it sees, whether through your phone's camera or on your screen.
These capabilities build on what Advanced Voice already does well -- engaging in casual conversation the way a human would. Those natural-sounding conversations can be interrupted, span multiple turns, and follow non-linear trains of thought.
In the demo, the presenter asks ChatGPT's Advanced Voice for directions on making a cup of coffee. As the presenter works through the steps, ChatGPT offers insights and directions verbally.
There's another bonus for the Christmas season: a new Santa voice. To activate it, all users have to do is click the snowflake icon. Santa is rolling out today everywhere users can access ChatGPT's voice mode. The first time you talk to Santa, your usage limit resets even if you've already hit it, so you can have a conversation with him.
Video and screen sharing are rolling out in the latest mobile apps starting today and throughout next week to all Team users and most Pro and Plus subscribers. Pro and Plus subscribers in Europe will get access "as soon as we can," and Enterprise and Edu users will get access early next year.
Wednesday, December 11
Apple released iOS 18.2 today. The release includes ChatGPT integrations across Siri, Writing Tools, and Visual Intelligence, so today's live stream focused on walking through those integrations.
Siri can now recognize when you ask questions outside its scope that could benefit from being answered by ChatGPT instead. In those instances, it will ask if you'd like to process the query using ChatGPT. Before any request is sent to ChatGPT, a message notifying the user and asking for permission will always appear, placing control in the user's hands as much as possible.
Visual Intelligence refers to a new feature for the iPhone 16 lineup that users can access by tapping the Camera Control button. Once the camera is open, users can point it at something and search the web with Google, or use ChatGPT to learn more about what they are viewing or perform other tasks such as translating or summarizing text.
Writing Tools now features a new "Compose" tool, which allows users to create text from scratch by leveraging ChatGPT. With the feature, users can even generate images using DALL-E.
All of the above features are subject to ChatGPT's daily usage limits, the same limits users hit on the free version of ChatGPT. Users can choose whether or not to enable the ChatGPT integration in Settings.
Monday, December 9
OpenAI teased the third day's announcement as "something you've been waiting for," then delivered the much-anticipated release of its video model, Sora. Here's what you need to know:
Sora features an explore page where users can view each other's creations. Users can click on any video to see how it was created.
A live demo showed the model in use: the presenters entered a prompt and picked an aspect ratio, duration, and even presets. I found the resulting videos realistic and stunning.
OpenAI also unveiled Storyboard, a tool that lets users specify inputs for every frame in a sequence.
Friday, December 6
On the second day of "shipmas," OpenAI expanded access to its Reinforcement Fine-Tuning Research Program.
Thursday, December 5
OpenAI started with a bang, unveiling two major upgrades to its chatbot: a new tier of ChatGPT subscription, ChatGPT Pro, and the full version of the company's o1 model.
The full version of o1:
ChatGPT Pro:
Is meant for ChatGPT Plus superusers, granting them unlimited access to the best OpenAI has to offer, including OpenAI o1, o1-mini, GPT-4o, and Advanced Voice Mode
Features o1 pro mode, which uses more compute to reason through the hardest science and math problems
Costs $200 per month
Where can you access the live stream?
The live streams are held on the OpenAI website and posted to its YouTube channel immediately afterward. To make access easier, OpenAI also posts a link to each live stream on its X account 10 minutes before it starts; streams begin at approximately 10 a.m. PT/1 p.m. ET daily.