


Pika's big move: starting today, video and sound effects can be produced "in one pot"!
Just now, Pika released a new feature:
Sorry, we've been on mute until now.
Starting today, everyone can seamlessly generate sound effects for their videos: Sound Effects!
There are two ways to generate:
- Either give a prompt and describe the sound you want;
- Or directly let Pika generate it automatically based on the video content.
And Pika said very confidently: "If you think the sound effect sounds great, that's because it is."
Cars, radios, eagles, swords, cheers... the range of sounds seems endless, and the results stay closely matched to the video footage.
Beyond the promotional video, Pika's official website has also posted multiple demos.
For example, without any prompt, the AI simply watched a video of bacon being grilled and matched sound effects that don't feel out of place at all.
Another prompt:
Super saturated color, fireworks over a field at sunset.
Pika can generate the video and add sound at the same time, and judging from the result, the sound is timed quite accurately to the moment the fireworks burst.
This new feature landed over the weekend. While netizens were marveling at how hard Pika is pushing the competition, some also thought:
It's collecting all the "infinity stones" for multi-modal AI creation.
So let's take a closer look at how to use Pika's Sound Effects.
“make some noise” for videos
Generating sound effects for a video in Pika is extremely simple!
For example, with just one prompt, video and sound effects can be "produced in one pot":
Medieval trumpet player.
Compared with the earlier video-generation workflow, you now only need to turn on the "Sound effects" toggle below.
The second method of operation is to dub it separately after generating the video.
For example, in the video below, click "Edit" below, and then select "Sound Effects":
Then you can describe the sound you want, for example:
Race car revving its engine.
Then, in just a few seconds, Pika generates sound effects based on the description and the video, with 6 variations to choose from!
It is worth mentioning that the Sound Effects feature is currently in testing and only available to Super Collaborator and Pro users.
However, Pika also said: "We will launch this feature to all users soon!"
Now a group of netizens have started testing this Beta version and said:
The sound effects sound very suitable for the video and add a lot of atmosphere.
How does it work?
As for the principle behind Sound Effects, Pika has not disclosed it this time. However, after Sora went viral, the voice startup ElevenLabs built a similar video-dubbing feature.
At the time, NVIDIA senior scientist Jim Fan gave a fairly in-depth analysis of it.
He believes that for AI to learn an accurate video-to-audio mapping, it also has to model some "implicit" physics in the latent space.
He detailed the problems that the end-to-end Transformer needs to solve when simulating sound waves:
- Identify each object's category, material, and spatial location.
- Recognize higher-order interactions between objects: for example, is the stick hitting wood, metal, or a drum skin? At what speed?
- Identify the environment: Is it a restaurant, a space station, or Yellowstone Park?
- Retrieve typical sound patterns of objects and environments from the model's internal memory.
- Use "soft", learned physical rules to combine and adjust the parameters of sound patterns, and even create entirely new sounds on the fly. It's a bit like "procedural audio" in game engines.
- If the scene is complex, the model needs to superimpose multiple sound tracks according to the spatial position of the object.
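To make the "procedural audio" analogy a bit more concrete, here is a toy sketch (our own illustration with made-up material presets, not Pika's or any game engine's actual code): an impact sound is synthesized from a few hand-picked "material" parameters and then layered into a scene at different times.

```python
# Toy "procedural audio" sketch: impact sounds built from hand-picked material
# parameters (resonant frequencies, decay rates) and layered into a scene.
import numpy as np

SR = 44_100  # sample rate, Hz

# Hypothetical material presets: (frequency Hz, decay rate 1/s, gain) per mode.
MATERIALS = {
    "wood":  [(220, 18.0, 1.0), (560, 25.0, 0.5), (1150, 40.0, 0.25)],
    "metal": [(480,  3.0, 1.0), (1270, 4.5, 0.7), (2890,  6.0, 0.4)],
}

def impact(material: str, strike_speed: float = 1.0, duration: float = 1.5) -> np.ndarray:
    """Sum of exponentially decaying sinusoids; harder strikes are louder."""
    t = np.linspace(0.0, duration, int(SR * duration), endpoint=False)
    out = np.zeros_like(t)
    for freq, decay, gain in MATERIALS[material]:
        out += gain * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)
    out *= min(strike_speed, 2.0)                 # crude loudness scaling
    return out / (np.max(np.abs(out)) + 1e-9)     # normalize to [-1, 1]

# Layer two events at different times, like mixing tracks for a complex scene.
scene = np.zeros(SR * 2)
for start_s, mat, speed in [(0.1, "wood", 0.8), (0.9, "metal", 1.5)]:
    clip = impact(mat, speed)
    i = int(start_s * SR)
    scene[i:i + len(clip)] += clip[: len(scene) - i]
```

A real video-to-audio model, of course, would have to infer all of those parameters (material, speed, timing, environment) from the pixels alone, which is exactly the point Jim Fan is making.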
None of this exists as an explicit module; it is all learned by gradient descent from a huge number of (video, audio) pairs, which come naturally time-aligned in most Internet videos. The attention layers end up implementing these "algorithms" implicitly in their weights in order to satisfy the diffusion objective.
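As a rough illustration of that training signal, here is a minimal toy sketch (our own, not Pika's or ElevenLabs' actual architecture): a denoiser predicts the noise added to an audio latent while cross-attending to video features, trained on a time-aligned (video, audio) pair with ordinary gradient descent.

```python
# Toy video-conditioned audio denoiser: audio-latent tokens attend to video
# features via cross-attention; trained with a simple noise-prediction loss.
import torch
import torch.nn as nn

class VideoToAudioDenoiser(nn.Module):
    def __init__(self, audio_dim=64, video_dim=128, hidden=256):
        super().__init__()
        self.audio_in = nn.Linear(audio_dim, hidden)
        self.video_in = nn.Linear(video_dim, hidden)
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.out = nn.Linear(hidden, audio_dim)

    def forward(self, noisy_audio, video_feats):
        q = self.audio_in(noisy_audio)      # (B, T_audio, hidden)
        kv = self.video_in(video_feats)     # (B, T_video, hidden)
        attended, _ = self.cross_attn(q, kv, kv)
        return self.out(attended)           # predicted noise, same shape as the audio latent

model = VideoToAudioDenoiser()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One training step on random stand-ins for a time-aligned (video, audio) pair.
audio_latent = torch.randn(8, 100, 64)       # 8 clips, 100 audio-latent frames
video_feats  = torch.randn(8, 25, 128)       # 25 frames of precomputed video features
noise = torch.randn_like(audio_latent)
noisy = audio_latent + noise                 # a real diffusion model would use a noise schedule

opt.zero_grad()
pred = model(noisy, video_feats)
loss = nn.functional.mse_loss(pred, noise)   # standard noise-prediction objective
loss.backward()
opt.step()
```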
In addition, Jim Fan said at the time that NVIDIA's own related work had not yet produced such a high-quality AI audio engine, but he recommended a paper from MIT published five years earlier, The Sound of Pixels:
Interested friends can click on the link at the end of the article to learn more.
One More Thing
On the topic of multimodality, LeCun's views from a recent interview have also been making the rounds. He believes:
Language (text) is low-bandwidth: less than 12 bytes/second. Modern LLMs are typically trained on about 1×10^13 two-byte tokens (i.e. 2×10^13 bytes); it would take a human roughly 100,000 years, reading 12 hours a day, to get through that.
Visual bandwidth is much higher: about 20 MB/s. Each of the two optic nerves carries 1 million nerve fibers, each transmitting roughly 10 bytes per second. A 4-year-old child has been awake for about 16,000 hours, which works out to roughly 1×10^15 bytes of visual data.
The data bandwidth of visual perception is roughly 16 million times that of written language.
In just 4 years, a child has seen 50 times more data than the largest LLMs trained on all the text publicly available on the Internet.
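These figures are easy to sanity-check with some back-of-the-envelope arithmetic (our own calculation, plugging in the quantities quoted above):

```python
# Back-of-the-envelope check of the reading-time and visual-data figures.
text_bytes    = 2e13                  # ~1e13 tokens x 2 bytes each
read_rate     = 12                    # bytes per second of reading
reading_years = text_bytes / read_rate / (12 * 3600) / 365
print(f"Reading time at 12 h/day: ~{reading_years:,.0f} years")    # ~100,000 years

optic_bandwidth = 2 * 1_000_000 * 10  # two optic nerves x 1M fibers x ~10 B/s
awake_seconds   = 16_000 * 3600       # ~16,000 waking hours by age 4
visual_bytes    = optic_bandwidth * awake_seconds
print(f"Optic-nerve bandwidth: ~{optic_bandwidth / 1e6:.0f} MB/s") # ~20 MB/s
print(f"Visual data by age 4:  ~{visual_bytes:.1e} bytes")         # ~1e15 bytes
print(f"Ratio vs. text corpus: ~{visual_bytes / text_bytes:.0f}x") # ~58x, same order as the "50 times" quoted
```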
Thus, LeCun concluded:
Unless machines are allowed to learn from high-bandwidth sensory input (such as vision), there is absolutely no way we will reach human-level artificial intelligence.
So, do you agree with this view?