I open a local SDP file with live555 and use it to set up a receiving session for my IP camera. That part works: the afterGettingFrame callback in my DummySink fires continuously, and I pass each frame's fReceiveBuffer and frameSize straight into FFmpeg's decoder. But avcodec_decode_video2 always fails. Inspecting fReceiveBuffer for several frames, I see no SPS or PPS inserted and no 0x00000001 start-code marker. The decoder must be correctly initialized with the SPS and PPS before it can decode an IDR frame, and my SDP file does contain the SPS and PPS in Base64 encoding, so how should I process them to initialize the decoder? Without that, the decoder cannot decode the H264 stream. I have heard that the extradata field of FFmpeg's AVCodecContext is meant to hold the SPS and PPS, but I can't find documentation describing how to use it. If that is the right place, how do I use it? Do I need to Base64-decode the SPS and PPS before passing them in, or do I pass in (insert) the Base64-encoded strings directly?
Here is a snippet of my code:
void afterGettingFrame(unsigned frameSize, unsigned numTruncatedBytes,
                       struct timeval presentationTime, unsigned durationInMicroseconds)
{
    if (fStreamId != NULL) envir() << "Stream \"" << fStreamId << "\"; ";
    envir() << fSubsession.mediumName() << "/" << fSubsession.codecName() << ":\tReceived " << frameSize << " bytes";
    if (numTruncatedBytes > 0) envir() << " (with " << numTruncatedBytes << " bytes truncated)";
    char uSecsStr[6 + 1]; // used to output the 'microseconds' part of the presentation time
    sprintf_s(uSecsStr, 7, "%06u", (unsigned)presentationTime.tv_usec);
    envir() << ".\tPresentation time: " << (int)presentationTime.tv_sec << "." << uSecsStr;
    gffmpeg->decodeFrame(fReceiveBuffer, frameSize, presentationTime.tv_sec, presentationTime.tv_usec);
    // Then continue, to request the next frame of data:
    continuePlaying();
}
void QFFmpeg::decodeFrame(uint8_t* frameBuffer, int frameLength, long second, long microSecond)
{
    if (frameLength <= 0) return;
    int frameFinished = 0;
    AVPacket framePacket;
    av_init_packet(&framePacket);
    framePacket.size = frameLength;
    framePacket.data = frameBuffer;
    int ret = avcodec_decode_video2(m_pAVCodecContext, m_pAVFrame, &frameFinished, &framePacket);
    if (ret < 0)
    {
        qDebug() << "Decode error";
        return;
    }
    if (frameFinished)
    {
        m_playMutex.lock();
        m_videoWidth = m_pAVFrame->width;
        m_videoHeight = m_pAVFrame->height;
        // Reuse the scaler rather than leaking a new context on every frame, and take the
        // source pixel format from the decoded frame instead of hard-coding YUV420P:
        m_pSwsContext = sws_getCachedContext(m_pSwsContext,
                                             m_videoWidth, m_videoHeight, (AVPixelFormat)m_pAVFrame->format,
                                             m_videoWidth, m_videoHeight, AV_PIX_FMT_RGB24,
                                             SWS_BICUBIC, 0, 0, 0);
        sws_scale(m_pSwsContext, (const uint8_t* const *)m_pAVFrame->data, m_pAVFrame->linesize, 0, m_videoHeight, m_pAVPicture.data, m_pAVPicture.linesize);
        char timestamp[100];
        sprintf_s(timestamp, 100, "Time Stamp is: %ld.%ld", second, microSecond);
        // Emit a signal carrying the decoded frame as a QImage
        // (pass the RGB row stride explicitly so the image is laid out correctly):
        QImage image(m_pAVPicture.data[0], m_videoWidth, m_videoHeight, m_pAVPicture.linesize[0], QImage::Format_RGB888);
        QPainter pen(&image);
        pen.setPen(Qt::white);
        pen.setFont(QFont("Times", 10, QFont::Bold));
        pen.drawText(image.rect(), Qt::AlignBottom, QString(timestamp));
        emit GetImage(image);
        m_playMutex.unlock();
    }
}
Here is the decoder initialization; every call returns successfully:
bool QFFmpeg::Init()
{
    m_pAVCodec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!m_pAVCodec) return false;
    m_pAVCodecContext = avcodec_alloc_context3(m_pAVCodec);
    if (avcodec_open2(m_pAVCodecContext, m_pAVCodec, NULL) < 0) return false;
    return true;
}
PHP中文网2017-04-17 13:11:14
First, the key background (correcting a common point of confusion): an H264 stream is carried as a sequence of NAL units. A NAL unit can hold a slice of an I, P, or B frame (you can look up these three frame types), but the SPS and PPS are themselves NAL units too: parameter sets, not picture data. The IDR frame is a key frame; decoding everything that follows depends on it, and the stretch between two IDR frames is one video sequence. Before the decoder can handle an IDR frame it must have seen the SPS and PPS, so you prepend them to the IDR frame, with the Annex B start code 0x00 0x00 0x00 0x01 separating the pieces. And to answer the Base64 question: yes, decode first; the raw decoded bytes go into the stream, not the Base64 text. A buffer composed like this, FFmpeg's H264 decoder will decode successfully:

0x00 0x00 0x00 0x01 + (Base64-decoded SPS) + 0x00 0x00 0x00 0x01 + (Base64-decoded PPS) + 0x00 0x00 0x00 0x01 + (IDR frame)
The problem has been solved; I forgot to post back. For a reference solution see https://github.com/AlexiaChen/RtspMonitor
PHPz2017-04-17 13:11:14
Hello, I have run into the same problem. Have you solved it? If so, could you share your solution?