Showing video with Qt toolbox and ffmpeg libraries



I recently had to build a demo client that shows short video messages, for an Ubuntu environment.

After checking out GTK+ I decided to go with the more natively OOP Qt toolbox (GTKmm didn't look right to me), and I think I made the right choice.

So anyway, I have my video files encoded in some unknown format and I need my program to show them in some widget. I went around looking for an existing example, but I couldn't find anything concrete, except for a good tip here that led me here for an example of using ffmpeg's libavformat and libavcodec, but no end-to-end example including the Qt code.

The ffmpeg example was simple enough to just copy-paste into my project, but the whole painting over the widget's canvas was not covered. Turns out painting video is not as simple as overriding paintEvent()...

Firstly, you need a separate thread for grabbing frames from the video file, because you won't let the GUI event thread do that.

That makes sense, but when the frame-grabbing thread (which I called VideoThread) actually grabbed a frame and inserted it somewhere in memory, I needed to tell the GUI thread to take the buffered pixels and paint them over the widget's canvas.

This is the moment where I praise Qt's excellent Signals/Slots mechanism. So I'll have my VideoThread emit a signal notifying some external entity that a new frame is in the buffer. Here's a little code:

void VideoThread::run()
{
    /* ... Initialize the libavformat & libavcodec data structures.
       You can see it in the example I referred to before. */

    // Open the video file (filename is the path to your video file)
    if (av_open_input_file(&pFormatCtx, filename, NULL, 0, NULL) != 0)
        return; // Couldn't open file

    // Retrieve stream information
    if (av_find_stream_info(pFormatCtx) < 0)
        return; // Couldn't find stream information

    // Find the first video stream ...
    // Get a pointer to the codec context for the video stream ...
    // Find the decoder for the video stream ...
    // Open codec ...

    // Allocate video frame
    pFrame = avcodec_alloc_frame();

    // Allocate an AVFrame structure for the converted RGB frame
    pFrameRGB = avcodec_alloc_frame();
    if (pFrameRGB == NULL)
        return;

    int dst_fmt = PIX_FMT_RGB24;
    int dst_w = 160;
    int dst_h = 120;

    // Determine required buffer size and allocate buffer
    numBytes = avpicture_get_size(dst_fmt, dst_w, dst_h);
    buffer = new uint8_t[numBytes + 64];

    // Put a PPM header on the buffer
    int headerlen = sprintf((char *) buffer, "P6\n%d %d\n255\n", dst_w, dst_h);

    _v->buf = (uchar *) buffer;
    _v->len = numBytes + headerlen;

    // Assign appropriate parts of buffer to image planes in pFrameRGB ...

    // I use libswscale to scale the frames to the required size.
    // Set up the scaling context:
    SwsContext *img_convert_ctx;
    img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                     pCodecCtx->pix_fmt,
                                     dst_w, dst_h, dst_fmt,
                                     SWS_BICUBIC, NULL, NULL, NULL);

    // Read frames and notify
    i = 0;
    while (av_read_frame(pFormatCtx, &packet) >= 0) {
        // Is this a packet from the video stream?
        if (packet.stream_index == videoStream) {
            // Decode video frame
            avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
                                 packet.data, packet.size);

            // Did we get a video frame?
            if (frameFinished) {
                // Convert the image to RGB
                sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize,
                          0, pCodecCtx->height,
                          pFrameRGB->data, pFrameRGB->linesize);

                emit frameReady();

                // My video is 5 FPS, so sleep for 200 ms.
                this->msleep(200);
            }
        }

        // Free the packet that was allocated by av_read_frame
        av_free_packet(&packet);
    }

    // Free the RGB image
    delete [] buffer;
    av_free(pFrameRGB);

    // Free the YUV frame
    av_free(pFrame);

    // Close the codec ...
    // Close the video file ...
} // end VideoThread::run

Ok, so I have a frame-grabber that emits a frameReady signal every time the buffer is full and ready for painting.

A couple of things to notice:

- I convert the image format to PIX_FMT_RGB24 (avcodec.h), which is required by Qt's QImage::fromData() method.

- I scale the image using ffmpeg's libswscale. All conversion/scaling methods inside libavcodec are deprecated now. But it's fairly simple; here's a good example. Just remember you need a sws_getContext() and then sws_scale().

- I totally disregard the actual frame rate here; I just sleep for 200 ms because I know my file is 5 FPS. For a (far) more sophisticated way to get the FPS (very important if this is not a constant-frame-rate video), you can find one here.

- I don't cover audio in this example, although the mechanism to extract it from the file exists... you just need to grab the audio stream's frames. For playing audio you also need some Qt-external library. In a different project I used SDL very easily; here's an example online.

Now, for painting over the widget. This is fairly easy:

void VideoWidget::paintEvent(QPaintEvent *e)
{
    QPainter painter(this);
    if (buf) {
        QImage i = QImage::fromData(buf, len, "PPM");
        painter.drawImage(QPoint(0, 0), i);
    }
}

Two things to note:

- The widget needs to be given the pointer to the video frame buffer (buf).

- The frame buffer needs to be in PPM format. That means it needs to get a PPM header, which looks something like this: "P6\n<width> <height>\n255\n", followed by the pixel data in 3-byte-per-pixel format (RGB24). You can see that I take care of that in the previous code block.

Finally we need to orchestrate this whole mess. So in my GUI-screen class I do:

....

vt = new VideoThread();

connect(vt,SIGNAL(frameReady()),this,SLOT(updateVideoWidget()));

vt->start();

....

And:

void playMessage::updateVideoWidget()
{
    videoWidget->repaint(); // or update()
}

This will make the widget repaint each time a frame is ready. Note:

- In this example I don't take care of multi-threading issues. Since the GUI and the ffmpeg decoder threads share a memory buffer, I should probably have a mutex to protect it. It's a classic producer-consumer problem.

- Performance-wise, Qt's paint mechanism is by far the worst way to go when displaying video... but it's great for a quick-and-dirty solution (I only needed 5 FPS). A more performance-favorable solution would probably be using an overlay block and frame-serving with SDL.

Enjoy!

Roy.
