path: root/video/decode/vd_lavc.c
* Replace mp_tmsg, mp_dbg -> mp_msg, remove mp_gtext(), remove set_osd_tmsg (wm4, 2013-12-16)
  The tmsg stuff was for the internal gettext()-based translation system,
  which nobody ever attempted to use and thus was removed. mp_gtext() and
  set_osd_tmsg() were also for this. mp_dbg was once enabled in debug mode
  only, but since we have a log level for enabling debug messages, it seems
  utterly useless.
* video: move video filter chain initialization from decoder to player (wm4, 2013-12-10)
  This should help fix some issues (like not draining video frames
  correctly on reinit), as well as decoupling the decoder, filter chain,
  and VO code. I also wanted to make the hardware video decoding fallback
  work properly if software-only video filters are inserted.

  This currently has the issue that the fallback is too violent, and throws
  away a bunch of demuxer packets needed to restart software decoding
  properly. But keeping "backup" packets turned out to be too hacky, so I'm
  not doing it, at least not yet.
* video: create a separate context for video filter chain (wm4, 2013-12-07)
  This adds vf_chain, which unlike vf_instance refers to the filter chain
  as a whole. This makes the filter API less awkward, and will allow
  handling format negotiation better.
* vd_lavc: factor out libavcodec thread setup (wm4, 2013-12-04)
* vd_lavc: don't check required hwdec fields (wm4, 2013-12-04)
* av_common: add timebase parameter to mp_set_av_packet() (wm4, 2013-12-04)
  If the timebase is set, it's used for converting the packet timestamps.
  Otherwise, the previous method of reinterpret-casting the mpv-style
  double timestamps to libavcodec-style int64_t timestamps is used.

  Also replace the kind of awkward mp_get_av_frame_pkt_ts() function by
  mp_pts_from_av(), which simply converts timestamps in the way the old
  function did. (Plus it takes a timebase parameter, similar to the
  addition to mp_set_av_packet().)

  Note that this should not change anything yet. The code in ad_lavc.c and
  vd_lavc.c passes NULL for the timebase parameters. We could set
  AVCodecContext.pkt_timebase and use that if we want to give libavcodec
  "proper" timestamps.

  This could be important for ad_lavc.c: some codecs (Opus, probably mp3
  and AAC too) have weird requirements about doing decoding preroll on the
  container level, and thus require adjusting the audio start timestamps
  in some cases. libavcodec doesn't tell us how much was skipped, so we
  either get shifted timestamps (by the length of the skipped data), or we
  give it proper timestamps. (Note: libavcodec interprets or changes
  timestamps only if pkt_timebase is set, which by default it is not.)
  This would require selecting a timebase though, so I feel uncomfortable
  with the idea. At least this change paves the way, and will allow some
  testing.
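  A minimal sketch of the two conversion directions this describes,
  assuming MP_NOPTS_VALUE is mpv's "unset timestamp" marker (the helper
  names and the marker's value here are illustrative, not mpv's exact
  code):

      #include <math.h>
      #include <string.h>
      #include <stdint.h>
      #include <libavutil/avutil.h>
      #include <libavutil/rational.h>

      #define MP_NOPTS_VALUE (-1e300)  /* assumed "no timestamp" marker */

      /* mpv double seconds -> libavcodec int64_t ticks */
      static int64_t pts_to_av(double mp_pts, const AVRational *tb)
      {
          if (mp_pts == MP_NOPTS_VALUE)
              return AV_NOPTS_VALUE;
          if (tb)
              return llrint(mp_pts / av_q2d(*tb));
          int64_t out;
          memcpy(&out, &mp_pts, sizeof(out)); /* reinterpret-cast path */
          return out;
      }

      /* libavcodec int64_t ticks -> mpv double seconds */
      static double pts_from_av(int64_t av_pts, const AVRational *tb)
      {
          if (av_pts == AV_NOPTS_VALUE)
              return MP_NOPTS_VALUE;
          if (tb)
              return av_pts * av_q2d(*tb);
          double out;
          memcpy(&out, &av_pts, sizeof(out)); /* reinterpret-cast path */
          return out;
      }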
* Take care of some libavutil deprecations, drop support for FFmpeg 1.0 (wm4, 2013-11-29)
  PIX_FMT_* -> AV_PIX_FMT_* (except some pixdesc constants)
  enum PixelFormat -> enum AVPixelFormat

  Loosen some version checks in certain newer pixel formats.

  av_pix_fmt_descriptors -> av_pix_fmt_desc_get

  This removes support for FFmpeg 1.0.x, which is even older than Libav
  9.x. Support for it probably was already broken, and its libswresample
  was rejected by our build system anyway because it's broken.

  Mostly untested; it does compile with Libav 9.9.
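  The mechanical shape of the migration, as it would apply to a decoder
  like vd_lavc.c (illustrative snippet; these libavutil APIs exist as
  shown):

      #include <libavutil/pixdesc.h>

      static const char *fmt_name(enum AVPixelFormat fmt) /* was: enum PixelFormat */
      {
          /* was: &av_pix_fmt_descriptors[fmt] */
          const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
          return desc ? desc->name : "unknown";
      }

      /* constant rename: PIX_FMT_YUV420P -> AV_PIX_FMT_YUV420P, etc. */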
* video: refactor PTS code, add fall back heuristic to DTS (wm4, 2013-11-27)
  Refactor the PTS handling code to make it cleaner, and to separate the
  bits that use PTS sorting.

  Add a heuristic to fall back to DTS if the PTS is non-monotonic. This
  code is based on what FFmpeg/Libav use for ffplay/avplay and also
  best_effort_timestamp (which is only in FFmpeg). Basically, it 1. just
  uses the DTS if PTS is unset, and 2. ignores PTS entirely if PTS is
  non-monotonic, but DTS is sorted.

  The code is pretty much the same as in Libav [1]. I'm not sure if all of
  it is really needed, or if it does more than what the paragraph above
  mentions. But maybe it's fine to cargo-cult this.

  This heuristic fixes playback of mpeg4 in ogm, which returns packets
  with PTS==DTS, even though the PTS timestamps should follow codec
  reordering. This is probably a libavformat demuxer bug, but good luck
  trying to fix it.

  The way vd_lavc.c returns the frame PTS and DTS to dec_video.c is a bit
  inelegant, but maybe better than trying to mess the PTS back into the
  decoder callback again.

  [1] https://git.libav.org/?p=libav.git;a=blob;f=cmdutils.c;h=3f1c667075724c5cde69d840ed5ed7d992898334;hb=fa515c2088e1d082d45741bbd5c05e13b0500804#l1431
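  A condensed sketch of that heuristic (simplified, with our own names;
  see the Libav link above for the real thing): track how often each
  timestamp stream steps backwards, and prefer DTS when PTS misbehaves but
  DTS does not.

      #include <stdint.h>

      struct ts_state {
          int64_t last_pts, last_dts;   /* initialize both to INT64_MIN */
          int faulty_pts, faulty_dts;   /* counts of backward steps */
      };

      static int64_t guess_pts(struct ts_state *s, int64_t pts, int64_t dts,
                               int64_t nopts)
      {
          if (dts != nopts) {
              s->faulty_dts += dts <= s->last_dts;
              s->last_dts = dts;
          }
          if (pts != nopts) {
              s->faulty_pts += pts <= s->last_pts;
              s->last_pts = pts;
          }
          /* 1. use DTS if PTS is unset; 2. ignore PTS entirely if it is
           * less monotonic than DTS */
          if (pts == nopts || (dts != nopts && s->faulty_pts > s->faulty_dts))
              return dts;
          return pts;
      }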
* cosmetics: rename video/audio reset functions (wm4, 2013-11-27)
  These used the suffix _resync_stream, which is a bit misleading. Nothing
  gets "resynchronized"; they really just reset state. (Some audio
  decoders actually used to "resync" by reading packets for resuming
  playback, but that's not the case anymore.)

  Also move the function in dec_video.c to the top of the file.
* demux: export dts from demux_lavf, use it for avi (wm4, 2013-11-25)
  Having the DTS directly can be useful for restoring PTS values. The avi
  file format doesn't actually store PTS values, just DTS. An older hack
  explicitly exported the DTS as PTS (ignoring the [I assume]
  genpts-generated nonsense PTS), which is not necessary anymore due to
  this change.
* video: pass PTS as part of demux_packet/AVPacket and mp_image/AVFrame (wm4, 2013-11-25)
  Instead of passing the PTS as a separate field, pass it as part of the
  usual data structures. Basically, this removes strange artifacts from
  the API. (It's not finished, though: the final decoded PTS goes through
  strange paths, and filter_video() finally overwrites the decoded
  mp_image's pts field with it.)

  We also stop using libavcodec's reordered_opaque fields, and use
  AVPacket.pts and AVFrame.pkt_pts. This is slightly unorthodox, because
  these pts fields are not "really" opaque anymore, yet we treat them as
  such. But the end result should be the same, and reordered_opaque is
  marked as partially deprecated (it's not clear whether it's really
  deprecated).
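  A sketch of the resulting round trip through libavcodec, using the API
  of that era (avcodec_decode_video2() and AVFrame.pkt_pts are deprecated
  or removed in modern FFmpeg; the surrounding function is ours, for
  illustration):

      #include <libavcodec/avcodec.h>

      static void decode_one(AVCodecContext *avctx, AVPacket *pkt,
                             int64_t mp_pts_as_int64)
      {
          AVFrame *frame = av_frame_alloc();
          int got_frame = 0;
          /* decode side: the pts field carries the mpv timestamp in,
           * treated as opaque by the decoder */
          pkt->pts = mp_pts_as_int64;
          if (avcodec_decode_video2(avctx, frame, &got_frame, pkt) >= 0 &&
              got_frame) {
              /* output side: pkt_pts is the pts of the packet this frame
               * was reordered from; hand it back to dec_video.c */
              int64_t out_pts = frame->pkt_pts;
              (void)out_pts;
          }
          av_frame_free(&frame);
      }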
* vd_lavc: improve a comment (wm4, 2013-11-24)
* vd_lavc: when falling back to software, revert filter error status (wm4, 2013-11-23)
  When mpv is started with some video filters set (--vf is used), hardware
  decoding is requested, and hardware decoding would be possible, but is
  prevented due to video filters that accept software formats only, the
  fallback didn't always work properly.

  The fallback works rather violently: it tries to initialize the filter
  chain, and if that fails it throws away the frame decoded using the
  hardware, and retries with software. The case that didn't work was when
  decoding the current packet didn't immediately lead to a new frame. Then
  the filter chain wouldn't be reinitialized, and the playloop would stop
  playback as soon as it encountered the error flag.

  Fix this by resetting the filter error flag (back to "uninitialized"),
  which is a rather violent, but somewhat working solution.

  The fallback in general should perhaps be cleaned up later.
* video: move handling of container vs. stream AR out of vd_lavc.c (wm4, 2013-11-23)
  Now the actual decoder doesn't need to care about this anymore, and it's
  handled in generic code instead. This simplifies vd_lavc.c, and in
  particular we don't need to detect format changes in the old way
  anymore.
* dec_video: make vf_input and hwdec_info statically allocated (wm4, 2013-11-23)
  The only reason why these structs were dynamically allocated was to
  avoid recursive includes in stheader.h, which is (or was) a very central
  file included by almost all other files. (If a struct is referenced via
  a pointer type only, it can be forward-referenced, and the definition of
  the struct is not needed.) Now that they're out of stheader.h, this
  difference doesn't matter anymore, and the code can be simplified.

  Also sneak in some sanity checks.
* demux: remove gsh field from sh_audio/sh_video/sh_sub (wm4, 2013-11-23)
  This used to be needed to access the generic stream header from the
  specific headers, which in turn was needed because the decoders had
  access only to the specific headers. This is not the case anymore, so
  this can finally be removed again.

  Also move the "format" field from the specific headers to sh_stream.
* video: move decoder context from sh_video into new struct (wm4, 2013-11-23)
  This is similar to the sh_audio commit. It is mostly cosmetic in nature,
  except that it also adds automatic freeing of the decoder driver's state
  struct (which was in sh_video->context, now in dec_video->priv).

  Also remove all the stheader.h fields that are not needed anymore.
* demux: rename demux_packet.h to packet.h (wm4, 2013-11-18)
  This always bothered me.
* vd_lavc: select correct hw decoder profile for constrained baseline h264 (wm4, 2013-11-14)
  The existing code tried to remove the "extra" profile flags for h264.

  FF_PROFILE_H264_INTRA doesn't matter for us at all, because it's set
  only for profiles the vdpau/vaapi APIs don't support.

  The FF_PROFILE_H264_CONSTRAINED flag, on the other hand, is added to
  H264_BASELINE, and it makes the file a real subset of H264_MAIN and
  H264_HIGH. Removing that flag would select the BASELINE profile, which
  appears to be rarely supported by hardware decoders. This means we
  accidentally rejected perfectly hardware-decodable files. Use MAIN for
  it instead.

  (vaapi has explicit support for CONSTRAINED_BASELINE, but it seems to be
  a new thing, and is not reported as supported where I tried. So don't
  bother to check it, and do the same as on vdpau.)

  See github issue #204.
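  A sketch of the selection rule this describes, using libavcodec's
  profile constants of that era (renamed AV_PROFILE_* in modern FFmpeg);
  illustrative code, not mpv's exact implementation:

      #include <libavcodec/avcodec.h>

      static int hw_h264_profile(int lavc_profile)
      {
          /* INTRA-flagged profiles aren't supported by vdpau/vaapi anyway */
          int p = lavc_profile & ~FF_PROFILE_H264_INTRA;
          /* constrained baseline is a true subset of MAIN/HIGH; mapping
           * it to MAIN avoids rejecting files just because plain BASELINE
           * is rarely supported by hardware */
          if (p == FF_PROFILE_H264_CONSTRAINED_BASELINE)
              p = FF_PROFILE_H264_MAIN;
          return p;
      }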
* vd_lavc: remove explicit crystalhd support (wm4, 2013-11-06)
  This removes "--hwdec=crystalhd". I doubt anyone even tried to use this.
  But even if someone wants to use it, the decoders can still be
  explicitly invoked with e.g.:

      --vd=lavc:h264_crystalhd

  The only advantage our special code provided was fallback to software
  decoding. (But I'm not sure how the ffmpeg crystalhd pseudo-decoder
  actually behaves.)

  Removing this will allow some simplifications as soon as we don't need
  vdpau_old.c anymore.
* Merge branch 'master' into have_configure (wm4, 2013-11-04)
  Conflicts:
      configure
* vo_opengl: add infrastructure for hardware decoding OpenGL interop (wm4, 2013-11-04)
  Most hardware decoding APIs provide some OpenGL interop. This allows
  using vo_opengl without having to read the video data back from the GPU.

  This requires adding a backend for each hardware decoding API. (Each
  backend is an entry in gl_hwdec_vaglx[].) The backends expose video data
  as a set of OpenGL textures.

  Add infrastructure to support this. The next commit will add support for
  VA-API.
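  A hypothetical sketch of what such a backend entry could look like; the
  struct and function names here are illustrative, not the actual mpv API:

      typedef unsigned int GLuint;   /* as in the GL headers */

      struct gl_hwdec;               /* per-backend interop state */
      struct mp_image;               /* a decoded (hardware) frame */

      struct gl_hwdec_driver {
          const char *api_name;      /* e.g. "vaapi-glx" */
          /* set up the API<->GL interop; negative return = unavailable */
          int (*create)(struct gl_hwdec *hw);
          /* expose a hw frame as one or more GL textures for rendering */
          int (*map_image)(struct gl_hwdec *hw, struct mp_image *hw_image,
                           GLuint *out_textures);
          void (*destroy)(struct gl_hwdec *hw);
      };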
* configure: uniform the defines to #define HAVE_xxx (0|1) (Stefano Pigozzi, 2013-11-03)
  The configure followed 5 different conventions for defines, because the
  next guy always wanted to introduce a new, better way to unify them [1].
  For a hypothetical feature 'hurr' you could have had:

    * #define HAVE_HURR 1 / #undef HAVE_DURR
    * #define HAVE_HURR / #undef HAVE_DURR
    * #define CONFIG_HURR 1 / #undef CONFIG_DURR
    * #define HAVE_HURR 1 / #define HAVE_DURR 0
    * #define CONFIG_HURR 1 / #define CONFIG_DURR 0

  All is now uniform and uses:

    * #define HAVE_HURR 1
    * #define HAVE_DURR 0

  We like defining to 0 as opposed to `undef` because it can help spot
  typos and is very helpful when doing big reorganizations in the code.

  [1]: http://xkcd.com/927/ related
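  A short illustration of why always-defined 0/1 macros are safer than the
  #ifdef convention:

      #define HAVE_HURR 1
      #define HAVE_DURR 0

      #if HAVE_HURR
      /* enabled code path */
      #endif

      #if HAVE_DUR   /* typo: the macro is undefined, so -Wundef (or
                      * -Werror=undef) can flag it at compile time. With
                      * the #ifdef convention, #ifdef HAVE_DUR would
                      * silently skip the block instead. */
      /* ... */
      #endif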
* demux: rename Windows symbols (wm4, 2013-11-02)
  There are some Microsoft Windows symbols which are traditionally used by
  the mplayer core, because it used to be convenient (avi was the big
  format, using binary Windows decoders made sense...). So these symbols
  have the exact same definition as the Windows ones, and if mplayer is
  compiled on Windows, the symbols from windows.h are used.

  This broke recently just because some files were shuffled around, and
  the symbols defined in ms_hdr.h collided with the windows.h ones. Since
  we don't have Windows binary decoders anymore, there's not the slightest
  reason our symbols should have the same names. Rename them to reduce the
  risk of collision, and to fix the recent regression.

  Drop WAVEFORMATEXTENSIBLE, because it's mostly unused. ao_dsound defines
  its own version if the Windows headers don't define it, and ao_wasapi is
  not available on systems where this symbol is missing.

  Also reindent ms_hdr.h.
* video: check profiles with hardware decoding (wm4, 2013-11-01)
  We had some code for checking profiles earlier, which was removed in
  commits 2508f38 and adfb71b. These commits mentioned that (working) hw
  decoding was sometimes prevented due to profile checking, but I can't
  find the samples anymore that showed this behavior. Also, I changed my
  opinion, and I think checking the profiles is something that should be
  done for better fallback to software decoding behavior.

  The checks roughly follow VLC's vdpau profile checks, although we do not
  check codec levels. (VLC's profile checks aren't necessarily completely
  correct, but they're a welcome help anyway.)

  Add a --vd-lavc-check-hw-profile option, which skips the profile check.
* vd_lavc: add more ifdeffery and ffmpeg cargo culting for correctness (wm4, 2013-10-31)
  We mixed the "old" AVFrame management functions (avcodec_alloc_frame,
  avcodec_free_frame) with reference counting. This doesn't work
  correctly; you must use av_frame_alloc and av_frame_free. Of course
  ffmpeg doesn't warn us about the bad usage, but will just mess up things
  silently. (Thanks a lot...)

  While the alloc function seems to be 100% compatible, the free function
  will do bad things, such as freeing memory that might still be
  referenced by another frame. I didn't experience any actual bugs, but
  maybe that was pure luck.
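  The correct pairing for refcounted frames looks like this (the
  surrounding function is just for illustration):

      #include <libavutil/frame.h>

      static void frame_lifecycle_example(void)
      {
          AVFrame *frame = av_frame_alloc();   /* not avcodec_alloc_frame() */
          if (!frame)
              return;
          /* ... decode into the frame, or av_frame_ref() it ... */
          /* av_frame_free() unreferences the frame's buffers before
           * freeing the struct, so data still referenced by other frames
           * survives; avcodec_free_frame() did not do that. */
          av_frame_free(&frame);
      }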
* core: add --force-window (wm4, 2013-10-02)
  This commit adds the --force-window option, which will cause mpv to
  always create a window when started. This can be useful when pretending
  that mpv is a GUI application (which it isn't, but users pretend
  anyway), and playing audio files would otherwise run mpv in the
  background without giving a window to control it.

  This doesn't actually create the window immediately: it does so only
  after initializing playback and when it is clear that there won't be any
  actual video. This could be a problem when starting slow or completely
  stuck network streams (mpv would remain frozen in the background), or if
  video initialization somehow is stuck forever in an in-between state
  (like when the decoder doesn't output a video frame, but doesn't return
  an error either). Well, we can pretend only so much that mpv is a GUI
  application.
* video: let sh_video->aspect always be container aspect ratio (wm4, 2013-09-26)
  Now writing -1 to the 'aspect' property resets the video to the auto
  aspect ratio. Returning the aspect from the property becomes a bit more
  complicated, because we still try to return the container aspect ratio
  if no frame has been decoded yet.
* vaapi: allow GPU read-back with --hwdec=vaapi-copy (wm4, 2013-09-25)
  This code is actually quite inefficient: it reuses the (slow, simple)
  screenshot code. It uses an inefficient method to read the image
  (vaGetImage() instead of vaDeriveImage()), allocates new memory for each
  frame that is read, and it tries all image formats again each time.

  Also, in my tests it always picked NV12 as image format, which is not
  ideal if you actually want to filter the video, and vo_xv can't handle
  this format without conversion either.

  However, a user confirmed that it worked for him, so everything is fine.
* vd_lavc: allow process_image to change image format (wm4, 2013-09-25)
  This will allow GPU read-back with process_image.

  We have to restructure how init_vo() works. Instead of initializing the
  VO before process_image is called, rename init_vo() to
  update_image_params(), and let it update the params only. Then we really
  initialize the VO after process_image.

  As a consequence of these changes, already decoded hw frames are
  correctly unreferenced if creation of the filter chain fails. This could
  trigger assertions on VO uninitialization, because it's not allowed to
  reference hw frames past VO lifetime.
* vd_lavc: reset last_sample_aspect_ratio in uninit_avctx() (xylosper, 2013-09-13)
  In init_vo(), if sh->aspect is 0 or last_sample_aspect_ratio is set,
  sh->aspect is overwritten. With the software decoding fallback
  behaviour, this caused the aspect ratio from the container to be
  ignored, since last_sample_aspect_ratio was already set by the first try
  with hardware decoding.
* video/out: don't require VOs to handle screenshot aspect specially (wm4, 2013-08-24)
  This affects VOs which just reuse the mp_image from draw_image() to
  return screenshots. The aspect of these images is never different from
  the aspect the screenshots should be, so there's no reason to adjust the
  aspect in these cases.

  Other VOs still need it in order to restore the original image
  attributes.

  This requires some changes to the video filter code to make sure that
  the aspect in the passed mp_images is consistent.

  The changes in mplayer.c and vd_lavc.c are (probably) not strictly
  needed for this commit, but contribute to consistency.
* video: add vda decode support (with hwaccel) and direct rendering (Stefano Pigozzi, 2013-08-22)
  Decoding H264 using Video Decode Acceleration used the custom
  'vda_h264_dec' decoder in FFmpeg.

  The Good: This new implementation has some advantages over the previous
  one:

    - It works with Libav: vda_h264_dec never got into Libav since they
      prefer client applications to use the hwaccel API.
    - It is way more efficient: in my tests this implementation yields a
      reduction of CPU usage of roughly ~50% compared to using
      `vda_h264_dec` and ~65-75% compared to h264 software decoding. This
      is mainly because `vo_corevideo` was adapted to perform direct
      rendering of the `CVPixelBufferRefs` created by the Video Decode
      Acceleration API Framework.

  The Bad:

    - `vo_corevideo` is required to use VDA decoding acceleration.
    - It only works with versions of ffmpeg/libav new enough to have
      reference refcounting. That is FFmpeg 2.0+ and Libav's git master
      currently.

  The Ugly: VDA was hardcoded to use UYVY (2vuy) for the uploaded video
  texture. On one hand this makes the code simple, since Apple's OpenGL
  implementation actually supports this out of the box. It would be nice
  to support other output image formats and choose the best format
  depending on the input, or at least to make it configurable. My tests
  indicate that CPU usage actually increases with a 420p IMGFMT output,
  which is not what I would have expected.

  NOTE: There is a small memory leak with old versions of FFmpeg and with
  Libav, since the CVPixelBufferRef is not automatically released when the
  AVFrame is deallocated. This can cause leaks inside libavcodec for
  decoded frames that are discarded before mpv wraps them inside a
  refcounted mp_image (this only happens on seeks). For frames that enter
  mpv's refcounting facilities, this is not a problem, since we rewrap the
  CVPixelBufferRef in our mp_image, which properly forwards
  CVPixelBufferRetain/CVPixelBufferRelease calls to the underlying
  CVPixelBufferRef.

  So, for FFmpeg use something more recent than `b3d63995`; for Libav the
  patch was posted to the dev ML in July and has been in review since,
  apparently because the proposed fix is rather hacky.
* video/decode: change fix_image callback (wm4, 2013-08-15)
  This might make it slightly easier when trying to implement surface
  read-back for hardware decoding.
* vd_lavc: fix comment, document hwdec video frame size trickiness (wm4, 2013-08-15)
  About this issue, it would be better if the surfaces could be allocated
  with the real size, and the vdpau video mixer could be created with that
  size as well. That would be a bit hard, because the real surface size
  would have to be communicated to vdpau. So I'm going with this solution.
  vaapi seems to be fine with either surface size, so there's hopefully no
  problem.
* video/decode: pass parameters directly to hwdec allocate_image callback (wm4, 2013-08-15)
  Instead of passing AVFrame. This also moves the mysterious logic about
  the size of the allocated image to common code, instead of duplicating
  it everywhere.
* video: add vaapi decode and output support (wm4, 2013-08-12)
  This is based on the MPlayer VA API patches. To be exact, it's based on
  a very stripped down version of commit f1ad459a263f8537f6c from
  git://gitorious.org/vaapi/mplayer.git.

  This doesn't contain useless things like benchmarking hacks and the demo
  code for GLX interop. Also, unlike in the original patch, decoding and
  video output are split into separate source files (the separation
  between decoding and display also makes pixel format hacks unnecessary).

  On the other hand, some features not present in the original patch were
  added, like screenshot support.

  VA API is rather bad for actual video output. Dealing with older libva
  versions or the completely broken vdpau backend doesn't help. OSD is low
  quality and should be rather slow. In some cases, only either OSD or
  subtitles can be shown at the same time (because OSD is drawn first, OSD
  is preferred).

  Also, libva can't decide whether it accepts straight or premultiplied
  alpha for OSD sub-pictures: the vdpau backend seems to assume
  premultiplied, while a native vaapi driver uses straight. So I picked
  straight alpha. It doesn't matter much, because the blending code for
  straight alpha I added to img_convert.c is probably buggy, and ASS
  subtitles might be blended incorrectly.

  Really good video output with VA API would probably use OpenGL and the
  GL interop features, but at this point you might just use vo_opengl.
  (Patches for making HW decoding work with vo_opengl have a chance of
  being accepted.)

  Despite these issues, decoding seems to work ok. I still got tearing on
  the Intel system I tested (Intel(R) Core(TM) i3-2350M). It was also
  tested with the vdpau vaapi wrapper on a nvidia system; however, this
  was rather broken. (Fortunately, there is no reason to use mpv's VAAPI
  support over native VDPAU.)
* video: redo hw decoding initialization, add --hwdec=auto (wm4, 2013-08-11)
  Change how the HW decoding stuff is organized, the way it's initialized
  in particular. Instead of duplicating the list of supported codecs for
  hwaccel decoders, add a probe function which allows each decoder to
  report whether it supports a given codec.

  Add an "auto" choice to the --hwdec option, which automatically enables
  hardware decoding if libavcodec and/or the VO supports it.

  What mpv prints on the terminal changes a bit. Now it will just print a
  single line whether hw decoding is used or not (and nothing at all if no
  hw decoding at all was requested). The pretty violent fallback from hw
  decoding to software decoding is still quite verbose and evil-looking
  though.
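  A hypothetical sketch of the probe idea: each hwdec backend reports
  whether it can handle a given decoder, instead of keeping duplicated
  codec lists (names are illustrative, not mpv's actual API):

      struct vd_lavc_hwdec_sketch {
          int type;   /* e.g. HWDEC_VDPAU, HWDEC_VAAPI, ... */
          /* return 0 if supported, a negative error to skip this backend */
          int (*probe)(struct vd_lavc_hwdec_sketch *hwdec,
                       const char *decoder_name);
      };

      static int hwdec_find(struct vd_lavc_hwdec_sketch **list,
                            const char *decoder_name)
      {
          for (int i = 0; list[i]; i++) {
              if (list[i]->probe(list[i], decoder_name) >= 0)
                  return i;   /* with --hwdec=auto, first working one wins */
          }
          return -1;          /* fall back to software decoding */
      }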
* core: move contents to mpvcore (2/2) (Stefano Pigozzi, 2013-08-06)
  Followup commit. Fixes all the file references.
* vd_lavc: print warning if hardware decoding API is not available (wm4, 2013-07-30)
  At least currently, this case pretty much happens only in the case vdpau
  is requested, but not compiled in.
* vd_lavc: fix CONFIG_VDPAU check (wm4, 2013-07-30)
  CONFIG_VDPAU was just defined to 0, instead of undefined when vdpau was
  unavailable. I blame the old mplayer code, which apparently can't have
  consistent conventions.
* vdpau: split off decoder parts, use "new" libavcodec vdpau hwaccel API (wm4, 2013-07-28)
  Move the decoder parts from vo_vdpau.c to a new file vdpau_old.c. This
  file is named so because it's written against the "old" libavcodec vdpau
  pseudo-decoder (e.g. "h264_vdpau").

  Add support for the "new" libavcodec vdpau support. This was recently
  added and replaces the "old" vdpau parts. (In fact, Libav is about to
  deprecate and remove the "old" API without a deprecation grace period,
  so we have to support it now. Moreover, there will probably be no Libav
  release which supports both, so the transition is even less smooth than
  we could hope, and we have to support both the old and new API.)

  Whether the old or new API is used is checked by a configure test: if
  the new API is found, it is used, otherwise the old API is assumed.

  Some details might be handled differently. Especially display preemption
  is a bit problematic with the "new" libavcodec vdpau support: it wants
  to keep a pointer to a specific vdpau API function (which can be driver
  specific, because preemption might switch drivers). Also, surface IDs
  are now directly stored in AVFrames (and mp_images), so they can't be
  forced to VDP_INVALID_HANDLE on preemption. (This changes even with
  older libavcodec versions, because mp_image always uses the newer
  representation to make vo_vdpau.c simpler.)

  Decoder initialization in the new code tries to deal with codec
  profiles, while the old code always uses the highest profile per codec.

  Surface allocation changes. Since the decoder won't call config() in
  vo_vdpau.c on video size change anymore, we allow allocating surfaces of
  arbitrary size instead of locking it to what the VO was configured to.
  The non-hwdec code also has slightly different allocation behavior now.

  Enabling the old vdpau special decoders via e.g. --vd=lavc:h264_vdpau
  doesn't work anymore (a warning suggesting the --hwdec option is printed
  instead).
* Fix some -Wshadow warnings (wm4, 2013-07-23)
  In general, this warning can hint at actual bugs. We don't enable it
  yet, because it would conflict with some unmerged code, and we should
  check with clang too (this commit was done by testing with gcc).
* vd: add VDCTRL_GET_PARAMS (wm4, 2013-07-15)
  This is probably going to be unused, but might help with debugging and
  such. It returns the image parameters as determined by the video
  decoder.
* Remove old demuxers (wm4, 2013-07-07)
  Delete demux_avi, demux_asf, demux_mpg, demux_ts. libavformat does
  better than them (except in rare corner cases), and the demuxers have a
  bad influence on the rest of the code. Often they don't output proper
  packets, and require additional audio and video parsing. Most work only
  in --no-correct-pts mode. Remove them to facilitate further cleanups.
* vo_opengl: handle chroma location (wm4, 2013-06-28)
  Use the video decoder chroma location flags and render chroma locations
  other than centered. Until now, we've always used the intuitive and
  obvious centered chroma location, but H.264 uses something else.

  FFmpeg provides a small overview in libavcodec/avcodec.h:

      /**
       *  X   X      3 4 X      X are luma samples,
       *             1 2        1-6 are possible chroma positions
       *  X   X      5 6 X      0 is undefined/unknown position
       */
      enum AVChromaLocation{
          AVCHROMA_LOC_UNSPECIFIED = 0,
          AVCHROMA_LOC_LEFT        = 1, ///< mpeg2/4, h264 default
          AVCHROMA_LOC_CENTER      = 2, ///< mpeg1, jpeg, h263
          AVCHROMA_LOC_TOPLEFT     = 3, ///< DV
          AVCHROMA_LOC_TOP         = 4,
          AVCHROMA_LOC_BOTTOMLEFT  = 5,
          AVCHROMA_LOC_BOTTOM      = 6,
          AVCHROMA_LOC_NB          ,    ///< Not part of ABI
      };

  The visual difference is literally minimal, but since videophiles
  apparently consider this detail a quality mark of a video renderer,
  support it anyway. We don't bother with chroma locations other than
  centered and left, though. Not sure about correctness, but it's probably
  ok.
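  A sketch of what "left" siting means for a renderer: the chroma sample
  sits on the left luma column, so the chroma plane must be sampled half a
  chroma texel off horizontally relative to the centered default (the sign
  convention depends on how the renderer applies the offset; the helper is
  illustrative, not mpv's code):

      #include <libavutil/pixfmt.h>  /* modern home of AVChromaLocation */

      /* Horizontal chroma sampling offset in chroma texels;
       * 0.0 means centered. */
      static double chroma_x_offset(enum AVChromaLocation loc)
      {
          switch (loc) {
          case AVCHROMA_LOC_LEFT:
          case AVCHROMA_LOC_TOPLEFT:
          case AVCHROMA_LOC_BOTTOMLEFT:
              return -0.5;  /* co-sited with the left luma sample */
          default:
              return 0.0;   /* centered/unknown: the intuitive default */
          }
      }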
* video: add a new method to configure filters and VOs (wm4, 2013-06-28)
  The filter chain and the video outputs have config() functions. They are
  strictly limited to transferring the video size and format. Other
  parameters (like color levels) have to be transferred separately.

  Improve upon this by introducing a separate set of reconfig() functions,
  which use mp_image_params to carry format parameters. This struct
  contains all image format related parameters from config(), plus
  additional parameters such as colorspace.

  Change vf_rotate to use it, as well as vo_opengl. vf_rotate is just an
  example/test case, but vo_opengl will need it later.

  The intention is also to get rid of VOCTRL_SET_YUV_COLORSPACE. This
  information is now handed to the VOs via reconfig(). The getter,
  VOCTRL_GET_YUV_COLORSPACE, will still be needed though.
* options: remove -lavdopts debug suboption (wm4, 2013-06-28)
  This can be set as avopt instead.
* core: add common function to initialize AVPacket (wm4, 2013-06-03)
  Audio and video had their own (very similar) functions to initialize an
  AVPacket (ffmpeg's packet struct) from a demux_packet (mplayer's packet
  struct). Add a common function for these.

  Also use this function for sd_lavc_conv. This is actually a functional
  change, as some libavformat subtitle demuxers add weird out-of-band
  stuff as side data.
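  A sketch of what such a shared helper boils down to, assuming the
  mpv-side packet carries a buffer, a length, and a keyframe flag (the
  struct and field names here are illustrative, not mpv's actual ones):

      #include <stdbool.h>
      #include <libavcodec/avcodec.h>

      struct demux_packet_sketch {
          unsigned char *buffer;
          int len;
          bool keyframe;
      };

      static void set_av_packet(AVPacket *pkt, struct demux_packet_sketch *dp)
      {
          av_init_packet(pkt);  /* API of that era; deprecated in modern FFmpeg */
          pkt->data = dp->buffer;
          pkt->size = dp->len;
          pkt->flags = dp->keyframe ? AV_PKT_FLAG_KEY : 0;
          /* the AVPacket borrows dp's data: it must not outlive dp */
      }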
* options: remove some questionable -lavdopts suboptions (wm4, 2013-05-29)
  Most of these are rather questionable, and the rest you rarely need to
  set manually. You still can set all of them with -lavdopts-o (because
  libavcodec has AVOptions for them).