path: root/video/decode
Commit message (Author, Date)
...
* dxva2: add interop (non-copyback) hwdec_type (Kevin Mitchell, 2016-02-17)
  This always falls back to software decoding right now. VO support will be added in future commits.
* dxva2: avoid using AV_PIX_FMT_P010 directly (wm4, 2016-02-17)
  The new code is essentially equivalent, but compiles against older ffmpeg.

  Fixes #2832.
* dxva2: use mp_HRESULT_to_str on FAILED(hr) (Kevin Mitchell, 2016-02-16)
* dxva2: use mp_image_pool_get_no_alloc for decoder images (Kevin Mitchell, 2016-02-16)
  This makes it more explicit that the pool doesn't ever actually do any allocating itself.
* dxva2: another attempt at using mp_image pool (Kevin Mitchell, 2016-02-16)
  Apparently, some drivers require you to allocate all of the decoder d3d surfaces at once. This commit changes the strategy from allocating surfaces as needed via mp_image_pool_set_allocator, to allocating all the surfaces in one call to IDirectXVideoDecoderService_CreateSurface and adding them to the pool with mp_image_pool_add.

  Fixes #2822.
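  As a rough illustration of the allocation strategy described above (a minimal sketch, not the commit's actual code; the pool/service arguments and the dxva2_wrap_surface() helper are assumptions):

      #include <dxva2api.h>
      #include "video/mp_image_pool.h"

      // Allocate all decoder surfaces in a single call, then hand each one to the
      // pool; the pool itself never allocates anything afterwards.
      static int allocate_all_surfaces(IDirectXVideoDecoderService *service,
                                       struct mp_image_pool *pool,
                                       int w, int h, D3DFORMAT fmt, int n)
      {
          IDirect3DSurface9 *surfaces[64] = {0};   // assumes n <= 64
          HRESULT hr = IDirectXVideoDecoderService_CreateSurface(
              service, w, h,
              n - 1,                               // BackBuffers: n surfaces in total
              fmt, D3DPOOL_DEFAULT, 0,
              DXVA2_VideoDecoderRenderTarget,
              surfaces, NULL);
          if (FAILED(hr))
              return -1;
          for (int i = 0; i < n; i++) {
              // dxva2_wrap_surface() is a hypothetical helper that packages the
              // IDirect3DSurface9 into an mp_image
              mp_image_pool_add(pool, dxva2_wrap_surface(surfaces[i]));
          }
          return 0;
      }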
* Rewrite ordered chapters and timeline stuff (wm4, 2016-02-15)
  This uses a different method to piece segments together. The old approach basically changes to a new file (with a new start offset) any time a segment ends. This meant waiting for audio/video end on segment end, and then changing to the new segment all at once. It had a very weird impact on the playback core, and some things (like truly gapless segment transitions, or frame backstepping) just didn't work.

  The new approach adds the demux_timeline pseudo-demuxer, which presents a uniform packet stream from the many segments. This is pretty similar to how ordered chapters are implemented everywhere else. It is also reminiscent of the FFmpeg concat pseudo-demuxer.

  The "pure" version of this approach doesn't work though. Segments can actually have different codec configurations (different extradata), and subtitles are most likely broken too. (Subtitles have multiple corner cases which break the pure stream-concatenation approach completely.) To counter this, we do two things:

  - Reinit the decoder with each segment. We go as far as allowing concatenating files with completely different codecs for the sake of EDL (which also uses the timeline infrastructure). A "lighter" approach would try to make use of decoder mechanisms to update e.g. the extradata, but that seems fragile.
  - Clip decoded data to segment boundaries. This is equivalent to normal playback core mechanisms like hr-seek, but now the playback core doesn't need to care about these things.

  These two mechanisms are equivalent to what happened in the old implementation, except they don't happen in the playback core anymore. In other words, the playback core is completely relieved from timeline implementation details. (Which honestly is exactly what I'm trying to do here. I don't think ordered chapter behavior deserves improvement, even if it's bad - but I want to get it out of the playback core.)

  There is code duplication between audio and video decoder common code. This is awful and could be shareable - but this will happen later.

  Note that the audio path has some code to clip audio frames for the purpose of codec preroll/gapless handling, but it's not shared, as sharing it would cause more pain than it would help.
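  The clipping mechanism can be pictured roughly like this (a sketch under assumed names; mpv's actual implementation is not shown):

      // Decide whether a decoded frame belongs to the current segment.
      // Frames outside the segment range are dropped instead of being passed on.
      static bool frame_within_segment(double pts, double seg_start, double seg_end)
      {
          if (pts == MP_NOPTS_VALUE)
              return true;              // no timestamp: let it through
          if (pts < seg_start)
              return false;             // preroll from before the segment
          if (pts >= seg_end)
              return false;             // already past the segment boundary
          return true;
      }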
* audio/video: expose codec info as separate field (wm4, 2016-02-15)
  Preparation for the timeline rewrite. The codec will be able to change, but the stream header will not.
* video: remove pointless parameter indirection (wm4, 2016-02-15)
  This is always the same value.
* dxva2: support HEVC Main 10 (wm4, 2016-02-15)
* dxva2: use mp_image pool for d3d surfaces (Kevin Mitchell, 2016-02-14)
  This is required so that the individual surfaces can pass beyond the dxva2 decoder and be passed to the vo.

  This also adds additional data to mp_image->planes[0] for IMGFMT_DXVA2, which is required for maintaining and releasing the surface even if the decoder code is uninited.

  The IDirectXVideoDecoder itself is encapsulated together with its surface pool and configuration in a dxva2_decoder structure whose creation and destruction is managed by talloc.
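  A sketch of the lifetime handling this implies (hypothetical helper; the commit only says that extra data is stored in planes[0], so the bare surface pointer below is used purely for illustration):

      #include <d3d9.h>
      #include "video/mp_image.h"

      static void release_d3d_surface(void *arg)
      {
          IDirect3DSurface9_Release((IDirect3DSurface9 *)arg);
      }

      // Tie the D3D surface's lifetime to the mp_image reference, so the frame
      // stays valid even after the decoder that produced it has been uninited.
      static struct mp_image *wrap_d3d_surface(struct mp_image *base,
                                               IDirect3DSurface9 *surface)
      {
          IDirect3DSurface9_AddRef(surface);
          struct mp_image *img = mp_image_new_custom_ref(base, surface,
                                                         release_d3d_surface);
          if (img)
              img->planes[0] = (uint8_t *)surface;  // illustration only; the real
                                                    // code stores a richer struct
          return img;
      }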
* dxva2: remove unused structure members (Kevin Mitchell, 2016-02-14)
* dxva2: streamline number of surface calculation (Kevin Mitchell, 2016-02-14)
  Use hwdec_get_max_refs and put the "4 base work surfaces" into an ADDITIONAL_SURFACES macro.
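  Illustratively, the surface budget then comes out as something like the following (the constant is an assumption, not taken from the commit):

      // Reference surfaces required by the codec, plus a fixed number of
      // "base work surfaces" for decoding and read-back.
      #define ADDITIONAL_SURFACES 4
      int num_surfaces = hwdec_get_max_refs(ctx) + ADDITIONAL_SURFACES;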
* video: approximate AVI timestamps via DTS handling (wm4, 2016-02-11)
  Until now (and in MPlayer traditionally), AVI timestamps were handled with a timestamp FIFO. AVI timestamps are essentially just strictly increasing frame numbers and are not reordered like normal timestamps.

  Limiting the FIFO is required because frames can be dropped. To make it worse, frame dropping can't be distinguished from the decoder not returning output due to increasing the buffering required for B-frames. ("Measuring" the buffering at playback start seems like an interesting idea, but won't work, as the buffering could be increased mid-playback.) Another problem is skipped frames (packets with data, but which do not contain a video frame).

  Besides dropped and skipped frames, there is the problem that we can't always know the delay. External decoders like MMAL are not going to tell us. (And later perhaps others, like direct VideoToolbox usage.)

  In general, this works poorly enough that I prefer the solution of passing through AVI timestamps as DTS. This is slightly incorrect, because most decoders treat DTS as MPEG-style timestamps, which already include a B-frame delay, and thus will be shifted by a few frames. This means there will be a problem with A/V sync in some situations.

  Note that the FFmpeg AVI demuxer shifts timestamps by an additional amount (which increases after the first seek!?!?), which makes the situation worse. It works well with VfW-muxed Matroska files, though. On RPI, the first X timestamps are broken until the MMAL decoder "locks on".
* player: fix crash if no video decoder can be initialized (wm4, 2016-02-10)
  Caused by the recent refactoring for complex filters.
* video/decode/dxva2.c: GUID_NULL conflicts (kwkam, 2016-02-06)
  GUID_NULL is defined in <ks.h>; gcc 6.0 refuses to link the executable because of that.

  Signed-off-by: wm4 <wm4@nowhere>
* vd_lavc: fix use after free in some hwdecs (Kevin Mitchell, 2016-02-06)
  fd339e3f53996efd2dae9525990da433d1e1bf89 introduced a regression that caused a segfault while uniniting the dxva2 decoder (and possibly vdpau too). The problem was that it freed the avctx earlier, before calling the backend-specific uninit which referenced it.

  Revert some of the changes of that commit, and avoid calling flush by checking whether the codec is open instead.

  (Based on a PR by Kevin Mitchell.)

  Signed-off-by: wm4 <wm4@nowhere>
* vd_lavc: avoid calling flush on an unopened AVCodecContext (wm4, 2016-02-05)
  It can be "dangerous". In particular, the decoder might have failed to initialize, and is now in a broken state. avcodec_flush_buffers() is not expected to be called in this state, and could trigger undefined behavior.
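  A minimal sketch of such a guard, assuming libavcodec's avcodec_is_open() is the check being referred to (the commits above only say "check whether the codec is open"):

      #include <libavcodec/avcodec.h>

      // Only flush if the codec was actually opened; flushing a context whose
      // open failed (or never happened) is not a defined operation.
      static void flush_if_open(AVCodecContext *avctx)
      {
          if (avctx && avcodec_is_open(avctx))
              avcodec_flush_buffers(avctx);
      }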
* video: remove AVI timestamps for dropped frames (wm4, 2016-02-04)
  Might possibly improve A/V sync, although this is at best approximate. (AVI is just fucked.)
* vd_lavc: remove redundant best_csp field (wm4, 2016-02-03)
  And some other simplifications.
* vd_lavc: force microsecond timestamps on RPI (wm4, 2016-02-03)
  Avoids "problems". In particular, it makes MMAL output a NOPTS timestamp if the input timestamp was NOPTS.

  Don't do it for other decoders. Ideally, we will at some point in the future switch to integer fractions for timestamps at least up until the filter layer. But this would be a larger change, and for now I'd prefer keeping the not-rounded demuxer timestamps (if we have them).
* audio/video: merge decoder return values (wm4, 2016-02-01)
  Will be helpful for the coming filter support. I planned on merging audio/video decoding, but this will have to wait a bit longer, so only remove the duplicate status codes.
* vd_lavc: release surfaces before destroying decoder (wm4, 2016-01-30)
  Commit b53cb8de added a delay queue for decoded frames. This is supposed to be used with copy-back decoders like dxva2-copy and vaapi-copy.

  Surfaces returned by them can't be referenced after uninitializing the decoders, so they have to be released before destroying the decoder. Move the flush_all() call above decoder uninit accordingly.

  Also move the destruction of the AVFrame used for decoding (just for being defensive - normally it doesn't hold any reference).
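  The resulting teardown order can be sketched like this (function and struct names are made up; only the ordering is the point):

      // Frames referencing decoder surfaces must be released before the decoder
      // and its surface pool are destroyed.
      static void uninit_video_decoder(struct lavc_state *st)   // hypothetical type
      {
          flush_all(st);                    // 1. drop queued/delayed frames first
          av_frame_free(&st->pic);          // 2. free the decode AVFrame (defensive)
          uninit_hwdec(st);                 // 3. now the hw decoder can go away
          avcodec_free_context(&st->avctx); // 4. finally free the codec context
      }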
* vd_lavc: allow switching between hw/sw decoding any time (wm4, 2016-01-29)
  We just need to provide an entrypoint for it, and move the main init code to a separate function. This gets rid of the messy video chain full reinit in command.c, which completely destroyed and recreated the video state for the purpose of mid-stream hw/sw switching.
* vd_lavc: simplify an aspect of hwdec fallback (wm4, 2016-01-29)
  Don't give the "software_fallback_decoder" field special meaning. Always set it, and rename it to "decoder". Whether hw decoding is used is determined by the "hwdec" field already.
* video: fix broken-PTS fallback determination (wm4, 2016-01-29)
  This code tries to deal with broken PTS timestamps, but since commit 271cabe6 it didn't always overwrite the previous timestamp as it should have. This mattered only if there were broken timestamps in the video stream.

  Also remove the pointless prev_codec_pts variables, since the decoder doesn't overwrite these fields anymore.
* rpi: add VC-1 support (wm4, 2016-01-28)
  Oops, this was forgotten earlier. Enables automatic use of the VC-1 hardware decoder on RPI.
* rpi: add mpeg-4 decoding support (wm4, 2016-01-27)
* vaapi: lower number of allocated surfaces again (wm4, 2016-01-26)
  Commit b53cb8de increased this by the number of additionally delayed surfaces. But since this is only enabled in copy-back mode (which is what process_image is about), the additional surfaces accounted for in the direct rendering case can be ignored.
* vd_lavc: delay images before reading them back (wm4, 2016-01-25)
  Facilitates hardware pipelining in particular with nvidia/dxva.
* video: cleanup pts/dts passing between decoder components (wm4, 2016-01-25)
  Instead of using semi-public codec_pts/codec_dts fields in struct dec_video, pass them via mp_image fields.
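  In effect, the timestamps now travel with the frame itself, roughly as sketched here (a sketch only; mp_image_from_av_frame() and mp_pts_from_av() are existing mpv helpers, but this is not the commit's actual code):

      // Attach pts/dts to the returned frame instead of keeping them in
      // semi-public fields of struct dec_video.
      AVRational tb = avctx->pkt_timebase;                  // timebase of the packets
      struct mp_image *mpi = mp_image_from_av_frame(frame);
      if (mpi) {
          mpi->pts = mp_pts_from_av(frame->pts, &tb);
          mpi->dts = mp_pts_from_av(frame->pkt_dts, &tb);
      }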
* vdpau: force driver to report preemption early (wm4, 2016-01-25)
  Another fix for the crazy and insane nvidia preemption behavior.

  This time, the situation is that we are using vo_opengl with vdpau interop, and that vdpau got preempted in the background while mpv was sitting idle. This can e.g. be reproduced by using:

      --force-window=immediate --idle --hwdec=vdpau

  and switching VTs. Then after switching back, load a video file.

  This will not let mp_vdpau_handle_preemption() perform preemption recovery, simply because it will do so only once vdp_decoder_create() has been called. There are some other API calls which trigger preemption, but many don't.

  Due to the way the libavcodec API works, vdp_decoder_create() is way too late. It does so when get_format returns. It notices creating the decoder fails, and continues calling get_format without the vdpau format. We could perhaps force it to reinit again (by adding a call to vdpau.c that checks for preemption and sets hwdec_request_reinit), but this seems too much of a mess.

  Solve it by calling an API function in mp_vdpau_handle_preemption() that empirically does trigger preemption: output_surface_put_bits_native(). This call is useless, and in fact should be doing nothing (empty update VdpRect). There's the slight chance that in theory it will slow down operation, but in practice it's bound to be harmless. It's likely the cheapest and simplest API call I've found that can trigger the fallback this way. (The driver is closed source, so it was up to trial & error.)

  Also, when initializing decoding, allow initial preemption recovery, which is needed to pass the test mentioned above.
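  The probing call itself boils down to something like the following (a sketch; the function-table plumbing and the surface used are assumptions):

      // Poke the driver with a no-op update; on NVIDIA this reliably returns
      // VDP_STATUS_DISPLAY_PREEMPTED once the display has been preempted.
      VdpRect rect = {0, 0, 0, 0};                      // empty update region
      VdpStatus st = vdp->output_surface_put_bits_native(
          probe_surface,                                // some existing output surface
          NULL, NULL,                                   // no source data / pitches
          &rect);
      if (st == VDP_STATUS_DISPLAY_PREEMPTED) {
          // run the normal preemption recovery path
      }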
* player: fix some oversights in video refactoring (wm4, 2016-01-22)
  vo_chain_uninit() isn't supposed to care much about the decoder (although decoders and outputs still go strictly together, so there is not much of an actual difference now). Also unset track.d_video correctly.

  Remove a stale declaration from dec_video.h as well.
* Change 3 more files to LGPL (wm4, 2016-01-20)
* vaapi: fix compilation on older FFmpeg/Libav (wm4, 2016-01-20)
  They don't define FF_PROFILE_VP9_0.

  Fixes #2737.
* Relicense some non-MPlayer source files to LGPL 2.1 or later (wm4, 2016-01-19)
  This covers source files which were added in mplayer2 and mpv times only, and where all code is covered by LGPL relicensing agreements. There are probably more files to which this applies, but I'm being conservative here.

  A file named ao_sdl.c exists in MPlayer too, but the mpv one is a complete rewrite, and was added some time after the original ao_sdl.c was removed. The same applies to vo_sdl.c, for which the SDL2 API is radically different in addition (MPlayer supports SDL 1.2 only).

  common.c contains only code written by me. But common.h is a strange case: although it originally was named mp_common.h and exists in MPlayer too, by now it contains only definitions written by uau and me. The exceptions are the CONTROL_ defines - thus not changing the license of common.h yet.

  codec_tags.c once contained large tables generated from MPlayer's codecs.conf, but all of these tables were removed.

  From demux_playlist.c I'm removing a code fragment from someone who was not asked; this probably could be done later (see commit 15dccc37).

  misc.c is a bit complicated to reason about (it was split off mplayer.c and thus contains random functions out of this file), but actually all functions have been added post-MPlayer. Except get_relative_time(), which was written by uau, but looks similar to 3 different versions of something similar in each of the Unix/win32/OSX timer source files. I'm not sure what that means in regards to copyright, so I've just moved it into another still-GPL source file for now.

  screenshot.c once had some minor parts of MPlayer's vf_screenshot.c, but they're all gone.
* vd_lavc: feed A53_CC side data packets into the demuxer for eia_608 decoding (Aman Gupta, 2016-01-18)
* video: refactor: disentangle decoding/filtering some more (wm4, 2016-01-16)
  This moves some code related to decoding from video.c to dec_video.c, and also removes some accesses to dec_video.c from the filtering code.

  dec_video.ch is starting to make sense, and simply returns video frames from a demuxer stream. The API exposed is also somewhat intended to be easily changeable to move decoding to a separate thread, if we ever want this (due to libavcodec already being threaded, I don't see much of a reason, but it might still be helpful).
* video: fix interactively changing aspect ratio (wm4, 2016-01-14)
  The aspect ratio calculations are cached (mainly so that aspect ratio related messages are not logged on every frame). The cache is not cleared anymore when video filters are reconfigured, but changing the video-aspect-ratio property relied on it. Make it explicit.

  Fixes #2714.
* video: decouple filtering/decoding slightly more (wm4, 2016-01-14)
  Lots of noise to remove the vfilter/vo fields from dec_video. From now on, video filtering and output will still be done together, summarized under struct vo_chain.

  There is the question where exactly the vf_chain should go in such a decoupled architecture. The end goal is being able to place a "complex" filter between video decoders and output (which will culminate in natural integration of A->V filters, e.g. for libavfilter audio visualizations). The vf_chain is still useful for "final" processing, such as format conversions and deinterlacing. Also, there's only 1 VO and 1 --vf option. So having 1 vf_chain for a VO seems ideal, since otherwise there would be no natural way to handle all these existing options and mechanisms.

  There is still some work required to truly decouple decoding.
* video: refactor: shuffle code around (wm4, 2016-01-14)
  struct dec_video should have nothing to do with video filters or outputs, and this huge chunk of code was somehow stuck directly in dec_video.c.
* video: refactor: handle video format fixups closer to decoder (wm4, 2016-01-14)
  Instead of handling this on filter chain reinit, do it directly after the decoder. This makes the code less entangled. In particular, this gets rid of the really weird "override params" concept in the video filter code.

  The last_format/fixed_formats fields have some redundancy with decoder_output, but unfortunately the latter has a slightly different use.
* demux: merge sh_video/sh_audio/sh_sub (wm4, 2016-01-12)
  This is mainly a refactor. I'm hoping it will make some things easier in the future due to cleanly separating codec metadata and stream metadata.

  Also, declare that the "codec" field cannot be NULL anymore. demux.c will set it to "" if it's NULL when added. This gets rid of a corner case everything had to handle, but which rarely happened.
* mpv_talloc.h: rename from talloc.h (Dmitrij D. Czarkoff, 2016-01-11)
  This change helps avoid conflicts with talloc.h from libtalloc.
* dxva2: log more debug infos (wm4, 2016-01-11)
  Dump the complete list of decoders and image formats. If it's a decoder we know, add a stringified name.
* Fix build on older libavcodec versions (wm4, 2016-01-08)
  avcodec_profile_name() was added only a week ago or so.
* vd_lavc: log codec profile when attempting hardware decoding (wm4, 2016-01-08)
  Should be useful.
* vaapi: add VP9 profile entries (BtbN, 2015-12-20)
* video: switch from using display aspect to sample aspect (wm4, 2015-12-19)
  MPlayer traditionally always used the display aspect ratio, e.g. 16:9, while FFmpeg uses the sample (aka pixel) aspect ratio.

  Both have a bunch of advantages and disadvantages. Actually, it seems using sample aspect ratio is generally nicer. The main reason for the change is making mpv closer to how FFmpeg works in order to make life easier. It's also nice that everything uses integer fractions instead of floats now (except --video-aspect option/property).

  Note that there is at least 1 user-visible change: vf_dsize now does not set the display size, only the display aspect ratio. This is because the image_params d_w/d_h fields did not just set the display aspect, but also the size (except in encoding mode).
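  For reference, the two conventions relate by a simple scaling (illustrative helper, not mpv code): display aspect = sample aspect * width / height. For example, 720x576 video with a 16:15 sample aspect has a display aspect of (16*720):(15*576) = 4:3.

      // Convert sample (pixel) aspect ratio to display aspect ratio.
      // The result is not reduced; real code would normalize the fraction.
      static void sar_to_dar(int w, int h, int sar_num, int sar_den,
                             int *dar_num, int *dar_den)
      {
          *dar_num = sar_num * w;
          *dar_den = sar_den * h;
      }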
* vd_lavc: fix avctx NULL checks (wm4, 2015-12-05)
  If reinit after a fallback from hardware fails, this field can be NULL. The check in control() was broken due to a typo (found by Coverity), and decode() lacked the check entirely.
* video: readd codec delay estimation (wm4, 2015-12-02)
  Approximately reverts commit 3ccac74d. This failed with some AVI files, which do pseudo-VFR by sending packets with empty frames (or repeat frames, depending on point of view). Specifically, these packets are not 0 bytes, so they don't get skipped by libavformat, as with the usual VFR AVI hack. Instead, the packet contains a VOP with vop_coded=0, so libavcodec will just return no frame.

  We could probably distinguish such skipped frames and delayed frames by explicitly measuring the codec delay by counting how long it takes to get the very first frame (and then treat skips as explicit drops), but we may as well simply reinstate the old code.

  To appease at least one semi-broken case, do not enable this logic on the RPI, as the FFmpeg MMAL wrapper has arbitrary buffering (and MMAL itself is asynchronous).