These were confined to the video path, but resetting them even if no
video is available shouldn't really matter. Always resetting them makes
the logic easier to follow, I think.
|
Sometimes, vf_pullup hung on seek. This was because it was never
properly reset. Old timestamps messed up the timestamp calculations,
which made the player show frames for a ridiculously long time, which is
perceived as pausing or hanging.
|
Recently, the check was moved, so it was printed only for source video
PTS (since that's easier, and filters should normally behave sanely).
But it turns out it's trivial to print a warning in the filter case too,
by reusing the code that normally checks for PTS forward jumps, without
needing any additional code. So, fine, restore the warning in this case.
|
Now that vdpau always uses a single image format, this can be
simplified.
|
The old ffmpeg vdpau support code uses separate vdpau pixel formats for
each decoder (pretty much because mplayer's architecture sucked), which
just gets in the way. Force the old decoder's output to IMGFMT_VDPAU,
and remove IMGFMT_IS_VDPAU() where we can remove it.
This should completely remove the differences between the old and new
vdpau decoder outside of the decoder.
|
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.
Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
|
pthreads should be available everywhere. Even if not, a pthread wrapper
that can't actually start threads could be provided for environments
without threads, thus disabling features that require them.
Make pthreads mandatory in order to simplify build dependencies and to
reduce ifdeffery. (Admittedly, there wasn't much complexity, but maybe
we will use pthreads more in the future, and then it'd become a real
bother.)
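
A wrapper like that could be as dumb as this (an entirely hypothetical
sketch; nothing like it actually exists in the tree):

    #include <errno.h>

    /* Stub pthread layer for a threadless platform: it satisfies the
     * build-time dependency but always fails at runtime, so features
     * that require threads simply stay disabled. */
    typedef struct { int unused; } pthread_t;

    static int pthread_create(pthread_t *thread, const void *attr,
                              void *(*start_routine)(void *), void *arg)
    {
        (void)thread; (void)attr; (void)start_routine; (void)arg;
        return ENOSYS; /* can't actually start threads */
    }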
|
Feature request from github issue #376.
|
Introduce a mp_cmd_def struct to define commands, instead of using
mp_cmd for this. This way each command parameter can be a m_option,
instead of m_option plus some more stuff.
Define the ARG_ macros directly in terms of the OPT_ macros. Not sure if
this makes it easier to read (maybe not, even if it looks simpler), but
at least it makes it easier to add other option types.
Another idea was adding a name for each parameter (so you could have
named parameters), but not today.
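
The rough shape of the change (struct layouts and macros below are
illustrative, not the actual definitions):

    struct m_option { const char *name; int type; };

    /* ARG_ macros are defined directly in terms of OPT_-style
     * initializers; each command parameter is a plain m_option. */
    #define OPT_INT(n)    { n, 1 }
    #define OPT_STRING(n) { n, 2 }
    #define ARG_INT       OPT_INT("")
    #define ARG_STRING    OPT_STRING("")

    struct mp_cmd_def {
        int id;
        const char *name;
        struct m_option args[4]; /* parameters, still unnamed */
    };

    static const struct mp_cmd_def mp_cmds[] = {
        { 1, "seek",    { ARG_INT } },
        { 2, "sub_add", { ARG_STRING } },
    };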
|
The hr-seek code assumes that when seeking the demuxer, the first image
decoded after the seek will have a PTS exactly equal to the demuxer seek
target time, or before that target time. Incorrect timestamps,
implicitly dropped initial frames, or broken files/demuxers can all
break this assumption, and lead to hr-seek missing the seek target.
Generally, this is not much of a problem (the user won't notice being
off by one frame), but it really shows when using the backstep feature.
In this case, backstepping would simply hang.
Add a simple hack that basically forces a minimal value for the
--hr-seek-demuxer-offset option (which is 0 by default) when doing a
backstep-seek. The chosen minimum value is arbitrary. There's no perfect
value, though in general it should perhaps be slightly longer than one
frametime, and the chosen value is more than enough for typical
framerates.
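
In code, the hack boils down to something like this (a sketch; the
names and the exact floor value are illustrative):

    #include <math.h>

    /* Force a minimum demuxer seek offset for backstep seeks. 0.075s
     * is an arbitrary pick, comfortably above one frametime at typical
     * rates (1/24 s is about 0.042 s, 1/60 s about 0.017 s). */
    static double effective_seek_offset(double hr_seek_demuxer_offset,
                                        int is_backstep)
    {
        const double min_backstep_offset = 0.075;
        if (is_backstep)
            return fmax(hr_seek_demuxer_offset, min_backstep_offset);
        return hr_seek_demuxer_offset;
    }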
|
So, FFmpeg/Libav requires us to figure out video timestamps ourselves
(see last 10 commits or so), but the methods it provides for this aren't
even sufficient. In particular, everything that uses AVI-style DTS (avi,
vfw-muxed mkv, possibly mpeg4-in-ogm) with a codec that has an internal
frame delay is broken. In this case, libavcodec will shift the packet-
to-image correspondence by the codec delay, meaning that with a delay=1,
the first AVFrame.pkt_dts is not 0, but that of the second packet. All
timestamps will appear shifted. The start time (e.g. the time displayed
when doing "mpv file.avi --pause") will not be exactly 0.
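
To illustrate with hypothetical numbers (codec delay of 1, 25 fps):

    /*
     *   packet DTS (ms):        0    40    80   120
     *   frame decoded from it:  -    f0    f1    f2   (one packet in flight)
     *   AVFrame.pkt_dts (ms):        40    80   120
     *
     * Frame f0 should display at 0 ms, but it reports the second
     * packet's DTS, so every output timestamp (including the start
     * time) comes out 40 ms late.
     */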
(According to Libav developers, this is how it's supposed to work; just
that the first DTS values are normally negative with formats that use
DTS "properly". Who cares if it doesn't work at all with very common
video formats? There's no indication that they'll fix this soon,
either. An elegant workaround is missing too.)
Add a hack to re-enable the old PTS code for AVI and vfw-muxed MKV.
Since these timestamps are not reordered, we wouldn't need to sort them,
but it's less code this way (and possibly more robust, should a demuxer
unexpectedly output PTS).
The original intention of all the timestamp changes recently was
actually to get rid of demuxer-specific hacks and the old timestamp
sorting code, but it looks like this didn't work out. Yet another case
where trying to replace native MPlayer functionality with FFmpeg/Libav
led to disadvantages and bugs. (Note that the old PTS sorting code
doesn't and can't handle frame dropping correctly, though.)
Bug reports:
https://trac.ffmpeg.org/ticket/3178
https://bugzilla.libav.org/show_bug.cgi?id=600
|
And by non-monotonic, we mean "strictly non-monotonic".
|
Using --start with files that use DTS only, or which simply have broken
PTS timestamps, would incorrectly drop frames and possibly not execute
the seek correctly.
Add yet another heuristic to detect this. The intent is that --start and
hr-seeks in general should work correctly, but in order to keep things
fast, we still want to allow frame dropping during hr-seek if there are
no problems doing so. Do this by disabling frame dropping by default,
but re-enabling it if there are no problems found for a while. As a
consequence, --start might be somewhat slower, but normal user
interaction should remain as fast as before.
Note that there's something subtle about the added code: the
has_broken_packet_pts field is checked even before the first packet is
fed to dec_video.c, so the field must not be set to 0 right on start.
It's not initially set to 0 anyway, because the heuristic requires
decoding some images before enabling frame dropping.
Note 2: it's not clear whether frame dropping during hr-seek really
helps; I didn't benchmark it.
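
The heuristic's shape, roughly (names and the threshold are
illustrative, not the actual code):

    #include <stdbool.h>

    #define GOOD_PTS_REQUIRED 20 /* illustrative threshold */

    struct pts_check {
        int num_good; /* consecutive packets with a usable PTS */
    };

    /* Frame dropping starts out disabled, and is re-enabled only after
     * enough packets in a row carried a usable PTS. */
    static bool allow_hrseek_framedrop(struct pts_check *c, bool pkt_has_pts)
    {
        if (!pkt_has_pts)
            c->num_good = 0;              /* problem found: disable again */
        else if (c->num_good < GOOD_PTS_REQUIRED)
            c->num_good++;
        return c->num_good >= GOOD_PTS_REQUIRED;
    }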
|
Fixes #369
|
Fixup commit. .append() seems to do nothing.
|
Previous code was using the values of the AudioChannelLabel enum
directly to create the channel bitmap. While this was quite smart, it
was pretty unreadable and fragile (what if Apple changes the values of
those enums?).
Change it to use a 'dumb' conversion table.
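
The idea, sketched with made-up values (the real table maps CoreAudio's
kAudioChannelLabel_* constants to mpv speaker IDs):

    #include <stddef.h>

    struct chmap_entry { int ca_label; int mp_speaker; };

    /* One explicit row per supported label; nothing depends on the
     * numeric values Apple assigns to the enum. */
    static const struct chmap_entry chmap_table[] = {
        { /* kAudioChannelLabel_Left  */ 1, /* front left  */ 0 },
        { /* kAudioChannelLabel_Right */ 2, /* front right */ 1 },
        /* ... */
    };

    static int ca_label_to_mp_speaker(int label)
    {
        for (size_t i = 0; i < sizeof(chmap_table) / sizeof(chmap_table[0]); i++) {
            if (chmap_table[i].ca_label == label)
                return chmap_table[i].mp_speaker;
        }
        return -1; /* unknown label */
    }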
|
Apparently this is needed for stuff like iconv.
|
This was a memory leak.
Also remove the AF_CONTROL_COMMAND_LINE code, which was inactive. (It's
never called if the new option parser is used.)
|
The d_video->pts field was a bit strange. The code overwrote it multiple
times (on decoding, on filtering, then once again...), and it wasn't
really clear what purpose this field had exactly. Replace it with the
mpctx->video_next_pts field, which is relatively unambiguous.
Move the decreasing PTS check to dec_video.c. This means it acts on
decoder output, not on filter output. (Just like in the previous commit,
assume the filter chain is sane.) Drop the jitter vs. reset semantics;
the PTS determined by dec_video.c never goes backwards, and demuxer
timestamps don't "jitter".
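
Conceptually, the dec_video.c side becomes a simple monotonicity guard
(an illustrative sketch, not the actual code):

    /* Never let the determined PTS go backwards relative to the last
     * decoded frame; decreasing decoder output is clamped (and would
     * be the place to log a warning). */
    static double sanitize_pts(double *last_pts, double pts)
    {
        if (pts < *last_pts)
            pts = *last_pts;
        *last_pts = pts;
        return pts;
    }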
|
Also get rid of the PTS check _after_ filters. This means if there's a
video filter which unsets PTS, no warning will be printed. But we assume
that all filters are well-behaved enough by now.
|
Refactor the PTS handling code to make it cleaner, and to separate the
bits that use PTS sorting.
Add a heuristic to fall back to DTS if the PTS is non-monotonic. This
code is based on what FFmpeg/Libav use for ffplay/avplay and also
best_effort_timestamp (which is only in FFmpeg). Basically, this 1. just
uses the DTS if PTS is unset, and 2. ignores PTS entirely if PTS is non-
monotonic, but DTS is sorted.
The code is pretty much the same as in Libav [1]. I'm not sure if all of
it is really needed, or if it does more than what the paragraph above
mentions. But maybe it's fine to cargo-cult this.
This heuristic fixes playback of mpeg4 in ogm, which returns packets
with PTS==DTS, even though the PTS timestamps should follow codec
reordering. This is probably a libavformat demuxer bug, but good luck
trying to fix it.
The way vd_lavc.c returns the frame PTS and DTS to dec_video.c is a bit
inelegant, but maybe better than trying to mess the PTS back into the
decoder callback again.
[1] https://git.libav.org/?p=libav.git;a=blob;f=cmdutils.c;h=3f1c667075724c5cde69d840ed5ed7d992898334;hb=fa515c2088e1d082d45741bbd5c05e13b0500804#l1431
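
The gist of that heuristic, condensed (a simplified sketch of the logic
in the linked cmdutils.c code; nopts stands in for the real "no value"
sentinel):

    /* Count backward jumps in PTS and DTS separately, and prefer PTS
     * unless it is the more broken of the two, or missing entirely. */
    struct ts_guess {
        int faulty_pts, faulty_dts;
        double last_pts, last_dts;
    };

    static double guess_pts(struct ts_guess *c, double pts, double dts,
                            double nopts)
    {
        if (dts != nopts) {
            c->faulty_dts += dts <= c->last_dts;
            c->last_dts = dts;
        }
        if (pts != nopts) {
            c->faulty_pts += pts <= c->last_pts;
            c->last_pts = pts;
        }
        /* 1. PTS unset: fall back to DTS.
         * 2. PTS non-monotonic but DTS sorted: the counters let DTS win. */
        if (pts != nopts && (c->faulty_pts <= c->faulty_dts || dts == nopts))
            return pts;
        return dts;
    }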
|
These used the suffix _resync_stream, which is a bit misleading. Nothing
gets "resynchronized", they really just reset state.
(Some audio decoders actually used to "resync" by reading packets for
resuming playback, but that's not the case anymore.)
Also move the function in dec_video.c to the top of the file.
|
This makes the new code equivalent to the old one, which often passed
dts as pts. Also rename some variables to clear things up.
|
The code stopped at kAudioChannelLabel_TopBackRight and missed mappings
for 5 more channel labels. These are in a completely different order
than the mpv ones, so they must be mapped manually.
|
This includes the case when lavc decodes audio with more than 8
channels, which our audio chain currently does not support.
The changes in ad_lavc.c are just simplifications. The code tried to
avoid overriding global parameters if it found something invalid, but
that is not needed anymore.
|
When using PIC on x86 (e.g. with hardened toolchains) the ebx register
is reserved and cannot be used in assembly code.
For vf_eq we allow the compiler to use memory as input.
For vf_noise we temporarily borrow the ebp register.
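
For the vf_eq case, the fix amounts to using a memory constraint, along
these lines (a simplified sketch, not the actual filter code):

    /* With an "m" constraint, gcc addresses the operand directly in
     * memory and never has to pin it to a register, so ebx stays free
     * for the GOT pointer when compiling with -fPIC on x86-32. */
    static void add_one(int *counter)
    {
    #if defined(__GNUC__) && defined(__i386__)
        __asm__ volatile ("incl %0" : "+m" (*counter));
    #else
        *counter += 1;
    #endif
    }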
This fixes #361.
Signed-off-by: Natanael Copa <ncopa@alpinelinux.org>
|
This was broken by the recent commits. Apparently realvideo timestamps
are severely mangled, and Matroska _of course_ doesn't have the sane,
unmangled timestamps, but something unusable. The existing unmangling
code in demux_mkv.c didn't output proper timestamps either. Instead,
it was something weird that triggered sorting. Without sorting (it was
disabled by default recently), you'd get decreasing PTS warnings.
In order to fix this, steal some code from libavcodec. Basically copy
the contents of rv34_parser.c (with some changes), which makes
everything magically work. (Maybe it would be better to use the
libavcodec parser API, but I don't want to do that just for this. An
alternative idea would be refusing to read files that have realvideo
tracks and delegate this to demux_lavf.c, but maybe that's too radical,
too.)
I wish I hadn't noticed this...
|
Fixes #370
|
NSLock should be unlocked before dealloc is called on it.
|
Resampling with non-ancient ALSA setups works fine, so there is no
need to keep this around. Furthermore, as of writing, the default
builtin resampler used by many ALSA setups (taken from libspeex)
actually has higher quality than the default resampling modes of
avresample and swresample.
|
Apparently just 5 packets is not enough for the initial audio decode
(which is needed to find the format). The old code (before the recent
refactor) appeared to use 5 packets, but there were apparently other
code paths which in the end amounted to more than 5 packets being read.
The sample that failed (see github issue #368) needed 9 packets.
Fixes #368.
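
The shape of the loop in question (a sketch; the names and the limit
are illustrative):

    #include <stdbool.h>

    #define MAX_INIT_PACKETS 16 /* must be generous; one sample needed 9 */

    /* Keep feeding packets until the decoder can report a valid output
     * format, up to a fixed limit. */
    static bool init_audio_decode(bool (*feed_packet)(void),
                                  bool (*format_known)(void))
    {
        for (int n = 0; n < MAX_INIT_PACKETS; n++) {
            if (format_known())
                return true;
            if (!feed_packet())
                return false; /* EOF or decode error */
        }
        return format_known();
    }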
|
This used a really weird idiom: a loop that iterates only once, so you
can use break; to jump out of the block.
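
For reference, the idiom looked like this (illustrative):

    static int step1(void) { return  0; }
    static int step2(void) { return -1; }

    static void example(void)
    {
        do {
            if (step1() < 0)
                break;  /* "break" jumps past the rest of the block */
            if (step2() < 0)
                break;
            /* ... */
        } while (0);    /* condition is false: the body runs exactly once */
    }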
|
These packets have to be explicitly dropped, because usually libavcodec
uses 0-sized packets to flush delayed frames, meaning just passing
through these packets would have bad consequences.
Normally, libavformat doesn't output 0-sized packets anyway. But I don't
want to take any chances, so don't delete it, and just move it out of
the way to demux.c.
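
The check itself is trivial (a sketch with illustrative names):

    #include <stddef.h>

    /* Dropping zero-sized packets at the demuxer level keeps them from
     * reaching libavcodec, where an empty packet means "flush delayed
     * frames" and would derail decoding mid-stream. */
    static int should_drop_packet(size_t packet_len)
    {
        return packet_len == 0;
    }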
|
Incidentally, this seems wrong in configure as well, because only
gl-win32 depends on it, instead of direct3d also depending on it (as I
did here).
|
Solve this by passing through the language to the player, which then
uses the generic subtitle selection code to make a choice.
Fixes #367.
|
If the value for --cache-on-pause is larger than --cache-min, and the
cache runs below --cache-on-pause, but above --cache-min, the logic
would demand to pause the player and then unpause immediately again.
This doesn't make much sense, and alternating the pause state in each
playloop iteration has negative consequences. Add an explicit check to
avoid this situation.
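
One way to express the intended behavior is hysteresis between two
watermarks (a sketch with generic names, not the actual playloop code):

    #include <stdbool.h>

    /* Below `low`, pause and buffer; at or above `high`, resume. While
     * between the two thresholds, keep the current state, so the pause
     * state can no longer flip on every playloop iteration. */
    static bool update_cache_pause(bool paused, float fill,
                                   float low, float high)
    {
        if (fill < low)
            return true;
        if (fill >= high)
            return false;
        return paused;
    }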
|
This means the code that tries to figure out the timestamp from
demuxer and decoder output is now all in dec_video.c. We set the
final timestamp on the returned image (mp_image.pts), as well as
the d_video->pts field.
The way the player uses the d_video->pts field is still a bit messy.
Maybe this could be cleaned up later.
|
It appears PTS sorting was useful only for avi files (and VfW-muxed
mkv). Maybe it was historically also important for decoders with broken
or non-existent PTS reordering (win32 codecs?). But now that we handle
demuxers which output only DTS correctly, it just seems like dead weight.
Disable it by default. The --pts-association-mode option is now forced
to always use the decoder's PTS value. You can still enable the old
default (auto) or force sorting. But we will probably remove this option
entirely at some point.
Make demux_mkv export timestamps as DTS when it's in VfW mode. This is
needed to get correct timestamps with the new default mode. demux_lavf
already does that.
|
This was needed to determine PTS from DTS, but the previous commits
make it unnecessary.
The builtin genpts hack was used for DVD, because libavformat's genpts
essentially went amok on DVD timestamp resets. See commit 65d87091 for
details.
|
Having the DTS directly can be useful for restoring PTS values.
The avi file format doesn't actually store PTS values, just DTS. An
older hack explicitly exported the DTS as PTS (ignoring the [I assume]
genpts-generated nonsense PTS), which is not necessary anymore due to
this change.