| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Normally, OSD is updated every time the playloop is run. This has to be
done, because the OSD may implicitly reference various properties,
without knowing whether they really need to be updated or not. (There's
a property update mechanism, but it's mostly unavailable, because OSD is
special-cased and cannot use the client API mechanism properly.)
Normally, these updates are no problem, because the OSD is only actually
printed when the OSD text actually changes.
But commit d23ffd24 added a rate-limiting mechanism, which tries to
limit OSD updates at most every 50ms (or the next video frame). Since it
can't know in advance whether the OSD is going to change or not, this
simply woke up the player every 50ms.
Change this so that the OSD is updated only as part of general
updates determined through mp_notify(). (This function also notifies the
client API of changed properties.) The desired result is that the player
will not wake up at all in normal idle mode, but still update properties
that can change when paused, such as the cache.
This is mostly a cosmetic change (in the sense of making runtime
behavior just slightly better). It has the slightly more negative
consequence that properties which update implicitly (such as "clock")
will not update periodically anymore.
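A minimal sketch of the intended wakeup logic (structure and field
names here are illustrative, not the actual mpv code): the OSD is only
re-rendered when a change notification was queued, so an idle, paused
player no longer needs a 50ms timer.

    #include <stdbool.h>

    struct playloop_state {
        bool osd_needs_update;      /* set whenever properties change */
    };

    /* Called on every property/state change (the same place that wakes
     * up the client API). */
    static void notify_change(struct playloop_state *st)
    {
        st->osd_needs_update = true;
    }

    /* Called from the playloop; returns immediately when idle, so no
     * periodic 50ms wakeup is required anymore. */
    static void update_osd(struct playloop_state *st)
    {
        if (!st->osd_needs_update)
            return;
        st->osd_needs_update = false;
        /* ...re-render the OSD text here... */
    }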
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.
The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will.)
This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.
Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.
The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.
This is a big refactor. Some things may be untested and could be broken.
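As a rough illustration of the new shape of the internal API (all
names and fields simplified/hypothetical, not the real mp_hwdec_devices
layout): the device list is opaque, and accessors do their own locking,
so callers don't have to reason about thread-safety themselves.

    #include <pthread.h>

    /* Opaque to users; only the accessors below touch the fields. */
    struct hwdec_devices {
        pthread_mutex_t lock;
        void *ctx;              /* API-specific context, e.g. a D3D or
                                   VA-API handle */
    };

    static void *hwdec_devices_get_ctx(struct hwdec_devices *devs)
    {
        pthread_mutex_lock(&devs->lock);
        void *ctx = devs->ctx;
        pthread_mutex_unlock(&devs->lock);
        return ctx;
    }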
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Calculate the buffering percentage in the same code which determines
whether the player is or should be buffering. In particular it can't
happen that percentage and buffering state are slightly out of sync due
to calling DEMUXER_CTRL_GET_READER_STATE a second time and combining
its result with the previously determined buffering state.
Now it's also easier to guarantee that the buffering state is updated
properly.
Add some more verbose output as well.
(Damn I hate this code, why did I write it?)
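A sketch of the idea with made-up names: both the buffering flag and
the percentage are derived from a single snapshot of the reader state,
so they can never disagree.

    #include <stdbool.h>

    struct reader_state {           /* one snapshot from the demuxer */
        bool underrun;              /* reader ran out of packets */
        double ts_duration;         /* seconds of demuxed data cached */
    };

    struct buffer_info {
        bool buffering;
        int percent;
    };

    static struct buffer_info get_buffer_info(const struct reader_state *s,
                                              double min_secs)
    {
        struct buffer_info r;
        r.buffering = s->underrun || s->ts_duration < min_secs;
        double frac = min_secs > 0 ? s->ts_duration / min_secs : 1.0;
        r.percent = (int)(100 * (frac > 1.0 ? 1.0 : frac));
        return r;
    }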
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Subtitles can be preloaded, which means they're fully read and copied
into ASS_Track. This in turn is mainly for the sake of being able to do
subtitle seeking (when it comes down to it, subtitle seeking is the
cause for most trouble here).
Commit a714f8e92 broke preloaded subtitles which have events with
unknown duration, such as some MicroDVD samples. The event list gets
cleared on every seek, so the property of being preloaded obviously gets
lost.
Fix this by moving most of the preloading logic to dec_sub.c. If the
subtitle list gets cleared, they are not considered preloaded anymore,
and the logic for demuxed subtitles is used.
As another minor thing, preloading subtitles neither disabled the demux
stream nor discarded packets. Thus you could get queue
overflows in theory (harmless, but annoying). Fix this by explicitly
discarding packets in preloaded mode.
In summary, the only differences between preloaded and normal demuxing
now are:
1. a seek is issued, and all packets are read on start
2. during playback, discard the packets instead of feeding them to the
subtitle decoder
This is still pretty annoying. It would be nice if the subtitle index
(and maybe a subtitle packet cache for instant subtitle presentation
when seeking back) could be maintained in the demuxer
instead. Half of all file formats with interleaved subtitles have
this anyway (mp4, mkv muxed with newer mkvmerge).
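A rough sketch of the packet handling in preloaded mode (helper names
are hypothetical; the real logic lives in dec_sub.c):

    struct packet;
    struct sub_state { int preloaded; };

    struct packet *demux_read_next(void);              /* hypothetical */
    void decode_sub_packet(struct sub_state *s, struct packet *pkt);
    void free_packet(struct packet *pkt);

    static void feed_subs(struct sub_state *sub)
    {
        struct packet *pkt;
        while ((pkt = demux_read_next())) {
            if (sub->preloaded) {
                /* Events are already in the ASS_Track; discard the
                 * packet so the demuxer queue can't overflow. */
                free_packet(pkt);
                continue;
            }
            decode_sub_packet(sub, pkt);
        }
    }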
|
|
|
|
|
|
| |
No need to pass endpts down in such a dumb way.
Also remove an outdated comment somewhere.
|
|
|
|
|
| |
Instead of having reselect_demux_streams() look at all streams, make it
look at the current stream that is being enabled/disabled.
|
|
|
|
|
|
|
| |
It was just dead code.
Also fixes the stream-open-filename property, which is supposed to be
read-only if a file was already opened.
|
|
|
|
|
|
| |
Some oddity that is not needed anymore. The only thing which still
referenced them was avoiding loading external files more than once,
which is now prevented by checking the list of tracks instead.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
See --lavfi-complex option.
This is still quite rough. There's no support for dynamic configuration
of any kind. There are probably corner cases where playback might freeze
or burn 100% CPU (due to dataflow problems when interacting with
libavfilter).
Future possible plans might include:
- freely switch tracks by providing some sort of default track graph
label
- automatically enabling audio visualization
- automatically mix audio or stack video when multiple tracks are
selected at once (similar to how multiple sub tracks can be selected)
|
|
|
|
| |
Preparation.
|
|
|
|
| |
Slightly better.
|
|
|
|
| |
Reduces the dependency of the filter/output code on the decoder.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Regression caused by commit 3b95dd47. Also see commit 4c25b000. We can
either use video_next_pts and add "delay", or we just use video_pts. Any
other combination breaks. The reason why the assumption that delay==0 at
this point was wrong is exactly that after displaying the first video
frame (usually done before audio resync) a new frame might be "added"
immediately, resulting in a new video_next_pts and "delay", which will
still amount to video_pts.
Fixes #2770. (The reason why display-sync was blamed in this issue is
because enabling display-sync in the options forces a prefetch of 2
frames instead of 1 for seeks/playback restart, which triggers the
issue, even if display-sync is not actually enabled. In this case,
display-sync is never enabled because the frames have an unusually high
frame duration. This is also what exposed the initial desync issue.)
|
|
|
|
|
|
|
|
|
|
|
|
| |
It doesn't need to be part of the big context, but is strictly part of
shuffling data from the audio filters to audio output, and thus belongs
into ao_chain.
It also turns out that clearing it in clear_audio_output_buffers() is
completely redundant.
(Of course ao_buffer is an abomination in the first place and shouldn't
exist at all.)
|
| |
|
|
|
|
|
|
|
|
|
| |
Similar to the video path. dec_audio.c now handles decoding only. It
also looks very similar to dec_video.c, and actually contains some of
the rewritten code from it. (A further goal might be unifying the
decoders, I guess.)
High potential for regressions.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This covers source files which were added in mplayer2 and mpv times
only, and where all code is covered by LGPL relicensing agreements.
There are probably more files to which this applies, but I'm being
conservative here.
A file named ao_sdl.c exists in MPlayer too, but the mpv one is a
complete rewrite, and was added some time after the original ao_sdl.c
was removed. The same applies to vo_sdl.c, for which the SDL2 API is
radically different in addition (MPlayer supports SDL 1.2 only).
common.c contains only code written by me. But common.h is a strange
case: although it originally was named mp_common.h and exists in MPlayer
too, by now it contains only definitions written by uau and me. The
exceptions are the CONTROL_ defines - thus not changing the license of
common.h yet.
codec_tags.c contained once large tables generated from MPlayer's
codecs.conf, but all of these tables were removed.
From demux_playlist.c I'm removing a code fragment from someone who was
not asked; this probably could be done later (see commit 15dccc37).
misc.c is a bit complicated to reason about (it was split off mplayer.c
and thus contains random functions out of this file), but actually all
functions have been added post-MPlayer. Except get_relative_time(),
which was written by uau, but looks similar to 3 different versions of
the same thing in each of the Unix/win32/OSX timer source files. I'm
not sure what that means in regards to copyright, so I've just moved it
into another still-GPL source file for now.
screenshot.c once had some minor parts of MPlayer's vf_screenshot.c, but
they're all gone.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Eventually we want the VO to be driven by an A->V filter, so a decoder
doesn't even have to exist. Some features definitely require a decoder
though (like reporting the decoder in use, hardware decoding, etc.), so
for each thing which accessed d_video, it has to be redecided if and how
it can access decoder state.
At least the "framedrop" property slightly changes semantics: you can
now always set this property, even if no video is active.
Some untested changes in this commit, but our bio-based distributed
test suite has to take care of this.
|
|
|
|
| |
The same is going to happen to d_video and d_audio later.
|
|
|
|
|
|
|
|
|
|
|
| |
This moves some code related to decoding from video.c to dec_video.c,
and also removes some accesses to dec_video.c from the filtering code.
dec_video.ch is starting to make sense, and simply returns video frames
from a demuxer stream. The API exposed is also somewhat intended to be
easily changeable to move decoding to a separate thread, if we ever want
this (due to libavcodec already being threaded, I don't see much of a
reason, but it might still be helpful).
|
|
|
|
|
|
| |
Channel switching is by now handled inside the global DVB state.
Anyway, the last switching direction is not really useful
and of no interest inside the player.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Lots of noise to remove the vfilter/vo fields from dec_video.
For now, video filtering and output will still be done together,
summarized under struct vo_chain.
There is the question where exactly the vf_chain should go in such a
decoupled architecture. The end goal is being able to place a "complex"
filter between video decoders and output (which will culminate in
natural integration of A->V filters for
libavfilter audio visualizations). The vf_chain is still useful for
"final" processing, such as format conversions and deinterlacing. Also,
there's only 1 VO and 1 --vf option. So having 1 vf_chain for a VO seems
ideal, since otherwise there would be no natural way to handle all these
existing options and mechanisms.
There is still some work required to truly decouple decoding.
|
| |
|
|
|
|
|
|
| |
struct dec_video should have nothing to do with video filters or
outputs, and this huge chunk of code was somehow stuck directly in
dec_video.c.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Basically reimplement it. The old implementation was quite stupid, and
was probably done this way because video filtering and output used to be
way less decoupled. Now we can reimplement it in a very simple way: when
backstepping, seek to current time, but keep the last frame that was
supposed to be discarded when reaching the target time. When the seek
finishes, prepend the saved frame to the video frame queue.
A disadvantage is that the new implementation fails to skip over
timeline boundaries (ordered chapters etc.), but this never worked
properly anyway. It's possible that this will be fixed some time in the
future.
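In pseudo-C (all names hypothetical), the new approach boils down to:

    struct frame;
    struct player {
        double video_pts;           /* pts of the currently shown frame */
        int backstep_pending;
        struct frame *saved_frame;  /* frame that would have been dropped
                                       while skipping to the seek target */
    };

    void hr_seek(struct player *p, double target);
    void queue_prepend_frame(struct player *p, struct frame *f);

    static void cmd_backstep(struct player *p)
    {
        p->backstep_pending = 1;
        hr_seek(p, p->video_pts);   /* exact seek to the current time */
    }

    static void on_hr_seek_done(struct player *p)
    {
        if (p->backstep_pending && p->saved_frame) {
            /* show the frame right before the seek target */
            queue_prepend_frame(p, p->saved_frame);
            p->saved_frame = NULL;
        }
        p->backstep_pending = 0;
    }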
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This slightly changes behavior when seeking while transport streams or
mpeg files are played, as well as behavior when seeking with external
audio/subtitle tracks.
get_main_demux_pts() is evil because it always blocks on the demuxer (if
there isn't already a packet queued). Thus it could lock up the player,
which is a shame because all other possible causes have been removed.
The reduced "precision" when seeking in the ts/mpeg cases (where
SEEK_FACTOR is used, resulting in byte seeks instead of timestamp seeks)
might lead to issues. We should probably drop this heuristic. (It was
introduced because there is no other way to seek in files with PTS
resets with libavformat, but its value is still questionable.)
|
|
|
|
|
|
|
|
| |
Slightly change how it is decided when a new packet should be read.
Switch to demux_read_packet_async(), and let the player "wait properly"
until required subtitle packets arrive, instead of blocking everything.
Move distinguishing the cases of passive and active reading into the
demuxer, where it belongs.
|
|
|
|
|
|
|
| |
This includes the case of switching ordered chapter boundaries. It will
now be recreated on each timeline part switch. This shouldn't be much of
a problem with modern libass. (Older libass versions use fontconfig for
memory fonts, and will be very slow to reinitialize memory fonts.)
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since commit 6d9cb893, subtitle state doesn't survive timeline switches
(ordered chapters etc.). So there is no point in caching the state per
sh_stream anymore (which would be required to deal with multiple
segments). Move the cache to struct track.
(Whether it's worth caching the subtitle state just for the situation
when subtitle tracks get reselected is questionable. But for now, it's
nice to have the subtitles immediately show up when reselecting a
subtitle.)
|
|
|
|
| |
This tmp thing had not much of a purpose anymore.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Instead of periodically trying to enable it again. There are two cases
that can happen:
1. A random discontinuity messed everything up,
2. Things are just broken and will desync all the time
Until now, it tried to deal with case 1 - but maybe this is really rare,
and we don't really need to care about it. On the other hand, case 2 is
kind of hard to diagnose if the user doesn't use the terminal.
Seeking will reenable display-sync, so you can fix playback if case 1
happens, but still get predictable behavior in case 2.
|
|
|
|
| |
But not much.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was used with --no-sub-ass (aka --no-ass). This option (which is
not yet removed) strips all styling from the subtitles, and renders them
as plaintext only. For some reason, it originally seemed convenient to
reuse all the OSD text rendering code (osd_libass.c). While this was
indeed simple, it had a bad influence on the rest of the code. For
example, it had to decide whether to go through the OSD code path, or
the proper subtitle renderer in sd_ass.c.
Kill the OSD subtitle renderer. Reimplement --no-sub-ass and also
"secondary" subtitles in sd_ass.c. fill_plaintext() contains some rather
minor code duplication with osd_libass.c for setting up a dummy
ASS_Event and escaping the stripped text. Since sd_ass.c already has to
handle "normal" text subtitles, and has code for stripping ASS tags,
this remains all relatively simple.
Remove all the unnecessary crap from the rest of the code.
|
|
|
|
|
|
|
|
|
| |
Use the demux_set_ts_offset() added in the previous commit to make each
timeline segment use timestamps according to its relative position
within the overall timeline. As a consequence we don't need to care
about these timestamps anymore, and everything becomes simpler.
(Another minor but delicious nugget of sanity.)
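Conceptually (the signature here is assumed for illustration), each
timeline part just gets an offset so its packets already carry
global-timeline timestamps:

    struct demuxer;
    void demux_set_ts_offset(struct demuxer *d, double offset);

    /* part_global_start: where the segment begins in the overall
     * timeline; part_source_start: where it begins in its source file. */
    static void init_timeline_part(struct demuxer *d,
                                   double part_global_start,
                                   double part_source_start)
    {
        demux_set_ts_offset(d, part_global_start - part_source_start);
    }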
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Most of this is explained in the DOCS additions.
This gives us slightly more sanity, because there is less interaction
between the various parts. The goal is getting rid of the video_offset
entirely.
The simplification extends to the user API. In particular, we don't need
to fix missing parts in the API, such as the lack for a seek command
that seeks relatively to the start time. All these things are now
transparent.
(If someone really wants to know the real timestamps/start time, new
properties would have to be added.)
|
|
|
|
|
|
|
|
|
|
|
| |
This adds support for the progress indicator taskbar extension
that was introduced with Windows 7 and Windows Server 2008 R2.
I don’t like this solution because it keeps its own state and
introduces another VOCTRL, but I couldn’t come up with anything
less messy.
closes #2399
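For reference, the underlying Windows call looks roughly like this
(standalone sketch, not the actual VOCTRL handler; assumes COM has
already been initialized on this thread):

    #define COBJMACROS
    #include <windows.h>
    #include <shobjidl.h>

    static void set_taskbar_progress(HWND hwnd, int percent)
    {
        ITaskbarList3 *tbl = NULL;
        if (SUCCEEDED(CoCreateInstance(&CLSID_TaskbarList, NULL,
                                       CLSCTX_INPROC_SERVER,
                                       &IID_ITaskbarList3,
                                       (void **)&tbl))) {
            ITaskbarList3_HrInit(tbl);
            ITaskbarList3_SetProgressValue(tbl, hwnd, percent, 100);
            ITaskbarList3_Release(tbl);
        }
    }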
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We always let audio slowly desync until a threshold is reached, and then
pushed it back by applying a maximum compensation speed. Refine what
comes afterwards: instead of playing with the nominal video speed, use
the actual required audio speed for keeping sync as measured by the A/V
difference. (The "actual" speed is the ideal speed with A/V differences
added.)
Although this works in theory, it's somewhat questionable how much this
works in practice. The ideal time value is actually not exact, but is
the time at which the frame is scheduled (could be compensated by using
the time_left calculations in handle_display_sync_frame()). It doesn't
account for speed changes or catastrophic discontinuities. It uses only
10 past frames.
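As a back-of-the-envelope formula (purely illustrative; the real code
uses the last 10 frames and its own sign conventions): the "actual"
speed is the nominal speed corrected by the measured relative drift.

    /* ideal_speed: nominal video speed factor for display sync.
     * drift:       change of the A/V difference over the last 'window'
     *              seconds of playback (sign convention is illustrative:
     *              positive = audio fell behind and must speed up). */
    static double actual_audio_speed(double ideal_speed, double drift,
                                     double window)
    {
        return window > 0 ? ideal_speed * (1.0 + drift / window)
                          : ideal_speed;
    }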
|
|
|
|
| |
We can implement it differently and drop a tiny bit of state.
|
|
|
|
|
|
|
|
| |
This is very "illustrative", unlike the video-speed-correction
property, and thus useful. It can also be used to observe scheduling
errors, which are not detected by the core. (These happen due to
rounding errors; possibly not even our fault, but coming from
files with rounded timestamps and so on.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Get rid of get_past_frame_durations(), which was a bit too messy. Add
a past_frames array, which contains the same information in a more
reasonable way. This also means that we can get the exact current and
past frame durations without going through awful stuff. (The main
problem is that vo_pts_history contains future frames as well, which is
needed for frame backstepping etc., but gets in the way here.)
Also disable the automatic disabling of display-sync if the frame
duration changes, and extend the frame durations allowed for display
sync. Allowing arbitrarily high durations would require changing vo.c
to pause and potentially redraw OSD while showing a single frame, so
for now they're still limited.
In an attempt to deal with VFR, calculate the overall speed using the
average FPS. The frame scheduling itself does not use the average FPS,
but the duration of the current frame. This does not work too well,
but provides a good base for further improvements.
Where this commit actually helps a lot is dealing with rounded
timestamps, e.g. if the container framerate is wrong or unknown, or
if the muxer wrote incorrectly rounded timestamps. While the rounding
errors apparently can't be gotten rid of completely in the general case,
this is still much better than e.g. disabling display-sync completely
just because some frame durations go out of bounds.
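A simplified sketch of the VFR handling described above (not the
actual past_frames code): the overall speed correction comes from the
average duration of the recent frames, while per-frame scheduling still
uses the current frame's own duration.

    /* durations[]: durations (seconds) of the last n displayed frames. */
    static double average_fps(const double *durations, int n)
    {
        double total = 0;
        for (int i = 0; i < n; i++)
            total += durations[i];
        return (n > 0 && total > 0) ? n / total : 0;
    }

    /* Speed factor needed so one frame covers a whole number of display
     * refreshes. (The real code refuses corrections far from 1.0.) */
    static double overall_speed_correction(double avg_fps,
                                           double display_fps)
    {
        if (avg_fps <= 0)
            return 1.0;
        double vsyncs = display_fps / avg_fps;    /* refreshes per frame */
        double rounded = (double)(int)(vsyncs + 0.5);
        if (rounded < 1)
            rounded = 1;
        return vsyncs / rounded;   /* 23.976 fps on 24 Hz -> ~1.001 */
    }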
|
|
|
|
|
|
|
|
|
|
| |
Discontinuities (like toggling fullscreen) can cause multiple frames to
be dropped in succession, which sounds very weird. It's better to drop
some video frames instead to compensate for larger desyncs.
We roughly base it on the maximum allowed speed changes (audio change is
"additional" to the video change to account for deviations when playing
at max. video speed change).
|
|
|
|
|
| |
Does what the manpage says. This is a replacement for incrementing the
dropped frame counter (see previous commit).
|
|
|
|
| |
Not very robust at the moment.
|
|
|
|
|
| |
Pretty dumb (and doesn't handle pausing or other discontinuities), but
at least somewhat idiot-proof.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Sigh... After the recent changes, another regression appeared. This
time, the VO window wasn't cleared when changing from video to a non-
video file (such as audio-only with no cover art). Fix this by properly
taking the handle_force_window() bool parameter into account.
Also, the info message could be printed twice, which is harmless but
ugly. So just remove the message.
Also, do some more minor cleanups (like fixing the comment, which was
completely outdated).
|
|
|
|
|
|
|
|
|
|
|
| |
The previous commit was incomplete (and I didn't notice due to a broken
test procedure).
The annoying part is that actually creating the VO was separate; redo
this and merge the code for this into handle_force_window() as well.
This will also make implementing proper reaction to runtime option
changes easier. (Only the part for actually listening to option changes
is missing.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
If this mode is enabled, the player tries to strictly synchronize video
to display refresh. It will adjust playback speed to match the display,
so if you play 23.976 fps video on a 24 Hz screen, playback speed is
increased by approximately 1/1000. Audio will be resampled to keep up
with playback.
This is different from the default sync mode, which will sync video to
audio, with the consequence that video might skip or repeat a frame once
in a while to make video keep up with audio.
This is still unpolished. There are some major problems as well; in
particular, mkv VFR files won't work well. The reason is that Matroska
is terrible and rounds timestamps to milliseconds. This makes it rather
hard to guess the framerate of a section of video that is playing. We
could probably fix this by just accepting jittery timestamps (instead
of explicitly disabling the sync code in this case), but I'm not ready
to accept such a solution yet.
Another issue is that we are extremely reliant on OS video and audio
APIs working in an expected manner, which of course is not too often
the case. Consequently, the new sync mode is a bit fragile.
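The 1/1000 figure above is just the ratio of the two rates; a few
lines confirm the arithmetic (plain math, not mpv code):

    #include <stdio.h>

    int main(void)
    {
        double video_fps  = 24000.0 / 1001.0;     /* 23.976... fps */
        double display_hz = 24.0;
        double speed = display_hz / video_fps;    /* 24*1001/24000 */
        printf("speed factor: %f\n", speed);      /* prints 1.001000 */
        return 0;
    }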
|
|
|
|
|
|
|
|
|
|
| |
For video sync, we want separate playback speed controls for user-
requested speed and the "correction" speed for video timing. Further, we
use this separation to make sure that only a resampler is inserted if
playback speed is changed only for video sync correction.
As of this commit, this is basically inactive code. It's just
preparation for the video sync code (the following commit).
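Schematically (field names made up), the two speeds multiply into the
effective audio speed, and pitch-preserving filtering is only needed
when the user speed itself differs from 1.0:

    #include <stdbool.h>

    struct speed_state {
        double user_speed;     /* the "speed" property set by the user */
        double sync_factor;    /* video-sync correction, close to 1.0 */
    };

    static double effective_audio_speed(const struct speed_state *s)
    {
        return s->user_speed * s->sync_factor;
    }

    /* A small sync correction only needs a resampler; audible speed
     * changes requested by the user go through pitch correction. */
    static bool needs_scaletempo(const struct speed_state *s)
    {
        return s->user_speed != 1.0;
    }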
|