| Commit message | Author | Age |
| |
|
|
|
|
|
|
| |
This is always included in the Xorg development headers. Strictly
speaking it's not necessarily available with other X implementations,
but these are hopefully all dead.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Drop use of the ancient XF86VM, and use the slightly less ancient Xrandr
extension to retrieve the refresh rate. Xrandr has the advantage that it
supports multiple monitors (at least in its modern versions).
For now, we don't attempt any dynamic reconfiguration. We don't request
or listen to Xrandr events, and we don't notify the VO code of changes
in the refresh rate. (The latter works by assuming that X coordinates
map directly to Xrandr coordinates, which is probably wrong with
compositing window managers, at least if they apply complicated
transformations. But I know of no API to handle this.)
It would be nice to drop use of the Xinerama extension too, but
unfortunately, at least one EWMH feature uses Xinerama screen numbers,
and I don't know how those map to Xrandr outputs.
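For reference, a minimal sketch of querying the refresh rate via Xrandr
(not mpv's actual code; it simply reads the mode of the first active
CRTC):

    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    static double refresh_rate(Display *dpy, Window root)
    {
        double fps = 0;
        XRRScreenResources *res = XRRGetScreenResourcesCurrent(dpy, root);
        if (!res)
            return 0;
        for (int i = 0; i < res->ncrtc && fps == 0; i++) {
            XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, res->crtcs[i]);
            if (!crtc)
                continue;
            for (int m = 0; crtc->mode != None && m < res->nmode; m++) {
                XRRModeInfo *mode = &res->modes[m];
                if (mode->id == crtc->mode && mode->hTotal && mode->vTotal) {
                    fps = (double)mode->dotClock
                          / (mode->hTotal * mode->vTotal);
                }
            }
            XRRFreeCrtcInfo(crtc);
        }
        XRRFreeScreenResources(res);
        return fps; /* 0 if nothing usable was found */
    }

Picking the CRTC that actually contains the mpv window (instead of the
first active one) is exactly the multi-monitor part that XF86VM could
not express.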
|
|
|
|
| |
And change the defaults for the other queue options to reduce latency.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
--demuxer-readahead-secs now controls how many seconds of packets the
demuxer should read ahead. This is based on the raw packet timestamps.
It's not always very exact. For example, h264 in Matroska does not
store any linear timestamps (only PTS values, which are going to be
reordered by the decoder), so this heuristic is usually off by several
hundred milliseconds.
The decision whether to read ahead is basically OR-ed with the other
--demuxer-readahead-packets options. Change the manpage descriptions
to subtly convey these semantics.
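A sketch of that OR-ed decision (the names are illustrative, not the
actual demuxer internals): reading ahead continues as long as either
the packet-count limit or the seconds limit still wants more data.

    #include <stdbool.h>

    struct readahead_opts {
        int min_packets;    /* --demuxer-readahead-packets */
        double min_secs;    /* --demuxer-readahead-secs */
    };

    static bool want_readahead(const struct readahead_opts *o,
                               int queued_packets, double queued_secs)
    {
        /* Either condition alone is enough to keep reading ahead. */
        return queued_packets < o->min_packets || queued_secs < o->min_secs;
    }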
|
| |
|
| |
|
|
|
|
|
|
| |
I must have broken it some time ago. The error case dealing with an
unavailable backbuffer was broken, and didn't handle memory management
correctly.
|
|
|
|
|
|
| |
Since the display FPS is currently detected on X11 only (and even there
it's known to be wrong on certain setups), it seems like a good idea to
make this user-configurable.
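The user-facing override is presumably an option along the lines of the
following (the exact option name is an assumption based on this
description):

    mpv --display-fps=60 file.mkv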
|
|
|
|
|
|
| |
This is probably a stupid idea, but it can't be denied that this
actually allows playing video without major desync, even if the video
is too slow.
|
|
|
|
|
|
|
|
|
| |
I'm not sure about the merit, though it does print nice numbers if debug
output is enabled.
Basically, this tries to achieve results similar to the glFinish()
business, but again it entirely depends on the drivers whether this
does anything meaningful, or whether it's actively harmful.
|
| |
|
|
|
|
|
|
| |
It seems that at least on nvidia systems with compositing disabled, we
can get it to block deterministically on the actual vsync event, which
should improve framedropping.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This mostly uses the same idea as with vo_vdpau.c, but much simplified.
On X11, it tries to get the display framerate with XF86VM, and limits
the frequency of new video frames against it. Note that this is an old
extension, and is confirmed not to work correctly with multi-monitor
setups. But we're using it because it was already around (it is also
used by vo_vdpau).
This attempts to predict the next vsync event by using the time of the
last frame and the display FPS. Even if that goes completely wrong,
the results are still relatively good.
On other systems, or if the X11 code doesn't return a display FPS, a
framerate of 1000 is assumed. This is infinite for all practical
purposes, and means that only frames which are definitely too late are
dropped. This probably has worse results, but is still useful.
"--framedrop=yes" is basically replaced with "--framedrop=decoder". The
old framedropping mode is kept around, and should perhaps be improved.
Dropping on the decoder level is still useful if decoding itself is too
slow.
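A rough sketch of the prediction described above (all names are
illustrative; this is not mpv's actual implementation): the next vsync
is extrapolated from the last flip time and the display FPS, and a
frame is dropped if it can no longer be shown on time.

    #include <stdbool.h>

    struct vo_timing {
        double last_flip;    /* realtime of the last displayed frame */
        double display_fps;  /* detected via X11, or 1000 if unknown */
    };

    static bool should_drop(const struct vo_timing *t, double now,
                            double frame_display_time)
    {
        double next_vsync = t->last_flip + 1.0 / t->display_fps;
        if (next_vsync < now)          /* prediction already in the past */
            next_vsync = now;
        /* The frame was supposed to be on screen before the earliest
           point at which we could actually display it -> drop it. */
        return frame_display_time < next_vsync;
    }

With the 1000 FPS fallback this degenerates to dropping only frames
that are already late, which matches the behavior described above.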
|
|
|
|
|
| |
Originally, I probably had plans to allow NULL images to handle things
like the last frame case, but that idea was dropped later.
|
|
|
|
|
|
|
|
|
| |
Apparently users prefer this behavior.
It was used for subtitles too, so move the code to calculate the video
offset into a separate function. Seeking also needs to be fixed.
Fixes #1018.
|
|
|
|
|
|
|
|
|
|
| |
The OSD is marked as changed, but the core isn't notified and thus
doesn't wake up immediately. (Possibly the OSD code should wake up the
core instead, but maybe that would be overkill.)
Observed when using "mp.use_suspend = false" in the OSC, pausing, and
moving the mouse pointer out of the window. The last part of the fade
remained visible for longer than intended.
|
|
|
|
|
|
|
|
|
|
|
|
| |
- Code reorganized to make layouts exchangeable.
- An alternative test layout can be tested with layout=slimbox in the
  OSC config (see the example below).
- Timers are now used to properly animate the fade out when the player
  is paused.
- Duplicate seeks are discarded again.
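For example (the config file path is an assumption about the usual
per-script config location; the layout name is taken from the text
above):

    # ~/.config/mpv/lua-settings/osc.conf
    layout=slimbox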
|
|
|
|
| |
See additions to options.rst.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
sub_reset() was called on cycling subtitle tracks and on seeking. Since
we don't want subtitles to disappear on cycling, sd_lavc.c didn't clear
its internal subtitle queue on reset, which meant that seeking with PGS
subtitles could leave the subtitle on screen (PGS subtitles usually
don't have a duration set).
Call it only on seeking, so we can also strictly clear the subtitle
queue in sd_lavc.
(This can still go very wrong if you disable a subtitle, seek, and
enable it again - for example, if used with a libavformat version that
uses "SSA" style demuxed ASS subtitle packets. That shouldn't happen
with newer libavformat versions, and the user can "correct" it anyway
by executing a seek while the subtitle is selected.)
|
|
|
|
|
|
|
|
| |
Until recently, vo_opengl could be accessed from a single thread only,
due to the OpenGL API context being thread-specific. This issue doesn't
exist anymore, because VOs run on their own thread. This means we can
simply lock/unlock the playloop instead of doing something complicated
to get the playloop thread to execute our code.
|
|
|
|
|
| |
I'd like to enable this by default, but unfortunately the OSC seems to
have some problems with it.
|
|
|
|
|
|
|
| |
It's not true anymore that the core will stop replying for 50ms
(waiting for video) without calling this function. Simplify the
documentation accordingly. Accessing properties that go through
the VO still has this problem, though.
|
| |
|
| |
|
|
|
|
|
| |
This Libav-invented API is of course completely different from the
FFmpeg one. (The fun part is that I approved of both.)
|
|
|
|
|
|
| |
The previous commit made the completion script always return non-zero, even when
a match is found. This explicitly sets the return value to zero whenever a match
is found but defaults to non-zero in case nothing is matched.
|
|
|
|
|
|
|
|
| |
Returning a non-zero value signals to the zsh completion system that no matches
were added by the script so that it can try the user-defined matchers (e.g.
those defined with matcher-list).
Fixes #1008.
|
|
|
|
|
|
|
|
| |
Just always load the theme. It gets freed properly and doesn't bother anyone.
Fixes #1012.
CC: @mpv-player/stable
|
|
|
|
| |
Don't print PTS warnings by skipping the normal video path.
|
|
|
|
|
|
|
| |
This ran adjust_sync() on every playloop iteration, instead of once per
newly decoded frame. It seems this was idempotent in the common case,
but the code was originally designed to be run once only, so restore
that.
|
|
|
|
| |
No functional changes.
|
| |
|
|
|
|
|
| |
These cases were probably confusing. Exit early, which makes it much
clearer what's going on. Should not change anything functionally.
|
|
|
|
|
| |
No changes in functionality, other than being slightly more correct at
stream EOF.
|
|
|
|
|
|
| |
Fixes #1009.
CC: @mpv-player/stable
|
|
|
|
|
| |
Otherwise vdp_video_mixer_destroy() would later fail when called on an
invalid video mixer handle. With the Mesa r600 VDPAU driver, this would
cause a segfault.
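A minimal sketch of the kind of guard this implies (the surrounding
struct is illustrative; the function pointer type is the standard VDPAU
one obtained via VdpGetProcAddress):

    #include <vdpau/vdpau.h>

    struct mixer_state {
        VdpVideoMixerDestroy *video_mixer_destroy;
        VdpVideoMixer video_mixer;  /* VDP_INVALID_HANDLE if not created */
    };

    static void destroy_mixer(struct mixer_state *s)
    {
        /* Never hand an invalid handle to the driver; e.g. the Mesa
           r600 VDPAU driver segfaults on it. */
        if (s->video_mixer != VDP_INVALID_HANDLE)
            s->video_mixer_destroy(s->video_mixer);
        s->video_mixer = VDP_INVALID_HANDLE;
    }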
|
|
|
|
|
|
|
| |
Since the 'syms' tool is shipped in waf's extras, when using system waf the
default tool overrides our own. Force our syms tool by providing the tooldir.
Fixes #1006.
|
|
|
|
|
|
| |
Fixes #1007.
CC: @mpv-player/stable
|
|
|
|
|
|
|
|
|
|
| |
Xlib is not thread-safe. Or actually it is, but it's an incomprehensible
hack that was added later, and which needs to be activated manually
(this makes no sense). And it appears that the vdpau code accesses X
from the decoder thread if GLX interop is used (and not in any other
situation - this doesn't make too much sense either).
So, just call the magic function that enables Xlib thread-safety.
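The function in question is XInitThreads(), which has to be the very
first Xlib call made by the process:

    #include <X11/Xlib.h>

    int main(void)
    {
        /* Must happen before any other Xlib call; it enables Xlib's
           internal locking so that e.g. the vdpau/GLX interop path can
           touch X from the decoder thread without corrupting state. */
        if (!XInitThreads())
            return 1;
        Display *dpy = XOpenDisplay(NULL);
        /* ... rest of the program ... */
        if (dpy)
            XCloseDisplay(dpy);
        return 0;
    }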
|
|
|
|
| |
It seems only stereo PCM should be passed through.
|
|
|
|
| |
Also, imitate the Qt example somewhat.
|
| |
|
|
|
|
|
|
|
| |
No reason to use less.
Since the name "default" is misleading now, replace it with "auto"
(still recognize the old name).
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The previous commit broke these things, and they are fixed separately
in this commit in order to reduce the volume of changes.
Move the image queue from the VO to the playback core. The image queue
is a remnant of the old way vdpau was implemented, and increasingly
became an artifact. In the end, it did only one thing: computing the
duration of the current frame. This was done by taking the
PTS difference between the current and the future frame. We keep this,
but by moving it out of the VO, we don't have to special-case format
changes anymore. This simplifies the code a lot.
Since we need the queue to compute the duration only, a queue size
larger than 2 makes no sense, and we can hardcode that.
Also change how the last frame is handled. The last frame is a bit of a
problem, because video timing works by showing one frame after another,
which makes it a special case. Make the VO provide a function to notify
us when the frame is done, instead. The frame duration is used for that.
This is not perfect. For example, changing playback speed during the
last frame doesn't update the end time. Pausing will not stop the clock
that times the last frame. But I don't think this matters for such a
corner case.
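The duration computation itself reduces to a PTS difference over a
two-entry queue, roughly (field names are illustrative):

    struct queued_frame {
        double pts;
        int valid;
    };

    /* q[0] is the current frame, q[1] the next one (queue size is 2). */
    static double frame_duration(const struct queued_frame q[2])
    {
        if (!q[0].valid || !q[1].valid)
            return -1;   /* unknown, e.g. for the last frame */
        return q[1].pts - q[0].pts;
    }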
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The VO is run inside its own thread. It also does most of video timing.
The playloop hands the image data and a realtime timestamp to the VO,
and the VO does the rest.
In particular, this allows the playloop to do other things, instead of
blocking for video redraw. But if anything accesses the VO during video
timing, it will block.
This also fixes vo_sdl.c event handling; but that is only a side-effect,
since reimplementing the broken way would require more effort.
Also drop --softsleep. In theory, this option helps if the kernel's
sleeping mechanism is too inaccurate for video timing. In practice, I
haven't ever encountered a situation where it helps, and it just burns
CPU cycles. On the other hand it's probably actively harmful, because
it prevents the libavcodec decoder threads from doing real work.
Side note:
Originally, I intended that multiple frames could be queued to the VO.
But this is not done, due to problems with the OSD and certain other
features.
OSD in particular is simply designed in a way that it can be neither
timed nor copied, so you do have to render it into the video frame
before you can draw the next frame. (Subtitles have no such restriction.
sd_lavc was even updated to fix this.) It seems the right solution to
queuing multiple VO frames is rendering on VO-backed framebuffers, like
vo_vdpau.c does. This requires VO driver support, and is out of scope
of this commit.
As a consequence, the VO has a queue size of 1. The existing video
queue is just needed to compute frame duration, and will be moved out
in the next commit.
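A bare-bones sketch of the handoff described above (all names are
illustrative, not mpv's actual VO API): the playloop passes at most one
image plus an absolute display time to the VO thread and returns
immediately.

    #include <pthread.h>

    struct vo_state {
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        void *queued_image;   /* at most one frame in flight (queue size 1) */
        double display_time;  /* absolute realtime at which to show it */
    };

    /* Called from the playloop; does not block on video timing. */
    static void vo_queue_frame(struct vo_state *vo, void *image, double when)
    {
        pthread_mutex_lock(&vo->lock);
        vo->queued_image = image;
        vo->display_time = when;
        pthread_cond_signal(&vo->wakeup);  /* VO thread sleeps until 'when' */
        pthread_mutex_unlock(&vo->lock);
    }

Anything else that needs the VO while it is timing the current frame
has to take the same lock, which is the blocking mentioned above.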
|
|
|
|
|
|
|
| |
Also add instructions to release-policy.md, since this can be easily
forgotten.
CC: @mpv-player/stable
|
|
|
|
|
|
| |
Requested on: https://github.com/mpv-player/mpv/commit/90ec3334174e80c16f00971886223a3afabc1aca#commitcomment-7331673
Might remove or remap them again later.
|
|
|
|
|
| |
Or at least this is the intention. It's a bit hard to tell which
information is needed, and which is not.
|
|
|
|
| |
This makes a certain corner case simpler at a later point.
|