| Commit message | Author | Age |
... | |
| |
See #5670.
| |
Hardware decoding things often need access to additional handles from
the windowing system, such as the X11 or Wayland display when using
vaapi. The opengl-cb API had nothing dedicated to this, and used the weird
GL_MP_MPGetNativeDisplay GL extension (which was mpv specific and not
officially registered with OpenGL).
This was awkward, and a pain due to having to emulate GL context
behavior (like needing a TLS variable to store context for the pseudo GL
extension function). In addition (and not inherently due to this), we
could pass only one resource from mpv builtin context backends to
hwdecs. It was also all GL specific.
Replace this with a newer mechanism. It works for all RA backends, not
just GL. The API user can explicitly pass the objects at init time via
mpv_render_context_create(). Multiple resources are naturally possible.
The API uses MPV_RENDER_PARAM_* defines, but internally we use strings.
This is done for 2 reasons: 1. trying to leave libmpv and internal
mechanisms decoupled, 2. not having to add public API for some of the
internal resource types (especially D3D/GL interop stuff).
To remain sane, drop support for obscure half-working opengl-cb things,
like the DRM interop (was missing necessary things), the RPI window
thing (nobody used it), and obscure D3D interop things (not needed with
ANGLE, others were undocumented). In order not to break ABI and the C
API, we don't remove the associated structs from opengl_cb.h.
The parts which are still needed (in particular DRM interop) need to be
ported to the render API.
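For illustration, passing a native display handle at creation time might
look roughly like this (a minimal sketch; mpv, gl_get_proc_address and
x11_display are assumed to come from the embedding application):

    #include <mpv/client.h>
    #include <mpv/render_gl.h>

    mpv_opengl_init_params gl_init = {
        .get_proc_address = gl_get_proc_address,
    };
    mpv_render_param params[] = {
        {MPV_RENDER_PARAM_API_TYPE, MPV_RENDER_API_TYPE_OPENGL},
        {MPV_RENDER_PARAM_OPENGL_INIT_PARAMS, &gl_init},
        {MPV_RENDER_PARAM_X11_DISPLAY, x11_display},  // e.g. needed for vaapi
        {0}
    };
    mpv_render_context *rctx = NULL;
    int err = mpv_render_context_create(&rctx, mpv, params);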
| |
Fixes #5640.
| |
Probably mostly useful for the libmpv render API.
| |
It's a WTF that we have something this specific in the API. It could be
argued that we should provide helpers for other language and GUI toolkit
combinations. Obviously that's not going to scale, and it's somewhat
likely that it will bitrot. The rest is said in the API changelog.
| |
Before this change, mpv_wait_event() could inconsistently return
multiple MPV_EVENT_SHUTDOWN events to a single mpv_handle, up to the
point of spamming the event queue under certain circumstances. Change
this and just send it exactly once to each mpv_handle.
Some client API users might have weird requirements about destroying
their state asynchronously (and not reacting immediately to the SHUTDOWN
event). This change will help a bit to make this less weird and
surprising.
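A minimal client loop relying on this could look like the following
sketch (h is the client's mpv_handle; error handling omitted):

    while (1) {
        mpv_event *ev = mpv_wait_event(h, 10000);  // MPV_EVENT_NONE on timeout
        if (ev->event_id == MPV_EVENT_SHUTDOWN)
            break;  // now guaranteed to arrive exactly once per handle
        // ... handle other events ...
    }
    mpv_destroy(h);  // release this handle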
| |
Since this has clearer semantics now, the old name is just clunky and
confusing.
| |
This changes how mpv_terminate_destroy() and mpv_detach_destroy()
behave. The doxygen in client.h tries to point out the differences. The
goal is to make this more useful to the API user (making it behave like
refcounting).
This will be refined in follow up commits.
Initialization is unfortunately closely tied to termination, so that
changes as well. This also removes earlier hacks that make sure that
some parts of FFmpeg initialization are run in the playback thread
(instead of the user's thread). This does not matter with standard
FFmpeg, and I have no reason to care about this anymore.
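Roughly, the intended usage pattern is the following sketch (the exact
guarantees are in the client.h doxygen):

    mpv_handle *h = mpv_create();
    mpv_initialize(h);
    mpv_handle *c = mpv_create_client(h, "helper");  // another reference to the core
    // ...
    mpv_detach_destroy(c);       // drops this handle; the core keeps running
    mpv_terminate_destroy(h);    // requests termination and drops the last handle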
| |
This adds key bindings for some semi-popular features. It also tries to
clean up some old bindings. For example, w/e for panscan is now changed
to w/W. In all cases, the old bindings are still kept and work, though.
Part of an ongoing attempt to clean up the default key bindings.
See #973 for some context.
| |
The playback start logic explicitly waits until the first frame has been
displayed. Usually this will introduce a wait of 1 vsync. For normal
playback this doesn't matter, but with respect to low latency needs,
this only leads to additional data getting queued up in the demuxer or
network buffers.
Another thing is that the timing logic decodes 1 frame ahead (= 1 frame
extra latency) to determine the exact duration of a frame.
To be fair, there doesn't really seem to be a hard reason why this is
needed. With the current code, enabling the option does lead to A/V
desync sometimes (if the demuxer FPS is too inaccurate), and also frame
drops at playback start in some situations. But this all seems to be
avoidable, if the timing logic were to be rewritten completely, which
should probably happen in the future. Thus the new option comes with the
warning that it can be removed any time. This is also why the option has
"hack" in the name.
| |
This is all documented elsewhere in the manpage, but hard to find from
here.
| |
Well I guess it doesn't help that much.
Also add some stuff that might help to the manpage.
The fundamental problem with some "live" sources (e.g. x11grab) is
actually that the player gets behind initially, and never thinks it has
to catch up. This is also why --untimed can help.
| |
Another attempt to try to make it behave in certain situations.
| |
For example af_loudnorm is a known filter with this behavior.
| |
The purpose of the new API is to make it usable with other APIs than
OpenGL, especially D3D11 and Vulkan. In theory it's now possible to
support other vo_gpu backends, as well as backends that don't use the
vo_gpu code at all.
This also aims to get rid of the dumb mpv_get_sub_api() function. The
life cycle of the new mpv_render_context is a bit different from
mpv_opengl_cb_context, and you explicitly create/destroy the new
context, instead of calling init/uninit on an object returned by
mpv_get_sub_api().
In order to make the render API generic, it's annoyingly EGL style, and
requires you to pass in API-specific objects to generic functions. This
is to avoid explicit objects like the internal ra API has, because that
sounds more complicated and annoying for an API that's supposed to never
change.
The opengl_cb API will continue to exist for a bit longer, but
internally there are already a few tradeoffs, like reduced
thread-safety.
Mostly untested. Seems to work fine with mpc-qt.
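With the OpenGL backend, drawing a frame then looks roughly like this
(sketch; context creation, the update callback and buffer swapping are
omitted, and width/height come from the embedding application):

    mpv_opengl_fbo fbo = { .fbo = 0, .w = width, .h = height };
    int flip_y = 1;
    mpv_render_param rp[] = {
        {MPV_RENDER_PARAM_OPENGL_FBO, &fbo},
        {MPV_RENDER_PARAM_FLIP_Y, &flip_y},
        {0}
    };
    mpv_render_context_render(rctx, rp);  // render the current frame to the FBO
    // ... later, on shutdown:
    mpv_render_context_free(rctx);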
| |
The title bar is now within the window bounds instead of outside, the
same as in QuickTime Player. It supports several standard styles, two
dark and two light ones. Additionally, we now have properly rounded
corners, and the borderless window also has the proper window shadow.
Also make the earliest supported macOS version 10.10.
Fixes #4789, #3944
| |
This introduces the option --drm-format (currently used only by
context_drm_egl, vo_drm implementation is pending) which allows you to
pick between an xrgb8888 and an xrgb2101010 visual for --gpu-context=drm.
Requires a recent mesa (18.0.0_rc4 or later) to work.
This also fixes a bug when using --gpu-context=drm on a 30bpp-enabled
mesa (allow_rgb10_configs set to true). Previously it would've set up
an XRGB8888 format at the DRM/GBM level, while a 30bpp EGLConfig would
be picked, resulting in a garbled image.
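For example, requesting the 10 bit visual (assuming a capable driver and
display) would look like:

    mpv --gpu-context=drm --drm-format=xrgb2101010 somefile.mkv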
| |
Do this because retrying reads on higher levels (like the demuxer)
usually causes tons of problems. A hack like this is simpler and could
allow removing some of the higher-level retry behavior.
This works by trying to detect whether the file is being appended to. If
we reach EOF, check whether the file size changed compared to the initial
value. If it did, it means the file was appended to at least once, and we
set the p->appending flag. If that flag is set, we simply retry reading
more data every time we encounter EOF. The only way to do this is
polling, so we poll at most 10 times, waiting 200ms before each retry.
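In pseudo-C, the retry behavior amounts to roughly the following (all
names are illustrative and do not match the actual stream code):

    // Called when a read returns EOF. Returns true if the caller should
    // retry the read.
    static bool retry_if_appending(struct priv *p, int64_t current_size)
    {
        if (current_size != p->orig_size)
            p->appending = true;         // the file grew at least once
        if (!p->appending || p->eof_retries >= 10)
            return false;                // give up: treat it as real EOF
        p->eof_retries++;
        sleep_ms(200);                   // wait, then let the caller read again
        return true;
    }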
| |
This solves a number of problems simultaneously:
1. When outputting HLG, this allows tuning the OOTF based on the display
characteristics.
2. When outputting PQ or other HDR curves, this allows soft-limiting the
output brightness using the tone mapping algorithm.
3. When outputting SDR, this allows HDR-in-SDR style output, by
controlling the output brightness directly.
Closes #5521
| |
Usable for uniquely identifying mpv instances from
subprocesses, controlling mpv with AppleScript, ...
Adds a new mp_getpid() wrapper for cross-platform reasons.
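Assuming the property is exposed under the name "pid", a client could
read it like this sketch:

    int64_t pid = 0;
    mpv_get_property(h, "pid", MPV_FORMAT_INT64, &pid);  // "pid" name assumed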
| |
This should be helpful for the new OSX Cocoa backend, which uses
opengl-cb internally. Since it comes with a behavior change that could
possibly interfere with libmpv/opengl_cb users, we mark it as an explicit
API change.
| |
This switches the default away from "bob" to the best algorithm reported
as supported by the driver. This is convenient for users, and there is
no reason to use something worse by default.
Untested.
| |
This doesn't work anymore.
| |
We sure as hell won't enable hardware decoding by default, but we can
make it more accessible with a key binding.
| |
Before this, we made deinterlacing dependent on the video codec metadata
(AVFrame.interlaced_frame for libavcodec). So even if --deinterlace=yes
was set, we skipped deinterlacing if the flag wasn't set. This is very
unreliable and there are many streams with flags incorrectly set.
The potential problem is that this might upset people who always enabled
deinterlacing and hoped it worked. But it's likely these people were
screwed by this setting anyway. The new behavior is less tricky and
easier to understand, and thus preferable. Maybe one day we could
introduce a --deinterlace=auto, which does the right thing, but of
course this would be hard to implement (especially with hwdec).
Fixes #5219.
| |
This is meant to replace the old and not properly working vo_gpu/opengl
cocoa backend in the future. The problems are various shortcomings of
Apple's OpenGL implementation and buggy behaviour in certain
circumstances that couldn't be properly worked around. There are also
certain regressions on newer macOS versions from 10.11 onwards:
- awful OpenGL performance with a non-layer-backed context
- a huge amount of dropped frames with an early context flush
- flickering of system elements like the dock or volume indicator
- double buffering not working properly with a non-layer-backed context
- bad performance in fullscreen because of system optimisations
All these problems were caused by using a normal OpenGL context, which
seems somewhat abandoned by Apple, and are fixed by using a layer-backed
OpenGL context instead. Problems that couldn't be fixed could be
properly worked around.
This has all the features our old backend has, except wid embedding, the
possibility to disable the automatic GPU switching, and taking
screenshots of the window content. The first was deemed unnecessary by
me for now, since I just use the libmpv API that others can use anyway.
The second is technically not possible at the moment, because we have to
pre-allocate our OpenGL context before the config is read, so we can't
get the needed property. The third one is a bit tricky because of
deadlocking and the need to stay in sync; hopefully I can work around
that in the future.
This also has at least one additional feature, or bit of eye candy: a
properly working fullscreen animation with the native fullscreen. Also,
since this is a direct port of the parts of the old backend that could
be used, though with adaptations and improvements, it looks a lot
cleaner and is easier to understand.
Some credit goes to @pigoz for the initial Swift build support, which I
could improve upon.
Fixes: #5478, #5393, #5152, #5151, #4615, #4476, #3978, #3746, #3739,
#2392, #2217
| |
Early flushing only caused problems on macOS, which include:
- performance problems and a huge amount of dropped frames
- problems with playing back video files with an fps close to the
display refresh rate
- rendering at twice the rate of the video fps
- the display refresh rate not being detected properly
We always deactivate any early flush for macOS to fix these problems.
| |
Disable by default.
This feature was added in 7eb342757, which allowed stream selection
at runtime. The problem with this at the moment is that FFmpeg will try
to demux the first packet of every track, leading to a noticeable delay
when opening the URL.
This option can be changed to be enabled by default, or removed, once
the HLS/DASH demuxers are improved upstream.
| |
Using the GL renderer for color conversion will make sure screenshots
will use the same conversion as normal video rendering. It can do this
for all types of screenshots.
The logic when to write 16 bit PNGs changes. To approximate the old
behavior, we decide by looking whether the source video format has more
than 8 bits per component. We apply this logic even for window
screenshots. Also, 16 bit PNGs now always include an unused alpha
channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
RGB064. RGB48 has 3 components and no padding (6 bytes per pixel), which
GPUs usually don't support for rendering, so we have to use RGBA64,
which forces an alpha channel.
Will break for users who use --target-trc and similar options.
I considered creating a new gl_video context, but it could double GPU
memory use, so I didn't.
This uses FBOs instead of glGetTexImage(), because that increases the
chance it could work on GLES (e.g. ANGLE). Untested. No support for the
Vulkan and D3D11 backends yet.
Fixes #5498. Also fixes #5240, because the code for reading back is not
used with the new code path.
| |
The current peak detection algorithm was very buggy (which contributed
to the excessive cross-frame flicker without long normalization) and
also didn't take into account the frame average brightness level.
The new algorithm both takes into account frame average brightness (in
addition to peak brightness), and also computes the values in a more
stable/correct way. (The old path was basically undefined behavior)
In addition to improving the algorithm, we also switch to hable tone
mapping by default, and try to enable peak computation automatically
whenever possible (compute shaders + SSBOs supported). We also make the
desaturation milder, after extensive testing during libplacebo
development.
I also had to compensate a bit for the representational differences
between mpv and libplacebo (libplacebo treats 1.0 as the reference peak,
but mpv treats it as the nominal peak), but it shouldn't have caused any
problems.
This is still not quite the same as libplacebo, since libplacebo also
allows tagging the desired scene average brightness on the output, and
it also supports reading the scene average brightness from static
metadata (MaxFALL) where available. But those changes are a bit more
involved. It's possible we could also read this from metadata in the
future, but we have problems communicating with AVFrames as it is and I
don't want to touch the mpv colorimetry structs for the time being.
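In terms of options, this presumably corresponds to --tone-mapping=hable
becoming the default and --hdr-compute-peak being enabled automatically
where compute shaders and SSBOs are available.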
| |
Similar to the previous commit, and for the same reasons. Unlike with
af_scaletempo, resampling does not have a natural frame size, so we set
an arbitrary size limit on output frames. We add a new option to control
this size, although I'm not sure whether anyone will use it, so mark it
for testing only.
Note that we go through some effort to avoid buffering data in
libswresample itself. One reason is that we might have to reinitialize
the resampler completely when changing speed, which drops the buffered
data. Another is that I'm not sure whether the resampler will do the
right thing when applying dynamic speed changes.
| |
MPlayer used this to distinguish multiple decoder wrappers (such as
libavcodec vs. binary codec loader vs. builtin decoders). It lost
meaning in mpv as non-libavcodec things were dropped. Now it doesn't
serve any purpose anymore.
Parsing was removed quite a while ago, and the recent filter change
removed any use of the internal family field. Get rid of it.
| |
In particular, mention deprecated things.
| |
FFmpeg only supports HTTP proxies and ignores the proxy if the resulting
URL is HTTPS. Also, no SOCKS support.
Use it like `--ytdl-raw-options=proxy=[http://127.0.0.1:3128]` so that
the colons don't confuse mpv.
You need to pass it as an option because youtube-dl doesn't give us the
proxy.
Or just set the `http_proxy` environment variable, as recommended before.
Added an example using -append, which doesn't need escaping.
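That form is presumably --ytdl-raw-options-append=proxy=http://127.0.0.1:3128
(a single key=value pair, so the colons don't need to be bracketed).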
| |
Helpful especially to test spdif fallback and so on.
| |
lavfi.c is not necessary anymore, because f_lavfi.c (which was actually
converted from it) can be used now.
| |
Use the new filtering code for audio too.
| |
Get rid of the old vf.c code. Replace it with a generic filtering
framework, which can potentially handle more than just --vf. At least
reimplementing --af with this code is planned.
This changes some --vf semantics (including runtime behavior and the
"vf" command). The most important ones are listed in interface-changes.
vf_convert.c is renamed to f_swscale.c. It is now an internal filter
that can not be inserted by the user manually.
f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
once --lavfi-complex is reimplemented on top of f_lavfi.c. (which is
conceptually easy, but a big mess due to the data flow changes).
The existing filters are all changed heavily. The data flow of the new
filter framework is different. Especially EOF handling changes - EOF is
now a "frame" rather than a state, and must be passed through exactly
once.
Another major thing is that all filters must support dynamic format
changes. The filter reconfig() function goes away. (This sounds complex,
but since all filters need to handle EOF draining anyway, they can use
the same code, and it removes the mess with reconfig() having to predict
the output format, which completely breaks with libavfilter anyway.)
In addition, there is no automatic format negotiation or conversion.
libavfilter's primitive and insufficient API simply doesn't allow us to
do this in a reasonable way. Instead, filters can use f_autoconvert as
sub-filter, and tell it which formats they support. This filter will in
turn add actual conversion filters, such as f_swscale, to perform
necessary format changes.
vf_vapoursynth.c uses the same basic principle of operation as before,
but with worryingly different details in data flow. Still appears to
work.
The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c) are
heavily changed. Fortunately, they all used refqueue.c, which is for
sharing the data flow logic (especially for managing future/past
surfaces and such). It turns out it can be used to factor out most of
the data flow. Some of these filters accepted software input. Instead of
having ad-hoc upload code in each filter, surface upload is now
delegated to f_autoconvert, which can use f_hwupload to perform this.
Exporting VO capabilities is still a big mess (mp_stream_info stuff).
The D3D11 code drops the redundant image formats, and all code uses the
hw_subfmt (sw_format in FFmpeg) instead. Although that too seems to be a
big mess for now.
f_async_queue is unused.
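As a conceptual illustration of the new data flow (names are made up and
do not match the real f_* API), a filter's processing step looks roughly
like:

    // Sketch only: EOF is a frame, and format changes are handled dynamically.
    void example_filter_process(struct ex_filter *f, struct ex_frame frame)
    {
        if (frame.type == EX_FRAME_EOF) {
            drain_buffered_output(f);     // same drain path as for format changes
            output_frame(f, frame);       // forward the EOF frame exactly once
            return;
        }
        if (!format_equals(frame.fmt, f->cur_fmt))
            reinit_filter(f, frame.fmt);  // no reconfig(): react to the new format
        output_frame(f, do_filter(f, frame));
    }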
| |
Restores behaviour prior to aef2ed5dc13e37dec0670c451b4369b151d5c65f.
That change was apparently unpopular. However, given the amount of
complaining over how hard it is to change the defaults by rebinding every
key, I think the extra option introduced by this commit is justified.
Technically not all behaviour is restored, because now --no-osd-bar will
not instead display the msg text on seek. I think that feature was a
little weird and is now easy enough to remedy with the --osd-on-seek
option.
| |
This reverts commit 9812e276aa1bb0bddeb73677aa9e9f87e73cd930.
This was apparently unpopular. I still think the pause OSD should be the
same as seek even if it's not visible by default, but it seems that
whether to display a given property change is currently conflated with
what to display.
The reverted behaviour can be restored by adding something like the
following to input.conf:
SPACE cycle pause; show_progress
| |
Not much we can do, too hard to work around.
Fixes #3361.
| |
Requested. See manpage additions.
The main reason why this goes through the trouble to keep the
action/operation parameter separate is so that we don't expose some
option parser implementation details to the command (although that is a
relatively weak reason), and also to make it more different from the
"set" command, which can't support this type of option as it goes
through the property layer.
Fixes #5435.
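Assuming the command is called change-list and takes (option name,
operation, value), an input.conf binding could look like:

    C change-list glsl-shaders append "~~/shaders/sharpen.glsl"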
| |
And use it for 2 demuxer options. It could be used for more options
later. (Though the --cache options cannot use this, because they use KB
as the base unit.)
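The two demuxer options are presumably --demuxer-max-bytes and
--demuxer-max-back-bytes, which can then be set with suffixes such as
--demuxer-max-bytes=512MiB.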
| |
ISO files have been supported by bd:// for a while.