| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The main change is with video/hwdec.h. mp_hwdec_info is made opaque (and
renamed to mp_hwdec_devices). Its accessors are mainly thread-safe (or
documented where not), which makes the whole thing saner and cleaner. In
particular, thread-safety rules become less subtle and more obvious.
The new internal API makes it easier to support multiple OpenGL interop
backends. (Although this is not done yet, and it's not clear whether it
ever will.)
This also removes all the API-specific fields from mp_hwdec_ctx and
replaces them with a "ctx" field. For d3d in particular, we drop the
mp_d3d_ctx struct completely, and pass the interfaces directly.
Remove the emulation checks from vaapi.c and vdpau.c; they are
pointless, and the checks that matter are done on the VO layer.
The d3d hardware decoders might slightly change behavior: dxva2-copy
will not use the VO device anymore if the VO supports proper interop.
This pretty much assumes that in such cases the VO will not use any
form of exclusive mode, which makes using the VO device in copy mode
unnecessary.
This is a big refactor. Some things may be untested and could be broken.
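A minimal sketch of the opaque-handle-plus-accessors pattern this describes
(names follow the commit, but treat the exact fields and signatures as
assumptions rather than mpv's actual API):

    struct mp_hwdec_devices;                  /* opaque to all users */

    struct mp_hwdec_ctx {
        const char *driver_name;    /* e.g. "vaapi", "vdpau", "d3d11va" */
        void *ctx;                  /* API-specific handle (VADisplay, ...) */
    };

    /* Accessors are the only way in; they can take an internal lock,
     * which keeps the thread-safety rules in one place instead of in
     * every caller. */
    void hwdec_devices_add(struct mp_hwdec_devices *devs,
                           struct mp_hwdec_ctx *ctx);
    struct mp_hwdec_ctx *hwdec_devices_get(struct mp_hwdec_devices *devs,
                                           const char *driver_name);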
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
This parameter has been unused for years (the last flag was removed in
commit d658b115). Get rid of it.
This affects the general VO API, as well as the vo_opengl backend API,
so it touches a lot of files.
The VOFLAGs are still used to control OpenGL context creation, so move
them to the OpenGL backend code.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This gets rid of an old hack, VOFLAG_HIDDEN. Although handling of it has
been sane for a while, it used to cause much pain, and is still
unintuitive and weird even today.
The main reason for this hack is that OpenGL selects a X11 Visual for
you, and you're supposed to use this Visual when creating the X window
for the OpenGL context. This means the X window can't be created early
in the common X11 init code; the OpenGL code needs to do something
before that. API-wise you need separate functions for X11 init and X11
window creation. The VOFLAG_HIDDEN hack conflated window creation and
the entrypoint for resizing on video resolution change into one
function, vo_x11_config_vo_window(). This required all platform backends
to handle this flag, even if they didn't need this mechanism.
Wayland still uses this for minor reasons (alpha support?), so the
wayland backend must be changed before the flag can be entirely removed.
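For illustration, a minimal GLX/Xlib sketch (not mpv code, error handling
omitted) of why the window can't be created before the OpenGL backend runs:

    #include <X11/Xlib.h>
    #include <GL/glx.h>

    static Window create_gl_window(Display *dpy, int w, int h)
    {
        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

        XSetWindowAttributes wa = {0};
        wa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                      vi->visual, AllocNone);

        /* The window must use the Visual GLX selected, so this step can
         * happen only after the GL code has made its choice. */
        return XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, w, h, 0,
                             vi->depth, InputOutput, vi->visual,
                             CWColormap, &wa);
    }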
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Drop libva versions below 0.34.0. These are ancient, so I don't care.
Drop the vo_vaapi deinterlacer as well. With 0.34.0, VPP is always
available, and deinterlacing is done with vf_vavpp.
The vaCreateSurfaces() function changed its signature - it actually did
so in 0.34.0 or so, and <va/va_compat.h> defined a macro to make it use
the old signature.
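A sketch assuming the post-0.34 signature (verify against <va/va.h>):

    #include <va/va.h>

    static VASurfaceID alloc_surface(VADisplay dpy, int w, int h)
    {
        VASurfaceID surface = VA_INVALID_ID;
        VAStatus st = vaCreateSurfaces(dpy, VA_RT_FORMAT_YUV420, w, h,
                                       &surface, 1,
                                       NULL, 0); /* no surface attributes */
        return st == VA_STATUS_SUCCESS ? surface : VA_INVALID_ID;
    }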
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When using --hwdec=auto, about half of all systems will print:
"[vdpau] Error when calling vdp_device_create_x11: 1"
This happens because mpv will usually be linked against both vdpau and
vaapi libs, but the drivers are not necessarily available. Then trying
to load a driver will fail. This is a normal part of probing, but the
error messages were printed anyway. Silence them by explicitly
distinguishing probing.
This pretty much goes through all the layers. We actually consider
loading hw backends for vo_opengl always "auto probed", even if a hw
backend is explicitly requested. In this case vd_lavc will print a
warning message anyway (adjust this message a bit).
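Roughly, the distinction boils down to choosing a log level; a hypothetical
sketch (the mp_msg/MSGL_* usage approximates mpv's logging API and is not
taken from the actual change):

    #include <stdbool.h>
    #include "common/msg.h"   /* assumed include path */

    static void log_load_failure(struct mp_log *log, bool probing,
                                 const char *backend)
    {
        /* During --hwdec=auto probing a missing driver is expected, so
         * only an explicit request should be loud about the failure. */
        mp_msg(log, probing ? MSGL_V : MSGL_ERR,
               "could not load hwdec backend '%s'\n", backend);
    }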
|
|
|
|
| |
Signed-off-by: wm4 <wm4@nowhere>
|
|
|
|
|
|
|
|
| |
Rarely used and essentially useless. The only VO for which this was
implemented correctly and for which this did anything was vo_xv, but you
shouldn't use vo_xv anyway (plus it support BT.601 only, plus a vendor
specific extension for BT.709, whose presence this function essentially
reported - use xvinfo instead).
|
|
|
|
|
|
|
|
|
| |
There's literally no reason why these functions have to be inline (they
might be performance critical, but then the function call overhead isn't
going to matter at all).
Uninline them and move them to mp_image.c. Drop the header file and fix
all uses of it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There was a somewhat obscure optimization in the OSD and subtitle
rendering path: if only the position of the sub-images changed, and not
the actual image data, uploading of the image data could be skipped. In
theory, this could speed up things like scrolling subtitles.
But it turns out that even in the rare cases where subtitles have such
scrolls or axis-aligned movement, modern libass rarely signals this kind
of change. Possibly this is because of sub-pixel handling and such,
which breaks this.
As such, it's a worthless optimization and just introduces additional
complexity and subtle bugs (especially in cases libass does the
opposite: incorrectly signaling a position change only, which happened
before). Remove this optimization, and rename bitmap_pos_id to
change_id.
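What consumers are left with after the rename is a single change counter; a
hypothetical sketch (struct and function names are illustrative):

    struct sub_bitmaps { int change_id; /* ...bitmap data... */ };

    struct part_cache { int cached_change_id; /* ...GPU textures... */ };

    static void draw_osd_part(struct part_cache *cache,
                              const struct sub_bitmaps *imgs)
    {
        if (imgs->change_id != cache->cached_change_id) {
            /* Any change now means "re-upload everything"; there is no
             * separate "position only changed" fast path anymore. */
            /* upload_textures(cache, imgs); */
            cache->cached_change_id = imgs->change_id;
        }
        /* draw_textures(cache); */
    }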
|
|
|
|
|
|
|
|
|
|
|
|
| |
The vaapi equalizer controls have a custom range, which can be smaller
than mpv's normalized video equalizer range. The result is that a vaapi
equalizer value can map to multiple mpv values, so changing an mpv value
by 1 can get "stuck". Fix by remembering the mpv value, and returning it
if it still corresponds to the vaapi value.
Really fixes #1647.
(Why am I even bothering with this irredeemable crap?)
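A sketch of the idea (illustrative, not the actual patch): remember the last
mpv value that was set, and report it back as long as it still maps to the
driver value currently in effect.

    static int last_mpv_value;   /* mpv's -100..100 range */

    static int va_from_mpv(int mpv_val, float va_min, float va_max)
    {
        /* Map -100..100 onto the driver's (possibly much smaller) range. */
        return (int)(va_min + (mpv_val + 100) / 200.0 * (va_max - va_min) + 0.5);
    }

    static int get_equalizer(int current_va_val, float va_min, float va_max)
    {
        if (va_from_mpv(last_mpv_value, va_min, va_max) == current_va_val)
            return last_mpv_value;   /* avoids getting "stuck" on +/-1 steps */
        /* Otherwise map the driver value back to -100..100. */
        return (int)((current_va_val - va_min) / (va_max - va_min) * 200.0 - 100);
    }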
|
|
|
|
|
| |
Probably fixes #1647 (if it's correct at all). I couldn't reproduce with
the vdpau libva driver, but a driver can use different ranges.
|
|
|
|
|
|
|
|
|
|
|
| |
At the time screenshot support was added, images weren't refcounted yet,
so screenshots required specialized implementations in the VOs. But now
we can handle these things much simpler. Also see commit 5bb24980.
If there are VOs in the future which can't do this (e.g. they need to
write to the image passed to vo_driver->draw_image), this still could be
disabled on a per-VO basis etc., so we lose no potential performance
advantages.
|
|
|
|
|
|
|
|
|
| |
Use different VOCTRLs for "window" and normal screenshot modes. The
normal one will probably be removed, and replaced by generic code in
vo.c, and this commit is preparation for this. (Doing it the other way
around would be slightly simpler, but I haven't decided yet about the
second one, and touching every VO is needed anyway in order to remove
the unneeded crap. E.g. has_osd has been unused for a long time.)
|
|
|
|
|
|
|
|
| |
Instead of converting the hw surface to an image in the VO, provide a
generic way to convert hw surfaces, and use this in the screenshot code.
It's all relatively straightforward, except vdpau is being terrible. It
needs a huge chunk of new code, because copying back is not simple.
|
|
|
|
|
|
|
|
|
|
|
| |
Before this commit, each hw backend had their own specific struct types
for context, and some, like VDA, had none at all. Add a context struct
(mp_hwdec_ctx) that provides a somewhat generic way to pass the hwdec
context around. Some things get slightly better, some slightly more
verbose.
mp_hwdec_info is still around; it's still needed, but is reduced to its
role of handling delayed loading of the hwdec backend.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
And remove all uses of the VFCAP_CSP_SUPPORTED* constants. This is
supposed to reduce conversions if many filters are used (with many
incompatible pixel formats), and also to prefer the VO's natively
supported pixel formats (as opposed to conversion).
This is worthless by now. Not only do the main VOs not use software
conversion, but also the way vf_lavfi and libavfilter work mostly break
the way the old MPlayer mechanism worked. Other important filters like
vf_vapoursynth do not support "proper" format negotiation either.
Part of this was already removed with the vf_scale cleanup from today.
While I'm touching every single VO, also fix the query_format argument
(it's not a FourCC anymore).
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add a generic mechanism to the VO to relay "extra" events from VO to
player. Use it to notify the core of window resizes, which in turn will
be used to mark all affected properties ("window-scale" in this case) as
changed.
(I refrained from hacking this as internal command into input_ctx, or to
poll the state change, etc. - but in the end, maybe it would be best to
actually pass the client API context directly to the places where events
can happen.)
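A hypothetical sketch of the relay (names are illustrative): the VO sets bits
in an event mask, and the player core fetches and clears them on its next
wakeup.

    #include <stdatomic.h>

    #define VO_EVENT_RESIZE (1 << 0)

    static atomic_int pending_vo_events;

    /* Called from the VO/window code, e.g. when a resize is noticed. */
    static void vo_event(int event)
    {
        atomic_fetch_or(&pending_vo_events, event);
    }

    /* Called from the player core; returns and clears the requested events,
     * which then mark properties like "window-scale" as changed. */
    static int vo_query_and_reset_events(int mask)
    {
        return atomic_fetch_and(&pending_vo_events, ~mask) & mask;
    }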
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
So talking to a certain Intel dev, it sounded like modern VA-API drivers
are reasonably thread-safe. But apparently that is not the case. Not at
all. So add approximate locking around all vaapi API calls.
The problem appeared once we moved decoding and display to different
threads. That means the "vaapi-copy" mode was unaffected, but decoding
with vo_vaapi or vo_opengl led to random crashes.
Untested on real Intel hardware. With the vdpau emulation, it seems to
work fine - but actually it worked fine even before this commit, because
vdpau was written and designed not by morons, but competent people
(vdpau is guaranteed to be fully thread-safe).
There is some probability that this commit doesn't fix things entirely.
One problem is that locking might not be complete. For one, libavcodec
_also_ accesses vaapi, so we have to rely on our own guesses about how and
when lavc uses vaapi (since we disable multithreading when doing hw
decoding, our guess should be relatively good, but it's still a lavc
implementation detail). One other reason that this commit might not
help is Intel's amazing potential to fuck up anything that is good and
holy.
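The "approximate locking" is essentially one lock taken around every libva
entry point we control; a sketch (illustrative, not the actual helper code):

    #include <pthread.h>
    #include <va/va.h>

    static pthread_mutex_t va_lock = PTHREAD_MUTEX_INITIALIZER;

    static VAStatus locked_vaSyncSurface(VADisplay dpy, VASurfaceID surface)
    {
        pthread_mutex_lock(&va_lock);
        VAStatus st = vaSyncSurface(dpy, surface);  /* any vaapi call */
        pthread_mutex_unlock(&va_lock);
        return st;
    }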
|
|
|
|
| |
This makes a certain corner case simpler at a later point.
|
|
|
|
| |
This makes it more consistent with the more important VOs.
|
|
|
|
| |
Recent regression.
|
|
|
|
| |
Basically a cosmetic change. This is probably more intuitive.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Until now, failure to allocate image data resulted in a crash (i.e.
abort() was called). This was intentional, because it's pretty silly to
degrade playback, and in almost all situations, the OOM will probably
kill you anyway. (And then there's the standard Linux overcommit
behavior, which also will kill you at some point.)
But I changed my opinion, so here we go. This change does not affect
_all_ memory allocations, just image data. Now in most failure cases,
the output will just be skipped. For video filters, this coincidentally
means that failure is treated as EOF (because the playback core assumes
EOF if nothing comes out of the video filter chain). In other
situations, output might be in some way degraded, like skipping frames,
not scaling OSD, and such.
Functions whose return values changed semantics:
mp_image_alloc
mp_image_new_copy
mp_image_new_ref
mp_image_make_writeable
mp_image_setrefp
mp_image_to_av_frame_and_unref
mp_image_from_av_frame
mp_image_new_external_ref
mp_image_new_custom_ref
mp_image_pool_make_writeable
mp_image_pool_get
mp_image_pool_new_copy
mp_vdpau_mixed_frame_create
vf_alloc_out_image
vf_make_out_image_writeable
glGetWindowScreenshot
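What the new semantics look like at a call site (a sketch, not an excerpt
from the patch; the include path is assumed):

    #include "video/mp_image.h"

    static struct mp_image *make_output_image(int imgfmt, int w, int h)
    {
        struct mp_image *out = mp_image_alloc(imgfmt, w, h);
        if (!out)
            return NULL;   /* caller skips output; in a filter this ends up
                              looking like EOF to the playback core */
        /* ...fill and return the image... */
        return out;
    }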
|
|
|
|
|
|
|
|
|
| |
Let the VOs draw the OSD on their own, instead of making OSD drawing a
separate VO driver call. Further, let it be the VO's responsibility to
request subtitles with the correct PTS. We also basically allow the VO
to request OSD/subtitles at any time.
OSX changes untested.
|
|
|
|
| |
If there is no X display, or libva can't be initialized, mpv would crash.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
mpv supports two hardware decoding APIs on Linux: vdpau and vaapi. Each
of these has emulation wrappers. The wrappers are usually slower and
have fewer features than their native counterparts. In particular the libva
vdpau driver is practically unmaintained.
Check the vendor string and print a warning if emulation is detected.
Checking vendor strings is a very stupid thing to do, but I find the
thought of people using an emulated API for no reason worse.
Also, make --hwdec=auto never use an API that is detected as emulated.
This doesn't work quite right yet, because once one API is loaded,
vo_opengl doesn't unload it, so no hardware decoding will be used if the
first probed API (usually vdpau) is rejected. But good enough.
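A sketch of the check on the vaapi side (the matched substring is an
assumption; the real code may look for a different driver name):

    #include <stdbool.h>
    #include <string.h>
    #include <va/va.h>

    static bool va_is_emulated(VADisplay dpy)
    {
        const char *vendor = vaQueryVendorString(dpy);
        /* The libva-vdpau-driver identifies itself as a VDPAU backend. */
        return vendor && strstr(vendor, "VDPAU backend");
    }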
|
|
|
|
| |
Close the X connection if initializing vaapi fails.
|
|
|
|
|
| |
After VOCTRL_REDRAW_FRAME, flip_page is called, which renders the frame.
The current code rendered the frame twice; drop the redundant call.
|
|
|
|
| |
See previous commit.
|
|
|
|
|
|
|
|
|
| |
Reduce most dependencies on struct mp_csp_details, which was a bad first
attempt at dealing with colorspace stuff. Instead, consistently use
mp_image_params.
Code which retrieves colorspace matrices from csputils.c still uses this
type, though.
|
|
|
|
|
|
| |
It's not really needed to be public. Other code can just use mp_image.
The only disadvantage is that the other code needs to call an accessor
to get the VASurfaceID.
|
|
|
|
|
|
|
|
|
|
|
|
| |
Although I at first thought it would be better to have a separate
implementation for hwaccels because the differences from software images
are too large, it turns out you can actually save some code with it.
Note that the old implementation had a small memory management bug. This
got painted over in commit 269c1e1, but is hereby solved properly.
Also note that I couldn't test vf_vavpp.c (due to lack of hardware), and
I hope I didn't accidentally break it.
|
|
|
|
|
|
|
|
|
|
|
| |
vo->aspdat is basically an outdated version of vo->params, plus some
weirdness. Get rid of it, which will allow further cleanups and which
will make multithreading easier (less state to care about).
Also, simplify some VO code by using mp_image_set_attributes() instead
of caring about display size, colorspace, etc. manually. Add the
function osd_res_from_image_params(), which is often needed in the case
OSD renders into an image.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Do two things:
1. add locking to struct osd_state
2. make struct osd_state opaque
While 1. is somewhat simple, 2. is quite horrible. Lots of code accesses
lots of osd_state (and osd_object) members. To make sure everything is
accessed synchronously, I prefer making osd_state opaque, even if it
means adding pretty dumb accessors.
All of this is meant to allow running VO in their own threads.
Eventually, VOs will request OSD on their own, which means osd_state
will be accessed from foreign threads.
|
|
|
|
|
| |
This never made any real sense; the "backend" has to access vo->dx/dy
anyway.
|
|
|
|
|
|
| |
This ended up a little bit messy. In order to get a mp_log everywhere,
mostly make use of the fact that va_surface already references global
state anyway.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
PIX_FMT_VDA_VLD and PIX_FMT_VAAPI_VLD were never used anywhere. I'm not
sure why they were even added, and they sound like they are just for
compatibility with XvMC-style decoding, which sucks anyway.
Now that there's only a single vaapi format, remove the
IMGFMT_IS_VAAPI() macro. Also get rid of IMGFMT_IS_VDA(), which was
unused.
|
|
|
|
|
|
| |
This is not needed anymore, because we decided that the PAR of the
decoded video matters, and not the PAR of the filtered video that
arrives at the VO.
|
|
|
|
|
| |
This was way too misleading. osd.c merely calls the subtitle renderers,
instead of actually dealing with subtitles.
|
|
|
|
|
|
|
|
| |
This means most code accessing this struct must now include hwdec.h
instead of dec_video.h. I just put it into dec_video.h at first because
I thought a separate file would be a waste, but it's more proper to do
it this way, as there are too many files which include dec_video.h only
to get the mp_hwdec_info definition.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The configure followed 5 different conventions of defines, because the
next guy always wanted to introduce a new, better way to unify it [1].
For a hypothetical feature 'hurr' you could have had:
* #define HAVE_HURR 1 / #undef HAVE_DURR
* #define HAVE_HURR / #undef HAVE_DURR
* #define CONFIG_HURR 1 / #undef CONFIG_DURR
* #define HAVE_HURR 1 / #define HAVE_DURR 0
* #define CONFIG_HURR 1 / #define CONFIG_DURR 0
All is now uniform and uses:
* #define HAVE_HURR 1
* #define HAVE_DURR 0
We like defining to 0 as opposed to `undef` because it can help spot
typos and is very helpful when doing big reorganizations in the code.
[1]: http://xkcd.com/927/ related
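For example (hypothetical feature names, as above); with -Wundef, a typo like
HAVE_HUR then becomes a warning instead of silently compiling the code out:

    #define HAVE_HURR 1
    #define HAVE_DURR 0

    #if HAVE_HURR
        /* code enabled when the 'hurr' feature was configured in */
    #endif

    #if HAVE_DURR
        /* compiled out, but the condition is still evaluated, so a renamed
           or missing define is caught at build time */
    #endif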
|
|
|
|
| |
The author and comment fields were printed only in -v mode.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before, a VO could easily refuse to respond to VOCTRL_REDRAW_FRAME,
which means the VO wouldn't redraw OSD and window contents, and the
player would appear frozen to the user. This was a bit stupid, and made
dealing with some corner cases much harder (think of --keep-open, which
was hard to implement, because the VO gets into this state if there are
no new video frames after a seek reset).
Change this, and require VOs to always react to VOCTRL_REDRAW_FRAME.
There are two aspects of this: First, behavior after a (successful)
vo_reconfig() call, but before any video frame has been displayed.
Second, behavior after a vo_seek_reset().
For the first issue, we define that sending VOCTRL_REDRAW_FRAME after
vo_reconfig() should clear the window with black. This requires minor
changes to some VOs. In particular vaapi makes this horribly
complicated, because OSD rendering is bound to a video surface. We
create a black dummy surface for this purpose.
The second issue is much simpler and works already with most VOs: they
simply redraw whatever has been uploaded previously. The exception is
vdpau, which has a complicated mechanism to track and filter video
frames. The state associated with this mechanism is completely cleared
with vo_seek_reset(), so implementing this to work as expected is not
trivial. For now, we just clear the window with black.
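For a GL-based VO, the first case amounts to something like this sketch
(illustrative; each VO implements it in its own way):

    #include <stdbool.h>
    #include <GL/gl.h>

    struct priv { bool have_image; };

    /* Handler for VOCTRL_REDRAW_FRAME; flip_page presents the result. */
    static void redraw_frame(struct priv *p)
    {
        if (!p->have_image) {
            /* After vo_reconfig() but before the first frame: show black. */
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            return;
        }
        /* Otherwise re-render whatever was uploaded previously. */
        /* render_last_frame(p); */
    }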
|
|
|
|
| |
Just for robustness. Also print a warning in vo_vaapi if this happens.
|
|
|
|
|
|
|
|
| |
Don't allocate a VAImage and an mp_image every time. VAImages are cached
in the surfaces themselves, and for mp_image an explicit pool is
created. The retry loop runs only once for each surface now.
This also makes use of vaDeriveImage() if possible.
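A sketch of the vaDeriveImage() path (illustrative): derive an image directly
from the surface, and only fall back to a separate VAImage plus vaGetImage()
when the driver refuses.

    #include <stdbool.h>
    #include <va/va.h>

    static bool map_surface(VADisplay dpy, VASurfaceID surface, VAImage *image)
    {
        if (vaDeriveImage(dpy, surface, image) == VA_STATUS_SUCCESS)
            return true;   /* direct access to the surface's own image */
        /* Fallback: allocate a VAImage (cached per surface in the real
         * code) and copy the surface into it with vaGetImage(). */
        return false;
    }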
|