| Commit message | Author | Age |
| |
|
|
|
|
|
|
| |
In theory, an mp_image_params with hw_subfmt set to a non-0 value while
imgfmt is not a hwaccel format is invalid. (It worked fine so far because
nothing checks this yet.)
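
A hypothetical sanity check for this rule might look as follows (a sketch,
assuming mpv's existing IMGFMT_IS_HWACCEL() macro; not the actual patch):

    #include <assert.h>
    #include "video/img_format.h"

    static void check_params(const struct mp_image_params *params)
    {
        // hw_subfmt only makes sense when imgfmt is a hwaccel format.
        assert(params->hw_subfmt == 0 || IMGFMT_IS_HWACCEL(params->imgfmt));
    }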
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The hw_subfmt field roughly corresponds to the AVHWFramesContext.sw_format
field in ffmpeg. The ffmpeg one is of the type AVPixelFormat (instead of
the underlying hardware format), so it's a good idea to switch to this as
well, in preparation.
Now the hw_subfmt field is an mp_imgfmt instead of an opaque/API-
specific number. VDPAU and Direct3D11 already used mp_imgfmt, but
Videotoolbox and VAAPI had to be switched.
One somewhat user-visible change is that the verbose log will now always
show the hw_subfmt as an image format, instead of as a nonsensical number.
(In the end it would be good if we could switch to AVHWFramesContext
completely, but the upstream API is incomplete and doesn't cover
Direct3D11 and Videotoolbox.)
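
As a rough illustration of what the switch means for the hwdec wrappers
(a sketch assuming mpv's pixfmt2imgfmt() helper; not the literal patch):

    #include <libavutil/pixfmt.h>
    #include "video/fmt-conversion.h"   // pixfmt2imgfmt()
    #include "video/img_format.h"

    static void set_vaapi_params(struct mp_image_params *p)
    {
        p->imgfmt    = IMGFMT_VAAPI;                    // the hwaccel wrapper format
        p->hw_subfmt = pixfmt2imgfmt(AV_PIX_FMT_NV12);  // now an mp_imgfmt, not a VA fourcc
    }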
|
|
|
|
| |
with video filters
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This should get mpv working on Windows 7 machines without hardware
accelerated graphics adapters. It already worked on Windows 8 and up
because those systems would silently fall back to WARP if there was no
graphics hardware installed.
The normal MPGL_CAP_SW flag is not set, so unlike other opengl backends,
this will choose a software adapter even if opengl:sw is not specified.
The reason for this is that, unlike on Linux, where vo_xv and vo_x11 can
be used as fallbacks, mpv on Windows does not have any VO to fall back on
when hardware acceleration isn't available. If software adapters were
rejected, the user would see no video output at all with the default
settings. WARP seems to perform quite well, so it should be used in this
case.
|
|
|
|
|
| |
This will happen when D3D11 is present on the machine but the supported
feature level is too low.
|
| |
|
|
|
|
|
|
|
|
| |
I made this call up because I sort of thought it made sense. I'm now
convinced that it does not, and is even actively harmful. I'm still quite
in the dark about how the Direct3D 11 video API is supposed to work with
synchronization, but at least for normal graphics this call would not make
much sense.
|
|
|
|
|
| |
So we actually get FocusOut events. Disables key repeat when losing
focus while a key is down.
|
|
|
|
| |
Remove some indirections that aren't needed anymore.
|
| |
|
|
|
|
|
| |
Sounds fair. Can be used to determine whether the filter chain was mutated
at all, and to avoid an unconditional reinit if it wasn't.
|
|
|
|
| |
I think this is more robust, and future commits will rely on it.
|
|
|
|
|
|
|
|
|
|
| |
These mostly happen in situations where the correct behavior is
relatively new and not found in the wild (therefore not worth
implementing) and/or extremely complicated (and thus not worth worrying
about the potential edge cases and UI changes).
Still, it's best to document these where they happen to guide the poor
souls maintaining these files in the future.
|
|
|
|
|
| |
Some screen lockers have a habit of dumping output to the terminal when
their output is reset. Ignore their output to keep the TTY clean.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This uses eglPostSubBufferNV to trigger ANGLE to check the window size
and update the size of the swapchain to match, which is recommended
here: https://groups.google.com/d/msg/angleproject/RvyVkjRCQGU/gfKfT64IAgAJ
With the D3D11 backend, using eglPostSubBufferNV with a 0-sized update
region will even skip the Present() call, meaning it won't block for a
vsync period. Hopefully ANGLE will have a less hacky way of doing this
in future. See the relevant ANGLE issue: http://anglebug.com/1438
Fixes #3301
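
A minimal sketch of the trick, assuming the EGL_NV_post_sub_buffer
extension is present (not the literal mpv change):

    PFNEGLPOSTSUBBUFFERNVPROC PostSubBufferNV =
        (PFNEGLPOSTSUBBUFFERNVPROC)eglGetProcAddress("eglPostSubBufferNV");
    if (PostSubBufferNV) {
        // A 0-sized update region makes ANGLE re-query the window size and
        // resize the swapchain; with the D3D11 backend it also skips Present().
        PostSubBufferNV(egl_display, egl_surface, 0, 0, 0, 0);
    }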
|
|
|
|
|
|
|
|
|
|
|
| |
This can for example happen with vo_opengl_cb, if it is used with a GL
implementation that does not support FBOs. (mpv itself should never
attempt to use FBOs if they're not available.)
Without this check it would trigger an assert() in our dummy
glBindFramebuffer wrapper.
Suspected cause of #3308, although it's still unlikely.
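
The guard could be as simple as the following sketch (assuming mpv's
MPGL_CAP_FB capability flag; the real check may differ in detail):

    // Refuse to init the renderer if the GL context has no FBO support at all.
    if (!(gl->mpgl_caps & MPGL_CAP_FB)) {
        MP_ERR(p, "Rendering requires FBOs, which this GL context lacks.\n");
        return false;
    }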
|
|
|
|
|
|
|
| |
For example it should be set to IMGFMT_D3D11NV12 if it isn't already.
Otherwise, an assertion in vf.c could trigger.
This probably couldn't be provoked yet.
|
|
|
|
|
|
| |
This moves some of the bulky user-shader specific logic into the file
dedicated to it. Rather than expose video.c state, variable lookup is
now done via a simulated closure.
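
In C, a "simulated closure" is just a callback paired with an opaque
context pointer. A minimal sketch of the idea (names are illustrative, not
the actual mpv interface):

    struct var_lookup {
        float (*lookup)(void *priv, const char *name); // resolves e.g. "frame" or "random"
        void *priv;                                    // renderer state, opaque to the shader code
    };

    static float get_var(const struct var_lookup *vl, const char *name)
    {
        return vl->lookup(vl->priv, name);
    }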
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This greatly improves the result when decoding typical (ST.2084) HDR
content, since the job of tone mapping gets significantly easier when
you're only mapping from 1000 to 250, rather than 10000 to 250.
The difference is so drastic that we can now even reasonably use
`hdr-tone-mapping=linear` and get a very perceptually uniform result that
is only slightly darker than normal (to compensate for the extra dynamic
range).
Due to weird implementation details, this metadata only seems to be
present on keyframes (or something like that), so we have to cache the
last seen value for the frames in between.
Also, in some files the metadata is just completely broken /
nonsensical, so I decided to apply a simple heuristic to detect
completely broken metadata.
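
To put the 1000-vs-10000 difference in context: with linear tone mapping,
pixels are essentially scaled by the ratio of the target peak to the source
peak, so the accuracy of the source peak matters a lot (an illustrative
sketch, not the shader code):

    // Map a linear-light value from the source peak to the display peak.
    static float tone_map_linear(float v, float src_peak, float dst_peak)
    {
        return v * dst_peak / src_peak;  // e.g. 250/1000 instead of 250/10000
    }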
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This involves multiple changes:
1. Brightness metadata is split into nominal peak and signal peak.
For a quick and dirty explanation: nominal peak is the brightest value
that your color space can represent (i.e. the brightness of an encoded
1.0), and signal peak is the brightest value that actually occurs in
the video (i.e. the brightest thing that's displayed).
2. vo_opengl uses a new decision logic to figure out the right nom_peak
and sig_peak for all situations. It also does a better job of picking
the right target gamut/colorspace to use for the OSD (which still is,
and still should be, treated as sRGB). This change in logic also
fixes #3293 en passant.
3. Since it was growing rapidly, the logic for auto-guessing / inferring
the right colorimetry configuration (in pass_colormanage) was split from
the logic for actually performing the adaptation (now pass_color_map).
Right now, the new logic doesn't do a whole lot since HDR metadata is
still ignored (but not for long).
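
The nominal-vs-signal-peak split from point 1, as a tiny sketch (field
names are illustrative, not necessarily the ones used in mpv):

    struct peaks {
        float nom_peak;  // brightest value the encoding can represent (encoded 1.0)
        float sig_peak;  // brightest value that actually occurs in the video
    };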
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This has two reasons:
1. I tend to add new fields to this metadata, and every time I've done
so I've consistently forgotten to update all of the dozens of places in
which this colorimetry metadata might end up getting used. While most
usages don't really care about most of the metadata, sometimes the
intent was simply to “copy” the colorimetry metadata from one struct to
another. With this being inside a substruct, those lines of code can now
simply read a.color = b.color without having to care about added or
removed fields.
2. It makes the type definitions nicer for upcoming refactors.
In going through all of the usages, I also expanded a few where I felt
that omitting the “young” fields was a bug.
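
What reason 1 boils down to, in code (a sketch; field names may not exactly
match the mp_colorspace struct that was introduced):

    struct mp_colorspace {
        enum mp_csp space;          // e.g. BT.601 / BT.709 / BT.2020 matrix
        enum mp_csp_levels levels;  // TV vs. full range
        enum mp_csp_prim primaries;
        enum mp_csp_trc gamma;
        // future colorimetry fields get added here and are copied for free
    };

    struct mp_image_params {
        int imgfmt, w, h;
        struct mp_colorspace color; // all colorimetry metadata in one substruct
        // ...
    };

    // Copying the colorimetry from one params struct to another is now a
    // single assignment, regardless of future additions:
    void copy_color(struct mp_image_params *a, const struct mp_image_params *b)
    {
        a->color = b->color;
    }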
|
|
|
|
|
|
|
|
|
|
| |
Commit 883d3114 seems to have (accidentally?) dropped the FBOTEX_FUZZY
from the output_fbo resize, which means that current master will keep
resizing and resizing the FBO as you change the window size, introducing
severe memory leaking after a while. (Not sure why that would cause
memory leaks, but I blame nvidia)
Either way, it's bad for performance too, so it's worth fixing.
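
For context, the fuzzy allocation roughly means rounding the requested FBO
size up (this is an assumption about FBOTEX_FUZZY's behavior, not a quote of
the implementation), so jittery window resizes keep reusing one texture:

    // Illustrative: round a dimension up to the next power of two so small
    // resizes don't force a reallocation every frame.
    static int fuzz_size(int n)
    {
        int s = 1;
        while (s < n)
            s *= 2;
        return s;  // e.g. 1081 -> 2048, and it stays 2048 while resizing
    }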
|
|
|
|
|
|
|
|
|
| |
vo_vaapi is the only thing which can't scale RGBA on the GPU. (Other
cases of RGBA scaling are handled in draw_bmp.c for some reason.)
Move this code and get rid of the osd_conv_cache thing.
Functionally, nothing changes.
|
|
|
|
|
|
|
| |
No real need to cache this, and we need fewer fields in the OSD part
struct.
Also add logging for when the OSD texture is reallocated.
|
|
|
|
|
|
|
|
| |
This is how PBOs are normally supposed to be used.
Unfortunately I can't see any absolute improvement with the nVidia binary
drivers when playing 4K material. Compared to the "old" PBO path with 1
buffer, the measured GL time decreases significantly, though.
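
For reference, the usual multi-buffer streaming pattern looks roughly like
this (a generic GL sketch, not mpv's gl_pbo_upload() itself): rotate through
a few PBOs so the driver can still consume last frame's buffer while the
next one is being filled.

    // Assumes a GL loader providing these entry points, and pbos[] created
    // with glGenBuffers() during init.
    #include <string.h>  // memcpy

    enum { NUM_PBOS = 3 };
    static GLuint pbos[NUM_PBOS];
    static int cur;

    static void upload(GLenum target, int w, int h, GLenum fmt, GLenum type,
                       const void *data, size_t size)
    {
        GLuint pbo = pbos[cur];
        cur = (cur + 1) % NUM_PBOS;

        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW); // orphan old storage
        void *ptr = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                     GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
        memcpy(ptr, data, size);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
        // With a PBO bound, the last argument is an offset into it, not a pointer.
        glTexSubImage2D(target, 0, 0, 0, w, h, fmt, type, NULL);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    }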
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
GL generally does not support flipping the image on upload, meaning
negative strides are not supported. vo_opengl handles this by flipping
rendering if the stride is inverted, and gl_pbo_upload() "ignores"
negative strides by uploading without flipping the image.
If individual planes had strides with different signs, this broke. The
flipping affected the entire image, and only the sign of the first plane
was respected.
This is just a crazy corner case that will never happen, but it turns
out this is quite simple to support, and actually improves the code
somewhat.
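
One way to normalize such a plane, as a sketch (not necessarily how
gl_pbo_upload() ended up doing it):

    #include <stdbool.h>
    #include <stdint.h>

    // Rebase a plane with a negative stride so its rows run top-down in
    // memory; the caller must then flip this plane back during rendering.
    static bool normalize_plane(uint8_t **ptr, int *stride, int height)
    {
        if (*stride >= 0)
            return false;
        *ptr += (intptr_t)*stride * (height - 1); // lowest address = last row
        *stride = -*stride;
        return true; // plane is now vertically flipped
    }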
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This introduces a gl_pbo_upload_tex() function, which works almost like
our gl_upload_tex() glTexSubImage2D() wrapper, except it takes a struct
which caches the PBO handles. It also takes the full texture size (to
make allocating an ideal buffer size easier), and a parameter to disable
PBOs (so that the caller doesn't have to duplicate the gl_upload_tex()
call if PBOs are disabled or unavailable).
This also removes warnings and fallbacks on PBO failure. We just
silently try using PBOs on every frame, and if that fails at some point,
revert to normal texture uploads. Probably doesn't matter.
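
Based on that description, the helper's shape is roughly the following (a
sketch of the interface only; the real declaration in the GL utility code
may differ in detail):

    struct gl_pbo_upload {
        GL *gl;
        GLuint pbo[2];       // cached buffer handles, reused across frames
        size_t buffer_size;
    };

    void gl_pbo_upload_tex(struct gl_pbo_upload *pbo, GL *gl, bool use_pbo,
                           GLenum target, GLenum format, GLenum type,
                           int tex_w, int tex_h,          // full texture size
                           const void *data, int stride,
                           int x, int y, int w, int h);   // region to upload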
|
|
|
|
|
|
|
|
| |
This makes the geometry of the sizing borders more like the ones in
Windows 10. It also fixes an off-by-one error that made the right and
bottom borders thinner than the left and top borders, which made it
difficult to resize the window when using the Windows 7 classic theme
(because it has pretty thin sizing borders to begin with).
|
|
|
|
| |
Otherwise, warnings about them being unused appear.
|
| |
|
|
|
|
| |
See previous comments.
|
|
|
|
| |
See previous commit.
|
|
|
|
| |
It's packed in the OSD common layer already.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
No method of taking a screenshot was implemented at all. vo_opengl
lacked window screenshotting, because ANGLE doesn't allow reading the
frontbuffer. There was no way to read back from a D3D11 texture either.
Implement reading image data from D3D11 textures. This is a low-quality
effort to get basic screenshots done. Eventually there will be a better
implementation: once we use AVHWFramesContext natively, the readback
implementation will be in libavcodec, and will be able to cache the
staging texture correctly. Hopefully. (For now it doesn't even have an
AVHWFramesContext for D3D11 yet. But the abstraction is more appropriate
for this purpose.)
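
The readback itself boils down to the standard D3D11 staging-texture dance,
roughly as below (error handling omitted; a sketch rather than the exact
mpv code):

    #define COBJMACROS
    #include <d3d11.h>

    static void read_back(ID3D11Device *dev, ID3D11DeviceContext *ctx,
                          ID3D11Texture2D *frame, const D3D11_TEXTURE2D_DESC *desc)
    {
        // 1. Create a CPU-readable staging copy of the video surface.
        D3D11_TEXTURE2D_DESC sdesc = *desc;
        sdesc.Usage = D3D11_USAGE_STAGING;
        sdesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
        sdesc.BindFlags = 0;
        sdesc.MiscFlags = 0;
        ID3D11Texture2D *staging = NULL;
        ID3D11Device_CreateTexture2D(dev, &sdesc, NULL, &staging);

        // 2. GPU copy into the staging texture, then map it for CPU reads.
        ID3D11DeviceContext_CopyResource(ctx, (ID3D11Resource *)staging,
                                         (ID3D11Resource *)frame);
        D3D11_MAPPED_SUBRESOURCE map;
        ID3D11DeviceContext_Map(ctx, (ID3D11Resource *)staging, 0,
                                D3D11_MAP_READ, 0, &map);
        // map.pData / map.RowPitch now describe the frame; copy into an mp_image.
        ID3D11DeviceContext_Unmap(ctx, (ID3D11Resource *)staging, 0);
        ID3D11Texture2D_Release(staging);
    }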
|
|
|
|
|
|
|
|
| |
OK, this was dumb. The file didn't have much to do with ANGLE, and the
functionality can simply be moved to d3d.c. That file contains helpers
for decoding, but can always be present (on Windows) since it doesn't
access any D3D specific libavcodec APIs. Thus it doesn't need to be
conditionally built like the actual hwaccel wrappers.
|
|
|
|
|
|
| |
Logically, the scaler should know its input and output size.
Signed-off-by: wm4 <wm4@nowhere>
|
| |
|
|
|
|
|
|
|
|
| |
Instead of hard-coding a big list, move some of the functionality
to csputils. Affects both the auto-guess blacklist and the peak
estimation.
Also update the comments.
|
|
|
|
|
| |
All these blank lines felt sort of weird, especially since the functions
were semantically related.
|
|
|
|
|
|
|
| |
Too many "exceptions" these days, it's easier to just hard-code a
whitelist instead of a blacklist. And besides, it only really makes
sense to avoid adaptation for BT.601 specifically, since that's the one
we auto-guess based on the resolution.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
I'm not even sure why we ever consulted *_src to begin with, since that
just describes the current image format - and not the original metadata.
(And in fact, we specifically had logic to work around the implications
this had on linear scaling.)
image_params is *the* authoritative source on the intended (i.e.
reference) image metadata, whereas *_src may be changed by previous
passes already. So only consult image_params for picking auto-generated
values.
Also, add some more missing "wide gamut" and "non-gamma" curves to the
autoconfig blacklist. (Maybe it would make sense to move this list to
csputils in the future? Or perhaps even auto-detect it based on the
associated primaries)
|
|
|
|
|
|
|
|
|
|
| |
User request and not that hard. Closes #3157.
Note that FFmpeg doesn't support this and there's no signalling in HEVC
etc., so the only way users can access it is by using vf_format
manually.
Mind: This encoding uses full range values, not TV range.
|
|
|
|
|
|
|
| |
This is actually not entirely trivial since it involves negative Yxy
coordinates, so the CMM has to be capable of full floating point
operation. Fortunately, LittleCMS is, so we can just blindly implement
it.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This HDR function is unique in that it's still display-referred, it just
allows for values above the reference peak (super-highlights). The
official standard doesn't actually document this very well, but the
nominal peak turns out to be exactly 12.0 - so we normalize to this
value internally in mpv. (This lets us preserve the property that the
textures are encoded in the range [0,1], preventing clipping and making
the best use of an integer texture's range)
This was grouped together with SMPTE ST2084 when checking libavutil
compatibility, since both were added in the same release window.
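
For the curious, the 12.0 figure falls out of the HLG inverse OETF; a
self-contained sketch using the published ARIB STD-B67 constants (not mpv's
shader code):

    #include <math.h>

    // Encoded HLG signal x in [0,1] -> scene-linear value, normalized so that
    // x = 1.0 maps to 1.0 (i.e. already divided by the nominal peak of 12).
    static float hlg_to_linear_norm(float x)
    {
        const float a = 0.17883277f, b = 0.28466892f, c = 0.55991073f;
        if (x <= 0.5f)
            return x * x / 3.0f;                // un-normalized form 4*x*x reaches 1.0 at x = 0.5
        return (expf((x - c) / a) + b) / 12.0f; // un-normalized form reaches ~12.0 at x = 1.0
    }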
|
|
|
|
|
| |
Also marked some places for possible later refactoring, as they became
quite similar in this commit.
|
| |
|
|
|
|
|
|
|
|
| |
The main framebuffer is not the default framebuffer for the dxinterop
backend. Bind the main framebuffer and use the appropriate attachment
when reading the window content.
Fix #3284
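
In GL terms, the difference is roughly this (a sketch; the real change lives
in the dxinterop context code):

    // Read the window content from the backend's main FBO rather than FBO 0,
    // and from its color attachment rather than the (non-existent) front buffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, main_fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);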
|
|
|
|
|
|
|
|
|
|
| |
The size check introduced in commit d941a57b did not consider that Xv
can round up the image size to the next chroma boundary. Doing that
makes sense, so it certainly can't be considered server misbehavior.
Do two things against this: allow the server to return a larger image
(we just crop it then), and also allocate a properly aligned image in
the first place.
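
Both countermeasures are small; a sketch of the size handling (illustrative
only, assuming chroma alignment to 2 in both directions):

    #include <stdbool.h>

    static int align_up(int x, int a) { return (x + a - 1) & ~(a - 1); }

    // Request a chroma-aligned size up front, and accept any image that is at
    // least that large - the excess is simply cropped away.
    static bool xv_size_ok(int got_w, int got_h, int want_w, int want_h)
    {
        int req_w = align_up(want_w, 2), req_h = align_up(want_h, 2);
        return got_w >= req_w && got_h >= req_h;
    }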
|
| |
|