| Commit message | Author | Age |
... | |
|
|
|
|
|
|
|
|
| |
Since these need to be refcounted, we throw them directly into struct
mp_image instead of being part of mp_colorspace. Even though they would
semantically make more sense in mp_colorspace, having them there is
really awkward because mp_colorspace is passed around and stored a lot,
and this way their lifetime is tied exactly to the lifetime of the
mp_image they belong to.
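A minimal sketch of the layout this describes (field names here are
assumptions, not necessarily mpv's exact ones):

    #include <libavutil/buffer.h>

    struct mp_colorspace_sketch {
        int primaries, gamma;              // plain values, copied around freely
    };

    struct mp_image_sketch {
        struct mp_colorspace_sketch color; // passed/stored by value everywhere
        struct AVBufferRef *icc_profile;   // refcounted; unreffed with the image
    };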
|
|
|
|
|
|
|
|
|
| |
mesa won't pick client storage unless this bit is set, and we
*absolutely* want to be using client storage for our DR PBOs.
Performance is shit on AMD otherwise. (Nvidia always uses client storage
for persistent coherent buffers whether you tell it to or not, probably
because it's way faster and nvidia doesn't trust users to figure that
out on their own)
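A sketch of the buffer creation this is about (target and size are
illustrative; assumes a GL 4.4+ context where glBufferStorage is
available):

    const GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                             GL_MAP_COHERENT_BIT | GL_CLIENT_STORAGE_BIT;
    glBufferStorage(GL_PIXEL_UNPACK_BUFFER, size, NULL, flags);
    void *ptr = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                 flags & ~GL_CLIENT_STORAGE_BIT);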
|
|
|
|
|
|
|
|
|
|
|
| |
It makes no sense to have this on an already created buffer.
If anything, the ra backend would have to export this as a global value
(e.g. struct ra field), so that whatever allocates the buffer can
account for the required alignment. Since this code is in vo_opengl.c,
and since GL doesn't dictate any special alignment here, it doesn't
make sense in the first place to export this. (Maybe
something like this will be required later.)
|
|
|
|
|
| |
This was an oversight. The ID shouldn't be hard-coded here, so add it to
sampler_prelude instead.
|
|
|
|
|
|
|
|
|
|
|
| |
Breaks on mesa for whatever reason... even though it doesn't generate a
GLSL shader compiler error.
Shouldn't make a performance difference for us because we cache `pos`
anyway, and most compute shaders will probably cache all of their
samples to shmem. Might have to revisit this when we have an actual use
case for repeated sampling inside CS though. (RAVU + anti-ringing is a
possible candidate for that)
|
|
|
|
|
|
| |
This allows users to do their own custom sample writing, mainly meant to
address use cases such as RAVU. Also clean up the compute shader code a
bit.
|
| |
|
|
|
|
|
|
|
|
| |
Or less appropriate, as some would argue. The new name is short for
"Apple YUV packed".
(This format is needed only for hardware decoding on rather old Apple
hardware, and is a very annoying special case.)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This broke float textures, which were actually used by some shaders.
There were probably some other bugs as well.
Lots of code can be avoided by using ra_tex_params directly, so do that.
The main change is that COMPONENT/FORMAT are replaced by a single FORMAT
directive, which takes different parameters now. Due to the mess with
16/32 bit float textures, and because we want to support other APIs than
just GL in the future, it's not really clear how this should be handled,
and the nice component/type separation makes things actually harder. So
just jump the gun and use the ra_format.name names, which were
originally meant mostly for debugging. (This is probably something that
will be regretted later.)
Still only superficially tested, but seems to work.
Fixes #4708.
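The directive now maps to a name lookup along these lines (a sketch;
the helper name and the "rgba16f" format name are assumptions):

    const struct ra_format *fmt = ra_find_named_format(p->ra, "rgba16f");
    if (!fmt)
        return false; // unknown or unsupported format name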
|
|
|
|
| |
I hate GLES
|
|
|
|
| |
Got the "sign" of the second multiplication wrong.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since this code was already written for HDR, and is now per-channel
(because it works better for HDR as well), we can actually reuse this to
get very high quality gamut mapping without clipping. The only required
change is to move the tone mapping from before the gamut map to after
the gamut map. Additionally, we need to also account for changes in the
signal range as a result of applying the CMS when we compute ref_peak,
which is fortunately pretty easy because we only need to consider the
case of primaries mapping to themselves.
Since `HDR` no longer really makes sense as a label, rename it to
`--tone-mapping` in general. Also fits better with
`--tone-mapping-desat` etc.
Arguably we could also rename `--hdr-compute-peak`, but that option is
basically only useful for HDR content anyway because we don't need
information about the signal range for gamut mapping.
This (finally!) gives us reasonably high quality gamut mapping even in
the absence of an ICC profile / 3DLUT.
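An order-of-operations sketch of the change (all names hypothetical):

    struct rgb { float r, g, b; };
    struct rgb gamut_map(struct rgb c);            // CMS/primaries conversion
    float ref_peak_after_cms(void);                // accounts for the CMS
    struct rgb tone_map(struct rgb c, float peak); // per-channel tone mapping

    static struct rgb render_color(struct rgb c)
    {
        c = gamut_map(c);                          // gamut map runs first now
        return tone_map(c, ref_peak_after_cms());  // tone mapping moved after
    }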
|
|
|
|
|
|
|
| |
Huge thanks to @rusxg for finding this solution, which was previously
believed not to exist. Of course, we still don't actually need it, but I
don't want to leave this half-implemented in case somebody does in the
future.
|
|
|
|
|
|
|
| |
Original author has agreed now.
Also fix the notice in dec_video.c - all GPL-only code is gone
(unrelated to --priority/its author).
|
|
|
|
|
|
|
|
|
|
|
|
| |
So far, switching between integrated and discrete GPU would cause the
kernel to kill mpv due to an indecipherable buffer error. The technical
note TN2229 from Apple recommends enabling OpenGL Offline Renderers on
every Mac with more GPUs than displays to handle switching between GPUs.
By ordering the array from the least commonly rejected attribute to the
most, we can sequentially remove PixelFormat attributes until the host
accepts the format.
Fixes #2371
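In CGL terms the recommended attribute looks like this (a sketch; mpv's
cocoa code builds an NSOpenGLPixelFormat, and the attribute list here is
illustrative):

    #include <OpenGL/OpenGL.h>

    CGLPixelFormatAttribute attrs[] = {
        kCGLPFAOpenGLProfile, (CGLPixelFormatAttribute)kCGLOGLPVersion_3_2_Core,
        kCGLPFAAllowOfflineRenderers, // the TN2229 recommendation
        kCGLPFADoubleBuffer,
        (CGLPixelFormatAttribute)0,
    };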
|
| |
|
|
|
|
|
|
|
|
|
|
| |
We need to switch the x and y deltas when Shift is being held, because
macOS switches them around. Otherwise we would get a horizontal scroll
on a vertical one and vice versa.
Additionally, we switch from deltaX/Y to scrollingDeltaX/Y, since the
Apple docs suggest it's the preferred way now. In my tests both reported
the same values on imprecise scrolls though.
|
|
|
|
|
| |
Drops some features I guess, no idea if those were needed. Untested due
to lack of test cases.
|
|
|
|
| |
Should be GL_NEAREST, not GL_LINEAR.
|
|
|
|
|
| |
Also move the capability check to gl_video_get_lut3d(), because it
seems more convenient (ra won't have a _CAP_EXT16).
|
|
|
|
| |
Also fix the RA_CAP_ bitmask nonsense.
|
|
|
|
|
|
|
|
|
| |
Also add some more helpers.
Fix the broken math.h include statement.
utils.c uses ra_gl.h internals, which it shouldn't, and which will be
removed again as soon as this code gets converted to ra fully.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The dither texture data is created as a float array, but uploaded to a
texture with GL_R16 as internal format. We relied on GL to do the
conversion from float to uint16_t. Not all GL variants even support
this: GLES does not provide this conversion (one of the reasons why this
code has a float16 code path). Also, ra is not going to do this. So just
convert on the fly.
Still keep the float16 texture format fallback, because not all GLES
implementations provide GL_R16.
There is some possibility that we'll need to provide some kind of upload
conversion anyway for float->float16. We still rely on GL doing this
implicitly, and all GL variants support it, but with RA there might be
the need for explicit conversion. Even then, it might be best to reduce
the number of conversion cases. I'll worry about this later.
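A minimal sketch of such an on-the-fly conversion (function name
hypothetical):

    #include <math.h>
    #include <stdint.h>
    #include <stdlib.h>

    static void float_to_unorm16(uint16_t *dst, const float *src, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            float v = fminf(fmaxf(src[i], 0.0f), 1.0f); // clamp to [0, 1]
            dst[i] = (uint16_t)lrintf(v * 65535.0f);    // scale to unorm16
        }
    }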
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Format handling via ra_* was added earlier, but the format negotiation
part was forgotten.
Actually move some aspects of it to ra_get_imgfmt_desc(). Also make sure
the unorm and float formats selected by the common format lookup
functions are linear filterable. (For OpenGL, this is implicitly
guaranteed, so it wasn't done before.) Whether these assumptions should
be checked/enforced in the ra code at all is a bit fuzzy, but with ra
being helper code only for the actual video renderer, it's probably
justified.
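The added filterability check, roughly (treating ra_format's
linear_filter flag and the lookup helper as given; the exact usage is a
sketch):

    const struct ra_format *fmt = ra_find_float16_format(ra, 4);
    if (fmt && !fmt->linear_filter)
        fmt = NULL; // can't be sampled with linear filtering; reject it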
|
|
|
|
|
|
|
|
|
|
|
| |
Parsing the texture data as raw strings makes the textures the most
portable and self-contained. In order to facilitate different types of
shaders, parse_user_shader now loops through the blocks and calls the
passed callbacks for each valid block it parses. This is more modular
and also cleaner, with better code separation.
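The resulting interaction looks roughly like this (signatures are an
approximation, not the exact ones):

    void parse_user_shader(struct mp_log *log, struct bstr shader, void *priv,
                           bool (*dohook)(void *p, struct bstr pass_text),
                           bool (*dotex)(void *p, struct bstr tex_text));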
Closes #4586.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
- Each struct tex_hook now stores multiple hooks; this allows us to
avoid the awkward way the current code had to add the same pass
multiple times (see the sketch after this list).
- As a consequence, SHADER_MAX_HOOKS was split up into SHADER_MAX_PASSES
(number of tex_hooks) and SHADER_MAX_HOOKS (number of hooked textures
per tex_hook), and both numbers decreased correspondingly.
- Instead of having a weird free() callback, we can just leverage
talloc's recursive free behavior. The only user is the user shaders code
anyway.
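A rough sketch of the resulting layout (constants and fields
illustrative):

    #define SHADER_MAX_PASSES 32 // number of tex_hook entries
    #define SHADER_MAX_HOOKS   4 // hooked textures per tex_hook

    struct tex_hook_sketch {
        const char *hook_tex[SHADER_MAX_HOOKS]; // all textures this pass hooks
        void *priv;                             // freed recursively by talloc
    };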
|
|
|
|
|
|
|
|
|
|
| |
This actually makes sure we don't decolor due to clipping even when the
signal itself exceeds the luma by a significant factor, which was pretty
common for saturated blues (and to a lesser degree, reds) - most
noticeable in skies etc.
This prevents the turn-the-sky-cyan effect of mobius tone mapping, and
should also improve the other tone mapping modes in quality.
|
|
|
|
|
| |
As pointed out by @bjin, this would match if _any_ of the reqs are set.
Need to test for explicit equality.
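The bug in miniature:

    bool matches_any = (flags & reqs) != 0;    // old, wrong: any bit suffices
    bool matches_all = (flags & reqs) == reqs; // fixed: all bits must be set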
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This starts work on moving OpenGL-specific code out of the general
renderer code, so that we can support other GPU APIs. This is in a very
early stage and only a proof of concept. It's unknown whether this will
succeed or result in other backends.
For now, the GL rendering API ("ra") and its only provider (ra_gl) do
texture creation/upload/destruction only. And it's used for the main
video texture only. All other code is still hardcoded to GL.
There is some duplication with ra_format and gl_format handling. In the
end, only the ra variants will be needed (plus the gl_format table of
course). For now, this is simpler, because for some reason lots of hwdec
code still requires the GL variants, and would have to be updated to
use the ra ones.
Currently, the video.c code accesses private ra_gl fields. In the end,
it should not do that of course, and it would not include ra_gl.h.
Probably adds bugs, but you can keep them.
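The rough shape of the abstraction at this stage (signatures are
assumptions, not mpv's exact ones):

    #include <stdbool.h>
    #include <stddef.h>

    struct ra;
    struct ra_tex;
    struct ra_tex_params;

    struct ra_fns_sketch {
        struct ra_tex *(*tex_create)(struct ra *ra,
                                     const struct ra_tex_params *params);
        void (*tex_destroy)(struct ra *ra, struct ra_tex *tex);
        bool (*tex_upload)(struct ra *ra, struct ra_tex *tex,
                           const void *src, ptrdiff_t stride);
    };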
|
|
|
|
|
| |
Be a bit more transparent here, which is especially helpful when people
are sending me screenshots of stats pages.
|
|
|
|
|
|
| |
The radius check was not strict enough, especially not for all
platforms. To fix this, actually check the hardware capabilities instead
of relying on a hard-coded maximum radius.
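The kind of query this means (which exact limit bounds the radius here
is an assumption):

    GLint max_shmem = 0;
    glGetIntegerv(GL_MAX_COMPUTE_SHARED_MEMORY_SIZE, &max_shmem);
    // derive the usable radius from max_shmem instead of a constant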
|
|
|
|
|
| |
This explicitly enables the GL_ARB_shader_image_load_store extension,
which seems to fix compute shaders for Intel/GL 3.0.
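The added prelude line (GLSL, shown as a C string):

    const char *ext = "#extension GL_ARB_shader_image_load_store : enable\n";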
|
| |
|
|
|
|
|
| |
Doesn't uncover any bugs, but apparently we're getting in the habit of
this anyway.
|
|
|
|
|
|
|
|
|
|
|
| |
The textures not having an FBO actually caused regressions when trying
to render the subtitles on top of this texture (--blend-subtitles),
which still relied on an FBO.
So just kill off the logic entirely. Why worry about a single wasted
FBO when we're allocating like 10 anyway?
Fixes #4657.
|
|
|
|
|
|
|
| |
According to the OpenGL spec, atomic access to SSBO variables is *not*
guaranteed to be coherent, even when reusing the same SSBO attached to
the same shader across different frames. So we actually need a
glMemoryBarrier here, at least in theory.
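The barrier in question (assuming the atomics read results written by
the previous frame's shader):

    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);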
|
| |
|
|
|
|
|
|
|
|
|
| |
This bug slipped past my attention because nvidia ignores memory
barriers, but this is not necessarily always the case. Since
image_load_store is incoherent (specifically, writing to images from
compute shaders is incoherent) we need to insert a memory barrier to
make it coherent again. Since we only care about texture fetches, that's
the only barrier we need.
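That is, after the compute pass writes its image:

    // make prior image writes visible to subsequent texture fetches only
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);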
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Two changes, compounded into one since they affect the same logic:
1. Never use linearization for HDR downscaling
2. Always use linearization for interpolation
Instead of fixing p->use_linear at the beginning of pass_render_frame,
we flip it on "dynamically" as needed. I plan on killing this
p->use_linear frame (along with other per-pass metadata) and moving them
into their own struct for tracking the "current" state of the video, but
that's a separate/upcoming refactor.
As a small bonus, reduce some code duplication in the interpolation
logic.
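An illustrative restatement of the two rules (not the actual code):

    bool use_linear = false;
    if (interpolating)
        use_linear = true;            // 2. always linearize for interpolation
    else if (downscaling && !is_hdr)
        use_linear = true;            // 1. never linearize HDR downscaling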
Fixes #4631
|
|
|
|
|
|
|
|
|
| |
Mesa 17.1 supports compute shaders, but not the full OpenGL 4.3 spec.
Change the code to detect the OpenGL extension "GL_ARB_compute_shader"
rather than requiring OpenGL 4.3.
HDR peak detection requires SSBOs, and the polar scaler requires the 2D
array extension. Add these extensions as requirements as well.
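A sketch of the detection change (gl_check_extension is mpv's existing
helper; reading the "2D array extension" as GL_ARB_arrays_of_arrays is
an assumption):

    bool have_compute =
        gl_check_extension(gl->extensions, "GL_ARB_compute_shader") &&
        gl_check_extension(gl->extensions,
                           "GL_ARB_shader_storage_buffer_object") &&
        gl_check_extension(gl->extensions, "GL_ARB_arrays_of_arrays");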
|
|
|
|
|
| |
These are identical to regular fragment shader hooks, but with extra
metadata indicating the preferred block size.
|
|
|
|
|
|
|
|
|
|
|
| |
This performs almost 50% faster on my machine (!!), from 4650μs down to
about 3176μs for ewa_lanczossharp.
It's possible we could use a similar approach to speed up the separable
scalers, although with vastly simpler code. For separable scalers we'd
also have the additional huge benefit of only needing padding in one
direction, so we could potentially use a big 256x1 kernel or something
to essentially compute an entire row at once.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is done via compute shaders. As a consequence, the tone mapping
algorithms had to be rewritten to compute their known constants in GLSL
(ahead of time), instead of doing it once. Didn't affect performance.
Using shmem/SSBO atomics in this way is extremely fast on nvidia, but it
might be slow on other platforms. Needs testing.
Unfortunately, setting up the SSBO still requires OpenGL calls, which
means I can't have it in video_shaders.c, where it belongs. But I'll
defer worrying about that until the backend refactor, since then I'll be
breaking up the video/video_shaders structure anyway.
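The GL-side SSBO setup amounts to roughly this (buffer layout and
binding point assumed):

    GLuint ssbo;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, 2 * sizeof(GLuint), NULL,
                 GL_DYNAMIC_COPY);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);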
|
|
|
|
|
|
|
|
| |
These can either be invoked as dispatch_compute to do a single
computation, or finish_pass_fbo (after setting compute_size_minimum) to
render to a new texture using a compute shader. To make this stuff all
work transparently, we try really, really hard to make compute shaders
as identical to fragment shaders as possible in their behavior.
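Dispatch sizing for the render-to-texture case, illustratively
(variable names hypothetical): round up so partial blocks at the edges
are still covered.

    int groups_w = (out_w + block_w - 1) / block_w;
    int groups_h = (out_h + block_h - 1) / block_h;
    glDispatchCompute(groups_w, groups_h, 1);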
|
|
|
|
|
|
|
|
| |
Don't use FBOTEX_FUZZY where the FBO is sized according to
p->texture_w/h, since this changes infrequently (and when it does, we
need to reset everything anyway). No real reason to make this change
other than that it possibly prevents nasty surprises in the future, so I
feel more comfortable about it.
|
|
|
|
|
|
|
|
|
|
|
| |
Seems like I really like this C99 idiom. No reason not to generalize it
to snprintf(). Introduce mp_tprintf(), which basically applies this
idiom to snprintf(). This macro looks like it returns a string that was
allocated with alloca() at the caller site, except it's portable
C99/C11. (And unlike alloca(), the result is valid only within block
scope.)
Use it in 2 places in the vo_opengl code. But it has the potential to
make a whole bunch of weird-looking code look slightly nicer.
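A sketch of the idiom (close to, but not necessarily identical with,
the actual macro):

    #include <stdarg.h>
    #include <stdio.h>

    static char *mp_tprintf_buf(char *buf, size_t size, const char *fmt, ...)
    {
        va_list ap;
        va_start(ap, fmt);
        vsnprintf(buf, size, fmt, ap);
        va_end(ap);
        return buf;
    }

    // The compound literal provides storage with block-scope lifetime,
    // so the result needs neither free() nor alloca().
    #define mp_tprintf(size, ...) \
        mp_tprintf_buf((char[size]){0}, size, __VA_ARGS__)

    // usage: printf("%s\n", mp_tprintf(64, "frame %d", n));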
|
|
|
|
| |
Fix 1 incorrect use.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Can be enabled via --vd-lavc-dr=yes. See manpage additions for what it
does.
This is reminiscent of the MPlayer -dr flag, but the implementation is
completely different. It's the same basic concept: letting the decoder
render into a GPU buffer to avoid a copy. Unlike MPlayer, this doesn't
try to go through filters (libavfilter doesn't support this anyway).
Unless a filter can work in-place, DR will be silently disabled. MPlayer
had very complex semantics about buffer types and management (which
apparently nobody ever understood) and weird restrictions that mostly
limited it to mpeg2 style codecs. The mpv code does not do any of this,
and just lets the decoder allocate an arbitrary number of untyped
images. (No MPlayer code was used.)
Parts of the code based on work by atomnuker (starting point for the
generic code) and haasn (some GL definitions, some basic PBO code, and
correct fencing).
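Conceptually, the decoder side hooks get_buffer2 (the vo_pool names
below are hypothetical; get_buffer2 and avcodec_default_get_buffer2 are
real libavcodec API):

    #include <libavcodec/avcodec.h>

    struct vo_pool; // hypothetical VO-backed allocator
    int vo_pool_alloc_frame(struct vo_pool *pool, AVFrame *frame);

    static int get_buffer2_direct(AVCodecContext *avctx, AVFrame *frame,
                                  int flags)
    {
        struct vo_pool *pool = avctx->opaque;
        if (vo_pool_alloc_frame(pool, frame) < 0)  // DR failed; fall back
            return avcodec_default_get_buffer2(avctx, frame, flags);
        return 0;
    }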
|
|
|
|
|
|
|
| |
Refactor the image allocation code, and expose part of it as helper
code. This aims towards allowing callers to easily allocate mp_image
references from custom-allocated linear buffers. This exposes only as
much as should actually be required.
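The helper's shape might look like this (an entirely hypothetical
signature): wrap a caller-allocated linear buffer in a refcounted
mp_image.

    struct mp_image *mp_image_from_linear_buffer(int imgfmt, int w, int h,
                                                 uint8_t *data, size_t size,
                                                 void (*free_fn)(void *opaque),
                                                 void *opaque);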
|
|
|
|
|
|
| |
Remove the feature of adding read-only frames to mp_image_pool_add().
This makes no sense, because an image pool is an allocator, and must
always return writable images. Also check these assumptions earlier.
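I.e. the invariant now checked up front (mp_image_is_writeable is mpv's
existing helper):

    #include <assert.h>

    assert(mp_image_is_writeable(img)); // pools must hand out writable images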
|