path: root/video/img_format.h
Commit message (author, date)
* img_format.h: cosmetics: fix whitespace (wm4, 2018-03-15)

* vo_gpu: make screenshots use the GL renderer (wm4, 2018-02-11)

  Using the GL renderer for color conversion will make sure screenshots
  use the same conversion as normal video rendering. It can do this for
  all types of screenshots.

  The logic for when to write 16 bit PNGs changes. To approximate the old
  behavior, we decide by looking at whether the source video format has
  more than 8 bits per component. We apply this logic even for window
  screenshots. Also, 16 bit PNGs now always include an unused alpha
  channel. The reason is that FFmpeg has RGB48 and RGBA64 formats, but no
  RGB064. RGB48 has 3 components and is usually not supported by GPUs for
  rendering, so we have to use RGBA64, which forces an alpha channel.

  Will break for users who use --target-trc and similar options.

  I considered creating a new gl_video context, but it could double GPU
  memory use, so I didn't.

  This uses FBOs instead of glGetTexImage(), because that increases the
  chance it could work on GLES (e.g. ANGLE). Untested. No support for the
  Vulkan and D3D11 backends yet.

  Fixes #5498. Also fixes #5240, because the code for reading back is not
  used with the new code path.

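  A hedged sketch of the FBO readback idea described above, assuming
  plain GL/GLES3 entry points (w, h, and pixels are illustrative; this
  is not the actual gl_video code):

      // Render into a texture-backed FBO, then read the result back
      // with glReadPixels(), which GLES supports (glGetTexImage() is
      // not available there).
      GLuint fbo, tex;
      glGenTextures(1, &tex);
      glBindTexture(GL_TEXTURE_2D, tex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, NULL);
      glGenFramebuffers(1, &fbo);
      glBindFramebuffer(GL_FRAMEBUFFER, fbo);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                             GL_TEXTURE_2D, tex, 0);
      // ... run the normal rendering pass into this FBO ...
      glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
      // The 16 bit PNG path would use an RGBA64-style texture instead
      // (GL_RGBA16 / GL_UNSIGNED_SHORT).
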
* video: rewrite filtering glue code (wm4, 2018-01-30)

  Get rid of the old vf.c code. Replace it with a generic filtering
  framework, which can potentially handle more than just --vf. At least
  reimplementing --af with this code is planned.

  This changes some --vf semantics (including runtime behavior and the
  "vf" command). The most important ones are listed in
  interface-changes.

  vf_convert.c is renamed to f_swscale.c. It is now an internal filter
  that cannot be inserted by the user manually.

  f_lavfi.c is a refactor of player/lavfi.c. The latter will be removed
  once --lavfi-complex is reimplemented on top of f_lavfi.c. (Which is
  conceptually easy, but a big mess due to the data flow changes.)

  The existing filters are all changed heavily. The data flow of the
  new filter framework is different. Especially EOF handling changes -
  EOF is now a "frame" rather than a state, and must be passed through
  exactly once.

  Another major thing is that all filters must support dynamic format
  changes. The filter reconfig() function goes away. (This sounds
  complex, but since all filters need to handle EOF draining anyway,
  they can use the same code, and it removes the mess with reconfig()
  having to predict the output format, which completely breaks with
  libavfilter anyway.)

  In addition, there is no automatic format negotiation or conversion.
  libavfilter's primitive and insufficient API simply doesn't allow us
  to do this in a reasonable way. Instead, filters can use
  f_autoconvert as a sub-filter, and tell it which formats they
  support. This filter will in turn add actual conversion filters, such
  as f_swscale, to perform necessary format changes.

  vf_vapoursynth.c uses the same basic principle of operation as
  before, but with worryingly different details in data flow. Still
  appears to work.

  The hardware deint filters (vf_vavpp.c, vf_d3d11vpp.c, vf_vdpaupp.c)
  are heavily changed. Fortunately, they all used refqueue.c, which is
  for sharing the data flow logic (especially for managing future/past
  surfaces and such). It turns out it can be used to factor out most of
  the data flow.

  Some of these filters accepted software input. Instead of having
  ad-hoc upload code in each filter, surface upload is now delegated to
  f_autoconvert, which can use f_hwupload to perform this.

  Exporting VO capabilities is still a big mess (mp_stream_info stuff).

  The D3D11 code drops the redundant image formats, and all code uses
  the hw_subfmt (sw_format in FFmpeg) instead. Although that too seems
  to be a big mess for now.

  f_async_queue is unused.

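  The "EOF is a frame" rule is the core data flow change. A hedged
  sketch of what a filter's process callback looks like under such a
  framework (names and types are illustrative, not mpv's actual API):

      static void process(struct mp_filter *f)
      {
          struct mp_frame frame = read_input_pin(f); // data or EOF
          if (frame.type == MP_FRAME_EOF) {
              flush_queued_output(f);     // every filter drains anyway
              write_output_pin(f, frame); // forward EOF exactly once
              return;
          }
          // No reconfig(): format changes are handled per-frame.
          if (!format_equal(f->cur_format, frame_format(frame)))
              reinit_for_format(f, frame);
          write_output_pin(f, filter_one_frame(f, frame));
      }
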
* video: make IMGFMT_IS_HWACCEL() return 0 or 1 (wm4, 2018-01-18)

  Sometimes helps avoid usage mistakes.

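  The usual C idiom for this is double negation, which collapses any
  non-zero flag test to exactly 1. A sketch of the flag-test form,
  assuming an MP_IMGFLAG_HWACCEL-style flag (see img_format.h for the
  real definition):

      #define IMGFMT_IS_HWACCEL(fmt) \
          (!!(mp_imgfmt_get_desc(fmt).flags & MP_IMGFLAG_HWACCEL))
      // e.g. `hw_count += IMGFMT_IS_HWACCEL(fmt);` now adds 1, not a
      // raw flag bit.
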
* video: add utility function to pick conversion image format from a list (wm4, 2018-01-18)

* Add DRM_PRIME Format Handling and Display for RockChip MPP decoders (Lionel CHAZALLON, 2017-10-23)

  This commit allows using the newly introduced AV_PIX_FMT_DRM_PRIME
  format in FFmpeg, which lets decoders provide an AVDRMFrameDescriptor
  struct. That struct holds dmabuf fds and information allowing
  zero-copy rendering using KMS / DRM Atomic.

  This has been tested on a RockChip ROCK64 device.

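  A hedged sketch of consuming such a frame with FFmpeg's public API
  (the descriptor layout is from libavutil/hwcontext_drm.h; the
  printing is purely illustrative):

      #include <stdio.h>
      #include <libavutil/frame.h>
      #include <libavutil/hwcontext_drm.h>

      // For AV_PIX_FMT_DRM_PRIME frames, data[0] holds the descriptor.
      static void dump_drm_prime(const AVFrame *frame)
      {
          const AVDRMFrameDescriptor *d =
              (const AVDRMFrameDescriptor *)frame->data[0];
          for (int i = 0; i < d->nb_objects; i++) // one per dmabuf
              printf("object %d: fd=%d size=%zu\n",
                     i, d->objects[i].fd, d->objects[i].size);
          for (int i = 0; i < d->nb_layers; i++)  // DRM format + planes
              printf("layer %d: fourcc=0x%08x planes=%d\n", i,
                     (unsigned)d->layers[i].format,
                     d->layers[i].nb_planes);
      }

  The fds can then be imported on the KMS side (e.g. drmPrimeFDToHandle
  plus drmModeAddFB2) without copying the frame data.
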
* hwdec: add mediacodec hardware decoder for IMGFMT_MEDIACODEC frames (Aman Gupta, 2017-10-09)

* vo_opengl: support float pixel formats (wm4, 2017-08-15)

  Like AV_PIX_FMT_GBRPF32LE.

* img_format: fix a comment (wm4, 2017-07-15)

  This was changed a while ago. Part of it might still apply to the old
  D3D hwaccel glue code, though.

* img_format: drop some unused things (wm4, 2017-06-30)

* vo_direct3d: remove non-working nv12 shader support (wm4, 2017-06-30)

  It never worked. It relied on some obscure texture format to provide
  the equivalent of GL_RG or GL_LUMINANCE_ALPHA, but no hardware ever
  seemed to report support for it. No idea what the correct way to do
  this is. On D3D11 it exists, of course.

  (Actually I'd like to remove the whole VO.)

* video: get rid of swapped packed YUV (wm4, 2017-06-30)

  Another legacy annoyance. The only place where packed YUV is still
  important is slightly older Apple hardware or drivers, which require
  it for efficient hardware decoding.

* video: drop some more IMGFMT aliases (wm4, 2017-06-29)

  For vo_opengl and vo_direct3d, these are supported in a generic way.

  For vf_vapoursynth, we could probably map its VSFormat struct in a
  generic way, but for now do some bullshit.

  vf_eq.c actually loses support for these formats. We could add
  generic support too (anything that has 8 bit planes will work), but
  why bother. The filter is deprecated anyway.

* video: drop some unused IMGFMT aliases (wm4, 2017-06-29)

  These formats are supported in a generic way. To get rid of
  IMGFMT_NV21, remove support from vo_direct3d.c completely.

* vo_opengl: rely on FFmpeg pixdesc a bit more (wm4, 2017-06-29)

  Add something that allows us to extract the component order from
  various RGBA formats. In fact, also handle YUV, GBRP, and XYZ formats
  with this.

  It introduces a new struct mp_regular_imgfmt, which hopefully will
  eventually replace struct mp_imgfmt_desc. The latter is still needed
  by a lot of code though, especially generic code. Also vo_opengl
  still uses the old one, so this commit is sort of incomplete.

  Due to its genericness, it's also possible that this commit
  introduces rendering bugs, or accepts formats it shouldn't accept.

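  A rough sketch of what such a descriptor can look like (field names
  approximate, not the exact img_format.h struct). The point is that
  component order becomes plain data, so e.g. RGBA vs. BGRA differ only
  in the components[] array:

      #include <stdint.h>

      struct mp_regular_imgfmt_plane {
          uint8_t num_components;
          uint8_t components[4]; // 0 = padding, 1..4 = component ID
      };
      struct mp_regular_imgfmt {
          uint8_t component_size; // bytes per component (1 or 2)
          uint8_t num_planes;
          struct mp_regular_imgfmt_plane planes[4];
          uint8_t chroma_w, chroma_h; // chroma subsampling factors
      };
      // e.g. packed BGRA: one plane, components = {3, 2, 1, 4}
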
* video/fmt-conversion, img_format: change license to LGPL (wm4, 2017-06-18)

  The problem with fmt-conversion.h is that "lucabe", who disagreed
  with LGPL, originally wrote it. But it was actually rewritten by
  "reimar" later. The original switch statement was replaced with a
  lookup table. No code other than the imgfmt2pixfmt() function
  signature survives. Neither the format pairs (PIXFMT<->IMGFMT), nor
  the concept of mapping them, can be copyrighted.

  So changing the license should be fine, because reimar and all other
  authors involved with the new code agreed to LGPL.

  We also don't consider format pairs added later as copyrightable.
  (The direct-mapping idea mentioned in the "Copyright" file seems
  attractive, and I might implement it later anyway.)

  Likewise, there might be some format names added to img_format.h
  which are not covered by relicensing agreements. These all affect
  "later" additions, and they follow either the FFmpeg PIXFMT naming or
  some other pre-existing logic, so this should be fine.

* img_format: drop unused aliases (wm4, 2017-06-18)

  vo_opengl uses those automatically via pixdesc.

* img_format: minor simplification (wm4, 2017-06-18)

* vo_opengl: hwdec_cuda: Support P016 output surfaces (Philip Langdale, 2016-11-22)

  The latest 375.xx nvidia drivers add support for P016 output
  surfaces. In combination with an ffmpeg change to return those
  surfaces, we can display them.

  The bulk of the work is related to knowing which format you're
  dealing with at the right time. Once you know, it's straightforward.

* img_format: remove some unneeded format definitions (wm4, 2016-09-28)

  They're still supported, just that they have no IMGFMT_ alias.

* hwdec/opengl: Add support for CUDA and cuvid/NvDecode (Philip Langdale, 2016-09-08)

  Nvidia's "NvDecode" API (up until recently called "cuvid") is a
  cross-platform, but nvidia proprietary, API that exposes their
  hardware video decoding capabilities. It is analogous to their DXVA
  or VDPAU support on Windows or Linux, but without using platform
  specific API calls.

  As a rule, you'd rather use DXVA or VDPAU as these are more mature
  and well supported APIs, but on Linux, VDPAU is falling behind the
  hardware capabilities, and there's no sign that nvidia are making the
  investments to update it. Most concretely, this means that there is
  no VP8/9 or HEVC Main10 support in VDPAU. On the other hand, NvDecode
  does export vp8/9 and partial support for HEVC Main10 (more on that
  below).

  ffmpeg already has support in the form of the "cuvid" family of
  decoders. Due to the design of the API, it is best exposed as a full
  decoder rather than an hwaccel. As such, there are decoders like
  h264_cuvid, hevc_cuvid, etc.

  These decoders support two output paths today - in both cases, NV12
  frames are returned, either in CUDA device memory or regular system
  memory. In the case of the system memory path, the decoders can be
  used as-is in mpv today with a command line like:

    mpv --vd=lavc:h264_cuvid foobar.mp4

  Doing this will take advantage of hardware decoding, but the cost of
  the memcpy to system memory adds up, especially for high resolution
  video (4K etc).

  To avoid that, we need an hwdec that takes advantage of CUDA's OpenGL
  interop to copy from device memory into OpenGL textures. That is what
  this change implements.

  The process is relatively simple, as only basic device context
  acquisition needs to be done by us - the CUDA buffer pool is managed
  by the decoder - thankfully.

  The hwdec looks a bit like the vdpau interop one - the hwdec
  maintains a single set of plane textures, and each output frame is
  repeatedly mapped into these textures to pass on. The frames are
  always in NV12 format, at least until 10bit output support emerges.

  The only slightly interesting part of the copying process is that
  CUDA works by associating PBOs, so we need to define these for each
  of the textures.

  TODO Items:
  * I need to add a download_image function for screenshots. This would
    do the same copy to system memory that the decoder's system memory
    output does.
  * There are items to investigate on the ffmpeg side. There appears to
    be a problem with timestamps for some content.

  Final note: I mentioned HEVC Main10. While there is no 10bit output
  support, NvDecode can return dithered 8bit NV12 so you can take
  advantage of the hardware acceleration. This particular mode requires
  compiling ffmpeg with a modified header (or possibly the CUDA 8 RC)
  and is not upstream in ffmpeg yet.

  Usage: You will need to specify vo=opengl and hwdec=cuda. Note that
  hwdec=auto will probably not work as it will try to use vdpau first.

    mpv --hwdec=cuda --vo=opengl foobar.mp4

  If you want to use filters that require frames in system memory, just
  use the decoder directly without the hwdec, as documented above.

* video: remove d3d11 video processor use from OpenGL interop (wm4, 2016-05-29)

  We now have a video filter that uses the d3d11 video processor, so it
  makes no sense to have one in the VO interop code. The VO uses it for
  formats not directly supported by ANGLE (so the video data is
  converted to a RGB texture, which ANGLE can take in).

  Change this so that the video filter is automatically inserted if
  needed. Move the code that maps RGB surfaces to its own interop
  backend. Add a bunch of new image formats, which are used to enforce
  the new constraints, and to automatically insert the filter only when
  needed.

  The added vf mechanism to auto-insert the d3d11vpp filter is very
  dumb and primitive, and will work only for this specific purpose. The
  format negotiation mechanism in the filter chain is generally not
  very pretty, and mostly broken as well. (libavfilter has a different
  mechanism, and these mechanisms don't match well, so vf_lavfi uses
  some sort of hack. It only works because hwaccel and non-hwaccel
  formats are strictly separated.)

  The RGB interop is now only used with older ANGLE versions. The only
  reason I'm keeping it is because it's relatively isolated (uses only
  existing mechanisms and adds no new concepts), and because I want to
  be able to compare the behavior of the old code with the new one for
  testing. It will be removed eventually.

  If ANGLE has NV12 interop, P010 is now handled by converting to NV12
  with the video processor, instead of converting it to RGB and using
  the old mechanism to import that as a texture.

* video: add IMGFMT_P010 alias (wm4, 2016-04-29)

  Gets rid of some silliness, and might be useful in the future.

* vd_lavc: add d3d11va hwdec (Kevin Mitchell, 2016-03-30)

  This commit adds the d3d11va-copy hwdec mode using the ffmpeg d3d11va
  api. Functions in common with dxva2 are handled in a separate
  decode/d3d.c file. A future commit will rewrite decode/dxva2.c to
  share this code.

* vo_opengl: refactor pass_read_video and texture binding (Niklas Haas, 2016-03-05)

  This is a pretty major rewrite of the internal texture binding
  mechanic, which makes it more flexible.

  In general, the difference between the old and current approaches is
  that now, all texture description is held in a struct img_tex and
  only explicitly bound with pass_bind. (Once bound, a texture unit is
  assumed to be set in stone and no longer tied to the img_tex.)

  This approach makes the code inside pass_read_video significantly
  more flexible and cuts down on the number of weird special cases and
  spaghetti logic. It also has some improvements, e.g. cutting down
  greatly on the number of unnecessary conversion passes inside
  pass_read_video (which was previously mostly done to cope with the
  fact that the alternative would have resulted in a combinatorial
  explosion of code complexity).

  Some other notable changes (and potential improvements):

  - Texture expansion is now *always* handled in pass_read_video, and
    the colormatrix never does this anymore. (Which means the code
    could probably be removed from the colormatrix generation logic,
    modulo some other VOs.)

  - struct fbo_tex now stores both its "physical" and "logical"
    (configured) size, which cuts down on the amount of width/height
    baggage on some function calls.

  - vo_opengl can now technically support textures with different bit
    depths (e.g. 10 bit luma, 8 bit chroma) - but the APIs it queries
    inside img_format.c don't export this (nor does ffmpeg support it,
    really), so the status quo of using the same tex_mul for all planes
    is kept.

  - dumb_mode is now only needed because of the indirect_fbo being in
    the main rendering pipeline. If we reintroduce p->use_indirect and
    thread a transform through the entire program, this could be
    skipped where unnecessary, allowing for the removal of dumb_mode.
    But I'm not sure how to do this in a clean way. (Which is part of
    why it got introduced to begin with.)

  - It would be trivial to resurrect source-shader now (it would just
    be one extra 'if' inside pass_read_video).

* video: remove some useless old RGB formats (wm4, 2016-01-25)

  Some VOs had support for these - remove them. Typically, these
  formats will have only some use in cases where using RGB software
  conversion with libswscale is faster than letting the VO/GPU do it
  (i.e. almost never).

  For the sake of testing this case, keep IMGFMT_RGB565. This is the
  least messy format, because it has no padding/alpha bits with unknown
  semantics.

  Note that decoding to these formats still works. We'll let libswscale
  repack the data to whatever the VO in use can take.

* img_format: add a generic flag for semi-planar formats (wm4, 2016-01-07)

* sub: find GBRP format automatically when rendering to RGB (wm4, 2015-12-24)

  This removes the need to define IMGFMT_GBRAP, which fixes compilation
  with the current Libav release.

  This also makes it automatically pick up a GBRP format with the same
  bit width. (Unfortunately, it seems libswscale does not support
  conversion to AV_PIX_FMT_GBRAP16, so our code falls back to 8 bit,
  removing precision for video covered by subtitles in cases where this
  code is used.)

  Also, when the source video is e.g. 10 bit YUV, upsample to 16 bit.
  Whether this is good or bad, it fixes behavior with alpha. Although
  I'm not sure if the alpha range is really correct ([0,2^16-1] vs.
  [0,255*256]). Keep in mind that libswscale doesn't even agree with
  the way we do it.

* sub: better alpha blending when rendering to alpha surfaces (wm4, 2015-12-24)

  This actually treats destination alpha correctly, and gives much
  better results than before. I don't know if this is perfectly correct
  yet, though. Slight difference with vo_opengl behavior suggests it
  might not be.

  Note that this does not affect VOs with true alpha support. vo_opengl
  does not use this code at all, and does the alpha calculations in
  OpenGL instead.

* vo_opengl: fix issues with some obscure pixel formats (wm4, 2015-12-07)

  The computation of the tex_mul variable was broken in multiple ways.
  This variable is used e.g. by debanding for moving expansion of 10
  bit fixed-point input to normalized range to another stage of
  processing.

  One obvious bug was that the rgb555 pixel format was broken. This
  format has component_bits=5, but obviously it's already sampled in
  normalized range, and does not need expansion. The tex_mul-free code
  path avoids this by not using the colormatrix. (The code was
  originally designed to work around dealing with the generally
  complicated pixel formats by only using the colormatrix in the YUV
  case.)

  Another possible bug was with 10 bit input. It expanded the input by
  bringing the [0,2^10) range to [0,1], and then treating the expanded
  input as 16 bit input. I didn't bother to check what this actually
  computed, but it's somewhat likely it was wrong anyway.

  Now it uses mp_get_csp_mul(), and disables expansion when computing
  the YUV matrix.

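  For reference, the expansion factor itself is simple arithmetic; the
  commit's fix is about where it gets applied. A hedged sketch (the
  function name is hypothetical; mpv's real helper is
  mp_get_csp_mul()):

      // A value with `component_bits` significant bits stored in a
      // texture with `tex_bits` per component samples as
      // v / (2^tex_bits - 1), but the shader wants
      // v / (2^component_bits - 1), so multiply by the ratio:
      static double expand_mul(int tex_bits, int component_bits)
      {
          if (component_bits >= tex_bits)
              return 1.0; // already normalized (e.g. rgb555: 5 in 5)
          return (double)((1 << tex_bits) - 1) /
                 ((1 << component_bits) - 1);
      }
      // expand_mul(16, 10) == 65535.0 / 1023.0, roughly 64.06
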
* vo_opengl: support all kinds of GBRP formats (wm4, 2015-10-18)

  Adds support for AV_PIX_FMT_GBRP9, AV_PIX_FMT_GBRP10,
  AV_PIX_FMT_GBRP12, AV_PIX_FMT_GBRP14, AV_PIX_FMT_GBRP16,
  AV_PIX_FMT_GBRAP, and AV_PIX_FMT_GBRAP16.

  (Not that it matters, because nobody uses these anyway.)

* video: remove VDA support (wm4, 2015-09-28)

  VideoToolbox is preferred. Now that FFmpeg released 2.8, there's no
  reason to support VDA anymore. In fact, we had a bug that made VDA
  not useable with older FFmpeg versions in some newer mpv releases.

  VideoToolbox is supported even on slightly older OSX versions, and if
  not, you still can run mpv without hw decoding.

* video: fix VideoToolbox/VDA autodetection (wm4, 2015-08-17)

  This affects vo_opengl_cb in particular: it'll most likely auto-load
  VDA, and then the VideoToolbox decoder won't work. And everything
  fails.

  This is mainly caused by FFmpeg using separate pixfmts for the _same_
  thing (CVPixelBuffers), simply because libavcodec's architecture
  demands that hwaccel backends are selected by pixfmts. (Which makes
  no sense, but now we have the mess.)

  So instead of duplicating FFmpeg's misdesign, just change the format
  to our own canonical one on the image output by the decoder. Now the
  GL interop code is exactly the same for VDA and VT, and we use the VT
  name only.

* hwdec: add VideoToolbox support (Sebastien Zwickert, 2015-08-05)

  VDA is being deprecated in OS X 10.11, so this is needed to keep
  hwdec working. The code needs libavcodec support, which was added
  recently (to FFmpeg git; libav doesn't support it).

  Signed-off-by: Stefano Pigozzi <stefano.pigozzi@gmail.com>

* Update license headers (Marcin Kurczewski, 2015-04-13)

  Signed-off-by: wm4 <wm4@nowhere>

* RPI support (wm4, 2015-03-29)

  This requires FFmpeg git master for accelerated hardware decoding.
  Keep in mind that FFmpeg must be compiled with --enable-mmal. Libav
  will also work.

  Most things work. Screenshots don't work with accelerated/opaque
  decoding (except using full window screenshot mode). Subtitles are
  very slow - even simple but huge overlays can cause frame drops.

  This always uses fullscreen mode. It uses dispmanx and mmal directly,
  and there are no window managers or anything on this level.

  vo_opengl also kind of works, but is pretty useless and slow. It
  can't use opaque hardware decoding (copy back can be used by forcing
  the option --vd=lavc:h264_mmal). Keep in mind that the dispmanx
  backend is preferred over the X11 ones in case you're trying on X11;
  but X11 is even more useless on RPI.

  This doesn't correctly reject extended h264 profiles and thus doesn't
  fall back to software decoding. The hw supports only up to the high
  profile, and will e.g. return garbage for Hi10P video.

  This sets a precedent of enabling hw decoding by default, but only if
  RPI support is compiled (which hopefully will stay disabled on
  desktop Linux platforms). While it's more or less required to use hw
  decoding on the weak RPI, it causes more problems than it solves on
  real platforms (Linux has the Intel GPU problem, OSX still has some
  cases with broken decoding). So I can live with this compromise of
  having different defaults depending on the platform.

  Raspberry Pi 2 is required. This wasn't tested on the original RPI,
  though at least decoding itself seems to work (but full playback was
  not tested).

* vo_opengl: move minor helper to common code (wm4, 2015-03-09)

  The generic image format code should carry most of the "knowledge"
  about image formats.

* vo_opengl: handle grayscale input better, add YA16 support (wm4, 2015-01-21)

  Simply clamp off the U/V components in the colormatrix, instead of
  doing something special in the shader.

  Also, since YA8/YA16 gave a plane_bits value of 16/32, and a
  colormatrix calculation overflowed with 32, add a component_bits
  field to the image format descriptor, which for YA8/YA16 returns 8/16
  (the wrong value had no bad consequences otherwise).

* vf_scale: replace ancient fallback image format selection (wm4, 2015-01-21)

  If the video output and the VO don't support the same format, a
  conversion filter needs to be inserted. Since a VO can support
  multiple formats, and the filter chain also can deal with multiple
  formats, you basically have to pick from a huge matrix of possible
  conversions.

  The old MPlayer code had a quite naive algorithm: it first checked
  whether any conversion from the list of preferred conversions
  matched, and if not, it fell back on checking a hardcoded list of
  output formats (more or less sorted by quality). This had some
  unintended side-effects, like not using obvious "replacement"
  formats, selecting the wrong colorspace, selecting a bit depth that
  is too high or too low, and more.

  Use avcodec_find_best_pix_fmt_of_list() provided by FFmpeg instead.
  This function was made for this purpose, and should select the "best"
  format. Libav provides a similar function, but with a different name
  - there is a function with the same name in FFmpeg, but it has
  different semantics (I'm not sure if Libav or FFmpeg fucked up here).

  This also removes handling of VFCAP_CSP_SUPPORTED vs.
  VFCAP_CSP_SUPPORTED_BY_HW, which has no meaning anymore, except
  possibly for filter chains with multiple scale filters.

  Fixes #1494.

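  The FFmpeg function takes an AV_PIX_FMT_NONE-terminated candidate
  list and scores each entry against the source format. A sketch with
  illustrative formats:

      #include <libavcodec/avcodec.h>

      enum AVPixelFormat vo_formats[] = {
          AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12, AV_PIX_FMT_RGB565,
          AV_PIX_FMT_NONE, // list terminator is required
      };
      int loss = 0;
      enum AVPixelFormat best = avcodec_find_best_pix_fmt_of_list(
          vo_formats, AV_PIX_FMT_YUV420P10, 0 /* has_alpha */, &loss);
      // `best` minimizes quality loss (depth, chroma, colorspace)
      // instead of relying on a hand-maintained preference table.
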
* video: remove swapped-endian image format aliases (wm4, 2014-11-05)

  Like the previous commit, this removes names only, not actual support
  for these formats.

* video: remove aliases for some rarely referenced image formats (wm4, 2014-11-05)

  These formats are still supported; you just can't reference them via
  defined constants directly. They are now handled via the generic
  passthrough. (If you want to use such a format, you either have to
  add the entry back, or use AV_PIX_FMT_* directly.)

* video: passthrough unknown AVPixelFormats (wm4, 2014-11-05)

  This is a rather radical change: instead of maintaining a whitelist
  of FFmpeg formats we support, we automatically support all formats.
  In general, a format which doesn't have an explicit IMGFMT_* name
  will be converted to a known format through libswscale, or will be
  handled by code which can treat pixel formats in a generic way using
  the pixel format description, like vo_opengl.

  AV_PIX_FMT_UYYVYY411 is a special case. It's packed YUV with chroma
  subsampling by 4 in both directions. Its component order is
  documented as "Cb Y0 Y1 Cr Y2 Y3", meaning there's one UV sample for
  4 Y samples. This means each pixel uses 1.5 bytes (4 pixels have 1 UV
  sample, so 4 bytes + 2 bytes). FFmpeg can actually handle this format
  with its generic mechanism in an extremely awkward way, but it
  doesn't work for us. Blacklist it, and hope no similar formats will
  be added in the future.

  Currently, the AV_PIX_FMT_*s allowed are limited to a numeric value
  of 500. More is not allowed, and there are some fixed size arrays
  that need to contain any possible format (look for IMGFMT_END
  dependencies).

  We could make this simpler by replacing IMGFMT_* with AV_PIX_FMT_*
  through the whole codebase. But for now, this is better, because we
  can compensate for formats missing in Libav or older FFmpeg versions,
  like AV_PIX_FMT_RGB0 and others.

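  A hedged sketch of the passthrough idea (IMGFMT_AVPIXFMT_START and
  the function name are illustrative; av_pix_fmt_desc_get() is real
  FFmpeg API):

      #include <libavutil/pixdesc.h>

      static int imgfmt_from_pixfmt(enum AVPixelFormat fmt)
      {
          // Blacklist the known-broken case instead of whitelisting
          // everything else.
          if (fmt == AV_PIX_FMT_UYYVYY411)
              return 0;
          const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(fmt);
          if (!desc || fmt >= 500) // fixed-size array limit from above
              return 0;
          return IMGFMT_AVPIXFMT_START + fmt; // direct numeric mapping
      }
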
* video: get hwaccel flag from pixdesc (wm4, 2014-11-05)

* video: clarify what IMGFMT_DXVA2 is (wm4, 2014-10-26)

* video: initial dxva2 support (wm4, 2014-10-25)

  Shamelessly stolen from ffmpeg. It probably doesn't work - you can
  debug it yourself.

* Move compat/ and bstr/ directory contents somewhere else (wm4, 2014-08-29)

  bstr.c doesn't really deserve its own directory, and compat had just
  a few files, most of which may as well be in osdep. There isn't
  really any justification for these extra directories, so get rid of
  them.

  The compat/libav.h was empty - just delete it. We changed our
  approach to API compatibility, and will likely not need it anymore.

* build: deal with endian mess (wm4, 2014-07-10)

  There is no standard mechanism for detecting endianness. Doing it at
  compile time in a portable way is probably hard. Doing it properly
  with a configure check is probably hard too. Using the endian
  definitions in <sys/types.h> (which usually includes <endian.h>,
  which is not available everywhere) works under some circumstances,
  but the previous commit broke it on OSX.

  Ideally, all code should be endian independent, but that is not
  possible due to the dependencies (such as FFmpeg, some video output
  APIs, some audio output APIs).

  Create a header osdep/endian.h, which contains various fallbacks.
  Note that the last fallback uses libavutil; however, it's not clear
  whether AV_HAVE_BIGENDIAN is a public symbol, or whether including
  <libavutil/bswap.h> really makes it visible. And in fact we don't
  want to pollute the namespace with libavutil definitions either. Thus
  it's only the last fallback.

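  A sketch of such a fallback chain (the MP_BIG_ENDIAN name is
  illustrative; mpv's real header is osdep/endian.h, and
  AV_HAVE_BIGENDIAN is defined in <libavutil/avconfig.h>):

      #if defined(__BYTE_ORDER__) && defined(__ORDER_BIG_ENDIAN__)
      // gcc/clang predefine these - the painless case.
      #define MP_BIG_ENDIAN (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
      #elif defined(BYTE_ORDER) && defined(BIG_ENDIAN)
      // <sys/types.h> pulled in <endian.h> or an equivalent.
      #define MP_BIG_ENDIAN (BYTE_ORDER == BIG_ENDIAN)
      #else
      // Last fallback: libavutil's build configuration.
      #include <libavutil/avconfig.h>
      #define MP_BIG_ENDIAN AV_HAVE_BIGENDIAN
      #endif
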
* video: synchronize mpv rgb pixel format names with ffmpeg names (wm4, 2014-06-14)

  This affects packed RGB formats up to 16 bits per pixel. The old
  mplayer names used LSB-to-MSB order, while FFmpeg (and some other
  libraries) use MSB-to-LSB.

  Nothing should change with this commit, i.e. no bit order or endian
  bugs should be added or fixed. In some cases, the name stays the
  same, even though the byte order changes, e.g. RGB8->BGR8 and
  BGR8->RGB8, and this affects the user-visible names too; this might
  cause confusion.

* video: automatically strip "le" and "be" suffix from pixel format names (wm4, 2014-06-14)

  These suffixes are annoying when they're redundant, so strip them
  automatically. On little endian machines, always strip the "le"
  suffix, and on big endian machines vice versa (although I don't think
  anyone ever tried to run mpv on a big endian machine).

  Since pixel format strings are returned by a certain function and we
  can't just change static strings, use a trick to pass a stack buffer
  transparently. But this also means the string can't be permanently
  stored by the caller, so vf_dlopen.c has to be updated. There seems
  to be no other case where this is done, though.

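  The "trick" is most likely a wrapper macro with a C99 compound
  literal: the buffer lives on the caller's stack only until the end of
  the enclosing full expression, which is exactly why the result can't
  be stored permanently. A hedged sketch (names and buffer size are
  illustrative):

      #include <stdbool.h>
      #include <string.h>

      char *mp_imgfmt_to_name_buf(char *buf, size_t buf_size, int fmt);

      // Each call site gets a fresh temporary stack buffer.
      #define mp_imgfmt_to_name(fmt) \
          mp_imgfmt_to_name_buf((char[16]){0}, 16, (fmt))

      // Inside the function, the native-endian suffix is chopped off:
      static void strip_endian_suffix(char *name, bool big_endian_host)
      {
          const char *suffix = big_endian_host ? "be" : "le";
          size_t len = strlen(name);
          if (len > 2 && strcmp(name + len - 2, suffix) == 0)
              name[len - 2] = '\0';
      }
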
* vdpau: move RGB surface management out of the VO (wm4, 2014-05-22)

  Integrate it with the existing surface allocator in vdpau.c. The
  changes are a bit violent, because the vdpau API is so
  non-orthogonal: compared to video surfaces, output surfaces use a
  different ID type, different format types, and different API
  functions.

  Also, introduce IMGFMT_VDPAU_OUTPUT for VdpOutputSurfaces wrapped in
  mp_image, rather than hacking it. This is a bit cleaner.