利用者:Namemiso/sandbox/av1
Developed by | Alliance for Open Media |
---|---|
Initial release | 28 March 2018 |
Type of format | Compressed video |
Contained by | |
Extended from | |
Open format? | Yes |
Website | aomediacodec.github.io/av1-spec |
AOMedia Video 1 (AV1) is an open, royalty-free video compression codec designed for video transmission over the Internet. It is intended to replace Google's VP9 and MPEG's HEVC/H.265, and its official version was released on 28 March 2018.[1] It is developed by the Alliance for Open Media (AOMedia), a consortium founded in 2015 and funded by leading companies from the semiconductor industry, video-on-demand providers and web browser developers. The codec is the primary contender for standardization by NetVC, the video standardization working group of the Internet Engineering Task Force (IETF),[2] which has compiled a list of criteria the standard has to meet.[3] Together with the audio compression format Opus, AV1 is planned to become usable for HTML5 web video and WebRTC in the WebM container format.[4]
History
The first official announcement of the project came with the press release on the formation of the Alliance on 1 September 2015.[5] The increased usage of its predecessor VP9 is attributed to confidence in the Alliance and development of AV1 as well as the pricey and complicated licensing situation of HEVC (High Efficiency Video Coding).[6][7]
The roots of the project precede the Alliance, however. Individual contributors started experimental technology platforms years before: Xiph's/Mozilla's Daala already published code in 2010, VP10 was announced on 12 September 2014,[8] and Cisco's Thor was published on 11 August 2015. The first version 0.1.0 of the AV1 reference codec was published on 7 April 2016.
Soft feature freeze was at the end of October 2017, but development of a few significant features was allowed to continue beyond that point. The bitstream format was projected to be frozen in January 2018,[9] but this was delayed by unresolved critical bugs, final changes to transforms, syntax and motion vector prediction, and the completion of the legal analysis.[10] The Alliance announced the release of the AV1 bitstream specification on 28 March 2018, along with a reference encoder, a reference decoder, test files ("reference streams") and software bindings.[11][要非一次資料] However, as of 29 March 2018, the specification is still being edited, and is marked "draft" until editing finishes.[12]
Martin Smole from AOM member Bitmovin said that the computational efficiency of the reference encoder was the greatest remaining challenge after the bitstream format freeze.[13] While work on the format was still ongoing, the encoder was not targeted for production use and did not receive any speed optimizations; as a result, it runs orders of magnitude slower than, for example, existing HEVC encoders. Development is planned to shift its focus towards maturing the reference encoder after the freeze.
Purpose
AV1 aims to be a video format for the web that is both state of the art and royalty free.[14] The mission of the Alliance for Open Media remains the same as the mission of the WebM project.[15]
To fulfill the goal of being royalty free, the development process is such that no feature is adopted before it has been independently double-checked that it does not infringe on patents of competing companies.[15] This contrasts with its main competitor HEVC, for which IPR review was not part of the standardization process.[6] The latter practice is stipulated in ITU-T's definition of an open standard. The case of HEVC's independent patent pools has been characterized by critical observers as a failure of price management.[16][17]
Under patent rules adopted from the World Wide Web Consortium (W3C), technology contributors license their AV1-connected patents to anyone, anywhere, anytime based on reciprocity, i.e. as long as the user does not engage in patent litigation.[18] As a defensive condition, anyone engaging in patent litigation loses the right to the patents of all patent holders.[6]
The performance goals include "a step up from VP9 and HEVC" in efficiency for a low increase in complexity.[15] NetVC's efficiency goal is a 25% improvement over HEVC.[3] The primary complexity concern is software decoding, since hardware support will take time to reach users.[15] However, for WebRTC, live encoding performance is also relevant, which is Cisco's agenda: Cisco is a manufacturer of videoconferencing equipment, and their Thor contributions aim at "reasonable compression at only moderate complexity".[17]
In terms of features, it is specifically designed for real-time applications (especially WebRTC) and for higher resolutions (wider color gamuts, higher frame rates, UHD) than the typical usage scenarios of the current generation (H.264) of video formats, where it is expected to achieve its biggest efficiency gains. It is therefore planned to support the color space of ITU-R Recommendation BT.2020 and 10 and 12 bits of precision per color component.[19] AV1 is primarily intended for lossy encoding, although lossless compression is supported as well.[20]
AV1-based containers have also been proposed as a replacement for JPEG, similar to Better Portable Graphics and High Efficiency Image File Format which wrap HEVC.[21]
Technology
AV1 is a traditional block-based frequency transform format featuring new techniques, several of which were developed in experimental formats that have been testing technology for a next-generation format after HEVC and VP9.[22] Based on Google's experimental VP9 evolution project VP10,[23] AV1 incorporates additional techniques developed in Xiph's/Mozilla's Daala and Cisco's Thor.
Developer(s) | Alliance for Open Media |
---|---|
Written in | C, assembly |
License | BSD License (free) |
Website | aomedia.googlesource.com/aom |
The Alliance publishes a reference implementation written in C and assembly language (aomenc, aomdec) as free software under the terms of the BSD 2-Clause License.[24] Development happens in public and is open for contributions, regardless of AOM membership.
The development process is such that coding tools are added to the reference codebase as experiments, controlled by flags that enable or disable them at build time, for review by other group members as well as by specialized teams that help with, and ensure, hardware friendliness and compliance with intellectual property rights (TAPAS). Once the feature gains some support in the community, the experiment can be enabled by default, and it ultimately has its flag removed once all of the reviews are passed.[25] Experiment names are lowercased in the configure script and uppercased in conditional compilation flags.[26]
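As an illustration of this workflow, the following minimal sketch mimics how a coding tool could be guarded by such a build-time flag; the experiment name my_experiment and the functions are hypothetical and are not taken from the libaom source.

```c
/* Hypothetical sketch of the experiment-flag pattern described above.
 * A configure script would define CONFIG_MY_EXPERIMENT=1 when the
 * lowercase experiment "my_experiment" is enabled at build time. */
#include <stdio.h>

#ifndef CONFIG_MY_EXPERIMENT
#define CONFIG_MY_EXPERIMENT 0   /* off unless the build enables it */
#endif

#if CONFIG_MY_EXPERIMENT
static int encode_block_experimental(int input) {
  return input * 2;   /* stand-in for an experimental coding tool */
}
#endif

static int encode_block(int input) {
#if CONFIG_MY_EXPERIMENT
  return encode_block_experimental(input);
#else
  return input;       /* baseline path when the experiment is disabled */
#endif
}

int main(void) {
  printf("%d\n", encode_block(21));
  return 0;
}
```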
Data transformation
To transform pixel data to the frequency domain, AV1 includes a range of specialized frequency transforms like rectangular versions of the DCT and asymmetric versions of the DST for edge blocks. It can combine two one-dimensional transforms in order to use different transforms for the horizontal and the vertical dimension (ext_tx[27]).[28]
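To illustrate the idea of combining two one-dimensional transforms, the sketch below applies a naive floating-point DCT-II across rows and a DST-style kernel across columns of a small block. It is a conceptual example only and does not reproduce AV1's integer transform kernels or normalization.

```c
/* Conceptual sketch of a separable 2-D transform that uses different
 * 1-D kernels horizontally (DCT-II) and vertically (DST), as ext_tx
 * allows. Floating-point and unnormalized; AV1 uses integer kernels. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define N 4

static void dct1d(const double *in, double *out) {   /* DCT-II kernel */
  for (int k = 0; k < N; k++) {
    double s = 0;
    for (int n = 0; n < N; n++)
      s += in[n] * cos(M_PI / N * (n + 0.5) * k);
    out[k] = s;
  }
}

static void dst1d(const double *in, double *out) {   /* DST-style kernel */
  for (int k = 0; k < N; k++) {
    double s = 0;
    for (int n = 0; n < N; n++)
      s += in[n] * sin(M_PI / N * (n + 0.5) * (k + 1));
    out[k] = s;
  }
}

int main(void) {
  double block[N][N] = {{52,55,61,66},{70,61,64,73},{63,59,55,90},{67,61,68,104}};
  double tmp[N][N], coeff[N][N], col_in[N], col_out[N];

  /* Horizontal pass: DCT on each row. */
  for (int r = 0; r < N; r++) dct1d(block[r], tmp[r]);

  /* Vertical pass: DST on each column of the row-transformed block. */
  for (int c = 0; c < N; c++) {
    for (int r = 0; r < N; r++) col_in[r] = tmp[r][c];
    dst1d(col_in, col_out);
    for (int r = 0; r < N; r++) coeff[r][c] = col_out[r];
  }

  for (int r = 0; r < N; r++) {
    for (int c = 0; c < N; c++) printf("%8.1f ", coeff[r][c]);
    printf("\n");
  }
  return 0;
}
```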
Partitioning
Prediction can happen for bigger units (≤128×128), and they can be subpartitioned in more ways. "T-shaped" partitioning schemes for coding units are introduced, a feature developed for VP10. Two separate predictions can now be used on spatially different parts of a block using a smooth, wedge-shaped transition line (wedge-partitioned prediction).[29] This enables more accurate separation of objects without the traditional staircase lines along the boundaries of square blocks.
More encoder parallelism is possible thanks to configurable prediction dependency between tile rows.[30]
Prediction
AV1 performs internal processing in higher precision (10 or 12 bits per sample), which leads to compression improvement due to smaller rounding errors in reference imagery.
Predictions can be combined in more advanced ways (than a uniform average) in a block (compound prediction), including smooth and sharp transition gradients in different directions (wedge-partitioned prediction) as well as implicit masks that are based on the difference between the two predictors. This allows combination of either two inter predictions or an inter and an intra prediction to be used in the same block.[31][29]
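As a simplified illustration of such a masked compound prediction, the sketch below blends two predictors with a soft, wedge-like mask that transitions across a diagonal line. The mask shape and weights are invented for illustration and are not AV1's actual wedge codebook.

```c
/* Simplified sketch of blending two predictions with a wedge-like mask:
 * one predictor dominates on each side of a diagonal boundary, with a
 * smooth transition near it. Weights are illustrative, not AV1's. */
#include <stdio.h>
#include <stdint.h>

#define BW 8
#define BH 8

static uint8_t blend_pixel(uint8_t p0, uint8_t p1, int w0) {
  /* w0 is the weight of predictor 0 out of 64. */
  return (uint8_t)((p0 * w0 + p1 * (64 - w0) + 32) >> 6);
}

int main(void) {
  uint8_t pred0[BH][BW], pred1[BH][BW], out[BH][BW];

  /* Two hypothetical predictions: a dark area and a brighter area. */
  for (int y = 0; y < BH; y++)
    for (int x = 0; x < BW; x++) { pred0[y][x] = 60; pred1[y][x] = 180; }

  for (int y = 0; y < BH; y++) {
    for (int x = 0; x < BW; x++) {
      int d = x - y;            /* signed distance from the diagonal x == y */
      int w0;                   /* weight of pred0, 0..64 */
      if (d <= -2)      w0 = 64;
      else if (d >= 2)  w0 = 0;
      else              w0 = 32 - d * 16;   /* smooth ramp near the boundary */
      out[y][x] = blend_pixel(pred0[y][x], pred1[y][x], w0);
      printf("%4d", out[y][x]);
    }
    printf("\n");
  }
  return 0;
}
```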
A frame can reference 6 instead of 3 of the 8 available frame buffers for temporal (inter) prediction.
The Warped Motion (warped_motion[32])[28] and Global Motion (global_motion[33]) tools in AV1 aim to reduce redundant information in motion vectors by recognizing patterns arising from camera motion.[30][28] They implement ideas that preceding formats such as MPEG-4 ASP already tried to exploit, albeit with a novel approach that works in three dimensions. A set of warping parameters for a whole frame can be signalled in the bitstream, or blocks can use implicit local parameters that are computed from surrounding blocks.
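The sketch below shows the general idea behind such a motion model: a small set of affine parameters maps each pixel of the current block to a position in the reference frame, instead of signalling per-block translational motion vectors. The parameter layout is a generic affine model and not AV1's exact signalling.

```c
/* Generic affine motion sketch: map a pixel (x, y) of the current frame
 * to a sub-pixel position in the reference frame using six affine
 * parameters, e.g. derived from camera motion. Simplified illustration
 * only; AV1's global/warped motion signalling differs in detail. */
#include <stdio.h>

typedef struct {
  double a, b, c;   /* x' = a*x + b*y + c */
  double d, e, f;   /* y' = d*x + e*y + f */
} AffineModel;

static void warp_point(const AffineModel *m, double x, double y,
                       double *rx, double *ry) {
  *rx = m->a * x + m->b * y + m->c;
  *ry = m->d * x + m->e * y + m->f;
}

int main(void) {
  /* Slight rotation + zoom + translation, as from camera motion. */
  AffineModel m = { 1.02, -0.05,  3.0,
                    0.05,  1.02, -1.5 };
  for (int y = 0; y < 4; y++) {
    for (int x = 0; x < 4; x++) {
      double rx, ry;
      warp_point(&m, x, y, &rx, &ry);
      printf("(%d,%d)->(%.2f,%.2f)  ", x, y, rx, ry);
    }
    printf("\n");
  }
  return 0;
}
```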
For intra prediction, there are 56 (instead of 8) angles for directional prediction, and weighted filters for per-pixel extrapolation. The "TrueMotion" predictor was replaced with a Paeth predictor, which looks at the difference from the known pixel in the above-left corner to the pixel directly above and to the pixel directly left of the new one, and then chooses the one that lies in the direction of the smaller gradient as the predictor. A palette predictor is available for blocks with very few colors, such as some computer screen content. Correlations between the luminosity and the color information can now be exploited with a predictor for chroma blocks that is based on samples from the luma plane (cfl).[28] In order to reduce discontinuities along the borders of inter-predicted blocks, predictors can be overlapped and blended with those of neighbouring blocks (overlapped block motion compensation).[34]
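A minimal sketch of the classic Paeth rule for a single pixel is shown below: estimate the new pixel as left + above − above-left and predict whichever known neighbour is closest to that estimate. AV1's PAETH_PRED mode follows the same idea per pixel over a whole block, though the details here are simplified.

```c
/* Sketch of the Paeth rule used for intra prediction: estimate the new
 * pixel as left + above - above_left and predict whichever of the three
 * known neighbours is closest to that estimate. Simplified illustration. */
#include <stdio.h>
#include <stdlib.h>

static int paeth_predict(int left, int above, int above_left) {
  int base = left + above - above_left;
  int d_left   = abs(base - left);
  int d_above  = abs(base - above);
  int d_corner = abs(base - above_left);
  if (d_left <= d_above && d_left <= d_corner) return left;
  if (d_above <= d_corner) return above;
  return above_left;
}

int main(void) {
  /* A vertical gradient: the predictor follows the pixel above. */
  printf("%d\n", paeth_predict(100, 110, 98));   /* prints 110 */
  /* A horizontal gradient: the predictor follows the pixel to the left. */
  printf("%d\n", paeth_predict(120, 102, 100));  /* prints 120 */
  return 0;
}
```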
Quantization
AV1 has new optimized quantization matrices.[35]
Filters
For the in-loop filtering step, the integration of Thor's constrained low-pass filter and Daala's directional deringing filter has been fruitful: the combined Constrained Directional Enhancement Filter (cdef[36]) exceeds the results of using the original filters separately or together.[37][38]
It is an edge-directed conditional replacement filter that smoothes blocks with configurable (signaled) strength roughly along the direction of the dominant edge to eliminate ringing artifacts.
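The "conditional replacement" idea can be illustrated in a very reduced one-dimensional form: a sample is nudged toward its neighbours only by differences that stay below a signalled strength, so genuine edges are left alone while small ringing oscillations are smoothed. The sketch below is a conceptual example, not the CDEF algorithm from the specification.

```c
/* Reduced 1-D illustration of a constrained (conditional replacement)
 * filter: each sample moves toward its neighbours, but contributions
 * larger than the signalled strength (real edges) are ignored.
 * Conceptual only; not the CDEF algorithm. */
#include <stdio.h>
#include <stdlib.h>

#define LEN 10

static int constrain(int diff, int strength) {
  return (abs(diff) > strength) ? 0 : diff;   /* drop edge-sized differences */
}

int main(void) {
  int line[LEN] = { 10, 12, 9, 11, 10, 80, 82, 79, 81, 80 };  /* edge at index 5 */
  int filtered[LEN];
  int strength = 6;   /* signalled filter strength */

  filtered[0] = line[0];
  filtered[LEN - 1] = line[LEN - 1];
  for (int i = 1; i < LEN - 1; i++) {
    int sum = constrain(line[i - 1] - line[i], strength)
            + constrain(line[i + 1] - line[i], strength);
    filtered[i] = line[i] + sum / 2;   /* smooth only toward similar neighbours */
  }

  for (int i = 0; i < LEN; i++) printf("%3d ", filtered[i]);
  printf("\n");
  return 0;
}
```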
There is also the loop restoration filter (loop_restoration) to remove blur artifacts due to block processing.[28]
Film grain synthesis (film_grain) improves coding of noisy signals using a parametric video coding approach. Due to the randomness inherent to film grain noise, this signal component is traditionally either very expensive to code or prone to getting damaged or lost, possibly leaving serious coding artifacts as residue. This tool circumvents these problems using analysis and synthesis, replacing parts of the signal with a visually similar synthetic texture, based solely on subjective visual impression instead of objective similarity. It removes the grain component from the signal, analyzes its non-random characteristics, and transmits only descriptive parameters to the decoder, which adds back a synthetic, pseudorandom noise signal that is shaped after the original component.
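A rough sketch of the synthesis side follows: the decoder generates a seeded pseudorandom noise sample, shapes it with a transmitted parameter (here a single scaling value), and adds it to the reconstructed, denoised pixel. The parameter set is invented for illustration; AV1's film grain model is more elaborate (autoregressively filtered grain and a piecewise, intensity-dependent scaling function).

```c
/* Rough illustration of decoder-side film grain synthesis: a seeded
 * pseudorandom grain sample is scaled by a transmitted parameter and
 * added to the reconstructed pixel. Invented parameterization; AV1's
 * actual model uses AR-filtered grain and piecewise scaling. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
  uint16_t seed;      /* makes the decoder's noise reproducible */
  int grain_scale;    /* transmitted strength of the synthetic grain */
} GrainParams;

/* Tiny LCG standing in for a spec-defined pseudorandom generator. */
static int grain_sample(uint16_t *state) {
  *state = (uint16_t)(*state * 25173u + 13849u);
  return ((*state >> 8) & 0xff) - 128;   /* roughly zero-mean sample */
}

int main(void) {
  GrainParams p = { 0x1234, 8 };
  uint16_t state = p.seed;
  int reconstructed[8] = { 100, 101, 99, 100, 102, 100, 98, 101 };

  for (int i = 0; i < 8; i++) {
    int noise = (grain_sample(&state) * p.grain_scale) >> 6;
    int out = reconstructed[i] + noise;
    if (out < 0) out = 0;
    if (out > 255) out = 255;
    printf("%4d", out);
  }
  printf("\n");
  return 0;
}
```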
Entropy coding
Daala's entropy coder (daala_ec[39][40]), a non-binary arithmetic coder, was selected to replace VP9's binary entropy coder. The use of non-binary arithmetic coding helps evade patents, but also adds bit-level parallelism to an otherwise serial process, reducing clock rate demands on hardware implementations.[7] In other words, it approaches the effectiveness of modern binary arithmetic coding such as CABAC while working on an alphabet larger than binary, and hence gains speed, as in Huffman coding (though it is not as simple and fast as Huffman coding).
AV1 also gained the ability to adapt the symbol probabilities in the arithmetic coder per coded symbol instead of per frame (ec_adapt[41]).[28][6]
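The sketch below shows the general mechanism of per-symbol adaptation for a multi-symbol coder: after each coded symbol the cumulative distribution is nudged toward that symbol, so probabilities track local statistics instead of staying fixed for a whole frame. The update rule here is a generic exponential-decay update, not AV1's exact formula.

```c
/* Generic sketch of per-symbol probability adaptation for a multi-symbol
 * (non-binary) entropy coder: after each symbol, move the CDF a small
 * step toward a distribution concentrated on that symbol. The update
 * rule is a generic exponential decay, not AV1's exact formula. */
#include <stdio.h>

#define NSYMS 4
#define PROB_TOTAL 32768   /* fixed-point probability scale */
#define RATE 5             /* adaptation speed: larger = slower */

/* cdf[k] = P(symbol <= k), scaled so that cdf[NSYMS-1] == PROB_TOTAL. */
static void update_cdf(int cdf[NSYMS], int symbol) {
  for (int k = 0; k < NSYMS - 1; k++) {
    int target = (k >= symbol) ? PROB_TOTAL : 0;
    cdf[k] += (target - cdf[k]) >> RATE;
  }
  cdf[NSYMS - 1] = PROB_TOTAL;
}

int main(void) {
  int cdf[NSYMS] = { 8192, 16384, 24576, 32768 };   /* start uniform */
  int symbols[8] = { 0, 0, 1, 0, 0, 0, 2, 0 };      /* mostly symbol 0 */

  for (int i = 0; i < 8; i++) {
    update_cdf(cdf, symbols[i]);
    printf("after %d: %5d %5d %5d %5d\n",
           symbols[i], cdf[0], cdf[1], cdf[2], cdf[3]);
  }
  return 0;
}
```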
Former experiments that have been fully integrated
This list may not be complete.
Historic build-time flag | Explanation |
---|---|
alt_intra [42] | A new prediction mode suitable for smooth regions[28] |
cb4x4 [43] | |
cdef [36] | Constrained Directional Enhancement Filter: The merge of Daala's directional deringing filter + Thor's constrained low pass filter[37][44] |
chroma_sub8x8 [45] | |
compound_segment [46] | |
convolve_round [47] | |
delta_q [48] | Delta quantization step: Arbitrary adaptation of quantizers within a frame[28] |
daala_ec [39] | The Daala entropy coder (a non-binary arithmetic coder)[40] |
ec_adapt [41] | Adapts symbol probabilities on the fly,[28] as opposed to per frame as in VP9[6] |
ec_smallmul [49] | A hardware optimization of daala_ec[44] |
ext_inter [50] | Extended inter[30][28] |
ext_refs [51] | Extended reference frames:[28] Adds more reference frames, as described in Adaptive multi-reference prediction using a symmetric framework[52] |
ext_tx [27] | Ability to choose different horizontal and vertical transforms[28] |
filter_7bit [53] | 7-bit interpolation filters[54] |
global_motion [33] | Global Motion[30][28] |
interintra [55] | Inter-intra prediction, part of wedge-partitioned prediction[29] |
motion_var [56] | Renamed from obmc.[57] Overlapped Block Motion Compensation: Reduce discontinuities at block edges using different motion vectors[28] |
new_multisymbol [58] | |
one_sided_compound [59] | |
palette [60] | Palette prediction: Intra coding tool for screen content[61] |
palette_delta_encoding [62] | |
rect_intra_pred [63] | |
rect_tx [64] | Rectangular transforms[65] |
ref_mv [66] | Better methods for coding the motion vector predictors through an implicit list of spatial and temporal neighbor MVs[28] |
smooth_hv [67] | |
tile_groups [68] | |
var_tx [69] | |
warped_motion [32] | Warped Motion[28] |
wedge [46] | Wedge-partitioned prediction[29] |
Current experiments
Only explained experiments are listed.
Enabled by default | Build-time flag[70] | Explanation |
---|---|---|
Yes | aom_qm | Quantization Matrices[35] |
Yes | cdef_singlepass | An optimization of cdef[38] |
Yes | cfl | Chroma from Luma[28] |
Yes | dist_8x8 | A merge of former experiments cdef_dist and daala_dist.[26] Daala_dist is Daala's distortion function.[7] |
Yes | dual_filter | Ability to choose different horizontal and vertical interpolation filters for subpixel motion compensation[28] |
Yes | ext_intra | Extended intra:[30] 65 angular intra prediction modes[28] |
No | ext_tile | Option of no dependency across tile rows[28] |
No | filter_intra | Interpolate the reference samples before prediction to reduce the impact of quantization noise[28] |
Yes | loop_restoration | Remove blur artifacts due to block processing[28] |
Yes | txmg | Merge high/low bitdepth transforms[71] |
Notable features not included
Daala Transforms implements discrete cosine and sine transforms that its authors describe as "better in every way" than the txmg set of transforms that prevailed in AV1.[72][73][74][75][76] Both the txmg and daala_tx experiments have merged high and low bitdepth code paths (unlike VP9), but daala_tx achieved full embedding of smaller transforms within larger ones, as well as using fewer multiplies, which could have further reduced the cost of hardware implementations. The Daala transforms were kept as an option in the experimental codebase until late January 2018, but changing hardware blocks at a late stage was a general concern for delaying hardware availability.[77]
The encoding complexity of Daala's Perceptual Vector Quantization was too high within the already complex framework of AV1.[7] The rate-distortion heuristic dist_8x8 aims to speed up the encoder by a sizable factor, with or without PVQ,[7] but PVQ was ultimately dropped.
ANS was the other non-binary arithmetic coder, developed in parallel with Daala's entropy coder. Of the two, Daala EC was the more hardware-friendly, while ANS was faster to decode in software.[6]
Quality and efficiency
A first comparison from the beginning of June 2016[78] found AV1 roughly on par with HEVC, as did one using code from late January 2017.[79]
In April 2017, using the 8 enabled experimental features at the time (of 77 total), Bitmovin was able to demonstrate favorable objective metrics, as well as visual results, compared to HEVC on the Sintel and Tears of Steel animated films.[80] A follow-up comparison by Jan Ozer of Streaming Media Magazine confirmed this, and concluded that "AV1 is at least as good as HEVC now".[81]
Ozer noted that his and Bitmovin's results contradicted a comparison by the Fraunhofer Institute for Telecommunications from late 2016[82] that had found AV1 38.4% less efficient than HEVC, underperforming even H.264/AVC, and explained the discrepancy by his use of encoding parameters endorsed by each encoder vendor, as well as by the additional features in the newer AV1 encoder.
Tests from Netflix showed that, based on measurements with PSNR and VMAF at 720p, AV1 could be about 25% more efficient than VP9 (libvpx), at the expense of a 4–10 fold increase in encoding complexity.[83] Similar conclusions with respect to quality were drawn from a test conducted by Moscow State University researchers, who found that VP9 required 31% and HEVC 22% more bitrate than AV1 for the same level of quality.[84] The researchers also found that the AV1 encoder used was operating at a speed “2500–3500 times lower than competitors”, while noting that it had not been optimized yet.[85]
AOMedia provides a list of test results on their website.
Adoption
Like its predecessor VP9, AV1 can be used inside WebM container files alongside the Opus audio format. These formats are well supported among web browsers, with the exception of Safari (which only supports Opus) and the discontinued Internet Explorer (prior to Edge) (see VP9 in HTML5 video).
From November 2017 onwards, nightly builds of the Firefox web browser contained preliminary support for AV1.[86][87] Upon its release on 9 February 2018, version 3.0.0 of the VLC media player shipped with an experimental AV1 decoder.[88]
It is expected that Alliance members have interest in adopting the format, in respective ways, once the bitstream is frozen.[19][80] The member companies represent several industries, including browser vendors (Apple, Google, Mozilla, Microsoft), content distributors (Apple, Amazon, Facebook, Google, Hulu, Netflix) and hardware designers (AMD, Apple, ARM, Broadcom, Intel, Nvidia).[6][7][89] Video streaming service YouTube declared intent to transition to the new format as fast as possible, starting with highest resolutions within six months after the finalization of the bitstream format.[19] Netflix "expects to be an early adopter of AV1".[15]
According to Mukund Srinivasan, chief business officer of AOM member Ittiam, early hardware support will be dominated by software running on non-CPU hardware (such as GPGPU, DSP or shader programs, as is the case with some VP9 hardware implementations), as fixed-function hardware will take 12–18 months after bitstream freeze until chips are available, plus 6 months for products based on those chips to hit the market.[25]
Software
References
- ^ Zimmerman, Steven (15 May 2017). “Google’s Royalty-Free Answer to HEVC: A Look at AV1 and the Future of Video Codecs”. XDA Developers. 14 June 2017時点のオリジナルよりアーカイブ。10 June 2017閲覧。
- ^ Rick Merritt (EE Times), 30 June 2016: Video Compression Feels a Pinch
- ^ a b Sebastian Grüner (19 July 2016). “Der nächste Videocodec soll 25 Prozent besser sein als H.265” (ドイツ語). golem.de. 1 March 2017閲覧。
- ^ Tsahi Levent-Levi (3 September 2015). “WebRTC Codec Wars: Rebooted”. BlogGeek.me. 1 March 2017閲覧。 “The beginning of the end of HEVC/H.265 video codec”
- ^ "Alliance for Open Media established to deliver next-generation open media formats" (Press release). Alliance for Open Media. 1 September 2015. 2015年9月5日閲覧。[自主公表]
- ^ a b c d e f g Timothy B. Terriberry (18 January 2017). “Progress in the Alliance for Open Media” (video). linux.conf.au. 1 March 2017閲覧。[自主公表]
- ^ a b c d e f Timothy B. Terriberry (18 January 2017). “Progress in the Alliance for Open Media (slides)”. 22 June 2017閲覧。[自主公表]
- ^ Stephen Shankland (September 12, 2014). “Google's Web-video ambitions bump into hard reality”. CNET September 13, 2014閲覧。
- ^ “Jai Krishnan from Google and AOMedia giving us an update on AV1”. YouTube (22 November 2017). 22 December 2017閲覧。[自主公表]
- ^ Terriberry, Timothy B. (2018年2月3日). “AV1 Codec Update” (英語). FOSDEM. 2018年2月8日閲覧。[自主公表]
- ^ Alliance for Open Media (28 March 2018). "The Alliance for Open Media Kickstarts Video Innovation Era with "AV1" Release" (Press release). Wakefield, Mass.
- ^ “AV1 Bitstream and Decoding Process Specification”. Alliance for Open Media. 29 March 2018閲覧。
- ^ Hunter, Philip (2018年2月15日). “Race on to bring AV1 open source codec to market, as code freezes” (英語). Videonet. Mediatel Limited. 2018年3月19日閲覧。
- ^ “AV1 Update”. YouTube (5 October 2017). 21 December 2017閲覧。[自主公表]
- ^ a b c d e “VP9-AV1 Video Compression Update” (31 July 2017). 21 November 2017閲覧。 “Obviously, if we have an open source codec, we need to take very strong steps, and be very diligent in making sure that we are in fact producing something that's royalty free. So we have an extensive IP diligence process which involves diligence on both the contributor level – so when Google proposes a tool, we are doing our in-house IP diligence, using our in-house patent assets and outside advisors – that is then forwarded to the group, and is then again reviewed by an outside counsel that is engaged by the alliance. So that's a step that actually slows down innovation, but is obviously necessary to produce something that is open source and royalty free.”
- ^ “Standards are Failing the Streaming Industry” (4 May 2017). 20 May 2017閲覧。
- ^ a b “Integrating Thor tools into the emerging AV1 codec” (13 September 2017). 2 October 2017閲覧。 “Royalty-free video codecs: The deployment of recent compression technologies such as HEVC/H.265 may have been delayed or restricted due to their licensing terms. (…) What can Thor add to VP9/AV1? Since Thor aims for reasonable compression at only moderate complexity, we considered features of Thor that could increase the compression efficiency of VP9 and/or reduce the computational complexity.”
- ^ Neil McAllister, 1 September 2015: Web giants gang up to take on MPEG LA, HEVC Advance with royalty-free streaming codec – Joining forces for cheap, fast 4K video
- ^ a b c Ozer, Jan (3 June 2016). “What is AV1?”. Streaming Media. Information Today, Inc.. 26 November 2016時点のオリジナルよりアーカイブ。26 November 2016閲覧。 “... Once available, YouTube expects to transition to AV1 as quickly as possible, particularly for video configurations such as UHD, HDR, and high frame rate videos ... Based upon its experience with implementing VP9, YouTube estimates that they could start shipping AV1 streams within six months after the bitstream is finalized. ...”
- ^ “examples/lossless_encoder.c” (英語). Git at Google. Alliance for Open Media. 2017年10月29日閲覧。[自主公表]
- ^ Shankland, Stephen (2018年1月19日). “Photo format from Google and Mozilla could leave JPEG in the dust”. CNET (CBS Interactive) 2018年1月28日閲覧。
- ^ Romain Bouqueau (12 June 2016). “A view on VP9 and AV1 part 1: specifications”. GPAC Project on Advanced Content 1 March 2017閲覧。
- ^ Jan Ozer, 26 May 2016: What Is VP9?
- ^ https://aomedia.googlesource.com/aom/+/master/LICENSE
- ^ a b “AV1: A status update” (30 August 2017). 14 September 2017閲覧。
- ^ a b “Delete daala_dist and cdef-dist experiments in configure” (30 August 2017). 2 October 2017閲覧。 “Since those two experiments have been merged into the dist-8x8 experiment”[自主公表]
- ^ a b “Remove experimental flag of EXT_TX” (2 November 2017). 23 November 2017閲覧。[自主公表]
- ^ a b c d e f g h i j k l m n o p q r s t u v “Analysis of the emerging AOMedia AV1 video coding format for OTT use-cases”. 19 September 2017閲覧。
- ^ a b c d “New video coding techniques under consideration for VP10 – the successor to VP9”. YouTube (16 November 2015). 3 December 2016閲覧。[自主公表]
- ^ a b c d e “Decoding the Buzz over AV1 Codec” (9 June 2017). 22 June 2017閲覧。[自主公表]
- ^ Mukherjee, Debargha; Su, Hui; Bankoski, Jim; Converse, Alex; Han, Jingning; Liu, Zoe; Xu (Google Inc.), Yaowu, “An overview of new video coding tools under consideration for VP10 – the successor to VP9”, SPIE Optical Engineering+ Applications (International Society for Optics and Photonics) 9599, doi:10.1117/12.2191104
- ^ a b “Remove experimental flag of WARPED_MOTION” (31 October 2017). 23 November 2017閲覧。[自主公表]
- ^ a b “Remove experimental flag of GLOBAL_MOTION” (30 October 2017). 23 November 2017閲覧。[自主公表]
- ^ Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu et al. (2017-09-19). “Novel inter and intra prediction tools under consideration for the emerging AV1 video codec”. Applications of Digital Image Processing XL, proceedings of SPIE Optical Engineering + Applications 2017 (International Society for Optics and Photonics) 10396: 103960F. doi:10.1117/12.2274022.
- ^ a b “AOM_QM: enable by default” (9 August 2017). 19 September 2017閲覧。[自主公表]
- ^ a b “Remove experimental flag of CDEF” (10 November 2017). 23 October 2017閲覧。[自主公表]
- ^ a b “Constrained Directional Enhancement Filter” (28 March 2017). 15 September 2017閲覧。[自主公表]
- ^ a b “Thor update” (July 2017). 2 October 2017閲覧。[自主公表]
- ^ a b “This patch forces DAALA_EC on by default and removes the dkbool coder” (25 May 2017). 14 September 2017閲覧。[自主公表]
- ^ a b “Daala Entropy Coder in AV1” (14 February 2017).[自主公表]
- ^ a b “Remove the EC_ADAPT experimental flags” (18 June 2017). 23 September 2017閲覧。[自主公表]
- ^ “Remove ALT_INTRA flag” (1 June 2017). 19 September 2017閲覧。[自主公表]
- ^ “Remove CONFIG_CB4X4 config options” (21 October 2017). 29 October 2017閲覧。[自主公表]
- ^ a b “NETVC Hackathon Results IETF 98 (Chicago)”. 15 September 2017閲覧。
- ^ “Remove experimental flag of chroma_sub8x8” (23 October 2017). 29 October 2017閲覧。[自主公表]
- ^ a b “Remove compound_segment/wedge config flags” (29 October 2017). 23 November 2017閲覧。[自主公表]
- ^ “Remove convolve_round/compound_round config flags” (12 December 2017). 17 December 2017閲覧。[自主公表]
- ^ “Remove delta_q experimental flag” (19 September 2017). 2 October 2017閲覧。[自主公表]
- ^ “Remove the EC_SMALLMUL experimental flag” (25 August 2017). 15 September 2017閲覧。[自主公表]
- ^ “Remove compile guards for CONFIG_EXT_INTER” (2 October 2017). 29 October 2017閲覧。 “This experiment has been adopted”[自主公表]
- ^ “Remove compile guards for CONFIG_EXT_REFS” (16 October 2017). 29 October 2017閲覧。 “This experiment has been adopted”[自主公表]
- ^ “Adaptive Multi-Reference Prediction Using A Symmetric Framework” (4 July 2017). 29 October 2017閲覧。
- ^ “Remove filter_7bit experimental flag” (19 September 2017). 29 October 2017閲覧。[自主公表]
- ^ “7-bit interpolation filters” (26 August 2017). 29 October 2017閲覧。 “Purpose: Reduce dynamic range of interpolation filter coefficients from 8 bits to 7 bits. Inner product for 8-bit input data can be stored in a 16-bit signed integer.”[自主公表]
- ^ “Remove CONFIG_INTERINTRA” (30 October 2017). 23 November 2017閲覧。[自主公表]
- ^ “Remove experimental flag of MOTION_VAR” (31 October 2017). 23 November 2017閲覧。[自主公表]
- ^ “Renamings for OBMC experiment” (13 October 2017). 19 September 2017閲覧。[自主公表]
- ^ “Remove experimental flag of NEW_MULTISYMBOL” (15 November 2017). 23 October 2017閲覧。[自主公表]
- ^ “Remove ONE_SIDED_COMPOUND experimental flag” (7 November 2017). 23 November 2017閲覧。[自主公表]
- ^ “Remove PALETTE flag” (1 June 2017). 19 September 2017閲覧。[自主公表]
- ^ “Overview of the Decoding Process (Informative)”. 21 January 2018閲覧。 “For certain types of image, such as PC screen content, it is likely that the majority of colors come from a very small subset of the color space. This subset is referred to as a palette. AV1 supports palette prediction, whereby non-inter frames are predicted from a palette containing the most likely colors.”[自主公表]
- ^ “Remove experimental flag of PALETTE_DELTA_ENCODING” (15 December 2017). 17 December 2017閲覧。[自主公表]
- ^ “Remove rect_intra_pred experimental flag” (26 September 2017). 2 October 2017閲覧。[自主公表]
- ^ “Remove experimental flag for rect-tx” (29 October 2017). 23 November 2017閲覧。[自主公表]
- ^ “Rectangular transforms 4x8 & 8x4” (1 July 2016). 14 September 2017閲覧。[自主公表]
- ^ “Merge ref-mv into codebase” (27 April 2017). 23 September 2017閲覧。[自主公表]
- ^ “Remove smooth_hv experiment flag” (9 November 2017). 23 November 2017閲覧。[自主公表]
- ^ “Remove the CONFIG_TILE_GROUPS experimental flag” (18 July 2017). 19 September 2017閲覧。[自主公表]
- ^ “Remove compile guards for VAR_TX experiment” (24 October 2017). 29 October 2017閲覧。 “This experiment has been adopted”[自主公表]
- ^ “AV1 experiment flags” (29 September 2017). 2 October 2017閲覧。[自主公表]
- ^ “Add txmg experiment” (31 July 2017). 3 January 2018閲覧。 “This experiment aims at merging lbd/hbd txfms”[自主公表]
- ^ “Daala-TX” (22 August 2017). 26 September 2017閲覧。 “Replaces the existing AV1 TX with the lifting implementation from Daala. Daala TX is better in every way: ● Fewer multiplies ● Same shifts, quantizers for all transform sizes and depths ● Smaller intermediaries ● Low-bitdepth transforms wide enough for high-bitdepth ● Less hardware area ● Inherently lossless”[自主公表]
- ^ “Daala Transforms in AV1” (27 October 2017).[自主公表]
- ^ “Daala Transforms Update” (1 December 2017).[自主公表]
- ^ “Daala Transforms Evaluation” (15 December 2017).[自主公表]
- ^ “Daala Transforms Informational Discussion” (21 December 2017).[自主公表]
- ^ “The Future of Video Codecs: VP9, HEVC, AV1” (2 November 2017). 30 January 2018閲覧。
- ^ Sebastian Grüner (9 June 2016). “Freie Videocodecs teilweise besser als H.265” (ドイツ語). golem.de. 1 March 2017閲覧。
- ^ “Results of Elecard's latest benchmarks of AV1 compared to HEVC” (24 April 2017). 14 June 2017閲覧。 “The most intriguing result obtained after analysis of the data lies in the fact that the developed codec AV1 is currently equal in its performance with HEVC. The given streams are encoded with AV1 update of 2017.01.31”
- ^ a b “Bitmovin Supports AV1 Encoding for VoD and Live and Joins the Alliance for Open Media”. (18 April 2017) 20 May 2017閲覧。[自主公表]
- ^ “HEVC: Rating the contenders”. Streaming Learning Center. 22 May 2017閲覧。
- ^ D. Grois, T, Nguyen, and D. Marpe, "Coding efficiency comparison of AV1/VP9, H.265/MPEG-HEVC, and H.264/MPEG-AVC encoders", IEEE Picture Coding Symposium (PCS) 2016 http://iphome.hhi.de/marpe/download/Preprint-Performance-Comparison-AV1-HEVC-AVC-PCS2016.pdf
- ^ “Netflix on AV1” (英語). Streaming Learning Center. (2017年11月30日) 2017年12月8日閲覧。
- ^ “MSU Codec Comparison 2017” (2018年1月17日). 2018年2月9日閲覧。
- ^ Ozer, Jan (2018年1月30日). “AV1 Beats VP9 and HEVC on Quality, if You've Got Time, says Moscow State”. Streaming Media Magazine 2018年2月9日閲覧。
- ^ Shankland, Stephen (2017年11月28日). “Firefox now lets you try streaming-video tech that could be better than Apple's” (英語). CNET 2017年12月25日閲覧。
- ^ https://hacks.mozilla.org/2017/11/dash-playback-of-av1-video/[自主公表]
- ^ “VLC 3.0 Vetinari” (英語) (2018年2月10日). 2018年2月10日閲覧。
- ^ Nick Stat (2018年1月4日). “Apple joins group of tech companies working to improve online video compression”. The Verge. 2018年1月10日閲覧。
- ^ “DASH playback of AV1 video in Firefox – Mozilla Hacks - the Web developer blog” (英語). Mozilla Hacks – the Web developer blog. 2018年3月20日閲覧。
- ^ “VLC release notes”.
- ^ “GStreamer 1.14 release notes”. gstreamer.freedesktop.org. 2018年3月20日閲覧。
External links
- Overview of the decoding process (not up to date)
- Bitstream specification
- Source code repository
- Source code review
- Issue tracker
- Requirements to be met for the IETF NetVC