I’ve been saying for a while that the best way out of the Web video codec mess is formal standardization of a royalty-free video codec and that formal standards groups like MPEG and others should step up to the task.

Of course, I mean a real, bona-fide standardization process, not a dubious rubber-stamping “ratification” nor a half-hearted kick-the-can-down-the-road affair.

Below is a draft personal response to a public call for comments by the ISO MPEG committee on issues related to standardizing a royalty-free video codec.

—————-

(DRAFT) Response to MPEG Request for Comments
on Option-1 (Royalty-Free) Video Coding

At its October 2010 meeting, MPEG requested public comment on industry needs and performance targets for an Option-1 (royalty-free) video codec standard [1].

MPEG should enlarge its portfolio of standards by offering some that are expected to be royalty free [2], taking into account the following points:

Google has proposed that the IETF ratify its VP8 video codec as mandatory to implement, and has stated that it will remove H.264 support from Chrome:

In November, 2010, Google filed a proposed Internet-Draft, standards track document for real-time communication over the Web that states:

“In video, the VP8 codec [vp8] MUST be supported.” [3]

As to removing H.264 support from Chrome, Google stated in January 2011:

“Specifically, we are supporting the WebM (VP8) and Theora video codecs, and will consider adding support for other high-quality open codecs in the future. Though H.264 plays an important role in video, as our goal is to enable open innovation, support for the codec will be removed and our resources directed towards completely open codec technologies.” [4]

In discussing ratification of VP8 as a mandatory-to-implement specification by IETF even without a formal standardization process, the author of the Google proposed Internet-Draft has opined that making VP8 mandatory to implement is reasonable, and if IETF does not address the issue, proponents “may have to start up a third effort in some forum that’s willing to define the profile needed for interoperability” [5].

The Internet and World Wide Web are fundamentally based on royalty-free standards [6] and therefore need and expect a royalty-free video codec standard [7].

Tim Berners-Lee, in a November 2010 Scientific American article, restated the often-made point that the Web’s richness, diverse user base, and different (free and pay) business models require that royalty-free standards be the foundation of the Web:

“The basic Web technologies that individuals and companies need to develop powerful services must be available for free, with no royalties.

… Open, royalty-free standards that are easy to use create the diverse richness of Web sites, from the big names such as Amazon, Craigslist and Wikipedia to obscure blogs written by adult hobbyists and to homegrown videos posted by teenagers.

… Openness also means you can build your own Web site or company without anyone’s approval. When the Web began, I did not have to obtain permission or pay royalties to use the Internet’s own open standards, such as the well-known transmission control protocol (TCP) and Internet protocol (IP). Similarly, the Web Consortium’s royalty-free patent policy says that the companies, universities and individuals who contribute to the development of a standard must agree they will not charge royalties to anyone who may use the standard.

… Open, royalty-free standards do not mean that a company or individual cannot devise a blog or photo-sharing program and charge you to use it. They can. And you might want to pay for it if you think it is “better” than others. The point is that open standards allow for many options, free and not.” [8]

The World Wide Web Consortium recently stated in the context of the much-watched deliberation about a royalty-free codec for the upcoming HTML5 specification:

“The W3C HTML Working Group has not identified a Royalty-Free video codec or container format that would satisfy all parties. … W3C is still highly interested in finding a solution in this space.”[9]

Looking forward, trends like cloud computing, the “Internet of devices”, and the many business models and usage scenarios thriving on the Internet all point to the continuing necessity of placing royalty-free, rather than royalty-bearing, standards at the foundation of the open Internet.

MPEG (WG11 of ISO SC29) has the competence and responsibility to standardize an Option-1 (royalty-free) video codec.

With a 20+ year history, MPEG now has a portfolio of standards and technologies at or approaching patent expiration from which to assemble a royalty-free standard [10].

MPEG has deliberated at multiple meetings, and a quorum of SC29 national bodies has expressed support [11].

The ISO/IEC/ITU patent policy provides a process framework [12], and related bodies like IETF are already moving forward with royalty-free codec standardization [13].

Recent changes in H.264 licensing have been rejected by key Internet industry leaders as inadequate to meet the need for a fully royalty-free standard [14].

As three of many examples, Web browser vendors Mozilla [15], Opera [16], and Google [4] have expressed that these new license terms from MPEG LA are unacceptable to them.

MPEG’s well-established methodology of defining a test model and evaluating improvements to it is the appropriate approach for Option-1 standardization.

MPEG has a long-established and successful work method of defining and then improving a test model based on a pre-defined, rigorous methodology [17]. This test-model approach is ideally suited to Option-1, royalty-free standardization, which requires a two-step process: first defining a generic video test model with a known royalty-free foundation, then allowing only additions and modifications of known IPR source and licensing.

It would be unwise, counterproductive, and against long-established MPEG practice for MPEG to publicly set only a “hard” performance target like “a 2x coding gain, in comparison to MPEG-1” for an Option-1 (royalty-free) codec.

A narrow measure of video codec performance is a single metric such as image quality (e.g., PSNR) against bandwidth, but in practice all viable video codec standards activities must and do incorporate trade-offs among multiple features, application profiles and requirements, and platform constraints.
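For readers unfamiliar with the metric, PSNR is a straightforward function of the mean squared error between a reference frame and its decoded reconstruction. A minimal sketch (the frame data below is made up for illustration):

```python
import math

def psnr(reference, decoded, max_value=255):
    """Peak signal-to-noise ratio (in dB) between a reference frame and its
    decoded reconstruction, both given as flat sequences of pixel values
    in the range [0, max_value]."""
    if len(reference) != len(decoded):
        raise ValueError("frames must have the same number of pixels")
    mse = sum((r - d) ** 2 for r, d in zip(reference, decoded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10 * math.log10((max_value ** 2) / mse)

# A tiny "frame" whose reconstruction is off by 2 at every pixel:
ref = [100, 120, 140, 160]
dec = [102, 118, 142, 158]
print(round(psnr(ref, dec), 1))  # → 42.1
```

Precisely because PSNR collapses everything into one number per bitrate, it says nothing about interlace support, scalability, error resilience, or delay, which is the point being made above.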

For example, MPEG-2 wisely did not set a hard performance target over MPEG-1; rather, MPEG-2’s core rationale [18] was specifically aimed at adding features like interlace to make the overall standard more generically useful and supportive of specific application profiles:

“Rationale

…The general aim of MPEG-2 is to support a broad number of features and operating points which were not the focus of MPEG-1, and in so doing to establish an essentially generic, i.e. application independent standard, which will support a small number of key application profiles.”

Performance-related requirements were specified only as “optimized” image quality, balanced against multiple parameters such as multi-resolution scalability, transmission and storage channel-coding and error-recovery schemes, and low coding-decoding delay:

“Requirements

MPEG-2 Video extends the capabilities of MPEG-1 with efficient methods to encode interlaced video formats. MPEG-2 Video key requirements include: optimized image quality in ranges from about 3 to 15 Mbit/s; support for various interlaced (as well as progressive) video formats; provision for multi-resolution bit-stream and decoder scalability; random accessibility to support efficient channel-hopping and editability; compatibility with both MPEG-1 and the CCITT H.261 recommendation for video telecommunications; adaptability to various transmission and storage channel-coding and error-recovery schemes; provision for low coding-decoding delay.”

And the workplan required only an evaluation process identifying independently confirmed, appreciable improvements in picture quality, enabling rapid progress:

“Workplan

… A video test model has been established, and several key modules in its algorithmic block diagram are subject to improvement up until March 1993. For a proposed alternative to be selected to replace the current method, two independent experts must confirm experimentally that the proposed method yields appreciably improved picture quality. This methodology permits MPEG’s dozens of experts from the world’s top video coding laboratories to make tremendous improvements to the state-of-the-art over a remarkably short time.”

The 2001 ISO/ITU Terms of Reference for AVC/H.264 [19] did set a 50% performance target as one of nine requirements in Annex 1, but it separately required a royalty-free baseline whose priority requirement was demonstrably royalty-free IPR. And the overall aim was limited to “offer the best possible technical performance under the practical constraints of being implementable on various platforms and for various applications enabled by the relevant ITU-T Recommendations and ISO/IEC International Standards”.

Similarly, the High-Performance Video Coding activity has wisely acknowledged that optimizing performance over some ranges may produce worse performance over others [20]:

“3 Requirements

3.1 Compression Performance

A substantially greater bitrate reduction over MPEG-4 AVC High Profile is required for the target application(s); at no point of the entire bitrate range shall HVC be worse than existing standard(s).”

In sum, while performance efficiency is undoubtedly an important requirement of any viable video codec standard, requirements must include and weigh trade-offs among multiple factors. As described above, MPEG’s long-established methodology of first establishing a test model (in this case an Option-1/royalty-free test model) and then evaluating (Option-1) improvements to it, within the constraints of multiple application and platform requirements, is the appropriate requirements methodology.
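For concreteness, a figure like the “2x coding gain, in comparison to MPEG-1” quoted in [1] can be read as a bitrate ratio at matched quality. A toy sketch, with entirely hypothetical rate-distortion measurements (it assumes PSNR increases monotonically with bitrate, and uses simple linear interpolation between measured points):

```python
def bitrate_at_quality(rd_points, target_psnr):
    """Linearly interpolate the bitrate (kbit/s) a codec needs to reach
    target_psnr, given (bitrate, psnr) measurement points.  Assumes PSNR
    rises monotonically with bitrate."""
    pts = sorted(rd_points)
    for (r0, q0), (r1, q1) in zip(pts, pts[1:]):
        if q0 <= target_psnr <= q1:
            t = (target_psnr - q0) / (q1 - q0)
            return r0 + t * (r1 - r0)
    raise ValueError("target quality outside measured range")

def coding_gain(old_rd, new_rd, target_psnr):
    """Factor by which the newer codec reduces bitrate at equal quality."""
    return bitrate_at_quality(old_rd, target_psnr) / bitrate_at_quality(new_rd, target_psnr)

# Hypothetical measurements: (bitrate in kbit/s, PSNR in dB)
old_codec = [(400, 30.0), (800, 34.0), (1600, 38.0)]
new_codec = [(200, 30.0), (400, 34.0), (800, 38.0)]
print(coding_gain(old_codec, new_codec, 34.0))  # → 2.0, i.e. a "2x coding gain"
```

Note that the gain generally varies across the bitrate range, which is exactly why the HVC requirement quoted above insists the new codec be no worse than the existing standard at any point of the range, rather than stating a single number.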

References

[1]     “Resolutions of the 94th Meeting”, ISO/IEC JTC 1/SC29 N11553, Guangzhou, CN, October 2010, http://www.itscj.ipsj.or.jp/sc29/open/29view/29n11604c.htm:

“MPEG requests that companies comment on the following topics relating to Option-1 licensable video coding:

1. The relevance of pursuing such a standards activity within MPEG, particularly with respect to current market conditions and industry needs.

2. What are the specific video codec performance targets that may be required in order to secure the desired level of market adoption? As an example, current discussions related to an Option-1 codec have considered a 2x coding gain, in comparison to MPEG-1, as a minimum performance target.”

[2]     Leonardo Chiariglione, The missed award speech, May 2008, http://www.chiariglione.org/leonardo/publications/epo2008/index.asp:

“I believe MPEG should enlarge its portfolio of standards by offering some that are expected to be royalty free and typically less performing and with less functionality next to those that are state of the art, more performing and with more functionality.

… “[F]air – reasonable – non discriminatory” used to be meaningful words when standards were designed for the needs of one industry whose members generally shared the business model according to which the standard would be used.

… I fear that the virtuous circle … whereby the reward from innovation is used to create more innovation may be coming to an end.

… [T]he problem is not “how many cents, tens of cent or euros of licensing fee is fair and reasonable” but “how can licensing be fair and reasonable without specifying a business model”.”

[3]     IETF Internet Draft, Standards Track, “Overview: Real Time Protocols for Browser-based Applications”, H. Alvestrand (Google), November 11, 2010, http://tools.ietf.org/html/draft-alvestrand-dispatch-rtcweb-protocols-00.

Section 5, Data formats, states:

“This document specifies a minimum baseline that will be supported by all implementations of this specification, and leaves further codecs to be included at the will of the implementor.

… In video, the VP8 codec [vp8] MUST be supported.” …

[4]     “HTML Video Codec Support in Chrome”, Mike Jazayeri, Google Product Manager, January 11, 2011, http://blog.chromium.org/2011/01/html-video-codec-support-in-chrome.html

[5]     January 13, 2011, Re: [dispatch] Charter proposal: The activity hitherto known as “RTC-WEB at IETF”, http://www.ietf.org/mail-archive/web/dispatch/current/msg03119.html:

“My personal thinking is that a mandatory codec is reasonable for the profile document that I tried to create in draft-alvestrand-dispatch-rtcweb-protocols, while it is not reasonable for the protocol document of draft-alvestrand-dispatch-rtcweb-datagram. But if the IETF decides that it doesn’t want to address this issue … we (the people who want interoperability of RTC-Web applications) may have to start up a third effort in some forum that’s willing to define the profile needed for interoperability”

[6]     See, for example, W3C Patent Policy, http://www.w3.org/Consortium/Patent-Policy-20040205/:

“In order to promote the widest adoption of Web standards, W3C seeks to issue Recommendations that can be implemented on a Royalty-Free (RF) basis. Subject to the conditions of this policy, W3C will not approve a Recommendation if it is aware that Essential Claims exist which are not available on Royalty-Free terms.”

[7]     For example, the Open Video Alliance, founded in 2009 by Mozilla, Kaltura, Participatory Culture Foundation, and Yale ISP, has issued five “Principles for an Open Video Ecosystem”, which clearly indicate that royalty-free open standards are the only acceptable approach for open web video standards:

“Open Standards for Video — Video standards (formats, codecs, metadata, etc.) should be open, interoperable, and royalty free”

http://openvideoalliance.org/wiki/index.php?title=Some_principles_for_open_video

[8]     Tim Berners-Lee, “Long Live the Web: A Call for Continued Open Standards and Neutrality”, Scientific American, November 22, 2010,  http://www.scientificamerican.com/article.cfm?id=long-live-the-web

[9]     W3C HTML5 FAQ, http://www.w3.org/html/wiki/FAQs, last modified 23 November 2010:

Does HTML5 provide for Royalty-Free video and audio codecs?

Question: Since no video codec and container format have been specified yet, CPs and service providers have to prepare multiple versions of same video contents for browsers supporting different codecs and container formats with HTML5. Therefore, it would be nice to specify (mandatorily) supported codec(s) and container format(s). When do you estimate this can be done? Or is it possible that this can be done at all?

The W3C HTML Working Group has not identified a Royalty-Free video codec or container format that would satisfy all parties. There are various requirements to consider, including the W3C Royalty-Free licensing commitments and various open source projects (Mozilla, Webkit). W3C is still highly interested in finding a solution in this space. At the moment, two video codecs seem to cover all major Web browsers.

See also Jeremy Kirk, “Browser vendor squabbles cause W3C to scrap codec requirement”, Infoworld, July 2, 2009 http://infoworld.com/d/developer-world/browser-vendor-squabbles-cause-w3c-scrap-codec-requirement-974

[10]   See MPEG 20th Year Anniversary Commemoration, Tokyo, November 8, 2008, http://www.itscj.ipsj.or.jp/forum/forum2008MPEG20.html; Hiroshi Yasuda, “MPEG Birth to Practical Use”, November 8, 2008, http://www.itscj.ipsj.or.jp/forum/forum2008MPEG20/01MPEG20_Yasuda.pdf; Cliff Reader, “Video Coding IPR Issues”, http://www.avs.org.cn/avsdoc/2003-7-30/Cliff.pdf; Cliff Reader, “History of MPEG Video Compression – Ver. 4.0”, 99 pp., document marked Dec. 16, 2003.

[11]    Resolutions, the 92nd SC 29/WG 11 Meeting, 2010-04-19/23, Dresden, Germany, SC 29/WG 11 N 11241, http://www.itscj.ipsj.or.jp/sc29/open/29view/29n11185c.htm:

“Given that there is a desire for using royalty free video coding technologies for some applications such as video distribution over the Internet, MPEG wishes to enquire of National Bodies about their willingness to commit to active participation (as defined by Section 6.2.1.4 of the JTC1 directives) in developing a Type-1 video coding standard.”

Resolutions, the 93rd SC 29/WG 11 Meeting, 2010-07-26/30, Geneva, Switzerland  [SC 29/WG 11 N 11356], http://www.itscj.ipsj.or.jp/sc29/open/29view/29n11382c.htm:

“The Requirements group requests National Bodies and experts to provide contributions on the draft Option 1 Licensing Video Coding documents”

[12]   See Part 1, section 3 of “Guidelines for Implementation of the Common Patent Policy for ITU-T/ITU-R/ISO/IEC” (1 March 2007). The guidelines for the common patent policy encourage disclosure as early as possible by participants and third parties of any known patents or applications:

“[A]ny party participating in the work of the Organizations should, from the outset, draw their attention to any known patent or to any known pending patent application, either their own or of other organizations.

In this context, the words “from the outset” imply that such information should be disclosed as early as possible during the development of the Recommendation | Deliverable.

… In addition to the above, any party not participating in Technical Bodies may draw the attention of the Organizations to any known Patent, either their own and/or of any third-party.”

[13]   http://tools.ietf.org/wg/codec/

[14]   See for example, “Think H.264 is Now Royalty-Free?  Think Again – and the ‘Open Source’ Defense is No Defense to MPEG-LA”, Peter Csathy, CEO Sorenson Media, Sept. 20, 2010,  http://blog.sorensonmedia.com/2010/09/think-h-264-is-now-royalty-free-think-again-and-the-open-source-defense-is-no-defense-to-mpeg-la/

It appears that many may have been initially misinformed that MPEG-4 AVC was to become entirely royalty free, as highlighted by Peter Csathy, CEO of encoder vendor Sorenson Media:

“But, you say, MPEG LA recently announced that it will no longer charge royalties for the use of H.264. Yes, it’s true – MPEG LA recently bowed to mounting pressure from, and press surrounding, WebM and announced something that kind of sounds that way. But, I caution you to read the not-too-fine print. H.264 is royalty-free only in one limited case – for Internet video that is delivered free to end users. Read again: for (1) Internet delivery that is (2) delivered free to end users. In the words of MPEG LA’s own press release, “Products and services other than [those] continue to be royalty-bearing.”

This inspires speculation that there may be underlying and less charitable motives in play:

“MPEG LA, in fact, continues to make “noises” that even Google’s royalty free WebM gift to the world (which resulted from its acquisition of On2 and its VP8 video codec) infringes the rights of its patent holders.”

[15]   “Mozilla shrugs off ‘forever free’ H.264 codec license: Uh, will H.264 even be relevant in 4 years?”, Cade Metz, August 26, 2010, http://www.zdnet.com/blog/hardware/mozilla-unmoved-by-royalty-free-h264/9499

[16]   “Opera still won’t support H.264 video”, Jan Vermeulen, October 1, 2010, http://mybroadband.co.za/news/internet/15547-Opera-still-wont-support-H264-video.html

[17]   See for example, MPEG2 Workplan, http://mpeg.chiariglione.org/meetings/london/london_press.htm:

“For the Video work, a disciplined methodology has been established to enable experimentation on various modules of the video encoding and decoding system to proceed in parallel.  A video test model has been established, and several key modules in its algorithmic block diagram are subject to improvement up until March 1993.  For a proposed alternative to be selected to replace the current method, two independent experts must confirm experimentally that the proposed method yields appreciably improved picture quality.  This methodology permits MPEG’s dozens of experts from the world’s top video coding laboratories to make tremendous improvements to the state-of-the-art over a remarkably short time.”

[18]   MPEG-2 rationale, requirements and workplan are publicly described in MPEG Press Release, 20th Meeting, Nov. 6, 1992, http://mpeg.chiariglione.org/meetings/london/london_press.htm

[19]   “Terms of Reference for a Joint Project between ITU-T Q.6/SG16 and ISO/IEC JTC 1/SC 29/WG11 for the Development of new Video Coding Recommendation and International Standard”, ISO/IEC JTC 1/SC 29/WG 11 N4400, December 2001,  http://www.itscj.ipsj.or.jp/sc29/29w12911jvt.pdf

[20]   “Vision, Applications and Requirements for High-Performance Video Coding (HVC)”, ISO/IEC JTC1/SC29/WG11/N11096, January 2010, Kyoto, JP, http://mpeg.chiariglione.org/working_documents/explorations/hvc/HVC_VR.zip

MPEG has moved forward with a royalty-free standard activity, approving a “Call for Evidence”, the typical first step in the MPEG standardization process.

The MPEG Call for Evidence is referenced in the public Resolutions of the just-completed 93rd MPEG meeting, which also hint at an upcoming Call For Proposals.

The Call for Evidence follows the April call for active participation in a royalty-free standardization activity to verify the ISO-required minimum of 5 National Bodies willing to actively participate.

Of interest in responses to the Call for Evidence will no doubt be performance tests on royalty-free codecs and coding techniques (perhaps along the lines of the annual MSU H.264 codec comparison, which in June released test results on Google’s VP8 codec) and background on patent and prior-art searches.

The documents use the new MPEG-speak for royalty-free licensing, “Option-1 Licensing”, named for the first check box on the ISO/IEC/ITU Common Patent Policy Patent Statement and Licensing Declaration, which allows patent holders to affirm that “[t]he Patent Holder is prepared to grant a free of charge license to an unrestricted number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the above document.”

14.6 Option 1 Licensing Video Coding

14.6.1 The Requirements subgroup recommends approval of the following documents:

Exploration – Option 1 Licensing Video Coding:

– 11533: Call for Evidence on Option-1 Video Coding Technology (TBP: N; available 10/07/30)
– 11534: Draft Context, Objectives and Applications for Option-1 video coding for Internet applications (TBP: N; available 10/07/30)
– 11535: Draft Requirements for Option-1 Video coding for Internet applications (TBP: N; available 10/07/30)
– 11536: Draft Call for Proposals for Option-1 Video Coding for Internet applications (TBP: N; available 10/07/30)

The Requirements group requests National Bodies and experts to provide contributions on the draft Option 1 Licensing Video Coding documents

MPEG — Working Group 11 of ISO/IEC JTC 1/SC 29 — has issued a resolution seeking active participation in developing a Type-1 (royalty-free) video coding standard.

“Given that there is a desire for using royalty free video coding technologies for some applications such as video distribution over the Internet, MPEG wishes to enquire of National Bodies about their willingness to commit to active participation (as defined by Section 6.2.1.4 of the JTC1 directives) in developing a Type-1 video coding standard.”

See below for publicly-released information from recent MPEG meetings on royalty-free standardization.

Organizations and experts interested in actively participating in a type-1 (royalty-free) standardization activity should contact their SC29/MPEG National Body or liaison.

—————-

Glossary:

SC: Subcommittee.  SC 29 is the ISO/IEC Subcommittee covering coding of Audio, Picture, Multimedia and Hypermedia Information (MPEG and JPEG).

WG: Working Group.  A subsidiary body of an SC that undertakes work planned by the SC.

NB: National Body.  The members of a Subcommittee, one member per country.

P-Member: A participating, voting NB (as opposed to O-Member, a non-voting observer).  There are 25 P-Members of SC 29 (voting country members).

WD: Working Draft.  Preparatory-stage draft of specification.

CD: Committee Draft.  Committee-stage draft of specification.

RAND: Reasonable and Non-Discriminatory.  General term for patents licensed for royalties, rather than available for use on a royalty-free basis.

NP: New Work Item Proposal.

MPEG: Moving Pictures Experts Group.  WG 11 of SC 29, with charter for coding of moving pictures and audio.

Type 1:  Option 1 on the 2007 ITU/ISO/IEC Common Patent Policy Patent Statement and Licensing Form, stating “The Patent Holder is prepared to grant a free of charge license to an unrestricted  number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the above document.”

—————-

Resolutions, the 92nd SC 29/WG 11 Meeting, 2010-04-19/23, Dresden, Germany

SC 29/WG 11 N 11241

http://www.itscj.ipsj.or.jp/sc29/open/29view/29n11185c.htm

Type-1 License Video Coding Standard

Given that there is a desire for using royalty free video coding technologies for some applications such as video distribution over the Internet, MPEG wishes to enquire of National Bodies about their willingness to commit to active participation (as defined by Section 6.2.1.4 of the JTC1 directives) in developing a Type-1 video coding standard. MPEG would appreciate if NBs provide the names of individual organisations that will commit resources. MPEG will use the information gathered from the NB responses, particularly including the number of countries willing to actively participate, in order to decide at the Geneva meeting whether to request approval of a new Work Item Proposal. MPEG does not intend to reopen the issue, unless strong support of at least five national bodies is presented in the future.

—————-
ISO/IEC JTC 1 Directives, 5th Edition, Version 3.0

ISO/IEC JTC 1 N8557

http://www.itscj.ipsj.or.jp/sc29/directives.pdf

6.2.1 New Work Item Proposals (NP)

6.2.1.3 … In order to be approved, the proposal shall be supported by a majority of all P-members of JTC 1 with at least five P-members of the SC to which the project will be assigned committed to active participation. …

6.2.1.4  Active participation for NPs includes involvement by NBs in more than one of the following:

• Attendance at meetings (see also 7.11);
• Contributing to the development of the WD;
• Performing substantial review on a CD and subsequent stages;
• Submitting detailed comment with ballots.
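The 6.2.1.3 approval rule quoted above is mechanical enough to express directly. A toy illustration (this code is not from the directives; the vote counts are invented):

```python
def np_approved(p_member_votes, committed_sc_p_members):
    """Approval test for a New Work Item Proposal under JTC 1 Directives
    6.2.1.3: a majority of all JTC 1 P-members must support the proposal,
    AND at least five P-members of the assigned SC must be committed to
    active participation.

    p_member_votes: list of booleans, one per JTC 1 P-member (True = support).
    committed_sc_p_members: count of SC P-members committing resources.
    """
    in_favour = sum(1 for v in p_member_votes if v)
    majority = in_favour > len(p_member_votes) / 2
    return majority and committed_sc_p_members >= 5

# Hypothetical ballot: 13 of 25 P-members in favour, 5 committed participants.
print(np_approved([True] * 13 + [False] * 12, 5))  # → True
print(np_approved([True] * 13 + [False] * 12, 4))  # → False: fewer than 5 committed
```

This is why the 92nd-meeting resolution below asks National Bodies specifically about commitment to active participation: without five committed SC P-members, a royalty-free NP cannot be approved regardless of ballot support.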

—————-

Meeting Report, the 91st SC 29/WG 11 Meeting, 2010-01-18/22, Kyoto, Japan

SC 29/WG 11 N 11077

http://www.itscj.ipsj.or.jp/sc29/open/29view/29n11151c.htm

Royalty-free Codecs

In order to help with the discussion on royalty-free codecs, several National Bodies provided input as requested in N11066, Call for Comments on Possible Future activities on “Royalty-free” Standardization by MPEG. MPEG responded with N11222, Responses to NB position statements on N11066. No clear conclusions could be drawn from the diverse responses. Furthermore, neither MPEG nor ISO can guarantee that a standard developed with the goal of being RAND or royalty-free will actually be RAND or royalty-free, since the analysis of patents is outside the scope and competence of ISO and MPEG.

MPEG issued document N11221 Possible future actions on standardization with Type 1 licensing where the legal issues are summarized and discussed. Type 1 licensing refers to option 1 of the joint patent declaration form, where an intellectual property holder can indicate that he will not charge for his IP. Laymen refer to this type of licensing as royalty-free.

However, MPEG believes that some technology will become royalty-free 20 years after its publication. Since parts of MPEG-1 and MPEG-2 will reach 20 years after publication in 2013 and 2014, candidates are an MPEG-2 Part 2 baseline profile carved out of MPEG-2 Part 2, an MPEG-1 Part 3 Layer 2 baseline profile carved out of MPEG-1 Part 3 Layer 2, an MPEG-1 Part 3 Layer 3 baseline profile carved out of MPEG-1 Part 3 Layer 3, and an MPEG-2 Part 1 baseline profile carved out of MPEG-2 Part 1. These candidates would be compatible with existing equipment. Alternatively, MPEG may define a new set of standards believed to be RF, provided such standards offer sufficient differentiation to be successful in the marketplace.

—————-

Meeting Report, the 90th SC 29/WG 11 Meeting, 2009-10-26/30, Xian, China

SC 29/WG 11 N 10876

http://www.itscj.ipsj.or.jp/sc29/open/29view/29n10944c.htm

Royalty-free Codecs

The Chinese National Body encouraged MPEG to discuss the option of royalty-free codecs developed within MPEG (N11065 Responses to CNNB position statement on more friendly IPR policy). Small companies in particular perceive licensing as cumbersome. Some royalty-free standards have become successful in the marketplace.

MPEG might consider royalty-free codecs only as a supplement to its current standards development process. The preliminary results of the discussion are summarized in N11067 Summary of Issues and question from the 90th MPEG Meeting in connection with CNNB input document (M16903). In order to help with this discussion, MPEG requests National Bodies to provide input according to N11066 Call for Comments on Possible Future activities on “Royalty-free” Standardization by MPEG.

—————-

It is gratifying to see the FCC Broadband Plan include an open set top recommendation (4.12), firmly grounded in the FCC’s continuing responsibility to implement section 629 of the 1996 Telco Act to “assure the commercial availability” of TV devices from retail and unaffiliated sources.

And there are welcome words in the frank acknowledgment that, over 14 years, “the FCC’s attempts to meet Congress’s objectives have been unsuccessful”.

So a new proceeding will no doubt move forward, given the much-documented crying need, multi-stakeholder support, and explicit congressional directive.

But how to tell if this time the effort is on track, and not just another capture-ready MacGuffin?

Ask four questions:

– Is it specified in an uncaptured venue?
– Does it use unencumbered technologies?
– Is “it” a network interface?
– Does it work for the Web?


Standards “would thwart, not advance, innovation” and “entail crippling delays” because they are “extremely time consuming, often divisive, and sometimes used by one faction to block the progress of another or to promote its own intellectual property portfolio”.

It would be easy to dismiss comments like these in the Cable industry’s latest response to the FCC set top box inquiry (#27), which question the wisdom and feasibility of a standardized multi-network gateway, as just so much diversionary polemics masked as the caution of experience. But a closer look is merited, in part because, as discussed before, royalty-free standards can be America’s broadband advantage.

On the first point — the wisdom of a multi-network gateway — the NCTA has a point.  Adding another intervening box between your TV and your TV content may have a certain quick-fix political logic in the tortured history of the 1996 Telco Act’s goal of competitive services and devices.

But the important question isn’t how many boxes it should take to hook your TV to the Internet (or any network), but how few — and that will take an Internet-acceptable open video standard.

And on this score, the existing gateway initiatives have little to offer, and are even dismissive of the core need in the first place.  The Digital Living Network Alliance, seen by some as the leading gateway standards group, said flippantly in their filing to the FCC:

“there are few (if any) standards for Internet video. Another way of looking at it is there are too many standards for Internet video. DLNA Guidelines by themselves do not solve this problem.”

Instead, DLNA promotes a philosophy and architecture of “indirection”:

“network-specific hindrances can be addressed by ‘adding one layer of indirection'”

The software maxim on which DLNA bases its strategy — “any software problem can be solved by adding one layer of indirection” — is well known in software development, but it is a stretch to assume it is therefore a particularly appropriate or adequate policy architecture for the multiple networking, standards, industry-structure, and business problems of set-top boxes meeting the Internet.
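The maxim is easiest to see in miniature. Here is a minimal, hypothetical Python sketch (all class and method names are invented for illustration) of what “one layer of indirection” means in software: a gateway object exposes one interface while hiding network-specific backends behind it. Note what the sketch also illustrates: the indirection layer hides protocol differences, but it says nothing about the standards, licensing, and industry problems each backend must still navigate.

```python
# Hypothetical sketch of "one layer of indirection": each network-specific
# source speaks its own protocol, and the Gateway hides that difference
# behind a single fetch() interface.

class CableSource:
    """A backend speaking a cable-specific protocol (illustrative only)."""
    def fetch(self, channel):
        return f"QAM stream for {channel}"

class InternetSource:
    """A backend speaking an Internet protocol (illustrative only)."""
    def fetch(self, channel):
        return f"HTTP stream for {channel}"

class Gateway:
    """The indirection layer: one interface, many network-specific backends."""
    def __init__(self, sources):
        self.sources = sources  # map of network name -> backend object

    def fetch(self, network, channel):
        # Callers never touch a backend directly; the gateway dispatches.
        return self.sources[network].fetch(channel)

gateway = Gateway({"cable": CableSource(), "internet": InternetSource()})
print(gateway.fetch("internet", "news"))  # prints "HTTP stream for news"
```

The design choice the maxim celebrates is real: callers depend only on the gateway’s interface. But nothing in that interface resolves which codec, transport, or conditional-access scheme each backend is allowed to use, which is exactly the policy gap the paragraph above describes.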

DLNA and NCTA are far from alone in proposing variants of the devolving cycle of standards pragmatism, ambivalence, doubt, and bashing (check the UK version here).

But there is a better way. Instead of bashing standards, standards groups, industry groups, participants, and regulators should turn their focus and energies to how to make standards work.

References

REPLY COMMENTS OF THE NATIONAL CABLE & TELECOMMUNICATIONS ASSOCIATION ON NBP PUBLIC NOTICE #27

January 27, 2010

http://fjallfoss.fcc.gov/ecfs/document/view?id=7020384091

“- Proposals to require an ANSI standardized gateway solution would entail crippling delays. Standards activities are extremely time consuming, often divisive, and sometimes used by one faction to block the progress of another or to promote its own intellectual property portfolio. It would require years just to get the standards developed, at which point products would still have to be designed, manufactured, and brought to market.

– Subjecting this dynamic marketplace to an ANSI standards process in which each industry participant can delay or veto the innovations of the other would thwart, not advance, innovation.

– These demands call for massive standards activities required in multiple standards bodies for multiple services, interfaces, and technologies. Standardization and related intellectual property clearances are extremely time consuming.

– Zenith, the intellectual property holder for the rejected VSB system, sought to use the process of amending SCTE 40 to put VSB transport into SCTE 40. It slowed the standards process by submitting the majority of objections to SCTE 40 and an unsuccessful appeal to ANSI, in an effort to impose VSB transport onto the cable architecture. This process took years to resolve.

– Under the CEA standards process, IS-6 became IS-132, which became EIA-542, which became CEA-542B. It took more than 13 years to produce the very simple Cable Channel Plan standard. This slow process was one of the reasons that led to the development of CableLabs, so that the cable industry could innovate more rapidly.

– DBS could not have offered MPEG-4 if it had to await elaborate industry consensus or rule change.

– AT&T still would not have deployed U-verse if it were required to wait until IPTV issues were set through industry consensus or by an ANSI-accredited body.

– Had Verizon deferred its hybrid IP/QAM offering until such processes were completed, it too would still be waiting to enter the marketplace.

COMMENTS OF THE DIGITAL LIVING NETWORK ALLIANCE

http://fjallfoss.fcc.gov/ecfs/document/view?id=7020354067

These network-specific hindrances can be addressed by “adding one layer of indirection”7—a gateway device.

As some have already commented, there are few (if any) standards for Internet video.10 Another way of looking at it is there are too many standards for Internet video. DLNA Guidelines by themselves do not solve this problem; however, a DLNA gateway device which is able to receive Internet video is able to bridge that content onto a DLNA home network.11

The FCC Video Device Innovation Notice [1] asks one of the most fundamental questions for the prospect of not only a viable Broadband Plan for America, but also the very future of the Open Internet that has revolutionized communications for all of humanity:

“How could the Commission develop a standard that would achieve a retail market for devices that can attach to all MVPD [Multichannel Video Programming Distributor] networks and access Internet-based video sources?”

There can be little doubt that video is a central broadband driver, but simply put, today there is no standard for “Internet-based video”.  Just ask the World Wide Web Consortium, which has struggled for years, without success, to find an acceptable video standard to incorporate into HTML5, the first major update to the core Web standard in a decade [2].

Of course, there is a lot of video on the Internet, but it is controlled by a hodge-podge of proprietary and so-called “semi-open” plug-ins that do not meet even the loosest definition of “standard” and certainly nothing that comes near meeting the requirements, processes, and practices of the standardizing bodies of the Web and Internet.

So developing a standard that could attach to both MVPD and Internet video would require in the first place developing a standard for Internet video, on terms acceptable to the Open Internet.  The FCC has much to contribute on this score, as do broadcasters in the burgeoning EBU-led “hybrid broadcast-broadband” initiative, which has already developed clear and measurable requirements specifically for hybrid, multi-network video standards [3].

But the non-solution to this problem is as clear as it is unacceptable.  Forcing the Open Internet to adopt the closed, “walled-garden” model and the captured, controlled specifications of status-quo gridlock now dogging digital TV, digital cable, and IPTV, or to concede to proprietary control, would certainly damage the Open Internet and perpetuate the dysfunctional tendencies of “standards as trade association lobbying” that underpin this FCC notice.

The right way forward is equally clear.  Standardize, in appropriate organizations and with appropriate oversight, in the Open Internet model of uncaptured, royalty-free process, the needed elements: codecs, transport stream, conditional access, and UI middleware.

The good news is that media standards gridlock is a global challenge that regulators have addressed in retail-scale national standards, and initiatives already underway on all of these elements point to best practices, a way forward to success, and pitfalls to avoid [4].  Gridlock, capture, patent overcharging, and proprietary control – key underlying contributors to the absence of the robust retail, unaffiliated device markets contemplated by Section 629 and this notice – are not inevitable; they can be addressed [5].

The Commission should evaluate these activities and incorporate appropriate lessons into a proactive video standardization element of the broadband plan for America, one that envisions video standards to embrace, empower, and leverage the best of the Open Internet, not one that protects walled gardens through silos of pseudo-standards and control points.

References

Notes are available in the FCC comment filing available here.


After a lively debate, the IETF appears to be moving forward with a royalty-free audio codec standardization activity.  Here’s to its successful launch and positive outcome.

I’ve put a brief summary at the mpegrf.com site, and there is a good summary here.

The group’s email discussion alias is here — and my view, expressed there (echoing this), is pretty straightforward:

“[codec] Royalty Free codec standards — don’t settle for less”

Here is my view, perhaps you share it, perhaps you don’t.

What the world needs now is royalty-free, standardized codecs. This is critical to the future of the Web, and the progress the Internet has brought to the world, and will bring to the world.

Video, audio, transport, the whole thing. Evaluated, vetted for patents. Under an appropriate, responsible and complete royalty free process. No less.

IETF, ITU, and ISO/MPEG should all get going on this important activity — after all, why shouldn’t all of these organizations include this as core to their mission?

I have, and no doubt you have too, seen countless explanations why this should not, could not, will not, rather not, might not, or can not happen. Some well meaning and sincere, some from vested interests.

There are too many “powerful” interests against it. “Important” commercial interests are ambivalent. It is too hard “legally” or “politically” or “technically”. It is just too confusing to think through. There is no longer a critical mass that cares enough about keeping the future of the Open Internet open and royalty free. The well meaning are ignorant, or naive. Etc.

Don’t settle. Take the issue of royalty free, standardized codecs all the way to the top of these organizations. Do what it takes. If it requires new organizations, start them. If it requires revised processes, revise them. This is the spirit that built the Web and the Internet, this is the spirit that is its lifeblood, and this is the spirit that needs to be at the heart of its future.

Don’t settle. Don’t let those who have tried hard already, or have only half-heartedly tried, justify the status quo or their half-heartedness. Encourage them to focus on how to take the next steps.

Don’t let convenient “interpretations” of standards processes be an excuse for never starting, never finishing, or never setting up processes that will work. Need more legal background? Find it. More technical information? Get it.

Don’t settle. The world has plenty of patent-encumbered media standards, plenty of proprietary solutions, and plenty of standards in other domains that have figured out how to deliver royalty free.

But the world does not have enough royalty-free codec standards, so this is the task that needs to be addressed.

Rob


A “Julius Stonian” observation: standards groups aren’t “consensus organizations”; they are political organizations. Winners declare their way the “consensus”, and changes in political context shift the “consensus”.

So reflect the calls, in several slides at yesterday’s Hybrid Broadcast-Broadband (HBB) workshop, to look deeper into Intellectual Property Rights and other control points in the new “broadcast+broadband” (aka OTT TV) standards initiatives.

Cases in point:  UK Project Canvas, see filing here, and HBBTV, a “consortium” claiming pan-European fait accompli authority that is questioned by the European Broadcasting Union’s workshop slides.
So Stonian kudos to MHEG vendor S&T and the EBU for frank, to-the-political-point, what’s-in-it-for-me observations:

  • “Do you know what IPR issues exist in new initiatives like HBBTV???” (S&T slide 45)
  • “IPR and patent issues shall be resolved prior to rolling out the HBB services” (EBU slide 9)
  • “Unresolved IPR issues (particularly “submarine” patents)” (EBU slide 28)

The EBU’s observations are perhaps most interesting, because they draw from a February 2009 EBU recommendation and initiative, cited below, that cuts to the chase of many of the fundamental political and policy interests that are at stake when “broadcast meets broadband”, many of which don’t fit neatly into the current standards landscape status quo, but require both regulatory oversight and clear, direct articulation of broadcasters interests (as well as other interests, public and private).


EBU’s observations are a frank step ahead of the BBC’s have-it-both-ways “standards-based open environment” double talk challenged here (although neatly respected by Andrew Burke here), and miles ahead of the US ATSC Forum’s recent tepid “almost-as-good-as-Europe” defense.

So who is Julius Stone?

Since his death in 1985, the influence of Julius Stone, one of the 20th century’s great legal scholars, has enjoyed a strange yet welcome renaissance.

Born in Yorkshire in 1907 to Lithuanian Jewish refugees, Stone published the first of his 27 books, “International Guarantees of Minority Rights”, when he was only 25; it is still considered “the most authoritative and objective work in its field”.

An Australian law professor from 1942 on, Stone produced a half-century of jurisprudence that once seemed destined to drift into dated obscurity. A review of a 1992 biography questioned the wisdom of bothering with a biography at all, since “as the spheres are aligned in the academic firmament today, Julius Stone’s star is not burning particularly brightly”.

But the generations he marked knew better, including this writer, who was blessed in law school by Stone (he taught part-time in the US after his retirement) assigning one of his last books, “Conflict Through Consensus”, a thin dissection entirely unlike other law school casebooks, whose title alone knifed a foolishness of his century: the 51-year quest to legalistically define the term “aggression” to whitewash some of the century’s greatest criminal acts.

To this day the title “Conflict Through Consensus” rings like an alarm in my head whenever I see such cover-up terms as “consensus organization” bandied about in standards groups process documents.  Of course there is no consensus when there is conflict, and only a nitwit self-deceiving “expert” couldn’t see, as Stone once far more artfully put it, “the realities disclosed by an examination of the definition.”

In 1999 an Institute of Jurisprudence was founded in Stone’s name, and nowadays the accolades flow.

And a heavyweight annual lecture series features dense yet topical speeches on international jurisprudence, such as one by a Harvard law professor warning of the dangers when “[w]e underestimate the power of expert consensus”, or one by a St John’s law professor on how overstated notions of “legal pluralism” mislead in a “new lex mercatoria” of pseudo-governmental venues.

The common Stonian thread?  A warning, really, of the dangers of overbelief, particularly in an international organizational context, as conditions evolve.  Overbelief in a status quo of expert consensus. Overbelief in the ultimately deluding dead-end thought process that others might be fooled by a “consensus covering up conflict”.  Overbelief in misleading notions of “legal pluralism” that see in the status quo of international organizations some sort of law, rather than just behavior.

So cut the “pan-European consortium” PR happy talk, folks, and put the interests on the table (here’s how).  Who owns what, and who will get what?  The EBU has had the courage and insight to do as much; so should everyone else.

References

“I begin with a simple “Julius Stonian” observation: the international world is governed.   The domain outside and between nation states is neither an anarchic political space beyond the reach of law, nor a domain of market freedom immune from regulation.   Our international world is the product and preoccupation of an intense and ongoing project of regulation and management.”

David Kennedy (Harvard Law School professor), “Challenging Expert Rule: The Politics of Global Governance”, in 2004 Julius Stone Memorial Address, Sydney Australia.

European Broadcasting Union Recommendation, “Television in a Hybrid Broadcast/Broadband Environment”, February 2009, http://tech.ebu.ch/docs/r/r127.pdf:

“The EBU recommends that EBU Members must foster, in cooperation with the industry and standardization bodies, the development of hybrid broadcast/broadband technical platforms with the necessary technical commonality to ensure the development of a European-wide consumer market …

It is fair and reasonable that consumers should enjoy PSB [Public Service Broadcasting] ‘rich-media’ delivered over hybrid broadcast-broadband networks in the same way they consume broadcast-only content. They should be able to do so without organizations that have not contributed to the production process capitalizing on the process.

The EBU and its Members need to analyze the European and national regulatory frameworks, taking action where appropriate, to ensure that third parties associate their broadband services with EBU Member’s programmes only when authorized. For example, PSBs should retain editorial control of all content associated with their programmes (e.g. EPGs, surrounding text and rich multimedia, advertising and banners, picture-in-picture, interactive applications).”

I have filed comments in the UK Project Canvas public consultation.  To catch up on the UK context with global implications, watch James Murdoch’s mesmerizing anti-BBC screed, and say…

“This is the BBC.”

Perhaps no other single phrase has broadcast more meaning to more people in the great call to communicate that has gripped our species and planet in the last two centuries and fed waves of techno-political-industrial revolutions from telegraphs to telephones, radio to TV.

And now, working title “Project Canvas”, the BBC’s proposal for a broadcaster-led, free-to-view IPTV service.  This is not Telco TV, that cable-imitating subscription TV.

Project Canvas is the Internet TV every consumer wants (just hook the Net to my TV and let me watch for free) and (nearly) every incumbent dreads.

But the BBC-led Freeview is coming off a back-from-the-dead UK success, putting Free-To-View broadcasting back on the business-model map.  If anyone has earned the right to think different about IPTV, it is the BBC.

So little wonder trust is the watchword of the moment.

  • “Absence of Trust”, James Murdoch is shouting.
  • “Potential of Trust”, UK trust-busting regulators are whispering after killing the precursor Project Kangaroo.
  • And “BBC Trust”, the BBC’s watchdog-cum-champion who is running the Project Canvas public consultation.

In the 1981 MacTaggart Lecture, long before the World Wide Web as we know it today, and 28 years before James Murdoch reprised his father’s 1989 role on the Edinburgh International Television Festival stage, Peter Jay painted the high-stakes vision of today’s Project Canvas:

“Quite simply we are within less than two decades technologically of a world in which there will be no technically based grounds for government interference in electronic publishing. To put it technically, ‘spectrum scarcity’ is going to disappear. In simple terms this means that there will be as many channels as there are viewers. At that moment all the acrimonious and difficult debate about how many channels there should be, who should control them, have access to them and what should be shown on them can disappear. But it will only disappear if we all work, indeed fight, extremely hard.”

So why shouldn’t Project Canvas also be built on royalty-free standards, advancing rather than opposing the thrust of the Open Internet and World Wide Web that has enabled the Project Canvas opportunity in the first place?

Is the BBC slipping unthinkingly into a common parlance of the day – seduced by the cynical allure of a semi-open “standards-based open environment” — open enough to help me, closed enough to hurt my competitors, with vendor complicity bought by the potential competitive advantage of conveniently under-disclosed patent royalties or other control points?

This is an under-addressed question that the BBC Executive, BBC Trust and proposed joint venture have skirted so far in this consultation, and should be fully addressed before proceeding. A Free-To-View TV Internet is both a TV and a network stewardship.

CONTENTS

EXECUTIVE SUMMARY
DISCUSSION
I. TO DATE THE PROJECT CANVAS CONSULTATION HAS NOT ADEQUATELY CONSIDERED KEY IPR BEST PRACTICES
A. The Core Principle of “Standards-Based Open Environment” is Ill-Defined and Problematic
B. The Needs of and Responsibilities to the Future of the Open Internet Are Not Sufficiently Considered
II. THE LATEST BBC RESPONSE RAISES IPR PROCESS CONCERNS
A. Proposed Framework Compounds Core Problems
B. Preferred Partner DTG Does Not Adequately Address IPR Process
III. PROJECT CANVAS SHOULD ADOPT AN IPR PROCESS BASED ON FACILITATION, EX ANTE, & PREFERENCE FOR ROYALTY FREE
A. Facilitation
B. Ex Ante
C. Preference for Royalty-Free
CONCLUSION

It is very exciting to see the “Open Video” movement taking off and finding voice with the upcoming Open Video Conference.

This well-earned “open breakthrough” has been a long time coming.  After all, open standards, and particularly royalty-free standards, are the very foundation of the Open Internet as we know it, and Internet leaders are vocal that open and royalty free standards are essential to its future.

But where are the open standards for open video?  Why don’t we already have them?

Hint:  business guru W. Edwards Deming once said: “If you control an industry’s standards, you control that industry lock, stock, and ledger”.

This bitter pill of insight points to the first thing you should know about open video and open standards:

1) Open Video is Collateral Damage
of the Digital TV Standards Wars


It’s not hard to figure out that if you could quietly bake your patents into a standard and then name your price after the standard becomes widely deployed, you could make a lot of money and wield a lot of control.

Great work if you can get it, and that’s pretty much the story of a set of international video and digital TV standards that got going in the 1990s, with MPEG the poster child of modern patent-pooled standards.

Of course this is a tale of big bucks.  Think $26 to $40 per TV, billions of dollars in royalties on billions of devices, vendor shoot-outs, litigation, dueling industry groups, back-room deals, claims of abuse, and consumer groups pushing for public disclosure of confidential patent licensing practices hidden behind claims they are “reasonable and nondiscriminatory”  — “RAND” in standards-speak.

So it is hardly surprising that the RAND licensing practices developed through the DTV experience have contributed little or nothing to the royalty-free video technologies or standards now needed for broadband deployments, which today are essentially captured by proprietary solutions.

2) Standards Aren’t Just a “Techy Topic”
— They’re a Policy Problem


In fact, scratch almost any network policy issue and you’re likely to find a standards issue lurking inside.  Indeed, America’s broadband plan needs a standards policy.

Turns out country after country has a national “standards strategy”.

UK, France, Germany, Canada, and Korea to name a few.  Some closely tie international standards advantage to IPR & patents, as in Japan (“Intellectual Property Strategy Headquarters decided the International Standardization Comprehensive Strategy, with the aim of enhancing the international competitiveness of Japanese industries and contributing to setting global rules”) and China (“[the] Trade Barrier Treaty [TBT] can be used under the mask of standardization, patents and intellectual-property rights to obtain most world trade advantages.”).

And those that don’t, like Taiwan, have vendors crying foul.

Even in the U.S., a prescient 1992 Congressional report warned:

“The United States has been fortunate to have a pluralistic, industry-led standards setting process that has served us well in the past. Whether it will continue to do so in the future in the face of bruising international economic competition is uncertain.”

So if you think standards are for geeks and not wonks, think again.  As a Toyo University professor recently put the blunt zen to it:

“Standardization activities are political negotiations and not a forum for assessing which technologies excel over others.”

3) Open Source Doesn’t Solve
the Open Standards Problem


I don’t actually know anyone who is really confused or bent out of shape about the difference between “open source” and “open standard”, or who believes that one is a good substitute for the other.  They are of course different things (one’s a license, one’s a specification, and so on).

But if you are inclined to dig into this, check here or search the Web for “open source vs. open standards” and you’ll find numerous nice explanations.

4) Don’t Confuse Patent Reform with
Patent Licensing (They’re Different)


Another potential source of confusion is the distinction between patent reform — various proposals to make it more difficult to get a patent, to assure that patents are of appropriate quality, to tighten definitions of obviousness and so forth — and patent licensing — the rules and practices of patent pool licensing, disclosure, and IPR (Intellectual Property Rights) policies of standards groups.

Patents have been around for centuries, and so have patent pools, but the regulatory and policy linkages between the two are fewer than it might seem.  In fact, for a long time patent pools were rare and highly frowned upon by regulators (they weren’t even mentioned in the 1992 Congressional report on standards).  Then, in the late 1990s, came what many would trace as the beginning of the “modern” patent pool era: the U.S. Department of Justice’s authorization of the MPEG patent pool.

Pools and patents serve very different policy needs, raise different policy concerns, and by and large are even regulated by different entities.

So unless you are counting on a major scaling back of the patent system that somehow just makes patent issues go away (and few people are), it makes more sense to find a way, as many have, to achieve business model results.

5) “RAND” Isn’t


So what does the term “reasonable and nondiscriminatory” actually mean?

In theory it’s the commitment to fair licensing required of patent holders in standards groups that — unlike the W3C, which defines HTML — are open to patents.

But in reality, since price isn’t set until after the standard comes out (sometimes years later), RAND ends up meaning whatever the patent holders want it to mean.

Studies of RAND licensing typically conclude:

“few SSOs [standard-setting organizations] define the term ‘reasonable and nondiscriminatory’ or have mechanisms to resolve disputes about its interpretation”

So Richard Stallman said it well:

“half of “RAND” is deceptive and the other half is prejudiced”

Still, sincere efforts have been made to give the term “reasonable and nondiscriminatory” a meaning in standards IPR policies. For example, the American Bar Association’s Standards Development Patent Policy Manual is a good source.  But good luck if you hope to wade through lawyerly weighing of “multiple factors” to get any particular practice declared unreasonable or discriminatory.

6) Don’t Fall For FUD — There Is a Solution


Finally, it seems there is a never-ending version of Fear, Uncertainty and Doubt that goes something like “you can never really be sure that someone might have a patent so there is no way to ever be sure a standard is truly royalty free”.

To be blunt: this is nonsense, and don’t believe it.  There are thousands of royalty-free standards in the world, and although the number of patent disclosures started to accelerate in the 1990s, the vast majority of standards have no particular IPR or patent issues to speak of.

And even in areas of particular patent thickets and patent controversies, standards organizations with a determined and specific royalty-free policy and process (Khronos and Web3D are a couple of examples) have successfully established their royalty-free credentials.  Sure it takes diligence, a “Freedom-to-Operate” analytical approach, proactive patent reading, time and determination.  Dirac is already making good progress down this path.

So get going Open Video-ers — let’s get some truly open, truly royalty-free standards initiatives going!


References

“The Internet is fundamentally based on the existence of open, non-proprietary standards” Vint Cerf, “the father of the Internet” cited in The Importance of Open Standards in Interoperability, OFE Onepage Brief No.1 (31.10.08.) Available at http://www.openforumeurope.org/library/onepage-briefs/ofe-open-standards-onepage-2008.pdf.

“It was the standardisation around HTML that allowed the web to take off. It was not only the fact that it is standard but the fact that it is open and royalty-free. If HTML had not been free, if it had been proprietary technology, then there would have been the business of actually selling HTML and the competing JTML, LTML, MTML products.”

Tim Berners-Lee, quoted in Standards and the Future of the Internet, Declaration 25th February 2008, at http://www.openforumeurope.org/press-room/press-releases/standards-and-the-future-of-the-internet/