Transparency may undermine online competition: Commission’s Final Report on the E-commerce Sector Inquiry

On 10 May 2017 the European Commission published its Final Report on the E-commerce Sector Inquiry, together with accompanying Q&As and, for those who want something rather longer, a Staff Working Document.

The inquiry, launched over two years ago and part of the wider Commission Digital Single Market Strategy (see our earlier comment here), has gathered evidence from nearly 1,900 companies connected with the online sale of consumer goods and digital content.
The Report’s main findings

  • Price transparency has increased through online trade, allowing consumers instantaneously to compare product and price information and to switch between online and offline channels. The Commission acknowledges that this has created a significant ‘free-riding’ issue, with consumers using the pre-sales services of ‘brick and mortar’ shops before purchasing products online. 

  • Increased price transparency has also resulted in greater price competition both online and offline.  It has allowed companies to monitor prices more easily, and the use of price-tracking software may facilitate resale price maintenance and strengthen collusion between retailers.

  • Manufacturers have reacted to these developments by seeking to increase their control of distribution networks through their own online retail channels, an increased use of ‘selective distribution’ arrangements (where manufacturers set the criteria that retailers must meet to become part of the distribution system) and the introduction of contractual restrictions to control online distribution.
How about changes to competition policy? 

The Report does not advocate any significant changes to European competition policy, but rather confirms the status quo. The key points of interest are as follows: 

  • Selective distribution – whilst the Commission has not recommended any review of the Vertical Block Exemption Regulation (‘VBER’) ahead of its scheduled expiry in 2022, the Commission notes that the use of selective distribution systems aimed at excluding pure online retailers, for example by requiring retailers to operate at least one ‘brick and mortar’ shop, is only permissible where justified (for example, in respect of complex or high-quality goods, or to protect a brand image).

  • Pricing restrictions – dual pricing (i.e. differential pricing depending on whether sales are made online or through a bricks and mortar outlet) will generally be considered a ‘hardcore’ (or object) restriction of competition when applied to one and the same retailer, although it is capable of individual exemption under Article 101(3) TFEU, for example if the obligation is indispensable to address free-riding by offline stores.  

  • Restrictions on the use of marketplaces – the Report finds that an absolute ban on the use of an online marketplace should not be considered a hardcore restriction, although the Commission notes that a reference for a preliminary ruling is pending before the CJEU (C-230/16 - Coty Germany v Parfümerie Akzente).

  • Geo-blocking – a re-emphasis of the existing position on territorial and customer restrictions – active sales restrictions are allowed, whereas passive sales restrictions are generally unlawful. Within a selective distribution system, neither active nor passive sales to end users may be restricted. The Commission also makes clear that companies are free to make their own unilateral decisions on where they choose to trade.

  • Content licensing – the significance of copyright licensing in digital content markets is noted, as is the potential concern that licensing terms may suppress innovative business practices.  

  • Big Data – possible competition concerns are identified relating to data collection and usage. In particular, the exchange of competitively sensitive data (e.g. in relation to prices and sales) may lead to competition problems where the same players are in direct competition, for example between online marketplaces and manufacturers with their own shop.  
What happens next?

The Commission has identified the need for more competition enforcement investigations, particularly in relation to restrictions of cross-border trade.  It is expected that more investigations will be opened in addition to those already in play in respect of holiday bookings, consumer electronics and online video games. In a more novel approach, the Commission’s press release also name-checks a number of retailers (in particular in fashion) who have already reformed their business practices “on their own initiative”.
  
The Commission also highlights the need for a consistent application of the EU competition rules across national competition authorities.  It remains to be seen whether the Commission will seek to use its enforcement investigations to address inconsistencies such as those evident in the more interventionist stance of some national authorities (e.g. the Bundeskartellamt) in respect of issues such as pricing restrictions.

Patent licensing: 5G & the Internet of Things

The next telecommunications standard, 5G, and the nascent Internet of Things (IoT), promise a world of high-speed interconnectivity. We’re already accustomed to people talking to smart devices to ask them to play music, or to order a taxi or takeaway. The technology for truly smart homes and even smart cities is following closely (see here for examples of how the IoT is already being used).

Standardisation will be essential to maximising the benefits of both the IoT and 5G. After all, there’s no point in having a smart thermostat that can be adjusted remotely if you can’t connect to it whilst out of the house because your phone is made by a different manufacturer. Standardisation will ensure that 5G/IoT devices and systems can connect and work together.

There will be thousands of patents essential to the operation of the standards developed. As the IoT grows and 5G is rolled out, the issue of how these patents are licensed will become increasingly important. Standard essential patents (SEPs) have to be licensed on a FRAND basis, but determining a FRAND royalty rate is a challenging task.

The Commission’s Roadmap

Following two studies on SEPs published at the end of last year (see here), on 10 April 2017, the Commission released a Roadmap that sets out its plan to publish a ‘Communication on Standard Essential Patents for a European digitalised economy’ later this year, possibly as early as May or June. The Communication (which will not have the binding force of a Directive or Regulation) is intended to complement the Commission’s Digital Single Market project, and to help work towards the goal of having 5G rolled out across the EU by 2025. 

The Roadmap identifies three main issues that the Commission will seek to address in its Communication to ensure a balanced, flexible framework for SEP licensing:

  • Opaque information about SEP exposure: the lack of effective tools for potential licensees to identify which patents they need to take licences for in order to implement relevant standardised technologies.
  • Unclear valuation of patented technologies: the difficulties in assessing the value of new technology (for both licensors and licensees), including the lack of any widely accepted methodology.
  • Risks of uncertainty in enforcement: the general framework provided by the CJEU in Huawei v ZTE is a starting point for agreeing a FRAND royalty rate, but it does not provide complete guidance. There are many technical issues that aren’t addressed, such as how portfolio licensing, related damages claims, and ADR mechanisms should be dealt with. (NB: some of these issues have been recently addressed in the UK case Unwired Planet v Huawei, see our initial thoughts on that case here.)

It is unclear if the Communication will address other issues with standardisation and SEP licensing, such as over-declaration, hold-up, hold-out, the appropriate royalty base (a particularly difficult challenge in the diverse world of the IoT) or whether a total aggregate royalty burden is appropriate.

Use-based licensing?

The Roadmap does not offer any specific details as to how the Commission intends to solve the issues it identifies. However, according to MLex (here – subscription required), an outline of the Commission’s Communication seen in February suggested that the Commission was considering a licensing model that would enable licensors to offer licensees different royalty rates depending on how the relevant technology is used. This could extend to allowing licensors to refuse to offer a licence if the final use of the technology cannot be identified and tracked.

This suggestion has caused concern amongst a number of companies. ACT | The App Association, an organisation sponsored by Apple, Facebook, Microsoft, PayPal & others which represents more than 5,000 small technology firms, has written to the Commission claiming that such a licensing model poses a substantial threat to innovation. It argues that in order for suppliers to obtain licences under this model, they might be required to monitor their customers’ business practices and potentially to charge different customers different prices depending on how they use the technology. It also claims that this model could appropriate the value created by new innovators. If a company succeeds in developing a new use for a particular sensor by incorporating it into a health app for example, it might find itself being charged higher royalties for this new use of the sensor.   
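In pricing terms, the model reportedly under consideration amounts to a royalty schedule keyed to the end use of the technology. The following is a deliberately simplified sketch of that idea; all of the use categories, rates and function names below are invented for illustration and do not come from the Commission's outline:

```python
# Hypothetical sketch of use-based SEP licensing: the royalty charged for
# the same patented technology varies with the final use of the product.
# All categories and rates here are invented for illustration only.

USE_BASED_RATES = {
    "smart_thermostat": 0.50,   # per-unit royalty in EUR (hypothetical)
    "health_app_sensor": 2.00,  # a new, higher-value use commands more
}
DEFAULT_RATE = 1.00  # fallback per-unit royalty for other known uses

def royalty(use, units):
    """Royalty due for a declared end use. Under the model described,
    a licence might even be refused if the end use cannot be identified."""
    if use is None:
        raise ValueError("end use unknown: licence may be refused")
    return USE_BASED_RATES.get(use, DEFAULT_RATE) * units

# The same component costs four times as much to license when put to the
# higher-value use, which illustrates the innovators' concern noted above.
thermostat_bill = royalty("smart_thermostat", 1000)
health_bill = royalty("health_app_sensor", 1000)
```

The sketch also shows why the model requires tracking: the licensor can only pick a rate (or refuse a licence) if it knows how each unit is ultimately used.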

We have reported before on how new licensing models may be required to make best use of the IoT. Qualcomm, Ericsson and Royal KPN (among others) have backed a new licensing platform called Avanci, which is designed to remove the need for lengthy bilateral licence negotiations by making flat-rate FRAND licences available for particular patent portfolios. However, more than six months after Avanci's launch, there have been no reports yet of it successfully concluding any licence agreements.

The proper royalty base for patent licences has been a controversial topic for a number of years now. There has been considerable debate over whether licences should be based upon the price of the smallest component in which the patent is implemented, or the final price of the end product. This issue has been addressed by the courts on occasion, particularly in the US (see our post on CSIRO v Cisco here for example).

The new Communication is intended to provide best practice guidance on SEP licensing. If the Commission does opt for a use-based licensing model, this would be a controversial choice. However, whatever the Commission decides, given the number of conflicting interests and the amounts of money at stake, it is unlikely to satisfy everyone. 

Unwired Planet v Huawei: Is FRAND now a competition law free zone? Not so fast…

It has been two weeks since Mr Justice Birss handed down his latest judgment in Unwired Planet v Huawei (see here for a summary), which is almost long enough to get to grips with the 150 or so pages. There has already been a huge amount of discussion as to what this judgment means in practice and we have even overheard some suggest that, when it comes to FRAND in the future, we can simply ignore competition law altogether. This week we were invited by our friends at the renowned IP law blog, IPKat, to have our say on this. You can check out our thoughts on the IPKat blog here.

Unwired Planet v Huawei: UK High Court determines FRAND licence rate

Mr Justice Birss has just handed down the first decision by a UK court on the ever controversial topic of what constitutes a FRAND royalty rate.  At well over 150 pages, the judgment covers a lot of ground: a lot of ink is likely to be spilled about it over the coming weeks and months.  From what we’ve seen so far, the judge has not been afraid to make findings that will have a considerable impact on licensing negotiations in the TMT sector. 

We’ve summarised the headline conclusions below, but also keep an eye out for future posts in which we’ll analyse some of the judge’s findings and reasoning in more detail.

Background

In March 2014, Unwired Planet (“UP”) sued Huawei, Samsung and Google for the infringement of six of its UK patents.  Five of these were standard essential patents (“SEPs”) that UP had acquired from Ericsson.  They related to various telecommunications standards (2G GSM, 3G UMTS and 4G LTE) for mobile phone technology. 

Five technical trials, numbered A-E, were listed on the validity and infringement of the patents at issue.  These were to be followed by a non-technical trial on competition law and FRAND issues.  UP’s patents were found valid and infringed in both trial A and trial C, but two were held invalid for obviousness in trial B. Trials D and E were then stayed, and as Google and Samsung had settled with UP during the proceedings, this left just Huawei and UP involved in the seven-week non-technical trial, for which judgment has just been given. 

Judgment

There’s a lot to unpack in this judgment, but here is a short list of what we think are the most important findings:

General principles:
  • There is only one set of FRAND terms in a given set of circumstances.  Note the contrast between this and the comments of the Hague District Court in the Netherlands in Archos v Philips (here, in Dutch) which seem to interpret the CJEU decision in Huawei v ZTE as meaning that there can be a range of FRAND rates.
  • Injunctive relief is available if an implementer refuses to take a FRAND licence determined by the court. Mr Justice Birss indicated that an injunction would be granted against Huawei at a post-judgment hearing in a few weeks’ time (although presumably Huawei can avoid this by now taking a licence on the terms set by the Judge).
  • UP is entitled to damages dating back to 1 January 2013 at the determined major markets FRAND rate applied to UK sales. 
  • What constitutes a FRAND rate does not vary depending on the size of the licensee.
  • For a portfolio like UP’s and for an implementer like Huawei, a FRAND licence is worldwide.
  • It’s still legitimate to make offers higher or lower than FRAND if they do not disrupt or prejudice negotiations.
Abuse of dominance:
  • UP did not abuse its dominant position by issuing proceedings for an injunction prematurely (it began the litigation without complying with the Huawei v ZTE framework).
Calculating the FRAND rate:
  • A FRAND royalty rate can be determined by making appropriate adjustments to a ‘benchmark rate’ primarily based upon the SEP holder’s portfolio. 
  • In the alternative, if a UK-only portfolio licence was appropriate, an uplift of 100% on the benchmark rates would be required.
  • Counting patents is the only practical approach for assessing the value of sizeable patent portfolios, although it may be possible to identify a patent as an exceptional ‘keystone’ invention.
  • Comparable, freely negotiated licences can be used to determine a FRAND rate.
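As a purely arithmetical illustration of the patent-counting approach and the UK-only uplift described above (all figures below are hypothetical and considerably simpler than the judgment's actual benchmark analysis):

```python
# Hypothetical sketch of a patent-counting benchmark rate. All numbers
# are invented for illustration; the judgment's actual figures and
# adjustments are far more involved.

def benchmark_rate(portfolio_seps, total_relevant_seps, aggregate_royalty):
    """Approximate a portfolio's benchmark royalty rate as its share of
    the relevant SEPs multiplied by a total aggregate royalty burden."""
    share = portfolio_seps / total_relevant_seps
    return aggregate_royalty * share

def uk_only_rate(worldwide_rate, uplift=1.0):
    """Apply an uplift (100% by default, per the judgment's alternative
    finding) if only a UK portfolio licence were appropriate."""
    return worldwide_rate * (1 + uplift)

# Example: a holder with 6 of 800 relevant SEPs, against a hypothetical
# 10% aggregate royalty burden, then doubled for a UK-only licence.
rate = benchmark_rate(6, 800, 0.10)
uk_rate = uk_only_rate(rate)
```

The point of the sketch is simply that, once patent counting is accepted as the practical method, the benchmark rate reduces to a portfolio-share calculation, with adjustments (such as the UK-only uplift) layered on top.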
The FRAND rates as determined:

[The judgment’s table of determined rates is not reproduced here.]
Other FRAND terms:
  • The Judge goes into some detail as to the terms that will be FRAND in the licence between Unwired Planet and Huawei – much of this will be worth reading for licensors and licensees in this field.  Of particular note is the royalty base for infrastructure (excluding services). 
Other remedies:
  • Damages are compensatory and are pegged to the FRAND rate.
Comment

There have been near continual disputes between the major players in the TMT field over the last decade or so.  The meaning of FRAND has been strategically important in a large number of cases.  However, many of these companies are very effective negotiators.  In the vast majority of cases, they are able to agree licences without resorting to litigation.  Where proceedings are initiated, the parties are usually able to settle long before a judgment is reached, particularly given the time and expense required to take a FRAND case all the way to trial.  (Such expense is, however, usually dwarfed by the value of the licence – many licences in this field are valued in $billions.) 

The scarcity of judicial opinion in this area means this is a rare opportunity to see how a respected UK judge has approached a number of the unresolved questions regarding FRAND. 

A number of significant questions remain unanswered however, and we will be exploring these in future blog posts.  There’s also the matter of the upcoming post-judgment hearing in a few weeks’ time, which will establish whether or not Huawei will actually be subject to an injunction in the UK, and of course the chance that either party might wish to appeal.  All in all, there’s plenty of interest to talk about, plenty of advice to be given to clients, and the FRAND debate will undoubtedly continue on.

EU Commission’s Microsoft / LinkedIn Decision – watershed for competition and data?

On 6 December 2016, the European Commission approved the acquisition of LinkedIn by Microsoft, conditional on compliance with a series of commitments.  The full text of the decision has recently been published, affording some useful insight into the Commission’s reasoning.

The merger is one of a number of high profile technology cases in which data is the key asset. Cases such as this are challenging the Commission’s relaxed attitude to the potential effects on competition of deals involving significant volumes of data (for example, the Commission’s 2014 clearance decision of Facebook’s acquisition of WhatsApp – now the subject of an investigation into whether Facebook provided misleading information in the context of that merger review).  

Similarly, in the LinkedIn / Microsoft decision, the Commission’s assessment was that the post-merger combination of data (such as the individual career information, contact details and professional contacts of users) did not raise competition concerns.

The Commission identified two potential concerns: 

  1. The combination of data may increase the merged entity’s market power in the data market or increase barriers to entry / expansion for competitors who need this data in order to compete – forcing them to collect a larger dataset in order to compete with the merged entity; and 
  2. Even if the datasets are not combined, the companies may have been competing pre-merger on the basis of the data they control and that this competition could be eliminated by the merger. 
These concerns were dismissed by the Commission on a number of grounds, the most interesting being that the combination of the parties’ respective datasets is unlikely to raise barriers to entry or expansion for other players, as large amounts of internet user data will continue to be available for advertising purposes outside Microsoft’s exclusive control.

The Commission’s approach contrasts with that of some commentators (and indeed some of the Commission’s own non-merger enforcement activities) which have highlighted the potential for platforms to gain an unassailable advantage over competitors in relation to data. 

Concerns of data ‘tipping points’ were among the reasons why French and German competition authorities have published a joint paper on data and competition law. 

Germany has amended its domestic competition law to increase the legal tools available to prevent market dominance and abuses in relation to data. These changes will come into force later this year and include: 

  1. (controversially) amending the German merger thresholds to require notification of deals involving innovative companies (such as start-ups) with a transaction value of more than EUR 400 million; and
  2. introducing specific criteria for reviewing market power in (digital) multi-sided markets, for example allowing the Bundeskartellamt (BKA) to consider: concentration tendencies; the role of big data; economies of scale; user behaviour; and users’ ability to switch platforms.
The additional merger threshold is intended to allow the BKA to review mergers in which the transaction value is high but the parties’ turnover in Germany is below the existing EUR 25 million threshold.  Facebook’s USD 22 billion acquisition of WhatsApp, for example, was not notifiable in Germany (although it was reviewed by the Commission). 

France and Germany’s robust approach to competition concerns in relation to data contrasts with the less interventionist position in the UK. This is demonstrated by a recent UK government report on digital platforms, which found that: “In many sectors, e.g. search engines or social networks, firm behaviour and survey evidence suggests that in the event of even a modest hike in costs users would expect to find an alternative and cease using the service. It is difficult to reconcile this behaviour, and this finding, with the sense that there is an important “moat” which prevents users switching to alternative services over time. Any moat that does exist only seems to be enough to keep them in one place if the platform continues to be free and improve its service over time.”

Given the moves towards ex ante regulation of data in France and Germany, and given the ex post investigation into Facebook/WhatsApp, it remains to be seen whether future merger investigations will take a similarly permissive approach.

Will pricing algorithms be the European Commission’s next antitrust target?

There has been considerable debate over the last year or so about the potential anti-competitive impacts of pricing algorithms. They could lead to discriminatory pricing, for example a company quoting different prices to different people based on an algorithmic analysis of their personal data, or cases of collusion, for example companies using algorithms to automatically fix prices. 

In a recent speech, Commissioner Vestager sounded a clear warning against the latter example: “companies can’t escape responsibility for collusion by hiding behind a computer program”. She also indicated that the use of pricing software forms part of the issues being investigated in the Commission’s new investigation into price-fixing in consumer electronics.

However, as pricing algorithms grow in complexity and sophistication, and their use becomes more prevalent, it will not be easy for competition authorities to establish when the use of such algorithms amounts to an actionable infringement of competition law.

How might pricing algorithms be used?

Professors Ariel Ezrachi and Maurice Stucke have identified four scenarios in which pricing algorithms may facilitate anti-competitive collusion (see here; the ideas are developed in more detail in their book Virtual Competition). 

The first is where firms collude as in a traditional cartel, but use computers to manage or implement the cartel more effectively, or to monitor compliance, for example by utilising real-time data analysis. Competition authorities have already investigated this kind of subject-matter – for example, the CMA issued an infringement decision last year against two companies that agreed to use algorithms to fix prices for the sale of posters and frames on Amazon (see here).

The second example is a hub-and-spoke scenario whereby one pricing algorithm may be used to determine prices charged by numerous users. Evaluating this sort of issue is a current challenge for competition authorities. Last year, in Eturas, the CJEU held that travel agents participating in a platform that implemented a discount cap could be liable if they knew about the anti-competitive agreement and failed to distance themselves from it (see here).  An ongoing case in the US (Meyer v Kalanick) is examining Uber’s ‘surge’ pricing algorithm, which increases the price of an Uber journey as demand increases.  The claimants allege that this constitutes an implied horizontal price-fixing agreement.  

The examples seen so far involve relatively straightforward cases of the use of algorithms as an aid or means to fix prices (although the Uber example arguably involves only unilateral conduct, rather than collusion).  However, Ezrachi and Stucke’s final two scenarios move into more uncertain territory – what if there is no express collusion by the companies? 

In the third scenario, each firm independently adopts an algorithm that continually monitors and adjusts prices according to market data. Although this can lead, effectively, to tacit collusion, particularly in oligopolistic markets (those with a small number of sellers), there is no agreement between companies that could form the basis of an investigation.  There can nevertheless be an anti-competitive effect: if an online retailer can track the prices charged by a rival for common products and immediately adjust its own prices to match any discounts, it can prevent the rival from gaining a reputation for lower prices. The incentive for either retailer to lower its prices is removed.  On the other hand, examples from the analogue world suggest that this kind of market monitoring can be used to ensure lower prices for consumers, at least for now (think of supermarkets’ price-match promises).
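A minimal sketch of the monitor-and-match behaviour described in this third scenario (the product names, prices and function name are invented; real repricing software is far more sophisticated and operates continuously against live market feeds):

```python
# Hypothetical sketch of scenario three: a retailer's repricing rule that
# watches a rival's prices and immediately matches any discount, removing
# the rival's incentive to cut prices in the first place.

def reprice(own_prices, rival_prices):
    """Return updated prices: for each common product, match the rival
    whenever the rival is cheaper; otherwise keep the current price."""
    updated = dict(own_prices)
    for product, rival_price in rival_prices.items():
        if product in updated and rival_price < updated[product]:
            updated[product] = rival_price  # match the discount instantly
    return updated

own = {"kettle": 29.99, "toaster": 24.99}
rival = {"kettle": 27.49, "toaster": 26.00}
matched = reprice(own, rival)  # kettle is matched down; toaster unchanged
```

Note that no communication between the firms is needed: each retailer unilaterally running a rule like this is enough to neutralise discounting, which is precisely why this scenario sits outside the traditional ‘agreement’ framework.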

In the fourth scenario, machine learning and the increasing sophistication of algorithms expand tacit collusion beyond oligopolistic markets, making it very difficult even to detect when it’s happening.

The latter two examples pose obvious difficulties for competition authorities. If they do consider such actions to be anti-competitive, how would they prove the requisite intention to co-ordinate prices?

How will competition authorities react?

As discussed above, competition authorities have already investigated companies using pricing algorithms to collude. We have previously noted the CMA’s interest in developing digital tools to aid its investigations (here). It seems certain that such tools will be necessary as these algorithms become more sophisticated and harder to detect.

The actions of non-dominant companies in using pricing algorithms whilst acting independently do not fall within the current competition law framework, even if such use ultimately results in higher prices for consumers. Commissioner Vestager has accepted that “what matters is how these algorithms are actually used”. This sensibly suggests that for now the Commission’s focus will remain on the more clear-cut cases of collusion. Anything else is arguably a matter for policy and regulation rather than enforcement by competition authorities.

However, Commissioner Vestager also stated that “pricing algorithms need to be built in a way that doesn’t allow them to collude”, suggesting that they may need to be designed to reject offers of collusion. It is unclear whether this means Commissioner Vestager intends to target the use of pricing algorithms more generally, or simply to drive home that the competition rules apply equally where collusion is achieved algorithmically.

The fourth scenario, in which machine learning algorithms tacitly collude to fix prices, does sound speculative. However, recent developments such as Carnegie Mellon’s Libratus beating four of the world’s best professional poker players (here) and Google DeepMind’s AlphaGo victory against Lee Sedol (here) suggest that it may not be far from becoming reality.

Back to the future: the Commission opens e-commerce competition investigations

True to its current focus on all things digital, the European Commission has recently announced that it has launched three separate investigations into whether certain online sales practices prevent, in breach of EU antitrust rules, consumers from benefiting from cross-border choice in their purchases of consumer electronics, video games and hotel accommodation at competitive prices.

The context to the investigations is the Commission's Digital Single Market Strategy and its related sector inquiry on e-commerce, which suggested that online sales restrictions were widespread throughout the EU (previous posts here and here).

The Commission is now examining whether the companies concerned are breaking EU competition rules by “unfairly restricting retail prices” or by excluding customers from certain offers because of their nationality or location (geo-blocking). 

The Commission’s rationale for the inquiries is that these practices may make cross-border shopping, or online shopping in general, more difficult, ultimately harming consumers by preventing them from benefiting from greater choice and lower online prices.  Whether the evidence gathered from the investigations ultimately bears out this hypothesis is very much an open question. 

Whatever the wider benefits to the Commission of the sector investigation, it is questionable whether these investigations in themselves justify the full arsenal of an antitrust sector inquiry.  To judge by the press release, at least a significant part of the Commission’s concern appears to relate to classical infringements of competition law – resale price maintenance and contractual barriers to parallel trade – which merely happen to have come to light through the sector inquiry.  Time will tell whether this hypothesis is correct, or whether more specific types of online anti-competitive conduct are in fact concerned.

Amazon’s E-Books antitrust saga - War now Peace?

Amazon has offered commitments to the European Commission to end the antitrust investigation into its use of ‘most favoured nation’ (MFN or parity) clauses in its e-books contracts with publishers, launched in 2015. The Commission is now inviting comments on these proposed commitments from customers and rivals. 

The Commission’s concern is that the clauses may breach EU antitrust rules and result in reduced competition among e-book distributors and less consumer choice.

Amazon’s MFN clauses require publishers to inform Amazon about more favourable terms or conditions offered to Amazon's competitors and to offer Amazon similar terms and conditions. This includes requiring publishers to offer Amazon any new or different distribution methods or release dates, any better wholesale prices or agency commissions, or to make available a particular catalogue of e-books.

The Commission considers that the cumulative effect of these clauses is to make it harder for other e-book retailers to compete with Amazon by developing new and innovative products and services. It also takes the view that imposing these clauses on publishers may amount to an abuse of a dominant market position.

In parallel, Audible, Amazon’s audio-books subsidiary, has announced the end of its exclusivity provisions in its distribution agreement with Apple following a joint antitrust investigation by the Commission and the German competition authority, the Bundeskartellamt. 

Amazon’s proposed commitments

Amazon disputes the competition law basis for the Commission’s investigation.  Nevertheless, in order to bring the investigation to a close (and to avoid the risk of a costly infringement decision), it has offered commitments:
  • Not to enforce:
    1. any clause requiring publishers to offer Amazon similar terms and conditions as those offered to Amazon's competitors; or 
    2. any clause requiring publishers to inform Amazon about such terms and conditions. 
  • To allow publishers to terminate e-book contracts that contain a clause linking discount possibilities for e-books to the retail price of a given e-book on a competing platform. Publishers would be allowed to terminate the contracts upon 120 days' advance written notice.
  • Finally, not to include, in any new e-book agreement with publishers, any of these clauses.
The commitments would apply for five years and (as is usual for behavioural commitments) be subject to oversight by a monitoring trustee.

E-Books - déjà vu? 

This is not the first time the Commission has investigated the e-books sector. In 2011 it opened antitrust proceedings against Apple and five international publishing houses (Penguin Random House, Hachette Livres, Simon & Schuster, HarperCollins and Georg von Holtzbrinck Verlagsgruppe) on the basis that it considered that they had colluded to limit retail price competition for e-books. In that case the companies also offered commitments to address the Commission's concerns (see our previous comment).

Where does this leave MFNs?

The Commission and national competition authorities have conducted investigations into MFN clauses in a number of other sectors, including online motor insurance and online sports goods retail, on which we have previously commented.  

While MFNs are not per se unlawful, and in some circumstances may even be pro-competitive, companies should carefully consider their possible anti-competitive effects before including them in new contracts. 

Compulsory Licensing: the Brave New World for (non-personal) data in Europe?

The Commission has published its Data Economy Package for non-personal data*, which is the final building block of its Digital Single Market (DSM) strategy – see our previous posts on the DSM here, here and here.

With its new package, the Commission aims to: 

  • review the rules and regulations impeding the free flow of non-personal data and present options to remove unjustified or disproportionate data location restrictions; and
  • outline legal issues regarding access to and transfer of data, data portability and liability of non-personal, machine-generated digital data.
The package includes a Consultation on Building the European Data Economy, a Communication and Staff Working Paper.

Why is the Commission acting on data?

The economic rationale is that the EU data economy was worth €272 billion in 2015 and is growing at close to 6% a year.  It is estimated that it could be worth up to €643 billion by 2020, if appropriate policy and legal measures are taken.  Data also forms the basis for many new technologies, such as the Internet of Things and robotics.  The Commission’s ambition is for the EU to have a single market for non-personal data, which the EU is a long way from achieving.  The Commission frames the issue as one of the “free movement of data”, suggesting something akin to a fifth EU fundamental freedom. 

What action is the Commission proposing to take? 

The Consultation sets out options for addressing the legal barriers to the free flow of non-personal data, in particular in relation to:

  • data access and transfer;
  • unjustified localisation of data centres;
  • liability related to data-based products and services; and
  • data portability.
Some of the more eye-catching (and interventionist) options set out by the Commission are the introduction of:

  • legislation to define a set of non-mandatory contract rules for B2B contracts when allocating rights to access, use and re-use data;
  • a sui generis data producer right for non-personal machine-generated data, with the aim of enhancing tradability;
  • an obligation to license data generated by machines, tools or devices on fair, reasonable and non-discriminatory (FRAND) terms; and
  • technical standards to facilitate the exchange of data between different platforms.
The Consultation is also seeking evidence on whether anti-competitive practices are restricting access to data.  In particular, the Consultation refers to: the use of unfair business practices; the exploitation of bargaining power when negotiating licences; and abuses of a dominant position.  Interestingly, it also asks whether current competition law and its enforcement mechanisms sufficiently address the potentially anti-competitive behaviour of companies holding or using data.

So where are we headed?

To date, competition law has mandated the compulsory licensing of IP rights only in exceptional circumstances, where the owner has a dominant position and there are no alternatives to the technology.  The Commission is now considering a range of regulatory options, of which the most interventionist could require access to be granted to non-personal data in a far wider range of contexts (albeit without any proposal to amend the existing database right and the new Trade Secrets Directive). These issues are likely to be of considerable concern for any company holding large amounts of non-personal data.  The Consultation runs until 26 April 2017. 


* Non-personal data includes personal data that has been anonymised.

European Commission publishes two new studies on the interplay between patents and standards

In December 2016, the European Commission published two new studies on standard essential patents (SEPs).  As regular readers of this blog will know, SEPs protect technologies that are essential to standards such as 4G (LTE) and Wi-Fi, which rely on hundreds of patented technologies to function effectively.  For the same reason, SEPs will be crucial to 5G and the nascent “Internet of Things”.

The two studies form part of the Commission’s project to improve the existing IPR framework and to ensure easy and fair access to SEPs.  The specific aims of the Commission’s project were outlined in its April 2016 Communication “ICT Standardisation: Priorities for the Digital Single Market”, which we commented on here.

The first new study, titled “Transparency, Predictability and Efficiency of SSO-based Standardization and SEP Licensing”, and prepared by economics consultancy Charles River Associates (CRA), examines a number of issues relating to the standardisation process and SEP licensing.  Building on a previous 2014 report on patents and standards, and on the responses to a 2015 public consultation, the authors outline what they see as the main “problems which have real significance and impact ‘on the ground’”.  They then go on to consider a number of specific policy options which might help alleviate those problems.  Particular focus is placed on “practical and readily implementable solutions” which would, according to the authors, enhance the transparency of the standardisation process and reduce the transaction costs of SEP licensing.

One of the CRA study’s most notable – and doubtless controversial – proposals is the imposition of a ceiling on the aggregate royalty for a given standard.  The authors suggest that a commitment by SEP holders to observe a maximum total royalty burden would go a long way to tackling the problems of patent hold-up and royalty-stacking*.  While the study recognises that there would be a number of difficulties in implementing such an approach, it arguably underestimates the challenges.  The first problem would be determination of the aggregate royalty level.  Assuming that can be overcome, allocation of total royalties between SEP holders would be a formidable challenge, even for a ‘static’ standard.  The landscape here is far from static, however.  Not only do SEPs change hands regularly (as the second report by IPlytics emphasises), but telecoms standards themselves evolve, through the addition of new releases which improve on or supplement existing technologies.  When you throw into the mix the lack of public information about licence fees charged across the industry (something which the authors also have in their sights**), and the multiplicity of methods for comparing the relative values of SEP portfolios, it is difficult to see how such a system would work in practice – except, perhaps, as very general guidance.
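To see why allocation under a royalty ceiling is so difficult, consider the simplest possible mechanism: splitting a capped aggregate royalty between SEP holders pro rata by the number of patents each has declared. The sketch below is purely illustrative (the portfolio sizes, holder names and 5% cap are invented, and do not come from the CRA study):

```python
# Illustrative sketch only (not from the CRA study): a naive pro-rata
# allocation of a capped aggregate royalty between SEP holders, using
# declared patent counts as the allocation key.  All figures are invented.

def allocate_royalty(portfolios: dict[str, int], aggregate_cap: float) -> dict[str, float]:
    """Split an aggregate royalty cap (as % of product price) pro rata
    by each holder's share of declared SEPs."""
    total = sum(portfolios.values())
    return {holder: aggregate_cap * count / total
            for holder, count in portfolios.items()}

# Hypothetical holders with 600, 300 and 100 declared SEPs, under a 5% cap.
shares = allocate_royalty({"HolderA": 600, "HolderB": 300, "HolderC": 100},
                          aggregate_cap=5.0)
# HolderA receives 3.0%, HolderB 1.5%, HolderC 0.5% of the product price.
```

The mechanics are trivial; the problem is the allocation key. Counting declared patents treats every SEP as equally valuable, ignores over-declaration, and breaks down as patents change hands and new standard releases are added – which is precisely why the text above suggests the authors underestimate the implementation challenge.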

The CRA study goes on to emphasise the importance of preserving flexibility on issues such as the appropriate royalty base and the level of the value chain at which SEP licensing should occur.  In the authors’ opinion, economic analysis of these issues suggests there is no appropriate one-size-fits-all solution.  This stands in contrast to the conclusion on royalty-stacking, where greater control is advocated.

The second new report, prepared by the Berlin-based data analytics company IPlytics, uses a dataset of over 200,000 SEPs to paint a more quantitative portrait of the SEP landscape.  It provides detailed empirical evidence on a number of issues, including:

  • Technology trends – The report shows that most declared SEPs relate to communication technologies, followed by audio-visual and computer technologies.  More than 70% of all SEPs are declared as essential to ETSI standards.
  • Regional trends – The proportion of SEPs filed at the Korean and Chinese patent offices has increased in recent years (particularly in the telecommunications sector), reflecting the growing importance of Asian markets in the global economy.
  • SEP transfers – More than 12% of all SEPs have been transferred at least once.  The study reveals that the top sellers of SEPs are Motorola, Nokia, Ericsson, InterDigital and Panasonic.  The most active buyers include Qualcomm, Intel and – perhaps surprisingly, given its recent suit alleging that Nokia evaded FRAND by transferring patents to two PAEs – Apple.  
  • Comparison with non-SEPs – A comparison with a control group of patents which have not been declared as standard essential suggests that SEPs are more frequently transferred, litigated, renewed and cited as prior art than non-SEPs.  This implies that SEPs are generally more valuable than non-SEPs, but the study refrains from considering whether the technology protected by SEPs is intrinsically more valuable than that protected by non-SEPs, or whether the higher value of SEPs is merely a product of their incorporation into a standard.

One striking feature of both studies is their attempt to grapple with the thorny issue of ‘over-declaration’.  The authors of the CRA study point to research showing that, when tested rigorously, only between 10 and 50 per cent of declared SEPs turn out to be actually essential.  Both studies propose some form of independent essentiality testing to address the problem.  The CRA study claims that random testing of a sample of each SEP holder’s portfolio would provide useful information about how royalty payments should be allocated between SEP holders; and that the benefits of such testing would be especially pronounced when combined with the imposition of an appropriate ceiling on the total royalty stack.  According to the IPlytics report, patent offices have the requisite technical competence and industry recognition to perform essentiality testing at a reasonable cost.
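The statistical logic behind random essentiality testing is straightforward: rather than testing every patent in a portfolio, test a random sample and infer the truly-essential fraction. The sketch below is our own illustration (the toy portfolio, its 30% essential rate and the sample size are invented, not drawn from either study):

```python
# Illustrative sketch only (not from either study): estimating the
# truly-essential share of a declared-SEP portfolio by testing a random
# sample, with a rough 95% confidence half-width.
import math
import random

def estimate_essential_share(portfolio, test_fn, sample_size, seed=0):
    """Test a random sample of declared SEPs; return the estimated
    essential fraction and a normal-approximation 95% half-width."""
    rng = random.Random(seed)
    sample = rng.sample(portfolio, sample_size)
    hits = sum(1 for patent in sample if test_fn(patent))
    p = hits / sample_size
    half_width = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, half_width

# Toy portfolio of 5,000 declarations in which ~30% are actually essential,
# broadly consistent with the 10-50% range the CRA study cites.
portfolio = [{"id": i, "essential": i % 10 < 3} for i in range(5000)]
p, hw = estimate_essential_share(portfolio, lambda pat: pat["essential"],
                                 sample_size=200)
```

A sample of 200 gives a confidence interval of only a few percentage points either way – which is why sampling a portfolio, rather than exhaustively testing it, can deliver the information the CRA study wants at a cost patent offices might plausibly bear.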

To conclude, the two studies provide a reminder – if any were needed – that issues relating to the standardisation process and SEP licensing remain high on the Commission’s agenda.  The Commission says it intends to draw fully on the studies’ findings when assessing the interplay between patents and standards in the EU Single Market.  However, whether the Commission will embrace any of the practical solutions proposed by the studies remains to be seen.

* As mentioned in this December 2015 blog post, royalty-stacking refers to the situation where the royalties independently demanded by multiple SEP holders do not account for the presence of other SEPs, potentially resulting in excessively high total royalty burdens for implementers.

** See pages 71 and 85 of the CRA study.