If the focus on fact-finding in Aatrix, Berkheimer, and Exergen earlier this year helped provide additional clarity on the "something more" analysis, the SAP America decision, at least to my mind, failed to clarify, and possibly further muddied, that analysis.

Reaching This Result Could Have Been Easy

First, a representative claim:

  1. A method for calculating, analyzing and displaying investment data comprising the steps of:

    1. selecting a sample space, wherein the sample space includes at least one investment data sample;
    2. generating a distribution function using a re-sampled statistical method and a bias parameter, wherein the bias parameter determines a degree of randomness in a resampling process; and,
    3. generating a plot of the distribution function.

The other independent claims differ a bit from claim 1 but, like claim 1, recite "resampling" of the data set, and claim 11, like claim 1, requires doing so with a "bias parameter." According to the patent, this resampling of investment data permits analysis that doesn't assume a normal distribution of the data.

On a motion for judgment on the pleadings, the district court held all claims ineligible, and the Federal Circuit affirmed.

This result appears to flow cleanly from prior decisions and may well be correct. The claims recite statistical analysis of the data through resampling and (in claim 1) display of that data in a plot. The statistical analysis appears relatively generic: the claims invoke the notion of resampling the data without expressing how to perform that resampling or how the resampling is affected by the fact that this is an "investment data sample." The investment data thus looks more like a field of use for the statistical analysis than like an analysis tailored to investment data. That is, you could replace "investment data sample" with "grass growth data sample" and recite effectively the same claim.
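To make the field-of-use point concrete, here is a minimal sketch of what a generic bootstrap-style resampling routine might look like. It is purely illustrative and not drawn from the patent; in particular, the way the "bias" knob blends random draws with the original samples is my own assumption, since the claim does not say how the bias parameter controls the "degree of randomness." Note that the routine is indifferent to whether the numbers describe investments or grass growth.

```python
import random

def resample_distribution(samples, bias, n_iterations=1000):
    """Bootstrap-style resampling: build a distribution of sample means
    by repeatedly redrawing from the input data. The 'bias' parameter
    (an assumption, not the patent's definition) keeps each position's
    original value with probability 'bias' and otherwise draws randomly
    with replacement, loosely echoing a 'degree of randomness.'"""
    estimates = []
    for _ in range(n_iterations):
        draw = [s if random.random() < bias else random.choice(samples)
                for s in samples]
        estimates.append(sum(draw) / len(draw))
    return estimates

# The routine never asks what the numbers mean:
investment_returns = [0.02, -0.01, 0.05, 0.03, -0.02]
grass_growth_rates = [1.2, 0.8, 1.5, 1.1, 0.9]
dist_a = resample_distribution(investment_returns, bias=0.3)
dist_b = resample_distribution(grass_growth_rates, bias=0.3)
```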

Ineligibility of this claim could follow naturally from Flook (1978), Bilski (2010), or Electric Power Group (Fed. Cir. 2016). Flook's claim recited "updating an alarm limit" according to a formula that combined certain parameters, a current process reading, and the prior alarm limit. But, as the Supreme Court put it:

The patent application does not purport to explain how to select the appropriate margin of safety, the weighting factor, or any of the other variables. Nor does it purport to contain any disclosure relating to the chemical processes at work, the monitoring of process variables, or the means of setting off an alarm or adjusting an alarm system. All that it provides is a formula for computing an updated alarm limit.

Said another way, the Flook claim recited a formula, but nothing about how to tailor the formula to the catalytic conversion process. Bilski's rationale with respect to commodities was similar. The claims recited the "fundamental economic concept" of hedging and then applied it to transactions in a commodities (energy) market. But the commodities market didn't affect the operation of the fundamental concept of hedging. Likewise, Electric Power Group involved claims that (with an impressive word count) performed analysis of electric power data but said little about how to perform that analysis: "The advance they purport to make is a process of gathering and analyzing information of a specified content, then displaying the results, and not any particular assertedly inventive technology for performing those functions" (emphasis added).

The SAP claims could readily have been compared to these. The resampling and bias processes are generic mathematical frameworks for performing analysis, and the claims simply recited those frameworks in the field of investment data. As in Flook, Bilski, and Electric Power Group, nothing appeared different about using general mathematical processes in this particular field.

But the Decision Went Further & Favored Broader Reasoning

Instead, the SAP opinion, authored by Judge Taranto and joined by Judges Lourie and O'Malley, reaches this result with a more expansive explanation.

The opinion opens with some blockbusters: being "groundbreaking, innovative, or even brilliant" is "not enough," because a "claim for a new abstract idea is still an abstract idea." Then the opinion goes even further down this slope: "the claims here are ineligible because their innovation is an innovation in ineligible subject matter." And that's all before the court begins its review of the claims and its more detailed analysis.

In that analysis, the court's approach to "mathematical formulas" appears to suggest that application of math to new areas cannot render claims eligible:

The focus of the claims, as is plain from their terms, quoted above, is on selecting certain information, analyzing it using mathematical techniques, and reporting or displaying the results of the analysis. That is all abstract.

The court then distinguishes the "rules" of McRO's lip-syncing invention as dissimilar to such math because McRO was "directed to the creation of something physical" and "the improvement was to how the physical display operated." In contrast, the claim in this case is described as "a claimed improvement to a mathematical technique with no improved display mechanism."

In the "something more" analysis, the court doesn't seem to squarely address the claims on an as-a-whole basis for resampling when applied to finance. Nor does the decision appear to meaningfully explain whether or how application of statistical methods to finance could be eligible except to cite Electric Power Group for the proposition that data type can't render claims eligible. Then, the court continues on to say that particular techniques for resampling (e.g., bootstrap, jackknife, cross-validation) "simply provide further narrowing of what are still mathematical operations" and "add nothing outside the abstract realm."

One Step Back

This approach worries me for several reasons. First, I am concerned that the court's analysis provides fodder for readers who will generalize its broad statements without considering the claims actually at issue and without placing the decision in the context of the other decisions noted above. The repeated refrain that "narrower abstract ideas are still abstract ideas" risks swallowing meaningful technological advances absent some limiting principle for distinguishing "narrower" ineligible advances from eligible applications.

That the court distinguishes McRO as a "physical" invention compounds this challenge. McRO related to virtual characters and to automatically lip-syncing mouth movements to audio when generating an electronic video file. It's unclear what the court intended as the "physical" aspect of the invention when the result is automated lip-syncing for a virtual character. Nor does "better lip-syncing" improve computer functions like memory storage or processor speed, or have apparent benefits beyond the improved aesthetic of more faithfully representing real-life lip-syncing in a virtual creation. McRO itself also didn't describe the invention as directed to the creation of something physical, or to "how the physical display operated," as SAP America articulates it. Instead, the McRO decision emphasized how the claimed invention provided a different way of lip-syncing than the ways human experts previously performed the analysis. And prior systems did generate videos with lip-syncing as specified by human experts. So the SAP court must mean something different: that the new approach affected the content of the created video. But without articulating what "physical" means in drawing this distinction, it's hard to know what would constitute a qualifying "physical" aspect in other cases.

The court also distinguishes Thales as a "physical-realm improvement." This explanation feels incomplete, particularly given the Thales court's decision to treat the method claim the same as the system claim. That method claim required a single "calculating" step that received data from inertial sensors as inputs; the sensors themselves were explicitly claimed in the system claim but not in the method claim, which merely received their data. While the sensors are physical, the method claim directly recites only the calculation of a value (e.g., the orientation of an object relative to a moving reference frame). What characteristics of that invention make it a "physical realm" improvement when the result is effectively a number whose calculation could have been performed in the mind or with pen and paper given the sensor inputs? What distinguishes a "physical realm improvement"? And how does that practically differ from the ("just a clue") machine-or-transformation test?

I prosecute many applications in the computer modeling, artificial intelligence, content selection, and finance-related spaces. From that perspective, the decision is frustrating for discussing ineligibility in such broad terms without clearly delineating limiting principles. Many prior decisions explained how things might have come out differently and why the claims at issue failed to articulate some advance tied to the field of use; they treated a claimed narrower "way" as ineligible when it merely computerized the same way humans performed a task (e.g., FairWarning) and eligible when it did not (e.g., McRO). The broad descriptions here, by contrast, risk unintentionally excising broad types of invention.

For example, machine learning is typically improved by some new process or approach that helps a model better predict an output from new input data, given some previously known or discerned relationship between inputs and outputs. These improvements represent "smarter" or "better" automated decisions, even though the raw inputs for the decision process may be the same. And once the decision is made, implementing it is straightforward. An automated car has only so many inputs, and executing a lane change, turning, accelerating, and braking are all conventional control operations. Are better automated "decisions" about when to perform these tasks ineligible as just "narrower" abstract ideas because they are improvements in "abstract" decision-making?

Consider an improved approach for distinguishing a practical joke from a medical emergency, a determination that affects when an automated system decides to call for help. Depending on how the invention is cast, it is either just a new mathematical equation that outputs different values, with the emergency call a conventional "apply it" step, or it is a different way of initiating tangible, real emergency calls that are more likely to represent real emergencies, now made on an unconventional basis, with the added effects of reducing wasted emergency-operator time and making it more likely that an injured person gets help. Because of the invention, different inputs (or perhaps the same inputs) now generate different outputs and commensurate actions by the system.
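A minimal sketch of this hypothetical may make the framing question concrete. Everything here, including the feature names, weights, and threshold, is invented for illustration; the point is only that the "new math" lives in the scoring function, while the action it triggers (calling for help) is the same conventional step as before.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    # Hypothetical observations an automated monitoring system might make.
    fall_detected: bool
    laughter_score: float      # 0.0 (none) to 1.0 (sustained laughter)
    vital_sign_anomaly: float  # 0.0 (normal) to 1.0 (severe anomaly)

def emergency_likelihood(reading: SensorReading) -> float:
    """The 'improved approach': a score distinguishing a real emergency
    from a practical joke. Weights are placeholders, not a real model."""
    score = 0.0
    if reading.fall_detected:
        score += 0.4
    score += 0.5 * reading.vital_sign_anomaly
    score -= 0.3 * reading.laughter_score
    return max(0.0, min(1.0, score))

def respond(reading: SensorReading) -> str:
    """The 'apply it' step: a conventional action, now triggered on a
    different (arguably unconventional) basis because of the new score."""
    if emergency_likelihood(reading) > 0.6:
        return "dial emergency services"
    return "log event; do not call"

# Same kind of inputs, different outputs and commensurate actions:
print(respond(SensorReading(True, 0.9, 0.1)))   # likely a prank
print(respond(SensorReading(True, 0.0, 0.9)))   # likely an emergency
```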

The broad strokes of this decision make it difficult to know how to analyze this hypothetical. Is it eligible because, like Thales, it better reflects real-world information and therefore improves the system's sensory perception? Or was Thales limited to the particular physical sensor arrangement it required, so that this hypothetical invention, which adds no new sensor, may be ineligible? Yet in SAP, "investment data" was not enough to be "physical," even though investment data represents real-world interactions and describes complex behavioral systems: at what prices do parties decide to trade investments, under what conditions does that change, and how do you improve the way a system measures and analyzes data from an "economic machine" while avoiding poor data sampling? When is that analysis simply a "narrower" abstract idea? Does our claim need to recite an action that is now performed differently on the basis of the practical-joke detector, or is an improved predicted likelihood sometimes enough?

Or, is it impossible to consider these questions without a specific, claimed invention before us?

Finally, while this year's earlier decisions emphasized a factual inquiry into whether an application of an abstract idea is conventional, this decision affirms a judgment on the pleadings even though the patent itself described prior problems and the advantages of using this resampling approach for investment data. That affirmance makes it harder to tell when facts might matter for demonstrating eligibility through unconventionality. Previously, a description of the unconventional advance in the application (or at least no admission of what was conventional) might avoid dismissal before discovery. Now the question becomes mushier: when are limitations just narrower abstract ideas for which dismissal may be proper, even when the patent itself lauds those limitations as unconventional advances, and when are limitations outside the abstract idea for which we may need trial testimony to properly evaluate conventionality?

The rationale also makes differentiating eligible decisions murkier. How do we distinguish McRO's eligible rules from a "narrower" ineligible concept of using math expressed as rules? Finjan's eligible invention related to analyzing an application for viruses, but why wasn't the claimed "behavior-based" analysis just a "narrower" version of profiling an application for virus risk?

Ultimately, while the cases earlier this year helped clarify the "something more" analysis, this case highlights the challenge of correctly scoping the breadth of the abstract idea with respect to a claim. Exceptions for "natural laws" or naturally occurring substances can be evaluated by factually determining whether the concept is actually found in or dictated by nature, but the more amorphous categories of "abstract ideas" and "mathematical formulas" call for more deterministic ways of deciding how to scope the exception within the claim.

If there is some clarification to be found in this decision, it may come from reading it alongside Flook, Bilski, and Electric Power Group (all discussed above), together with the recent life sciences cases Vanda and Exergen (non-precedential). In both Vanda and Exergen, a natural relationship was discovered and reflected in the claim, yet the claims were eligible because they recited an actual use of the natural law. Vanda was eligible because it did more than recite a natural law that "indicates a need" for treatment (see Mayo); it claimed actually applying the natural law to modify treatment. Exergen was eligible because its natural law for temperature measurement was embodied in a device that measured temperature differently by using that law. Applying these concepts to "abstract ideas," perhaps all that is required is for a "narrow" abstract idea to be claimed together with something that has a tangible effect on the world.

However, even this approach has problems. I worry that it would glorify arbitrary "apply it" steps while preventing effective protection for inventions with a broad variety of possible applications. And how do we tell the difference between such eligible "tangible effects" and steps that recite the abstract idea and then ineligibly "apply it," or that add only a token, "conventional" recitation of a computer for carrying out the idea or for merely "displaying data"? While one application of the emergency/practical-joke detection may be better emergency calls, another might be deciding whether the system automatically generates a joke of its own or performs some other action. If the detection approach itself is considered a "narrower" abstract idea, then we may be asking applicants to draft lengthy examples of all such conceivable applications.

Stepping back, it has now been exactly four years to the day since the Alice decision (June 19, 2014), and nearly four more since Bilski (June 28, 2010). While it has become almost stereotypical to end discussions of this subject with a call for a legislative solution, this case and others highlight the continuing difficulty in reliably applying the "directed to" and "something more" tests. Even if these tests could be applied reliably by experts who have read a great many Section 101 cases and could agree on how particular claims should come out, the everyday patent examiner, judge, and inventor does not have the time to develop the judgment and expertise needed to reliably apply such a nuanced test. As reflected in the recent denials of en banc review in Aatrix and Berkheimer, there continues to be a need for clearer, easier-to-apply boundaries, which may have to come from legislation.
