Of Marks and Markets: An Empirical Study of Trademark Litigation
By
Jessica M. Kiser, Sean P. Wright, and Benjamin P. Edwards[1]*
Trademarks are increasingly valuable assets, and some companies aggressively enforce and protect these assets. Such aggressive tactics can harm small businesses and chill creativity and speech, but trademark owners are routinely told that the law requires them to stop all similar third-party trademark usage or risk abandonment of their rights. While prior scholarship has discussed how the risk of trademark abandonment is quite low, incentives built into trademark law still push companies to court. This Article presents the results of an event study utilizing an established database of trademark infringement cases to provide insight to decisionmakers on whether the stock market supports such enforcement actions when taken by publicly traded companies. Unlike in prior litigation event studies, this study finds that the market responds negatively to the plaintiff’s filing of the trademark suit but does not respond negatively toward the defendant. This result suggests that, unlike in patent litigation, the market may prefer that trademarks be protected outside of court. Given that corporate law protects officers and directors from liability in most circumstances, these decisionmakers have the freedom to choose more creative, strategic enforcement measures. Public companies would be wise to consider this additional source of data when balancing trademark law’s push toward aggressive enforcement.
Trademark owners receive constant mixed messages. Brands, and therefore the trademarks symbolizing brands, are incredibly valuable assets that must be protected at all costs.[2] However, customers should be encouraged to engage with those brands—to develop brand affinities and, ultimately, loyalties.[3] Yet, too much engagement invites the risk of unlicensed, unmonitored trademark use by those same customers (and competitors).[4] That must be stopped for fear of losing one’s trademarks based on trademark law’s abandonment rules.[5] Of course, if the trademark owner enforces their trademark rights aggressively and stops third-party uses, then they may be labeled a “trademark bully” and demonized online and in the press.[6] It can feel like a double-edged sword. This Article explores another voice trademark owners should consider—the stock market—when making decisions about trademark enforcement and infringement litigation. Companies’ compliance with trademark law and their public image are not all they have to worry about. For public companies with valuable brands to protect, trademark law and corporate law become intertwined. Companies must be aware of the relationship between these two areas of law to select more strategic trademark enforcement options.
This Article proceeds in four parts. Part I discusses trademark and corporate law and how legal and psychological factors may impact a trademark owner’s decision to litigate. Part II presents the event study technique, discusses prior financial and legal scholarship utilizing the technique, and explains the design of the study conducted herein. Part III presents the results of our event study of trademark litigation. Finally, Part IV provides an interpretation of those results in light of the trademark enforcement question and the problems associated with brand-related corporate decision-making.
In simple terms, the phrase “trademark bully” is often used to refer to a trademark owner that is perceived to be overly aggressive in enforcement of its trademark rights.[7] For example, Apple Inc. has been accused of being a trademark bully. Apple has submitted more than 200 oppositions to trademark applications filed by other companies, and in some cases, it opposed applications that were only tangentially related to Apple’s APPLE mark or logo.[8] Apple, for instance, threatened and then agreed to a settlement with Prepear, a company that provides meal planning services, over Prepear’s use of a pear fruit design as its logo.[9] In such an instance, it is highly unlikely that consumers would be confused about the fact that these are two unrelated companies, so Apple’s actions fall on the more extreme or aggressive side of a spectrum of trademark enforcement, earning it the “trademark bully” label. Professor Kenneth Port once referred to such aggressive tactics as trademark extortion: “the use of a non-famous trademark to enjoin (or seek or threaten to enjoin) a non-competing use by a third party.”[10] Based on empirical research using the database discussed below, Port argued:
Trademark bullying is an actual, measurable harm and it continues to grow. In fact, as the quality of trademark claims continues to decline, spurious claims increase. Trademark bullying itself happens as often as a trademark holder is awarded money damages; it is present in at least 5.5% of all reported trademark cases.[11]
Scholars have explored the consequences of aggressive trademark enforcement.[12] For example, Port’s work illustrated that trademark bullying results in an increased number of spurious claims and thus an increased burden on our judicial system, a general societal harm.[13] There is also the obvious harm to the defendants in such cases when they are forced to pay legal fees, court costs, and perhaps reputational costs to defend themselves against these allegedly spurious claims.[14] These sorts of transactional costs are likely much higher than reported, as most instances of trademark bullying likely do not make it to court at all.[15] After receiving a cease-and-desist letter from a trademark owner (with the financial means to pay an attorney to draft such a letter), many business owners are likely to absorb the costs of changing their trademark rather than fight the allegations in court.[16] As a matter of business strategy, it may be a better use of resources to simply change trademarks instead of fighting.[17] Given this evidence that trademark bullying exists and may have significant consequences, understanding the reasons for such trademark enforcement generally, and for aggressive tactics in particular, becomes more important.
Trademarks are valuable assets. In 2020, Apple’s brand was valued at $241.2 billion.[18] At the same time, Google’s brand was valued at $207.5 billion.[19] Those huge numbers reflect the value of the brand itself, and its related goodwill, and not the physical assets owned by those companies.[20] It makes sense that trademarks have been called “vital organs of the free-market system . . . [that function as a] business information capsule, capable of housing business image and consumer impression.”[21] In the age of tech companies and internationally-recognized computer applications, a company’s trademarks may be that company’s primary assets and may then be the collateral used for financing and other development efforts.[22] While trademarks are intangible assets, their value to companies is hard to overstate. Thus, aggressive trademark enforcement may be understandable on a psychological level.
In prior scholarship, one of the authors of this Article argued that trademark bullying may be the result of unconscious psychological biases interacting with uncertain trademark law requirements.[23] Trademark owners are routinely advised that they have a duty to police their marks.[24] That is, they must seek out and stop third-party uses of identical or confusingly similar marks. Failure to do so could result in a judicial determination that the trademark owner has abandoned their trademark or that they have lost the right to pursue claims against specific defendants due to the equitable defenses of laches or acquiescence.[25] In rare cases, widespread use of a mark by consumers in a non-trademark sense can lead courts to conclude that the mark is now a generic term and no longer protectable.[26] William Gallagher’s extensive interviews with trademark attorneys (that occurred during the period of time being analyzed by this Article’s event study) provided evidence that attorneys commonly stress these risks to clients and routinely use cease-and-desist letters to support these efforts.[27] However, such risks are incredibly small in comparison to the large amount of resources some companies devote to trademark overenforcement or bullying.[28] Courts routinely uphold a trademark owner’s continuing rights even when faced with egregious examples of a failure to police third-party usage.[29] This is due, in part, to the fact that a party alleging that a trademark should be deemed abandoned for failure to police third-party usage must meet a strict burden of proof and must essentially show that the mark fails to indicate the source of the product at all.[30]
However, there may be a very rational basis for trademark owners to engage in bullying: it often works to allow the trademark owner to increase market share or to create a larger umbrella of perceived protection around their mark into which the company may grow in the future.[31] Where a third party is using a similar mark that is unlikely to cause consumer confusion, a trademark owner may still decide to take steps to stop that similar use.[32] Stacey Dogan argues:
The encroachment may not confuse consumers, and trademark holders’ failure to object to it may not result in loss of rights in their mark; but it can affect the value of their trademarks and the scope of their legal rights going forward. For this reason, Professor McCarthy warns that “[t]he only way a trademark owner can prevent the market from being crowded with similar marks is to undertake an assertive program of policing adjacent ‘territory’ and suing those who edge too close.” And given the inherent fuzziness of trademark’s boundaries, judging what comes “too close” can, itself, be daunting.[33]
Leah Chan Grinvald has argued that the Lanham Act, and its amendments over the past seventy-five years, directly incentivizes overly broad trademark enforcement due to:
(1) the expansion in what is considered actionable confusion through the adoption of the Lanham Act [which removed the point-of-sale distinction previously used by judges in analyzing purchaser confusion]; (2) the expansion in the scope of protectable trademarks through judicial interpretation; and (3) the heightened importance of having a “famous” mark through the adoption of federal trademark dilution law.[34]
The fact that the test for trademark infringement is based on a “likelihood” of confusion, rather than actual confusion, could also contribute to trademark bullying. William P. Kratzke argues, “Judicial willingness to protect against a possibility of injury rather than actual injury increases the likelihood that a trademark user will employ a lawsuit as a weapon of propertization, irrespective of the chances of actual victory.”[35] Additionally, evidence of a company’s past efforts to police third-party uses can be used by the trademark owner to prove the strength of their mark in future court proceedings and as evidence to counter a defendant’s claim that the mark is now generic.[36]
This Article seeks to add additional insight into this question of what inspires companies to pursue more aggressive trademark enforcement actions by focusing on the market effects of trademark litigation involving public companies. Ultimately, decisions about how to protect corporate assets are made by a corporation’s human agents. With a publicly traded company, that decision is likely made by corporate board members and officers who are bound by fiduciary duties to act in the best interests of the corporation.[37] Specifically, corporate law provides that:
Directors have the power to institute, prosecute, compromise or appeal suits at law and in equity that the corporation brings or that are brought against the corporation. Incident to the control over the conduct of litigation is the authority to settle a suit or to make compromises. This authority may be vested in the directors or the president or chief executive officer within the limits of his or her delegated authority.[38]
Those same directors and officers would also have the authority to decline to bring litigation when they believe that such an action is unnecessary or generally unwise.[39] Although some states allow corporate boards to consider outside stakeholder interests, Delaware law embodies the dominant norm that directors and officers should work “to promote the value of the corporation for the benefit of its stockholders.”[40] Absent conflicts of interest, the decisions of directors to sue, or not to sue, are typically insulated from liability by the business judgment rule.[41] Because of the deference to corporate decision-making provided by the business judgment rule, courts generally review corporate decisions by exploring the process followed by the board in making that decision (including whether the decision was made in good faith and by disinterested parties) rather than by evaluating the consequences, good or bad, of the ultimate decision made.[42] According to business scholar Andrea M. Matwyshyn:
Provided the decision can be attributed to any rational business purpose and the process appears disinterested and independent, directors are almost never found to have violated their fiduciary duties. Successfully asserting a failure to decide is even more difficult than asserting an improperly made decision. Omissions or failures to proactively manage or discuss a corporate issue rarely provide an adequate basis for asserting a breach of fiduciary duties.[43]
The protections afforded by the business judgment rule suggest that it would be harder to find a director in breach of fiduciary duties for opting against bringing a suit than for bringing one in error. If directors and officers were fully knowledgeable about these potentially competing incentives, this insight could balance trademark law’s push toward litigation.
To explore how the stock market responds to trademark litigation (and relatedly, to provide data to those corporate decisionmakers faced with trademark enforcement issues), this Article has applied the event study technique to an established database of trademark cases. For clarification: “An event study is a statistical method for determining whether some event—such as the announcement of earnings or the announcement of a proposed merger—is associated with a statistically significant change in the price of a company’s stock.”[44] In 1997, business and finance professors Sanjai Bhagat and U.N. Umesh published an article utilizing event study methodology to explore whether trademark infringement lawsuits impact the value of publicly traded companies.[45] Their data were drawn from announcements of trademark infringement cases that occurred between 1975 and 1990 in the Wall Street Journal Index and from 1985 to 1990 in Westlaw’s West’s General Digest.[46] This resulted in sixty cases where the plaintiff was publicly traded, fifty-three cases where the defendant was publicly traded, and twenty-eight cases where both parties were publicly traded.[47] Bhagat and Umesh concluded that “trademark infringement lawsuits produce a net negative effect on the defendant both upon filing by plaintiffs and the passing of the verdict” (regardless of whether the verdict is in the defendant’s favor).[48] However, the direct market effect on plaintiffs who filed trademark infringement suits was less clear: the authors found a statistically significant positive effect on stock returns in such cases only when the pool of plaintiffs was limited to “large firms” (defined as those larger than the median market capitalization value of the firms in their sample).[49]
The authors of this Article seek to build upon and extend the work of Bhagat and Umesh in a number of significant ways. First, this project focuses on trademark lawsuits that occurred entirely between January 1, 2000 and December 31, 2011, in recognition of the impact that the Internet has had on the speed with which information about litigation can reach investors. For this reason, the authors did not rely on notices about litigation in the Wall Street Journal Index. Instead, the authors assume that, by the beginning of the twenty-first century, information about litigation was widely and quickly available to any investor savvy enough to act on it.[50] Additionally, the authors seek to connect this econometrics approach to legal scholarship by exploring how stock return data may provide additional incentives for trademark overenforcement, or trademark bullying, an issue that was not at the forefront of discussions at the time of the prior study.
Event studies have been utilized in economics and financial market scholarship for decades.[51] While scholars may argue about the limitations of the method and whether it can be used to make normative claims about future market responses, the underlying event study methodology is widely accepted as an approach to studying the movement of stock prices in response to specific events.[52] The method was originally developed to study the efficiency of the stock market, starting with the premise that an efficient market is one that incorporates all publicly available information immediately into stock prices.[53] Numerous studies provided evidence that this foundational principle—market efficiency—was sound.[54] As such, researchers then shifted their focus toward using the technique to study the impact of specific events on stock returns (which arguably reflect corporate value and investor welfare). Scholarship has explored how the market responds to corporate events such as stock repurchases, merger announcements, and patent infringement suits.[55]
Event studies are now frequently utilized by social scientists and policymakers. It has been argued: “Event studies are among the most successful uses of econometrics in policy analysis.”[56] Event studies are also routinely used to provide economic evidence of loss causation and materiality in securities litigation.[57] Their use is now so ubiquitous that numerous practical resources have been published to assist litigators (who may lack an econometrics background) in understanding the technique, how to present it to the court, and how to criticize event studies offered by opposing counsel.[58]
The event study technique rests on the principle that the “price of a stock reflects the time- and risk-discounted present value of all future cash flows that are expected to accrue to the holder of that stock.”[59] This is why the early research into the efficiency of the market was so crucial to continued use of the event study methodology. Having established that the efficient market hypothesis is sound, one can presume that all public information about a company is reflected in the price of that company’s stock at any moment in time. Therefore, it follows that only an unanticipated event can change that company’s stock price, and such change should reflect the market’s view of how the event will impact the company’s value and future cash flows. In event studies, an event is held to have had an impact on a company’s financial performance if it produces a statistically abnormal movement in the price of that company’s stock.[60] Therefore, for the purposes of this Article, we are exploring whether the market responds positively, negatively, or not at all to the filing or conclusion of a lawsuit pertaining to that company’s trademark.
Past event studies have been used to evaluate the costs of lawsuits in terms of shareholder wealth on publicly traded companies.[61] The results of these studies generally suggest that the filing of a suit often has little to no effect on the stock return for plaintiffs but a statistically significant negative effect on the defendant. For example, in 1994, Bhagat, Brickley, and Coles studied the stock return effect of corporate litigation (of various types) on a large sample of cases.[62] Their study included 550 cases reported in the Wall Street Journal (WSJ) from 1981 to 1983.[63] They reported that defendants experienced a statistically significant, negative change in stock price upon the filing of a lawsuit against them.[64] However, plaintiffs did not see a statistically significant gain or loss.[65] In 1995, Bizjak and Coles conducted a similar event study looking specifically at antitrust litigation.[66] Again, they found significant losses to the defendants upon the filing of an antitrust suit.[67] They concluded that defendants face losses that average $10 million larger than the wealth gains by plaintiffs in the studied disputes.[68] Additional research by Bhagat, Bizjak, and Coles in 1998 discovered that litigation involving environmental laws, patent infringement, and product liability has more severe negative effects for defendants when compared to antitrust or breach of contract cases.[69]
As with any event study, we are concerned with four aspects of analysis: the identification and timing of the event, the stock return during the period of the event, the estimated stock return in the absence of the event, and the determination of the event’s statistical significance. For the sake of clarity, we will discuss each aspect relevant to this trademark litigation study in more detail below.
An event study can be used to investigate a single event impacting a single public company. For example, Professors Jonathan Klick and Robert H. Sitkoff used the event study technique to analyze the market impact of the announcement of a proposed sale of a single entity’s controlling interest in Hershey Company stock.[70] The announcement was “associated with a positive abnormal return of over 25%.”[71] When the sale was abandoned after pressure from the Attorney General and the general community in Hershey, Pennsylvania, the event study found a “negative abnormal return of 12%.”[72] Such studies can be valuable tools to examine the interplay between corporate decision-making, shareholder impact, and external influences. However, event studies with such a singular focus can be challenging to interpret because there is no agreed-upon method for evaluating the statistical significance of changes to returns in a single stock. When similar events happen multiple times (across the same company or across companies), these events can be pooled to calculate statistical significance. Interpretations based on these combined events are at less risk of reaching erroneous conclusions.[73] The benefit of aggregating events grows with sample size, which increases the power of statistical tests.[74] For this reason, the authors of this study sought out a much larger pool of initial events to study the overall effect of litigation decisions across multiple companies.
The initial set of trademark cases used in this study was created by Professor Kenneth Port. To create this database, Port collected cases by searching for the word “trademark” on Westlaw’s database of federal cases.[75] He limited these cases to those decided after the Lanham Act went into effect on July 5, 1947, through December 31, 2005.[76] That search initially returned 7,414 reported cases, and Port reviewed each case and briefed and coded every case that “rendered a dispositive opinion terminating a case arising under the Lanham Act.”[77] After deleting duplicate opinions, this reduced the initial number of cases to 2,659.[78] Port used this database of cases to look for trends in trademark litigation and published his results in “Trademark Extortion: The End of Trademark Law” in the Washington and Lee Law Review in 2008.[79] In the article, Port argues that trademark extortion is the best possible explanation for a trend that he observed in this data: the number of trademark suits filed appears to be increasing while the number of such suits ultimately resulting in damages or injunctions in favor of the plaintiff appears to be decreasing.[80] A few years later, Port updated his database to include additional trademark cases filed through December 31, 2011.[81] It is this larger database of 2,973 cases that served as the initial focus of our research.
We recognize that this initial database comes with inherent limitations. Port himself noted that “there are an indeterminate number of unreported trademark cases that arise under the Lanham Act . . . because they are unreported, it is impossible to know how many of these cases exist.”[82] Additionally, many of the cases included in the database are appellate court decisions where the district court opinions were not reported. In such instances, Port relied on the appellate court’s representations about the prior proceedings, which may have been more or less detailed.[83] For the purposes of this study, we decided to remove appellate court decisions from the dataset entirely in recognition of the challenge of pinpointing an event announcement window for appellate court proceedings. The decision to file an appeal is likely to be anticipated by the market, which violates the assumption, on which event studies rely, that the information was not known prior to a specific announcement window. In a few instances, we were able to find the original district court opinions for the removed appellate cases, which evidently were reported in the time since Port last updated his database. We have coded those cases according to Port’s original methodology and include them in our analysis. Additionally, Port discovered that many jury verdicts are not reported; therefore, Port’s database may be somewhat skewed toward the results of bench trials.[84]
While this project started with Port’s database of nearly 3,000 cases, our results include a considerably smaller number. First, for the reasons discussed below related to our “event announcement” determinations, we limited our cases to those filed after January 1, 2000.[85] From that smaller set of cases, we then searched publicly available records to include only cases where one or more of the litigants was publicly traded on the New York Stock Exchange (NYSE) or the National Association of Securities Dealers Automated Quotations (NASDAQ) at the time of the litigation. This undertaking was more burdensome than one might suspect because many public companies possess multiple layers of subsidiaries through which trademark litigation may be undertaken.[86] In those instances where we could determine that a trademark litigant was a subsidiary of a public company, we noted this fact and then included the parent company’s stock symbol in our database for coding purposes, treating the subsidiary as a public company. We excluded companies traded only on foreign markets from this initial study but expect that such companies may be the focus of a follow-up project. When this curation was finished, our resulting database included eighty-five cases where the plaintiff was publicly traded, seventy-four cases where the defendant was publicly traded, and six cases where both parties were public.
Each of these cases includes two events being investigated in this study: the filing date and the holding date of each suit. The next data-collection decision concerns the timing of the event under investigation. While it may seem easy to determine the date on which a lawsuit is filed, such information may not be immediately known to the investing public.[87] As such, early scholars in the field often tied the date of the event to its first public announcement in a well-known trade publication like the Wall Street Journal.[88] The selection of this event date is especially significant given that the efficient market hypothesis assumes that the market reacts quickly to the first public announcement. If a researcher selects a date too far after the event has become publicly known, then the market effect may no longer be evident.
The importance of the timing of when such information is public leads to a frequent criticism of the event study method: results may be skewed by information leakage.[89] Information leakage refers to the fact that the public may learn about an event, or perhaps simply about its possibility or probability, before the event has actually occurred. Information could be “leaked” in a variety of legal ways.[90] For example, the announcement of a merger is likely to follow several weeks of negotiations—that may themselves have been covered by the financial press. A stock repurchase or stock split may be discussed by financial analysts as a possible strategy given a company’s public financials long before the company’s management makes that decision (perhaps in response to goading from those same analysts). Very few corporate events are truly a surprise to investors. Even something as financially impactful as approval or denial of a company’s new pharmaceutical by the Food and Drug Administration (FDA) would be anticipated to some degree by the research results obtained on the pathway to seeking that approval. One of the few instances of a truly surprising event could be the sudden death of a prominent founder; however, in the example of Apple founder and CEO Steve Jobs, investors were given numerous updates about his prior health woes before his ultimate death.[91]
To respond to the problem of information leakage, some studies have extended the study’s “event date” to include a few days before the date in question.[92] Some studies have included an event window that is much longer—including periods of months or years.[93] However, extending this date range has statistical implications for the results of the study. Bhagat and Romano explain: “Technically, as we increase the length of the announcement period, the noise-to-signal ratio increases and it becomes increasingly difficult to measure the impact of [the event] on share price with precision.”[94] Additionally, studies have shown that the statistical power (and thus significance) of event studies improves as the number of days in an announcement window decreases.[95] Thus, shorter announcement windows are always preferred. When using daily stock return data, an announcement window of a single day is statistically preferable, though this comes with the tradeoff of potentially detecting a smaller market effect if information leakage occurred in the day or two prior to the official announcement or event.[96]
Event studies that specify the event window a priori have the advantage of being simpler to calculate. Additionally, pre-specification of one’s analytic strategy has methodological strength because it reduces researcher “degrees of freedom.”[97] In scientific publishing, pre-registration of analytic protocols has been an increasing trend to address concerns that researchers may be cherry-picking analyses that lead to preferred results.[98] However, studies that make a priori decisions about how data will be collected and analyzed are subject to a serious limitation: if the assumptions that informed the a priori specification are violated, the validity of the results can be questioned. In the case of event studies, for example, the “true” event window might be different from the short pre- and post-day window around the publication of a news story that is conventionally used. Moreover, although early proof of concept studies established the conditions under which event studies could handle violations of the analytic assumptions,[99] few applied event studies characterize how robust their findings are if different analytic choices had been made (such as adding or removing one day from their specified event window). Just as the call for replication studies in psychology has recently trended toward action, empirical legal research using event studies has also started to be replicated, sometimes finding an entirely different result.[100]
Our study is, in a sense, a replication of Bhagat and Umesh’s event study analysis of trademark infringement lawsuits, which was published in 1997. However, given some important differences between our dataset and theirs, we did not believe we could make a definitive a priori choice of the event window. Three differences are of note. First, while Bhagat and Umesh restricted their analysis to cases where there was a news story either in the Wall Street Journal or Westlaw’s West’s General Digest, we analyzed any case from Port’s database that involved a publicly traded company and whose dates of filing and holding were known. Thus, we abandoned the earlier convention of identifying events based on publication in a trade publication. Our sample—although it is still subject to the limitations that Port acknowledged in how his database was constructed—is therefore more diverse than the previous study and likely more representative of trademark enforcement actions nationally.
Second, since we have chosen the reported filing date and holding date as our definition of the “announcement” (also called event “day zero” in event study terminology), we cannot assume that the same issues of information leakage that apply when the announcement is defined by publication of a news story will be equivalent under our definition of an announcement.[101] If the dynamics of information leakage are different, then using the conventional event window parameters could lead us to make erroneous or meaningless conclusions from our results. For example, if it takes longer for information about a lawsuit’s conclusion to be incorporated into the price of a stock when there is not a news story to inform investors, then the proper event window may start and/or end later than has been traditionally measured.
Third, Bhagat and Umesh’s study included trademark infringement lawsuits from 1975 to 1990. Our dataset includes lawsuits from 2000 to 2011. Although the difference in dates has no impact on how event study effects are calculated, we do not assume that the way information gets to the market investor was necessarily equivalent in 1975 and 2011. One notable difference is decreased reliance on specific publications for information about companies, which were largely supplanted by the variety of formal and informal information sources that the Internet made available in the twenty-first century.[102] Arguably, the rise of the Internet has competing impacts on information leakage and dissemination.[103] Leakage might be less in cases where information about a lawsuit is available almost instantaneously due to public electronic records. Leakage might be greater in cases where investors have more opportunities to encounter information that would reduce the surprise of a lawsuit’s filing or conclusion. Dissemination of information affecting a stock’s price might be assumed to happen just as quickly (or more quickly) on the Internet as it would through a news story; however, it is not clear if decentralized Internet communication facilitates the same signal-to-noise intensity as a news story, and it may be the case that the timescale for something to become a signal to investors is different on the Internet.[104]
Given all the reasons above, we chose to conduct an exploratory analysis of our dataset rather than to commit to an a priori specification of our event window.[105] The advantage of this approach is that we can assess the robustness of different analytic choices while still allowing us to compare our results to those of previous studies directly. As will be discussed below in the calculation of stock return effects, there is a fortuitous mathematical symmetry in event studies that allows us to get the answer we want two different ways, and one of these ways does not require us to commit to a specific event window prior to the calculation. Thus, the event study methodology is well-suited to our exploratory approach. To mitigate concerns that we are cherry-picking our results, we will explicitly describe how we made our analytic choices and discuss the limitations of our conclusions by describing the robustness of our results.
The next step in creating the dataset used for this project required the collection of stock return data. We initially hoped to use open-source stock return data. However, when we analyzed the accuracy of freely available data by comparing it to data utilized in prior event studies, we observed a high rate of missing or conflicting values. We therefore chose to utilize data provided by Wharton Research Data Services (WRDS) through a purchased license granted to the University of Nevada, Las Vegas.[106] WRDS is the leading commercial platform for conducting research on financial markets, and it utilizes the University of Chicago’s Center for Research in Security Prices (CRSP) data.[107]
For each publicly traded company that was identified in our dataset, we collected daily stock data, which included stock prices, volume, stock returns, and the daily S&P 500 index as a measure of the overall market.[108] Individual stock returns have been adjusted for splits and dividends.[109]
The central question at issue in an event study is: what is the impact of the event on the stock, specifically the stock’s return? The stock return reflects the change in the stock’s price over some period of time.[110] For example, a stock that has a price of $100 at the beginning of a trading day and moves to $104 by the end of the trading day has a daily (arithmetic) return (assuming there were no stock splits or dividends to adjust for) of ($104-$100)/$100 = 0.04, which is equivalent to a 4% increase over the day. Stock prices, and therefore stock returns, fluctuate over time due both to general market movements (e.g., the stock market crash of 1987) and to firm-specific movements (e.g., a profitable company’s stock price trending upward over a longer period of time).[111] To calculate the impact of a specific event on a company’s stock return, one must model the “expected” return, which refers to the stock return that would have been seen in the absence of the event.[112] The expected return is then compared to the actual return to see if there is a difference. This difference is called the “abnormal return.”[113] Naturally, there is no way to know definitively what return to expect in the absence of the event. However, there are several well-accepted mathematical models for estimating this expected return using stock return data from a long period of time (called the “estimation window”) that does not overlap with the event window.[114] These models are based on the assumption that the expected stock return would follow the same estimated trends in market fluctuation absent the event. Early tests of these assumptions indicated that they generally perform robustly under many conditions.[115]
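To make the arithmetic concrete, the short sketch below (in Python) computes daily returns from a series of closing prices; the prices are invented for illustration and simply reproduce the $100-to-$104 example above.

```python
# Minimal sketch: arithmetic daily returns from a series of closing prices.
# The prices are hypothetical and reproduce the $100 -> $104 example from the text.
closing_prices = [100.00, 104.00, 102.96]

daily_returns = [
    (closing_prices[t] - closing_prices[t - 1]) / closing_prices[t - 1]
    for t in range(1, len(closing_prices))
]

print(daily_returns)  # approximately [0.04, -0.01]: a 4% gain, then a 1% loss
```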
We chose to use the market model (also called the single-factor benchmark of returns).[116] This is one of the most common models used in the event study literature.[117] This model assumes a linear relation between an individual stock’s return and the contemporaneous market return, described in this equation:

R_it = α_i + β_i R_mt + ε_it

where R_it is the return for stock i at time period t;
α_i and β_i are model parameters;
R_mt is the market return (in our case the S&P 500) at time t; and
ε_it reflects the model’s error term.[118]
In words, the model states that a linear relationship exists between the return of a specific stock (e.g., MSFT: Microsoft) and the overall return of the market (measured here by the S&P 500). The stock’s beta (β_i) is an important measure that is relevant to investors outside of its use in event studies. As Kliger and Gurevich explain: “Beta . . . captures the stock’s systematic risk, reflecting the comovement of the stock and market returns . . . [S]tocks with Betas in excess of unity are considered relatively risky, and stocks with lower Betas are considered more solid.”[119]
To calculate expected returns using the market model, we followed Bhagat and Umesh in using a 150-day estimation window from which to estimate our model parameters.[120] Typically, estimation windows are generated from data prior to the event. However, consistent with the efficient market hypothesis, Kliger and Gurevich note that it is equally possible to use data from after the event window to estimate model parameters.[121] This is fortunate, as it allowed us to avoid three potential confounding issues. The first potential confound is that there may not always be 150 days of data prior to the event window, which could occur when a company has only recently been added to a public exchange. Second, in many cases the holding date was quite close to the filing date. In these cases, relying on data prior to the holding date would have included data around the filing date, which is itself an event we are measuring. Using an estimation window after the holding date allows us to avoid contamination when the filing and holding dates are close to each other. Finally, sometimes the data from a particular estimation period are not reliable. This is sometimes due to another known event, such as a merger, that occurs in the months before one of our event dates. In other cases, there is no known reason to suspect the data are compromised, but statistical analysis of the distribution of estimated returns suggests that there may be underlying changes in the relationship between the individual stock and the market that violate the assumptions of the market model.[122] To minimize this risk in our data, we adopted Giudici and Blount’s suggestion to characterize the return data using more advanced methods such as the Jarque-Bera test. In the end, for each event, we selected either an estimation window prior to the event (days -170 to -21) or an estimation window subsequent to the event (days 31 to 180). We chose between these windows using the following procedure: If the filing date and the holding date were close in time, we used days -170 to -21 as the estimation window for the filing date and days 31 to 180 (post holding date) as the estimation window for the holding date. If missing data prevented us from using a particular estimation window (pre- or post-event), we used the available alternate window. If both windows were available and not overlapping, we chose whichever window produced the smaller Jarque-Bera statistic. We coded each event with our selected estimation strategy in order to perform sensitivity analyses later.
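As a rough illustration of this window-selection rule, the sketch below chooses between a pre-event and a post-event estimation window based on data availability and the Jarque-Bera statistic. The data structure (a dictionary of daily returns keyed by event day) and the helper names are our own hypothetical constructions rather than part of any established event-study package, and the sketch omits the special handling for filing and holding dates that fall close together.

```python
# Simplified sketch of the estimation-window selection described above.
# `returns_by_day` maps event day (0 = event date) to that day's stock return;
# the data structure and helper names are hypothetical.
from scipy.stats import jarque_bera

PRE_WINDOW = range(-170, -20)    # event days -170 through -21 (150 days)
POST_WINDOW = range(31, 181)     # event days 31 through 180 (150 days)

def window_returns(returns_by_day, window):
    return [returns_by_day[day] for day in window if day in returns_by_day]

def choose_estimation_window(returns_by_day):
    pre = window_returns(returns_by_day, PRE_WINDOW)
    post = window_returns(returns_by_day, POST_WINDOW)
    if len(post) < len(POST_WINDOW):   # post-event data missing: fall back to pre
        return "pre", pre
    if len(pre) < len(PRE_WINDOW):     # pre-event data missing: fall back to post
        return "post", post
    # Both windows complete: keep the one whose returns look more normal,
    # i.e., the one with the smaller Jarque-Bera statistic.
    jb_pre, _ = jarque_bera(pre)
    jb_post, _ = jarque_bera(post)
    return ("pre", pre) if jb_pre <= jb_post else ("post", post)
```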
We used ordinary least squares regression to estimate the α_i and β_i parameters for each stock around the filing date and holding date. We used these estimated parameters to calculate the expected return over the period t = -20 to t = 30 surrounding each event using the equation above. We then calculated abnormal returns for t = -20 to t = 30 by subtracting the expected return from the actual return on each date. Positive values of the abnormal return reflect higher returns than expected; negative values reflect lower returns than expected.
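The calculation just described can be summarized in a few lines. The sketch below is a minimal Python version, assuming the stock and S&P 500 returns have already been aligned by date; the function name and inputs are hypothetical.

```python
# Sketch of the market-model estimation and abnormal-return calculation.
# All inputs are hypothetical, pre-aligned lists of daily returns.
import numpy as np

def abnormal_returns(stock_est, market_est, stock_event, market_event):
    """Fit the market model over the estimation window, then compute abnormal
    returns (actual minus expected) over the event-period days."""
    # Ordinary least squares fit of: stock return = alpha + beta * market return
    beta, alpha = np.polyfit(market_est, stock_est, deg=1)
    expected = alpha + beta * np.asarray(market_event)
    return np.asarray(stock_event) - expected
```

Applied to returns for event days -20 through 30, the result is the series of daily abnormal returns from which the cumulative measures discussed below are built.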
Knowing the abnormal return on a specific date is typically not adequate to support conclusions about the impact of the event.[123] Given potential information leakage as well as potential delays in information reaching the market, many event studies allow the event window to range over multiple days. The abnormal return is cumulated across days on the assumption that its sign (positive or negative) is consistent across the event window dates, so that summing the daily values produces a cumulatively larger effect than would be observed from random fluctuations in a stock price behaving like the stock’s activity during the estimation period. One method for cumulating these effects is to add up the abnormal returns for a particular stock during the specified event windows.[124] These cumulative abnormal returns (CARs) for a group of related stocks can be subjected to statistical significance testing to see if there is a consistent effect of the event on related stocks (e.g., testing whether the average effect differs from zero or whether the proportion of cases showing a positive effect is greater than would be expected by chance).[125] These CARs can also be averaged to calculate a Cumulative Average Abnormal Return (CAAR), which reflects the impact of the event of interest on the group of stocks that experienced the event (often on different calendar dates from each other).[126] There is an alternative way to calculate CAARs, in which the stocks are grouped first and the abnormal returns are averaged across the stocks on a particular day, resulting in an Average Abnormal Return (AAR).[127] By summing the AARs from the start date through the end date of the event window, we can obtain the CAARs.[128] Although the two methods for calculating CAARs are mathematically equivalent, each method offers a unique advantage in answering questions about the impact of a lawsuit on stock returns. The first method yields a result for each firm in our sample, and we can analyze these individual results with inferential statistical tests to determine if the effects are statistically significant. This method, however, requires us to commit to a specific start and end date of the event window before we can conduct the analysis. As described above, we did not have a strong theoretical or empirical rationale for making an a priori specification of the start and end date for the event window. Fortunately, the second method of calculating CAARs allows us to determine the appropriate start and end dates based on the data we calculate. Once we determined the appropriate window for the event, we completed our analysis of the effects on individual firms. We describe the results of each of these analyses in the next section.
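A small numerical sketch makes the equivalence of the two CAAR calculations explicit. The abnormal-return matrix below is invented purely for illustration (three firms over a four-day event window).

```python
# Sketch: two mathematically equivalent routes to the CAAR.
# Rows are firms, columns are event days in a common window; values are invented.
import numpy as np

abnormal = np.array([
    [ 0.004, -0.012,  0.001, -0.006],   # firm 1
    [-0.003,  0.007, -0.009, -0.002],   # firm 2
    [ 0.010, -0.001, -0.004,  0.003],   # firm 3
])

# Method 1: cumulate within each firm (CARs), then average across firms.
cars = abnormal.sum(axis=1)
caar_from_cars = cars.mean()

# Method 2: average across firms on each day (AARs), then cumulate across days.
aars = abnormal.mean(axis=0)
caar_from_aars = aars.sum()

assert np.isclose(caar_from_cars, caar_from_aars)   # the two routes agree
```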
We began by calculating the CAARs via the second method, based on average abnormal returns (AARs). This allowed us to calculate cumulative average abnormal returns (CAARs) anywhere from t = -20 to t = 30 days around both filing and holding dates. Visual inspection of the data helped us identify regularities suggesting potential start and end dates. Consider the results in Table 1, which shows the CAAR when a publicly traded trademark owner brings suit, using different choices of the start and end date for the event announcement window.[129]
Table 1. Public firm brings suit

| Event window end date | Start day -2 | Start day -1 | Start day 0 | Start day 1 | Start day 2 |
| --- | --- | --- | --- | --- | --- |
| -2 | -0.00578 | | | | |
| -1 | -0.00433 | 0.00145 | | | |
| 0 | -0.00699 | -0.00121 | -0.00266 | | |
| 1 | -0.0083 | -0.00252 | -0.00397 | -0.00132 | |
| 2 | -0.01032 | -0.00454 | -0.00599 | -0.00333 | -0.00201 |
| 3 | -0.01554 | -0.00976 | -0.01121 | -0.00855 | -0.00724 |
| 4 | -0.0181 | -0.01232 | -0.01377 | -0.01111 | -0.00979 |
| 5 | -0.01826 | -0.01248 | -0.01393 | -0.01128 | -0.00996 |
| 6 | -0.02075 | -0.01497 | -0.01642 | -0.01376 | -0.01244 |
| 7 | -0.02158 | -0.0158 | -0.01725 | -0.0146 | -0.01328 |
| 8 | -0.02258 | -0.0168 | -0.01825 | -0.01559 | -0.01427 |
| 9 | -0.02126 | -0.01548 | -0.01693 | -0.01427 | -0.01296 |
From Table 1, one can see that the impact of a public trademark owner bringing suit is consistently negative, with an effect that grows more negative each day until it reaches its largest magnitude on day 8 after the suit is filed. This effect is observed irrespective of which starting event window date is chosen. Note also that almost all choices of start and end date lead to cumulatively negative effects. There is an exception to this pattern if we restrict our window to the single day before the filing (t = -1), where we find a positive effect of 0.145%. The implication is that, if we chose our window to be the day before the filing (t = -1 to t = -1), then we would conclude that there was a positive effect, in contrast to the negative effect observed with almost all other choices of start and end date.
Table 1 thus visually demonstrates the tradeoffs one must make with an event study, balancing the larger cumulative effects that are possible with longer windows against the potentially contradictory effects that may be observed in those longer windows. In this case, we observe an effect of approximately -1%, suggesting that the stock returns for this sample (plaintiffs bringing suit) were approximately 1% less than would be expected given their pre-event estimated returns. For a company with a market capitalization of $10 billion, a 1% decrease represents a loss of $100 million in value to shareholders.
To finalize the choice of start and end date, we needed to verify that the pattern observed with plaintiffs filing suit accurately described the effects for defendants around the filing of a lawsuit as well as at the resolution of a lawsuit for both plaintiffs and defendants. Consider the results in Table 2, which shows the CAAR when a publicly traded defendant firm is sued.
Table 2. Public defendant is sued

| Event window end date | Start day -2 | Start day -1 | Start day 0 | Start day 1 | Start day 2 |
| --- | --- | --- | --- | --- | --- |
| -2 | 0.002085 | | | | |
| -1 | 0.000187 | -0.0019 | | | |
| 0 | 0.003901 | 0.001817 | 0.003715 | | |
| 1 | 0.003378 | 0.001294 | 0.003191 | -0.00052 | |
| 2 | 0.008812 | 0.006728 | 0.008625 | 0.004911 | 0.005434 |
| 3 | 0.005177 | 0.003092 | 0.00499 | 0.001275 | 0.001798 |
| 4 | 0.002328 | 0.000243 | 0.002141 | -0.00157 | -0.00105 |
| 5 | 0.004707 | 0.002623 | 0.00452 | 0.000806 | 0.001329 |
| 6 | 0.002118 | 0.0000339 | 0.001932 | -0.00178 | -0.00126 |
From the table, it appears that there is generally a positive impact on cumulative returns, an effect that peaks on day 2 after the filing at 0.8%. The positive effect is consistently observed across different start and end dates close to the filing date.
Table 3 shows the CAAR at the end of litigation (combining the effects for both plaintiffs and defendants).
Table 3. Conclusion of lawsuit for plaintiffs and defendants

| Event window end date | Start day -2 | Start day -1 | Start day 0 | Start day 1 | Start day 2 |
| --- | --- | --- | --- | --- | --- |
| -2 | 0.001316 | | | | |
| -1 | 0.001478 | 0.000162 | | | |
| 0 | 0.003467 | 0.002151 | 0.001989 | | |
| 1 | 0.003792 | 0.002476 | 0.002314 | 0.000325 | |
| 2 | 0.002032 | 0.000716 | 0.000554 | -0.00143 | -0.00176 |
| 3 | 0.002017 | 0.000701 | 0.000539 | -0.00145 | -0.00177 |
| 4 | 0.0031 | 0.001784 | 0.001622 | -0.00037 | -0.00069 |
| 5 | 0.003018 | 0.001702 | 0.00154 | -0.00045 | -0.00077 |
From this table, it appears that there is generally a positive cumulative return at the conclusion of a lawsuit. This effect is approximately 0.2% and peaks approximately one to two days after the conclusion of the lawsuit. Note that if the event window starts on day -2, -1, or 0, the effect is positive. For event window start dates of 1 or 2, the cumulative average abnormal return is negative. Thus, the observed effect is rather sensitive to the choice of event window start and end days.
Integrating the results from these CAAR data led us to the choice of the event window dates.[130] In fact, we chose two separate windows. The first, from day -2 to day 1 after the event (either the filing date or the conclusion date of the lawsuit) is the same window that Bhagat and Umesh used in their study.[131] We also chose an event window from the day of the event (day 0) to day 3 after the event because we did not see a consistent pattern of information leakage prior to the event. The use of two event windows allowed us to characterize how robust our results were.[132]
The data presented in the first three tables are suggestive. However, taken alone, it is not clear if the effects observed in the data are meaningful ones that can be generalized. To answer these questions, we need to use inferential statistics, which are used to make predictions or draw conclusions from a sample.[133] In our study, we used one aspect of inferential statistics called hypothesis testing to draw conclusions about our results.[134] In traditional hypothesis testing, one specifies a null hypothesis and then computes the appropriate test statistic.[135] The value of the statistic either justifies the rejection of the null hypothesis in favor of an alternative hypothesis or does not justify rejecting the null hypothesis.[136] Researchers have to balance two risks. The first risk is rejecting the null hypothesis when the null hypothesis is actually true (e.g., concluding that a migraine drug has a positive benefit of reducing the number of migraines per month when, in fact, it has no effect).[137] This type of risk is termed Type I error. The second risk (called “Type II error”) is that the null hypothesis will not be rejected when it is, in fact, false.[138] For example, imagine testing a new migraine drug that actually decreases the average number of migraines per month for chronic migraine sufferers from 15 to 14 migraines. The null hypothesis is that the drug has no effect on the number of migraines. When the test statistic is calculated, it may not justify rejecting the null hypothesis even though that hypothesis is false. This likely occurs because the size of the effect (one fewer migraine) is rather small. A more dramatic effect such as decreasing the number of migraines by half is easier to detect. Also, the number of patients in the study may be too low; increasing the number of patients in the study would increase the power of the test to detect the smaller—but still perhaps meaningful—effect of the drug on the number of migraines per month.
A researcher must choose how to balance the risk of Type I and Type II errors based on their specific research question. Two of the most important choices are power and significance level.[139] Statistical power is a function of sample size.[140] Studies without adequate sample sizes are “underpowered” and at risk of not being able to answer the question that the researcher is studying.[141] The problem of underpowered studies has been increasingly recognized in empirical research.[142] Significance level (also called “alpha”) specifies the probability of making a Type I error and rejecting the null hypothesis when that hypothesis is true.[143] Conventionally, an alpha level of 0.05 is chosen.[144] Small alpha levels (such as 0.01 or 0.001) may be chosen for more stringent tests of significance; an alpha level of 0.10 is commonly accepted for identifying an effect that “trends toward significance” or is “marginally significant.”[145]
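For readers who want to see how these quantities interact, the sketch below uses the statsmodels power utilities to solve for the sample size a one-sample t-test would need at a given effect size, alpha, and power. The effect size of 0.3, alpha of 0.10, and power of 0.80 are illustrative values chosen for the example, not figures drawn from our study.

```python
# Sketch: the interplay of effect size, alpha, power, and sample size
# for a one-sample t-test. The numbers are illustrative only.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
needed_n = analysis.solve_power(effect_size=0.3, alpha=0.10, power=0.80,
                                alternative='two-sided')
print(round(needed_n))  # approximate number of observations required
```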
The result reported for a hypothesis test is a p-value.[146] The p-value represents the probability of observing a result that is as extreme or more extreme if the null hypothesis is true.[147] For example, with the hypothetical migraine drug, the null hypothesis is that the drug has no effect on the number of migraines per month. If half of the patients receiving the new drug have one fewer migraine in a month and the other half of the patients have one more migraine in a month, this result would be quite likely assuming that the null hypothesis of no effect was true. The p-value would certainly be greater than 0.05 and probably greater than 0.10 as well. We would not be able to reject the null hypothesis in this case. If, instead, the majority of patients had no migraines after taking the drug but a small minority did not respond and had the same number of migraines, then this result would be less likely to be observed if the null hypothesis was indeed true. The p-value would reflect this decreased likelihood; if the p-value was between 0.05 and 0.10, we could consider this trending toward significance, and if the p-value was less than 0.05, we could consider this statistically significant and reject the null hypothesis that the migraine drug had no effect on the number of migraines per month.
For our study, we are constrained by the number of cases in Port’s database that involve publicly traded companies. The number of cases in this trademark database per year is smaller than the number of cases analyzed by the event study methodology for patent special purpose entities.[148] Thus, our statistical power is somewhat limited, and there is no easy way to increase the sample size. In recognition of this constraint, we chose a significance level of 0.10 to capture effects that are significant or trend towards significance.[149]
We assessed statistical significance using two common statistical tests. The one-sample t-test evaluates the null hypothesis that the cumulative abnormal return equals zero.[150] The alternative hypothesis is that the cumulative abnormal return differs from zero. In this case, we are interested in values that may be significantly below zero or significantly above zero, so we used a two-tailed t-test.[151] The t-test assumes that the data come from a bell-curve type distribution. If the data are not well described by a bell curve, the t-test is not the appropriate test to use (although the t-test is fairly robust when these assumptions are not met as long as relatively large sample sizes are used).[152] Therefore, we used the t-test for our larger groups of data, but we omitted it for smaller subgroups where it is less appropriate. For all of our data, we used the sign test to determine if the proportion of positive to negative effects was significant.[153] The null hypothesis is that the number of positive results (that is, cumulative abnormal returns greater than 0) is equal to the number of negative results (cumulative abnormal returns less than 0). The sign test is non-parametric, which means that it does not require the data to come from a bell-curve distribution to be valid.[154] Together, the two tests complement each other: the sign test indicates whether the proportion of effects in one direction is statistically significant, and the t-test, when applicable, indicates whether the size of the effect is statistically significantly different from zero.
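The following sketch shows how the two tests might be applied to a set of firm-level cumulative abnormal returns using scipy. The CAR values are invented, and the two-sided alternatives shown are illustrative; they are not intended to reproduce the exact p-values reported in the tables below.

```python
# Sketch: one-sample t-test and sign test on firm-level CARs (invented values).
import numpy as np
from scipy.stats import ttest_1samp, binomtest

cars = np.array([0.012, -0.034, -0.005, 0.021, -0.018, -0.009, 0.003, -0.027])

# Two-tailed one-sample t-test: is the mean CAR different from zero?
t_stat, t_pvalue = ttest_1samp(cars, popmean=0.0)

# Sign test: is the split of positive vs. negative CARs consistent with 50/50?
n_positive = int((cars > 0).sum())
sign_pvalue = binomtest(n_positive, n=len(cars), p=0.5).pvalue

print(t_pvalue, sign_pvalue)
```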
Table 4 describes the abnormal returns for plaintiff and defendant firms at the filing of a trademark lawsuit and the statistical significance of these results.
Table 4. Announcement period abnormal returns for plaintiff and defendant firms on the filing of a trademark lawsuit.

|                   | Plaintiff, Day -2 to +1 | Plaintiff, Day 0 to +3 | Defendant, Day -2 to +1 | Defendant, Day 0 to +3 |
| Mean              | -0.83%  | -1.12%  | 0.34%   | 0.50%   |
| Median            | -0.32%  | -0.55%  | -0.70%  | 0.06%   |
| Minimum           | -17.47% | -11.30% | -9.90%  | -11.28% |
| Maximum           | 9.06%   | 11.36%  | 18.39%  | 30.40%  |
| Sample Size       | 85      | 85      | 74      | 74      |
| # positive        | 38      | 36      | 32      | 37      |
| # negative        | 47      | 49      | 42      | 37      |
| Sign test p-value | 0.193   | 0.096   | 0.148   | 0.546   |
| t-test p-value    | 0.071   | 0.017   | 0.558   | 0.417   |
The mean abnormal returns due to the filing of lawsuits are -0.83% for plaintiffs using the [-2 to +1] day event window and -1.12% for plaintiffs using the [0 to +3] event window. The most extreme negative value is -17.47% and the most extreme positive value is 11.36%; thus, the effects on individual firms include both positive and negative cumulative abnormal returns. Of the 85 plaintiffs included in the sample, 38 of the results were positive cumulative abnormal returns and 47 were negative for the [-2 to +1] event window. For the [0 to +3] event window, 36 of the cases were positive and 49 were negative. The results are therefore relatively similar no matter which event window is chosen. The p-values for the t-tests were 0.071 for plaintiffs for the [-2 to +1] window and 0.017 for the [0 to +3] window. Thus, there was a trend toward significance for the negative effect observed for the [-2 to +1] window and a statistically significant negative effect for the [0 to +3] window. The sign test p-value for plaintiffs during the [-2 to +1] event window was 0.193, indicating that the null hypothesis could not be rejected. The sign test p-value for plaintiffs during the [0 to +3] event window was 0.096, indicating a trend toward significance for the higher proportion of negative effects versus positive effects.
The mean abnormal returns for defendants across the two event windows were 0.34% and 0.50%, respectively. The most extreme negative return was -11.28% and the most extreme positive return was 30.40%. For the sample of 74 defendants, 32 returns were positive and 42 were negative for the [-2 to +1] event window, and 37 were positive and 37 were negative for the [0 to +3] event window. The p-values for the t-test and the sign test were all greater than 0.1, indicating no statistically significant effects. Thus, our results suggest a statistically significant negative effect for plaintiffs on the filing of a lawsuit and a variable impact on defendants that does not reach statistical significance. This finding contrasts with the findings of Bhagat and Umesh, who observed a marginally significant effect of the filing of a lawsuit on the abnormal returns for defendants.[155] The 30 defendants in their sample had a mean abnormal return of -0.4%. The returns ranged from a minimum of -9.2% to a maximum of 11.7%. The proportion of positive to negative abnormal returns was 11 to 19 (marginally significant at a p-value of 0.10 using the sign test).
Table 5 describes the abnormal returns for plaintiffs and defendants on the holding of trademark lawsuits.
Table 5. Announcement period abnormal returns for plaintiff and defendant firms on the holding of a trademark lawsuit.

|                   | Plaintiff, Day -2 to +1 | Plaintiff, Day 0 to +3 | Defendant, Day -2 to +1 | Defendant, Day 0 to +3 |
| Mean              | 0.52%   | 0.22%   | 0.22%   | -0.14%  |
| Median            | 0.38%   | 0.40%   | 0.07%   | 0.08%   |
| Minimum           | -11.56% | -9.94%  | -18.57% | -14.00% |
| Maximum           | 8.94%   | 10.52%  | 17.98%  | 13.32%  |
| Sample Size       | 85      | 85      | 74      | 74      |
| # positive        | 46      | 50      | 38      | 38      |
| # negative        | 39      | 35      | 36      | 36      |
| Sign test p-value | 0.258   | 0.064   | 0.454   | 0.454   |
| t-test p-value    | 0.134   | 0.586   | 0.642   | 0.788   |
The mean abnormal returns for plaintiffs were 0.52% and 0.22% for the [-2 to +1] and [0 to +3] event windows, respectively. The most extreme negative return was -11.56% and the most extreme positive return was 10.52%. Of the 85 plaintiffs in the sample, 46 demonstrated positive returns and 39 demonstrated negative returns over the [-2 to +1] event window. Over the [0 to +3] event window, 50 of the firms had positive abnormal returns and 35 had negative abnormal returns. This tendency for plaintiffs to show positive returns over the [0 to +3] event window was marginally significant, with a sign test p-value of 0.064. No other results achieved statistical significance for plaintiffs.
For the defendants, none of the results demonstrated statistical significance with the t-test or the sign test. Defendants had a positive mean abnormal return over the [-2 to +1] event window (0.22%), whereas the mean return over the [0 to +3] event window was negative (-0.14%). There were sizeable extreme values in this sample, with a minimum return of -18.57% and a maximum return of 17.98%. The relative proportion of positive to negative results was balanced, with 38 positive and 36 negative for both the [-2 to +1] and [0 to +3] event windows.
Thus, our results suggest that plaintiffs have a statistically significant trend toward having a positive return for the [0 to +3] event window, but the magnitude of these positive returns is not statistically significantly different from zero. None of the results for the defendants achieved statistical significance.
Our findings vary somewhat from the findings of Bhagat and Umesh. They observed a mean abnormal effect of 0.6% for plaintiffs, but this effect was not statistically significant, as they reported a balance of 20 positive to 23 negative abnormal returns for the 43 plaintiffs in their sample.[156] In contrast, plaintiffs in our sample showed a statistically significant tendency toward positive abnormal returns. Our findings for defendants differ as well. Bhagat and Umesh reported a statistically significant negative effect of -1.0% in their sample of 36 defendants.[157] In a subgroup of seven cases where there was a verdict in favor of the plaintiff, Bhagat and Umesh found a larger, statistically significant negative effect of -3.0% for defendants.[158] We found no statistically significant effect of the holding on our full sample of 74 defendants. We conducted several subgroup analyses inspired by Bhagat and Umesh; these are described below.
Next, we explored whether firm size had an impact on abnormal returns. Bhagat and Umesh tested for a large-firm effect in their study and found a stronger negative effect on stock returns for larger defendants (defined as those with a market capitalization above the median market capitalization of firms in their sample).[159] We calculated the minimum ($34.3 million), maximum ($505.7 billion), and median ($10.3 billion) market capitalization values for the firms in our data set. We defined small firms as those with market capitalization values below the median value and large firms as those with market capitalization values above the median value of the overall dataset. The median value we observed is close to the common finance heuristic that distinguishes large from small firms at a market capitalization of roughly $10 billion.[160]
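A short sketch of this size split is below; the firm names and market capitalization values are placeholders rather than observations from our dataset, and the treatment of a firm exactly at the median is a convention the researcher must pick.

```python
# Illustrative sketch with placeholder values; firm names and capitalizations are invented.
# Requires pandas.
import pandas as pd

firms = pd.DataFrame({
    "firm": ["A", "B", "C", "D"],
    "market_cap": [34.3e6, 2.1e9, 15.8e9, 505.7e9],  # in dollars
})

# Split at the sample median: below the median is "small," otherwise "large."
median_cap = firms["market_cap"].median()
firms["size_group"] = firms["market_cap"].apply(
    lambda cap: "small" if cap < median_cap else "large"
)
print(firms)
```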
Table 6 describes the abnormal returns of small plaintiff and small defendant firms around the filing of a trademark lawsuit.
Table 6. Announcement period abnormal returns for small plaintiff and defendant firms on the filing of a trademark lawsuit. Firms' market capitalization values are smaller than the median firm size of $10.31 billion.

|                   | Plaintiff, Day -2 to +1 | Plaintiff, Day 0 to +3 | Defendant, Day -2 to +1 | Defendant, Day 0 to +3 |
| Mean              | -2.24%  | -2.24%  | 0.71%   | 1.27%   |
| Median            | -0.81%  | -1.53%  | -0.68%  | 0.52%   |
| Minimum           | -17.47% | -11.30% | -9.90%  | -11.28% |
| Maximum           | 2.64%   | 4.83%   | 18.39%  | 30.40%  |
| Sample Size       | 46      | 46      | 38      | 38      |
| # positive        | 16      | 15      | 18      | 21      |
| # negative        | 30      | 31      | 20      | 17      |
| Sign test p-value | 0.027   | 0.013   | 0.436   | 0.314   |
| t-test p-value    | 0.001   | 0.001   | 0.481   | 0.268   |
The mean abnormal return for the 46 small firm plaintiffs was -2.24% over the [-2 to +1] event window and -2.24% over the [0 to +3] event window. The most extreme negative return observed was -17.47% and the most extreme positive return was 4.83%. The p-value of the t-test was 0.001 for both windows, indicating that the negative effect observed was statistically significant. Of the 46 small firm plaintiffs, 16 had positive returns and 30 had negative returns over the [-2 to +1] event window. Over the [0 to +3] event window, 15 firms had positive returns and 31 had negative returns. The higher proportion of negative returns was statistically significant for both event windows, with sign test p-values of 0.027 and 0.013, respectively.
The mean abnormal returns for the 38 small firm defendants around the filing of a lawsuit were 0.71% for the [-2 to +1] event window and 1.27% for the [0 to +3] event window. The returns ranged from a minimum of -11.28% to a maximum of 30.40%. Of the 38 small firm defendants, 18 had positive abnormal returns and 20 had negative abnormal returns over the [-2 to +1] event window. For the [0 to +3] event window, 21 firms had positive abnormal returns and 17 had negative abnormal returns. For defendants, none of the results achieved statistical significance using the t-test or the sign test.
These results suggest that, for small firm plaintiffs, there is a statistically significant tendency toward a negative abnormal return around the filing of a lawsuit and that the size of this negative return is significantly different from zero. For small firm defendants, no statistically significant effects are observed around the filing of a lawsuit.
Table 7 describes the abnormal returns of small plaintiff and small defendant firms around the holding of a trademark lawsuit.
Table 7. Announcement period abnormal returns for small plaintiff and defendant firms on the holding of a trademark lawsuit. Firms' market capitalization values are smaller than the median firm size of $10.31 billion.

|                   | Plaintiff, Day -2 to +1 | Plaintiff, Day 0 to +3 | Defendant, Day -2 to +1 | Defendant, Day 0 to +3 |
| Mean              | 0.35%   | -0.01%  | -0.19%  | -1.07%  |
| Median            | 0.05%   | 0.51%   | 0.18%   | -0.57%  |
| Minimum           | -11.56% | -9.94%  | -18.57% | -14.00% |
| Maximum           | 8.94%   | 10.52%  | 17.98%  | 10.82%  |
| Sample Size       | 40      | 40      | 37      | 37      |
| # positive        | 21      | 24      | 20      | 16      |
| # negative        | 19      | 16      | 17      | 21      |
| Sign test p-value | 0.437   | 0.134   | 0.371   | 0.256   |
| t-test p-value    | 0.543   | 0.986   | 0.822   | 0.184   |
The mean abnormal returns for the 40 small firm plaintiffs around the holding of a lawsuit were 0.35% and -0.01% for the [-2 to +1] and [0 to +3] event windows, respectively.[161] The returns ranged from a minimum of -11.56% to a maximum of 10.52%. Over the [-2 to +1] event window, 21 firms had positive abnormal returns and 19 had negative abnormal returns; over the [0 to +3] event window, 24 firms had positive abnormal returns and 16 had negative abnormal returns. None of these differences were statistically significant using the t-test or sign test.
For the 37 small firm defendants, the mean abnormal returns were -0.19% and -1.07% over the [-2 to +1] and [0 to +3] event windows, respectively. The abnormal returns ranged from a minimum of -18.57% to a maximum of 17.98%. The proportion of positive to negative abnormal returns was 20 to 17 for the [-2 to +1] event window and 16 to 21 for the [0 to +3] event window. None of these differences were statistically significant.
These results indicate that there are no statistically significant trends evident for our sample of small firm plaintiffs or small firm defendants around the holding of a lawsuit.
Table 8 describes the abnormal returns of large plaintiff and large defendant firms around the filing of a trademark lawsuit.
Table 8. Announcement period abnormal returns for large plaintiff and defendant firms on the filing of a trademark lawsuit. Firms' market capitalization values are larger than the median firm size of $10.31 billion.

|                   | Plaintiff, Day -2 to +1 | Plaintiff, Day 0 to +3 | Defendant, Day -2 to +1 | Defendant, Day 0 to +3 |
| Mean              | 0.78%   | 0.18%   | -0.06%  | -0.31%  |
| Median            | 0.28%   | 0.19%   | -0.91%  | -0.31%  |
| Minimum           | -5.95%  | -11.29% | -5.38%  | -4.95%  |
| Maximum           | 9.06%   | 11.36%  | 7.77%   | 4.02%   |
| Sample Size       | 38      | 38      | 36      | 36      |
| # positive        | 21      | 20      | 14      | 16      |
| # negative        | 17      | 18      | 22      | 20      |
| Sign test p-value | 0.314   | 0.436   | 0.121   | 0.309   |
| t-test p-value    | 0.146   | 0.793   | 0.914   | 0.426   |
The mean abnormal returns for the 38 large firm plaintiffs around the filing of a lawsuit were 0.78% for the [-2 to +1] event window and 0.18% for the [0 to +3] event window. The abnormal returns ranged from a minimum of -11.29% to a maximum of 11.36%. The proportion of positive to negative abnormal returns was 21 to 17 for the [-2 to +1] event window and 20 to 18 for the [0 to +3] event window. None of these differences demonstrated statistical significance using the t-test or the sign test.
For the 36 large firm defendants, the mean abnormal returns were -0.06% and -0.31% on the [-2 to +1] and [0 to +3] event windows, respectively. The abnormal returns ranged from a minimum of -5.38% to 7.77%. The relative proportion of positive to negative abnormal returns was 14 to 22 for the [-2 to +1] event window and 16 to 20 for the [0 to +3] event window. None of these differences were statistically significant.
These results indicate that there is no statistically significant effect for large firm plaintiffs or large firm defendants around the filing of a lawsuit. These findings contrast with the findings of Bhagat and Umesh. In their sample of 14 large firm plaintiffs, 11 firms had positive abnormal returns, which was statistically significant (p = 0.03 using the sign test).[162] Large firm plaintiffs in their sample had a mean abnormal return of 0.4%. In their sample of 14 large defendants, 10 firms had negative abnormal returns, which was marginally significant (p = 0.09 using the sign test). Large firm defendants had a mean abnormal return of -1.0%.[163] The mean and median effects we observed in our sample (positive for large firm plaintiffs and negative for large firm defendants) are in agreement with the findings of Bhagat and Umesh. Our larger sample of plaintiffs shows an approximately equal number of positive to negative abnormal returns, which leads to a non-significant difference using the sign test. The sign test is also not significant for our larger sample of defendants; however, we note that the pattern appears to be similar to that observed by Bhagat and Umesh. Our observation of 14 positive abnormal returns to 22 negative abnormal returns for defendants over the [-2 to +1] event window is suggestive of a trend (with a p-value of 0.121 close to the 0.10 significance level).
Table 9 describes the abnormal returns of large plaintiff and large defendant firms around the holding of a trademark lawsuit.
Table 9. Announcement period abnormal returns for large plaintiff and defendant firms on the holding of a trademark lawsuit. Firms' market capitalization values are larger than the median firm size of $10.31 billion.

|                   | Plaintiff, Day -2 to +1 | Plaintiff, Day 0 to +3 | Defendant, Day -2 to +1 | Defendant, Day 0 to +3 |
| Mean              | 0.69%   | 0.45%   | 0.63%   | 0.74%   |
| Median            | 0.52%   | 0.31%   | 0.00%   | 0.30%   |
| Minimum           | -7.07%  | -5.64%  | -2.26%  | -8.24%  |
| Maximum           | 8.65%   | 5.85%   | 11.99%  | 13.32%  |
| Sample Size       | 46      | 46      | 38      | 38      |
| # positive        | 26      | 27      | 19      | 22      |
| # negative        | 20      | 19      | 19      | 16      |
| Sign test p-value | 0.231   | 0.151   | 0.564   | 0.209   |
| t-test p-value    | 0.092   | 0.261   | 0.178   | 0.222   |
The mean abnormal returns for the 46 large firm plaintiffs on the holding of a lawsuit are 0.69% and 0.45% for the [-2 to +1] and [0 to +3] event windows, respectively. The abnormal returns range from a minimum of -7.07% to a maximum of 8.65%. The number of positive to negative abnormal returns was 26 to 20 and 27 to 19. These proportions were not statistically significant using the sign test. The positive returns over the [-2 to +1] event window were statistically significant using the t-test (p-value = 0.092).
For the 38 large firm defendants, the mean abnormal returns were 0.63% and 0.74% for the [-2 to +1] and [0 to +3] event window, respectively. The abnormal returns ranged from a minimum of -8.24% to a maximum of 13.32%. The number of positive to negative abnormal returns was 19 to 19 and 22 to 16. None of the differences were statistically significant using the t-test or sign test.
These results provide some support for the conclusion that the effect of a holding on large firm plaintiffs is positive. There is no clear effect on large firm defendants.
We can briefly summarize the conclusions of these group analyses. In general, the filing of a trademark lawsuit has a significantly negative impact on the returns for plaintiffs. The size of this effect is approximately -1%, and it is observed one to three days after the filing date of the lawsuit. For smaller firm plaintiffs (those with a market capitalization value smaller than the median value of the firms we studied), there is a significantly negative effect that is somewhat larger, at -2.24%. There is no statistically significant impact of filing for large firm plaintiffs. The effect of the conclusion of a lawsuit is significantly positive for plaintiffs, with a cumulative average abnormal return of approximately 0.4%. There is no statistically significant effect of the conclusion of the lawsuit for smaller firm plaintiffs, while larger firm plaintiffs have statistically significant positive returns of approximately 0.5%. Using the assumptions of the event study method, we can conclude that the market tends to respond negatively to a plaintiff filing a trademark lawsuit, that this response is more pronounced for smaller firm plaintiffs, and that it may not apply to larger firm plaintiffs. The market responds positively to the conclusion of trademark lawsuits for plaintiffs, and the response is slightly more positive for larger firm plaintiffs.
For defendants, there were no statistically significant effects on the filing or conclusion of lawsuits, and these results were consistent across small and large firm defendants. Thus, it appears in general that the market has no consistent pattern of response to defendants being sued or reaching the conclusion of a lawsuit. We note, however, that for both plaintiffs and defendants, there are interesting individual variations within these groups (with individual firms’ cumulative abnormal returns ranging from -18.57% to 30.4%).
As previously mentioned, Bhagat and Umesh found a stronger negative effect for defendants in cases where the verdict was in favor of the plaintiff. This suggests that there may be relevant subgroups of cases in which the effect of the filing or holding differs from the effects observed at the group level. We examined several such subgroups. In addition to separating plaintiffs from defendants, filings from holdings, and small, large, and combined firm sizes, we grouped the cases using the additional categories reflected in Table 10, including whether the party was a subsidiary and whether the plaintiff ultimately won the case.
We analyzed each combination of these variables using the sign test. We did not apply the t-test because it is less appropriate for the smaller sample sizes in our subgroups when the data deviate from a normal bell-curve distribution. These subgroup analyses should be considered exploratory given our relatively liberal significance level of 0.10. Additionally, we did not correct for multiple comparisons, so there is an increased risk that results identified as significant represent Type I errors in which we have rejected the null hypothesis of no effect when there actually is no effect. Therefore, these effects should not be overinterpreted as generalizable until tested with a separate confirmation sample. In Table 10, we report each combination that demonstrated a trend toward statistical significance, with a sign test p-value less than 0.1.
Table 10. Exploratory sub-group analyses (combinations with a sign test p-value less than 0.1).

| Combination | Event window | CAAR | # positive | # negative | Sign test p-value |
| Plaintiff x Filing x Subsidiary | Day -2 to +1 | -0.0322 | 2 | 7 | 0.09 |
| Plaintiff x Filing x Subsidiary | Day 0 to +3 | -0.00795 | 2 | 7 | 0.09 |
| Plaintiff x Filing x Win | Day -2 to +1 | -0.00984 | 22 | 33 | 0.089 |
| Plaintiff x Filing x Win | Day 0 to +3 | -0.01079 | 21 | 34 | 0.052 |
| Plaintiff x Filing x Win x Non-subsidiary | Day 0 to +3 | -0.01151 | 19 | 29 | 0.097 |
| Small Plaintiff x Filing x Non-subsidiary | Day -2 to +1 | 0.01922 | 15 | 26 | 0.059 |
| Small Plaintiff x Filing x Non-subsidiary | Day 0 to +3 | -0.02336 | 14 | 27 | 0.03 |
| Small Plaintiff x Filing x Win | Day -2 to +1 | -0.02903 | 8 | 19 | 0.026 |
| Small Plaintiff x Filing x Win | Day -2 to +1 | -0.02279 | 8 | 19 | 0.026 |
| Small Plaintiff x Filing x Win x Non-subsidiary | Day -2 to +1 | -0.02465 | 7 | 16 | 0.047 |
| Small Plaintiff x Filing x Win x Non-subsidiary | Day 0 to +3 | -0.02428 | 7 | 16 | 0.047 |
| Small Plaintiff x Holding x Non-subsidiary | Day 0 to +3 | 0.001229 | 21 | 12 | 0.081 |
Five significant combinations were observed related to plaintiffs around the filing of a lawsuit. There was a tendency toward negative abnormal returns for the 9 subsidiary plaintiffs over the [-2 to +1] event window (with a mean abnormal return of -3.22%). Subsidiary plaintiffs also showed a tendency toward negative abnormal returns for the [0 to +3] event window, although the mean abnormal return was positive at 0.795%. In the 55 cases where the plaintiff ultimately won, plaintiffs showed a significant tendency toward negative abnormal returns around the filing date in both the [-2 to +1] and [0 to +3] event windows. A smaller subgroup of 48 winning cases for non-subsidiary plaintiffs showed the same trend toward negative abnormal returns for the [0 to +3] event window.
An essentially similar pattern of results was observed for small firm plaintiffs around the filing date of a lawsuit. No effects were observed for large firm plaintiffs around the filing date. There was only one significant result for plaintiffs around the holding date. Small non-subsidiary plaintiffs showed a significant trend towards positive abnormal returns with 21 of the 33 cases having positive abnormal returns and an average effect of 0.12% over the [0 to +3] event window. We did not observe any potentially significant effects for defendants on the filing or the conclusion of lawsuits.
These exploratory subgroup analyses largely match what we observed in our group analyses. Plaintiffs tend to have significantly negative abnormal returns on the filing of lawsuits, and this effect may be more pronounced for smaller plaintiffs and less common for larger plaintiffs.[164] The conclusion of a lawsuit is generally received positively by the market for plaintiffs; this holds especially for larger plaintiffs and less so for smaller plaintiffs, except for small non-subsidiary firms. No effect is observed at the group or subgroup level for defendants. Our subgroup analyses strengthen these overall conclusions because they help rule out the possibility that entirely different results would be observed if we defined our groups in alternative ways.
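As cautioned above, the p-values in Table 10 are not corrected for multiple comparisons. The sketch below is illustrative only: the listed p-values are placeholders standing in for the full family of tested combinations, and it shows how a standard Holm-Bonferroni step-down correction would be applied. Because the smallest p-value reported in Table 10 is 0.026 and many more than four combinations were tested, none of the reported subgroup results would survive such a correction at the 0.10 level, which underscores their exploratory character.

```python
# Illustrative sketch: a Holm-Bonferroni step-down correction applied to a family
# of sign-test p-values. The list below is a placeholder; the real family would
# include every subgroup combination we tested, not only those reported in Table 10.
def holm_bonferroni(p_values, alpha=0.10):
    """Return a list of booleans marking which nulls are rejected after correction."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # indices sorted by p-value
    reject = [False] * m
    for rank, idx in enumerate(order):
        # Step-down rule: compare the k-th smallest p-value to alpha / (m - k + 1).
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # once one comparison fails, all larger p-values also fail
    return reject

example_pvals = [0.026, 0.03, 0.047, 0.052, 0.09, 0.097, 0.25, 0.41, 0.55, 0.72]
print(holm_bonferroni(example_pvals))  # with these placeholders, nothing survives
```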
We note that, with a slightly larger sample size, we observe different results than Bhagat and Umesh, particularly in not observing the expected negative impact on defendants. Given the sample size of each study and the limitations we have described, the question of how trademark lawsuits impact defendants, in particular, remains somewhat unanswered. Our results refute the assumption that the impact on defendants is a large negative effect that is robustly observed in trademark lawsuits. However, we cannot rule out the possibility that there are negative effects for defendants, as Bhagat and Umesh reported (and as a possible trend in our dataset for defendants overall and for large defendants suggests). Resolving this question would require a confirmatory analysis: an entirely new sample of cases, resampling methods, or an "out-of-sample" approach in which some cases are withheld while the statistical model is fit and the fitted model is then applied to those held-out cases to determine whether it accurately predicts their behavior.[165]
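One of the resampling approaches mentioned above could look like the following sketch, which is assumed rather than a procedure we ran: a bootstrap of the mean defendant cumulative abnormal return, using placeholder data, to gauge how stable an estimated effect is in a sample of this size.

```python
# Illustrative sketch of a bootstrap check; the CAR values below are randomly
# generated placeholders, not the 74 observed defendant CARs.
import numpy as np

rng = np.random.default_rng(seed=0)
defendant_cars = rng.normal(loc=0.003, scale=0.05, size=74)  # placeholder sample

# Resample the firms with replacement many times and record the mean CAR each time.
boot_means = [
    rng.choice(defendant_cars, size=defendant_cars.size, replace=True).mean()
    for _ in range(10_000)
]

# A 90% percentile interval (matching the 0.10 significance level used in the study).
low, high = np.percentile(boot_means, [5, 95])
print(f"bootstrap 90% interval for the mean CAR: [{low:.4f}, {high:.4f}]")
```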
To compare our results with the previous literature, we explored how well previous methodologies would describe our dataset. To determine whether the cases we found would have been identified based on an announcement in the Wall Street Journal (WSJ), as in Bhagat and Umesh's study, we hand searched for news coverage of our cases in the WSJ within a month before or after the filing and holding dates. For cases where the first party was publicly traded, we found that 3.5% (3 out of 85) of filings and 3.5% (3 out of 85) of holdings were reported in the WSJ. One case had both the filing and holding reported. The gap between the court event and the WSJ coverage ranged from one to two trading days for filings and one to five trading days for holdings. For cases where the second party was publicly traded, 5.3% (4 of 75) of filings and 1.3% (1 of 75) of lawsuit holdings were reported in the WSJ. Two of the news stories came out prior to the filing date of the lawsuit. One was one trading day before the filing (which would be captured in our event window of -1 to +2 days). The other came out eleven trading days before the date the lawsuit was filed, indicating a clear leakage of information that would likely decrease the impact on stock returns around the filing date. In the other two cases where there was WSJ coverage associated with the filing date, the news story ran one or two days after the filing date, which would be included in both our [-1,2] and [0,3] day event windows. There was one case where the second party was a publicly traded company and there was news coverage of the lawsuit's holding. The WSJ story was published one day after the holding date; thus, the announcement would be captured in both of our event windows.
There are two patterns worthy of note in these comparisons. First, the fact that only approximately 5% of the cases in our dataset had been covered in the WSJ suggests that there may be important differences between studies based on the way that cases are selected. Many of our cases may not have been newsworthy to the WSJ. One reason may be that some of our cases included defendants that were not large companies. It makes sense that not every trademark suit brought by a large company would receive coverage in the WSJ, particularly if the defendants are not publicly traded companies that are routinely given press coverage themselves. Additionally, the venue of the lawsuit may have influenced the probability of news coverage. The few cases that were covered in the WSJ tended to be heard in prominent metropolitan venues, such as the Southern District of New York.
The second pattern of note is the observation that news coverage generally appeared one to two days after the filing or holding date, suggesting that our analytic strategy nearly replicates that of Bhagat and Umesh for cases that were covered in the WSJ. We cannot, of course, assume that this agreement would be observed in cases that were not covered in the WSJ, but the observed concordance between the two approaches for the covered cases provides some support for our assumption that news of a lawsuit is likely communicated to the market in a brief window around the filing and conclusion dates of the suit.
Another assumption we made in our study is that the dynamics of new information coming into the market may have changed since Bhagat and Umesh conducted their study (and, more generally, since previous event studies defined what constitutes an "announcement," the signal of new information for the market to assimilate). Specifically, we wondered whether the rise of diverse digital platforms might change the relative importance of traditional media coverage as the marker of an announcement. We cannot definitively answer this question, but we did make some relevant comparisons. We replicated Bhagat and Umesh's methodology of searching the WSJ for the term "trademark infringement." In their study of cases from 1975 to 1990, they found 141 lawsuits that included publicly traded companies.[166] When we applied the same methodology to our twelve years of data, we observed 179 results for "trademark infringement" in the WSJ; of these, approximately 20 appeared to be actual first announcements of trademark lawsuits, and only one appeared to involve a case in our dataset. We know from our data that several more lawsuits involving publicly traded companies occurred during this period. Therefore, it seems reasonable to conclude that the coverage of trademark lawsuits in the WSJ has changed in the time between their study and ours. This raises the question of whether news coverage in a major periodical like the WSJ captures the information coming into the market or is itself the signal to the market.
As discussed above, our results found that the market typically responded negatively to the plaintiff's decision to file a trademark suit. Plaintiffs generally saw a statistically significant 1% decrease in their stock returns at the time of filing. This effect was much larger for the less wealthy "Small Plaintiffs," who saw a 2.24% decrease. This comports with prior empirical research and follows logically from the premise that the market recognizes that litigation of any kind is evidence that the plaintiff company has suffered some sort of harm (which may need to be reflected in its finances) and that the company will now face unplanned litigation-related costs for the near future. Companies with larger market capitalizations are better able to weather the financial costs of litigation, so it would be reasonable for the market to respond more negatively to the filing of litigation by companies with fewer financial resources. Such results are in line with the views of scholars who have previously argued that trademark enforcement is structured to favor wealthier companies: "trademark policing and protection favors those with the financial resources to spare. These same tools [for trademark enforcement] . . . are now susceptible to 'weaponization' by firms with strong market power and massive fortunes."[167]
Similarly, our results found a statistically significant positive effect on plaintiffs' stock returns upon the conclusion of litigation (regardless of how the litigation was concluded). Generally, plaintiffs' returns saw a 0.4% increase at the time of the holding. That result was slightly higher for "Large Plaintiffs" at 0.5% and lower for "Small Plaintiffs" at 0.12%. Overall, this could be explained by the market responding favorably to the cessation of the added expenses related to the legal dispute.
Of course, our data included noteworthy cases that varied from the overall statistical results. For example, in Philip Morris USA, Inc. v. Tammy's Smoke Shop, Inc., Philip Morris successfully obtained damages for the defendant's sale of counterfeit cigarettes.[168] While the market typically reacted negatively to the filing of litigation, the high likelihood of success in such a direct counterfeiting case may overshadow the perceived negative consequences of filing suit, which might explain why Philip Morris saw a 7% increase in its stock return upon the filing of the suit.
Additional insight could also be gained from looking at some of the more extreme data points and some of the cases with more unexpected results. It is hard to hypothesize why the market occasionally responded in a large, negative fashion to positive news for a party. For example, Sony's stock returns saw a 6% decrease on the holding date when the court granted its motion for summary judgment in Sherwood 48 Associates v. Sony Corporation of America.[169] A counterintuitive result like this one could simply be a fluke. Perhaps Sony received negative press on the date in question, unrelated to the trademark dispute. This is why having a larger case sample size is important when calculating the statistical significance of such studies; it helps filter out the effects of unrelated stock movement. Similarly, in the case of World Wrestling Federation Entertainment, Inc. (WWFE) v. Bozell, the plaintiff saw a 12% decrease in the company's stock returns upon the filing of the lawsuit and an 8% decrease upon the holding date (even though the court's resolution of that particular motion was in WWFE's favor).[170] In that case, WWFE brought suit against Bozell (and organizations affiliated with him) alleging trademark dilution, various forms of unfair competition, and defamation based on Bozell's claims in the media that the WWFE should be held responsible for the deaths of four children who imitated wrestling moves they witnessed on the plaintiff's television programs.[171] The market's strong negative response to the suit (and to the favorable holding) may be less about the market's estimation of the merits of WWFE's Lanham Act claims and more about the publicity that the suit brought, further connecting WWFE to the deaths of children.
Our results differ from most of the prior litigation event studies in that we did not find any statistically significant effect on the defendants' stock returns at the time of filing or holding. Our initial data actually indicated that there could be a positive response for defendants on the filing of the trademark litigation, but that effect was ultimately not statistically significant. Even so, a negative effect on defendants' stock returns at the time of filing has been a consistent result in past event studies, so it is noteworthy that our outcome is different.[172] As previously discussed, a prior event study found that the negative consequences to defendants were significantly larger in cases specifically involving patent litigation.[173] Perhaps this is due to the statistical limitations of our study or those previously conducted. However, it is also possible that there is something unique about trademark litigation that makes the impact on a defendant less obvious. We take note of four possible reasons for this different outcome for trademark litigation defendants: (1) investor recognition of the lower costs and damages involved in trademark litigation, (2) investor understanding of trademark "bullying," (3) investor concern about management focus, or (4) a concept we are calling "borrowed goodwill."
First, investors may recognize that trademark infringement cases may be less costly, in terms of both direct litigation expenses and average damage awards, than cases involving patent or even copyright infringement. According to the 2022 Economic Survey of intellectual property attorneys conducted by the American Intellectual Property Law Association, trademark infringement suits lead to median litigation costs between $325,000 and $1,000,000.[174] Patent infringement costs are much higher, at $675,000 to $4,000,000, while copyright infringement costs are much more variable, with median costs from $350,000 to $6,750,000.[175] In addition to variations in the cost of litigation, there are variations in the damages associated with different kinds of intellectual property infringement. Successful plaintiffs in a copyright suit may be entitled to incredibly large statutory damage awards. In 2021, the average patent infringement damage award was nearly $61,000,000.[176] Kenneth Port's research indicated that trademark infringement suits only rarely result in an award of damages,[177] and those awards are of a smaller magnitude than those in patent infringement litigation. Given the smaller damage awards and their relative rarity, investors may be less concerned about the potential harm to a defendant caused by a trademark infringement suit.
A related second possibility is that investors may already be savvy enough to recognize that trademark law seems to encourage aggressive enforcement tactics and "bluffing" (if not bullying). Investors may already realize that very few trademark infringement suits proceed to a full trial on the merits and that only perhaps 5% ever result in an award of damages. Being able to place the filing of a trademark infringement suit into this context would help to minimize the perceived harm facing a defendant. In essence, the filing of the litigation might be viewed as nothing more than an unnecessarily expensive cease-and-desist letter. Therefore, the market may respond negatively to the plaintiff taking such a costly step while responding in a more neutral fashion toward the defendant. If trademark-savvy investors do not "penalize" defendants as strongly upon the filing of trademark litigation as they might if the defendant were being sued for patent infringement, this suggests that companies could do more to incorporate investor education into their brand development strategies. If investors better understood a company's goals in bringing suit, the market might respond more favorably toward that trademark owner (or more negatively toward the specific defendant).
A third possibility is that the filing of trademark litigation may provide the market with a signal about a firm's management and focus. Because corporate directors and officers, like all people, can focus on only a limited number of objectives at a time, the filing of a trademark suit may signal that management is distracted by legal actions. This theory offers an explanation consistent with the reduced effect for larger firms: larger firms have more resources and a greater ability to delegate enforcement matters. Similarly, a stock price rise on the conclusion of the suit may signal that management's distraction by the trademark suit has come to an end.
A fourth possibility is that defendants may occasionally benefit from "borrowed goodwill" upon the filing of a lawsuit. Given that these results were not statistically significant, it is possible that they were driven by outside influences or simply by oddities in our data set. However, looking at a few of the relevant cases provides some context from which we can speculate about possible reasons for the market to respond favorably to a public defendant being sued. In President and Fellows of Harvard College v. Harvard Bioscience, Inc., the mere filing of the suit might remind the public (and thus investors) of the actual historical association between the public defendant and the well-regarded institution of higher education.[178] Being reminded of that connection could allow the defendant to temporarily borrow some of Harvard College's goodwill (resulting in a 15% increase in stock returns upon the filing of the suit).
A similar transfer of positive goodwill could explain the 12% increase in the defendant's stock returns in Too, Inc. v. The TJX Companies, Inc., where the plaintiff sued the defendant discount retailer for its sales of current season merchandise from the plaintiff's higher-priced Limited Too clothing brand (which the defendant purchased from manufacturers without the plaintiff's consent).[179] The filing of this suit may have been regarded positively by investors because it notified consumers that the defendant possessed the plaintiff's current season products (or close facsimiles) at substantially reduced prices.[180] That is practically an advertisement for a discount retailer that normally sells "past season" merchandise.
In Brown v. Electronic Arts, Inc., the defendant video game company saw an 8% increase in stock returns upon the plaintiff's filing of the suit.[181] Here, the defendant was sued by the plaintiff, whom the court described as "one of the best professional football running backs of all time," for violation of the football player's rights in his name, identity, and likeness in its football videogame.[182] The market may have responded positively to the reminder that the defendant's game includes such respected players (and with enough similarities and attention to detail to encourage this lawsuit). Again, the defendant may have temporarily borrowed goodwill simply through the association of the parties to the suit. This would benefit from further investigation in a future project.
Our results suggest that corporate decisionmakers may want to evaluate whether to pursue trademark litigation with a bit more nuance. Assume that there are a handful of goals that a corporation might be pursuing in bringing trademark litigation: (1) to obtain damages for a violation of trademark rights by a competitor; (2) to obtain an injunction to stop a harm, like cybersquatting or false advertising, that could directly harm the company's goodwill; (3) to send a message to competitors, other businesses, consumers, and even the market that the company "protects its brand"; or (4) as discussed in Part I.A above, to build evidence of trademark strength, fame, and enforcement to aid in future trademark litigation efforts.[183] According to this study, the market is likely to respond negatively toward the plaintiff, for a short while, upon the filing of trademark litigation, but that negative response could ultimately be offset by the benefits to the company pursuing goals (1) or (2) if the court ultimately awards damages or an injunction as a result of the litigation.
However, goals (3) and (4) could result in unnecessary market harm to the plaintiff, possibly without a similar harm being felt by the defendant. In those instances, corporate decisionmakers may be wise to consider other, more creative options. Perhaps those goals are better achieved by forgoing the costs of litigation (both direct costs and costs to stock returns) and instead placing more emphasis on alternative dispute resolution. Making a company's non-litigation enforcement efforts more explicitly public, possibly as part of the company's marketing efforts, could help to build the same public perception that the company "protects its brand." Additionally, such efforts may be sufficient to provide evidence of exclusivity of use and enforcement efforts if the company is ever required to litigate.
It is clear that trademark attorneys have encouraged clients to engage in aggressive trademark enforcement efforts based on speculative fears of trademark abandonment or genericide, but it might be time to reconsider that advice. Corporate law gives directors and officers wide discretion about how to manage the corporation’s affairs. Rather than think about each trademark enforcement action in isolation, directors and officers should consider “asset sensitive” governance, which would require them to plan for the longevity of a brand.[184] This would better recognize that “[u]nlike most corporate assets which depreciate over time, most intangible assets can, instead, gain value the longer they exist.”[185] However, these same intangible assets are also fragile and subject to the whims of public perception. The public perception of the company’s trademark enforcement efforts is an important variable to consider.
Business and trademark law scholar Deven Desai argues that trademark law has actually evolved over the last century in tandem with corporate and antitrust law to allow for greater flexibility and risk-taking.[186] The result is that both trademark and corporate law support a “conception of the firm as able to do almost anything it wishes,”[187] such that trademark law supports firm autonomy and flexibility—perhaps to the detriment of consumer protection at times. Because corporate law has adopted the business judgment rule’s gross negligence standard of liability for directors and officers, those decisionmakers have incredible freedom to take risks.[188] The business judgment rule essentially makes directors and officers liable for only conflicted transactions or gross negligence. As such, corporate law now “allows firms to do as they please and leaves shareholders or consumers little recourse other than selling their stake or not buying goods. The corporation comes out on top with more power to do as it wishes.”[189] Trademark lawyers need to recognize this interaction between trademark law and corporate law when advising corporate clients. The best advice might be to take bigger risks. That risk could be to invest resources into litigation that may or may not be supported by the market (but which may help to grow one’s trademark strength and fame in the future); however, the best risk could also be to allow third parties to utilize non-competitive, similar trademarks. Given that corporate law makes it very unlikely that an officer or director would be liable for their failure to bring a trademark suit in such instances, the best advice might be to simply sit back and do nothing—for now.