Programmed to Kill

A self-driving car is something that has been dreamed of, but considered unattainable, for decades. Now, Uber is testing self-driving cars on the streets of Pittsburgh and San Francisco.1 Although self-driving cars bring a long list of benefits to society, they also raise a number of legal issues. Advances in technology tend to outpace current laws, and self-driving cars are no exception. One of the biggest questions facing courts and lawmakers is: in the case of an inevitable crash, how should the car act, and whom do we hold liable if the car reacts the “wrong way”?

In the near future, there will be instances where a self-driving car will have to act when an accident is inevitable. When that happens, what should the car be programmed to do? Should the car try to minimize damage to society, or should it protect the passengers at all costs? Minimizing damage to society seems like a good idea until you are the one inside the car. Many people would feel uncomfortable riding in a car that might prioritize the wellbeing of others over their own lives.2 Instead, people would probably prefer a car that values their lives over the lives of others.3 However, a self-driving car that puts the lives of its passengers above all else would be extremely dangerous, as it could cause a substantial amount of harm to others to protect a single passenger. At some point, someone will have to decide how to appropriately program self-driving cars.

The issues in programming self-driving cars also raise a question of liability. Is the manufacturer liable for how it programmed the car, or should the consumer be held liable as the owner of the car?

In many ways, holding the manufacturer liable for any injuries caused by a self-driving car’s programming makes sense. The manufacturer dictates what the car will do in the case of an emergency. As such, the car arguably suffers from a design defect, and the manufacturer should be held liable for the damages that stem from the car’s programming.4 In the majority of jurisdictions, it is enough to show that a product “does not perform safely as an ordinary customer might expect it to” to establish a design defect.5 A car that is programmed to minimize damage might defy the ordinary consumer’s expectation that the car should protect its owner first and foremost. However, such liability might disincentivize manufacturers from further developing self-driving cars.6 A reasonableness test could be applied to the actions of programmers, but that would require courts to determine what programming is reasonable and what is not after the damage is already done.

Instead of the manufacturer, owners could be held liable for the decisions made by self-driving cars. Holding the owner responsible also seems consistent with traditional notions of driver liability. However, owners of self-driving cars do not actually exercise control over the car. Holding them responsible would effectively make owners liable for the manufacturer’s programming.7 Not only does this go against basic notions of justice, but it also disincentivizes consumers from purchasing the cars, which would stifle technological advancement and deprive society of a social good.8

The introduction of self-driving cars will undoubtedly change the legal landscape for automobiles. A simple reasonableness test can no longer be applied to drivers of cars. Moreover, it is unclear whom to hold liable, and for what. Hopefully, lawmakers will be proactive in enacting legislation to resolve these issues.

Regulating Home-Sharing Services

Legislators across the country are proposing regulations for home-sharing services like Airbnb and Expedia’s HomeAway, which allow users to rent out their properties like hotels. In New York City, for example, Governor Andrew Cuomo is considering signing a bill that would fine Airbnb users $7,500 for advertising their short-term rentals.1 These restrictions on using Airbnb are causing controversy among landlords, tenants, and homeowners. Airbnb warns hosts on its website to understand and abide by the contracts that bind them, such as leases or co-op rules.2 Despite these warnings, however, lawsuits are arising between buildings and landlords, and between landlords and tenants, over claims such as operating illegal hotels and breaching lease agreements. Additionally, Airbnb has seen an increase in allegations of racial bias, with guests claiming their reservations were declined because of their race.3

New York and California are two of the more prominent states where legislators are threatening the business model of Airbnb and its competitors.4 New York’s pending legislation builds on existing New York law, which makes renting out an apartment or home for fewer than thirty days illegal.5 According to a report by MFY Legal Services and Housing Conservation Coordinators, New York City had 51,397 listings in 2015, with fifty-six percent of those listings being illegal.6 Airbnb’s General Counsel has since responded with allegations that the New York bill violates the federal Communications Decency Act and free speech rights under the First Amendment.7

On the opposite coast, California legislators are considering a statewide solution to address cities’ concerns that short-term renters generate complaints and do not pay taxes.8 Airbnb and HomeAway have both initiated complaints against the city of Santa Monica and filed a joint motion for a preliminary injunction.9 The spike in short-term rentals in the United States driven by Airbnb has inevitably prompted the recent regulations on home-sharing.10 To address some of the legal issues, Airbnb has committed to automatically collecting occupancy taxes from its hosts in certain markets and paying the cities directly.11 On the other hand, some cities have taken the initiative to pass ordinances that regulate home-sharing activity. Nashville approved an ordinance requiring renters to be twenty-one years of age and to possess a permit for short-term rentals.12 The ordinance also limits the number of persons per rental and per bedroom and prohibits food service.13

The legal issues presented by home-sharing services are increasing, but the solutions being proposed by cities and Airbnb seem promising for the future of short-term rentals.

Could a Retro Approach Work to Protect the Electricity Grid Against Hackers?

With estimates of economic losses from a blackout reaching a trillion dollars in the most damaging scenarios, the National Renewable Energy Laboratory (NREL) has taken steps to protect the grid from cyber threats and hackers.1 To prepare for the ongoing cyber-security threat, the NREL uses a “test bed” that mimics power utility systems and allows friendly hackers to identify issues within the system.2 Once a vulnerability is discovered in the test bed, the findings are shared with the utility industry to improve overall infrastructure protection.3

U.S. lawmakers and power companies are worried about the threat posed as these systems become more interconnected. As a solution, a bipartisan group of lawmakers is calling for a less sophisticated approach: installing analog technology so that separate grids are isolated, leaving hackers without full access.4 This concept arose after a cyber-attack on the Ukrainian power grid left almost 250,000 people without power.5 Experts noted that the blackout would have been worse but for the older technology Ukraine uses; the hackers did not have access to the entire grid. They could only hack what they could reach.

United States Senator Angus King (I-ME) commented, “The United States is one of the most technologically-advanced countries in the world, which also means we’re one of the most technologically-vulnerable countries in the world.”6 King is leading a group of bipartisan lawmakers who want to pass a bill to examine ways to replace parts of the currently advanced grid with more manual procedures, so as to make it more difficult for a hacker to achieve full access.7 King said that with a more manual system in place, hackers would need to “actually physically touch the equipment, thereby making cyber-attacks much more difficult.”8 The other senators backing the bill are Jim Risch (R-ID), Martin Heinrich (D-NM), and Susan Collins (R-ME), all of whom are members of the Senate Intelligence Committee.9

It’s no surprise that lawmakers see this threat as a pressing issue. Lloyd’s of London estimates that a successful attack on the U.S. energy grid could cause damage ranging from $200 billion to $1 trillion.10

The proposed bill would be titled the “Securing Energy Infrastructure Act.”11 It includes key provisions that would allow the U.S. government to explore ways to make the energy grid less vulnerable to cyber-security threats.12

This retro approach to a modern problem is an original solution that may just work. As discoveries are made and the costs and benefits are analyzed, it will be interesting to see whether this old-fashioned approach will be an effective tool in combating the pressing threat of cyber-terrorism. At first glance, it is an appealing prospect, but I think it would probably be more worthwhile to strengthen the existing system than to dismantle it. This bill may produce realistic solutions to a pressing problem, but at the same time it could discourage improvement and efficiency.

Creative Destruction: How Uber is Altering the NYC Transportation Industry

Creative destruction, the concept that technology improves the lives of many at the expense of a few, has reemerged in New York City.1 Creative destruction materialized in New York City as far back as the early 1900s, when automobiles destroyed the horse-drawn transportation industry. In recent years, it occurred again when digital cameras displaced film producers. Often, creative destruction is met with lawsuits meant to prevent modern technology from devastating an industry. Hence, the New York City yellow taxicab industry has filed a number of lawsuits in an attempt to curb the “creative destruction” of the New York City transportation industry. Unfortunately for the industry, it has not received the support of the New York courts.

Yellow taxicabs have dominated the New York City (“NYC”) transportation industry since the 1930s.2 The Haas Act of 1937 mandated that the New York Taxi and Limousine Commission issue a limited number of medallions to the highest bidders.3 This Act reduced the viable options for traveling in NYC because only yellow taxicabs with medallions are permitted to accept street hails. A street hail occurs when a passenger calls out to, whistles at, or gestures for a taxicab. In contrast, the Taxi and Limousine Commission issued licenses to black car services. A black car service may only accept passengers on the basis of prearrangement; it is prohibited from picking up street hails.

As hailing taxis became the preferred mode of transportation in NYC, the value of a taxicab medallion grew. In 1938, a medallion was worth seventy-five dollars. By 2013, its value had grown to $1.3 million. The value of these medallions allowed owners to live comfortably. Owners rented out their medallions, created small businesses by hiring drivers, or took out mortgages against the medallion’s value.

Once Uber Technologies, Inc. (“Uber”) penetrated NYC’s transportation market, the yellow taxicab industry experienced a sharp decline. Uber created a smartphone application that connects passengers with drivers. The Uber app allows a passenger to view the location of available drivers and request one with a tap on a smartphone. The application then transmits the passenger’s request to several drivers, and the closest driver picks up the passenger.
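Uber’s actual dispatch system is proprietary and far more sophisticated, but the core idea the paragraph describes, matching a request to the nearest available driver, can be sketched in a few lines of Python. The snippet below is illustrative only; the driver data, the haversine distance heuristic, and the function names are assumptions for the example, not Uber’s real matching logic.

```python
import math
from dataclasses import dataclass

@dataclass
class Driver:
    driver_id: str
    lat: float
    lon: float
    available: bool = True

def distance_km(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two points (haversine formula)."""
    r = 6371.0  # Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def match_closest_driver(passenger_lat, passenger_lon, drivers):
    """Return the nearest available driver, or None if no one is free."""
    candidates = [d for d in drivers if d.available]
    if not candidates:
        return None
    return min(candidates,
               key=lambda d: distance_km(passenger_lat, passenger_lon, d.lat, d.lon))

# Example: a passenger in Manhattan requests a ride; driver "A" is closest and free.
drivers = [
    Driver("A", 40.7359, -73.9911),
    Driver("B", 40.7484, -73.9857),
    Driver("C", 40.7306, -73.9866, available=False),
]
print(match_closest_driver(40.7353, -73.9906, drivers).driver_id)  # -> A
```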

In 2013, as the number of Uber passengers grew, the value of the taxicab industry declined. Daily taxicab trips in NYC decreased by 25,000, and fare-box revenue fell by about $200,000 per day, on average. More notably, the NYC taxicab industry experienced a sharp diminution in the value of the medallion, which fell from $1.3 million in 2013 to an estimated $600,000 in 2015.

Needless to say, the yellow taxicab industry was not pleased with these declines, and it waged a battle against Uber. Specifically, NYC yellow taxicab representatives filed an administrative petition requesting that the Taxi and Limousine Commission compel Uber to abide by the “street hail” regulations. If granted, this request would have prevented Uber drivers from picking up passengers unless they had a medallion. The taxicab industry argued that because Uber passengers simply tap their smartphones and a driver comes, the time and clicking involved are similar to a street hail; Uber passengers are essentially “e-hailing” drivers and therefore performing a modern-day “street hail.”

The Taxi and Limousine Commission denied the administrative petition. It reasoned that Uber passengers are not “street hailers” and that Uber is more akin to a black car service because the Uber application only allows for a prearranged ride. The Commission conceded that the time it takes Uber passengers to connect to drivers is similar to, or sometimes shorter than, the time it takes passengers to hail yellow taxis, but it ultimately held that Uber passengers “pre-arrange” for a ride.

Following the denial of their administrative petition, the representatives for the yellow taxicab industry filed a claim with the New York Supreme Court alleging that the Taxi and Limousine Commission’s decision was arbitrary and capricious.4 The court rejected that challenge. Thus, in sum, the courts permitted “creative destruction,” technology’s ability to benefit many at the expense of a few, to reshape the NYC transportation industry, much to the chagrin of NYC’s yellow taxicab medallion owners.

The Rise of Self-Driving Cars

Technology has played a huge role in the development of our society. We are where we are today because of the rise of technology. About two decades ago, people were being introduced to the Internet and computers and were wrapping their minds around having portable phones. Today, technology has taken a more advanced turn. We are now experiencing an “intelligent-assistant” boom, whether it’s Siri on iPhones, robots replacing people’s jobs, or even self-driving cars. As technology advances, a new set of legal ramifications comes along as well.

Recently, self-driving cars have been introduced to the world. It is obvious that the idea of a self-driving car would scare some people, and many questions arise from this latest invention. Can people trust that a self-driving car will get them from point A to point B? How reliable are self-driving cars? Most importantly, are self-driving cars safe?

This is where law and technology intertwine. The legal issue arises when we have to figure out who bears the responsibility if something goes wrong.

Although the thought of not being in control does scare some people, there are positives to this new invention. Driverless cars could save lives: problems such as texting while driving or driving under the influence could be eliminated if a person is not in control of the vehicle. Driverless cars may also allow those who lack the means to obtain a license, or who otherwise cannot drive, to get around on their own. Even the technology that comes along with a driverless car proves beneficial; with sensors, computers, and back-up systems, some can argue that a car equipped with these systems provides more security and reliability.1

However, even an invention that proves beneficial and exciting is bound to create issues. From a legal perspective, liability questions will arise, compounded by the fact that we, as humans, are still not fully comfortable with robots or robotic objects taking control of our lives. Many people believe that if they are in control of the car, they can avoid accidents and mistakes. Can people really come to trust a robot? Questions of whom to sue and where to place the blame – the car manufacturer, the other driver, etc. – will arise. This is where a brilliant idea can easily get clouded by its negative aspects. Future vehicles developed around computer systems will usher in a new transition the legal world will have to face.

According to the American Bar Association, driving is a dangerous activity in general.2 Around 1.2 million people are killed every year, due mostly to traffic accidents.3 The positive about self-driving cars is that they don’t fall asleep, they don’t get drunk, and they don’t get distracted like a human driver would.4 So even though people may be slow to accept the idea, and although every new invention comes with legal issues, self-driving cars may actually be the next big thing.

Automated Cars Should Never be Called “Driverless”

The concept of self-driving cars is nothing new nowadays. People have dreamt of the self-driving car since at least the 1930s.1 Unfortunately for those dreamers, the actual automation of consumer vehicles was nothing more than science fiction until recent years.2 Nevertheless, today it seems everywhere you look another company is trying to get into the market; companies from Google to Tesla, Apple, Toyota, and even Uber are getting on the autopilot bandwagon.3 The list includes seemingly every car, computer, technology, and leading transportation company you can think of, totaling an astounding 33 corporations to date.4 But how safe is the idea in the first place for those on the road? Furthermore, how much research has been put into just that question? Even more worrisome, how much have the heads of these corporations paid attention to the safety research in their quest to corner the new market?

Peter Valdes-Dapena with CNN explains that the greatest danger may be in the name.5 He argues that the term “autopilot” “…invites the driver to take their feet off the pedals and hands from the steering wheel for long stretches of highway travel.”6 But what drivers may be missing is that not all “autopilots” are equal. In 2014, the Society of Automotive Engineers International (SAE International) set out a classification system consisting of six different levels of automated vehicles.7

Level 0: No Automation: the full-time performance by the human driver of all aspects of the dynamic driving task, even when enhanced by warning or intervention systems.

Level 1: Driver Assistance: the driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver perform all remaining aspects of the dynamic driving task.

Level 2: Partial Automation: the driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment and with the expectation that the human driver perform all remaining aspects of the dynamic driving task.

Level 3: Conditional Automation: the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene.

Level 4: High Automation: the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene.

Level 5: Full Automation: the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver.8
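For readers who find a data structure clearer than prose, the taxonomy above can be summarized in code. The Python sketch below simply restates the SAE levels as quoted; the enum and helper names are invented for illustration and are not part of the SAE standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrasing the definitions above."""
    NO_AUTOMATION = 0           # human performs the entire dynamic driving task
    DRIVER_ASSISTANCE = 1       # system handles steering OR speed; human does the rest
    PARTIAL_AUTOMATION = 2      # system handles steering AND speed; human monitors
    CONDITIONAL_AUTOMATION = 3  # system drives; human must answer requests to intervene
    HIGH_AUTOMATION = 4         # system drives even if the human fails to intervene
    FULL_AUTOMATION = 5         # system drives under all conditions a human could manage

def human_fallback_required(level: SAELevel) -> bool:
    """Levels 0 through 3 still rely on a human to monitor or take over."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

# A Level 2 "autopilot" still expects a fully attentive human driver.
print(human_fallback_required(SAELevel.PARTIAL_AUTOMATION))  # True
```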

Knowing this, it makes much more sense that the Tesla Model S Owner’s Manual, for example, says some things you might not expect given your prior understanding of “autopilot.”9 The Tesla Model S can do things like maintain the car’s lane position, maintain a safe following distance behind traffic ahead, and change lanes when you signal.10 The car can even stop when there is something ahead; however, the manual warns that this feature will not always activate.11 In instances where there is a non-moving object in your path, or when you are moving at more than 50 miles per hour and a moving vehicle changes lanes to reveal a stationary vehicle, the system is unlikely to brake.12 “Drivers are also warned that the system is intended for use by a fully attentive driver and only on highways without intersections.”13
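Restated as conditional logic, the manual’s two warning scenarios look roughly like the toy function below. This is purely an illustration of the conditions described in the preceding paragraph, not Tesla’s actual control software; the function name and parameters are invented for the example.

```python
def may_fail_to_brake(speed_mph: float,
                      obstacle_is_stationary: bool,
                      revealed_by_lane_change: bool) -> bool:
    """Return True in the scenarios the Owner's Manual flags as unreliable."""
    # Scenario 1: a non-moving object sits directly in the car's path.
    if obstacle_is_stationary and not revealed_by_lane_change:
        return True
    # Scenario 2: above 50 mph, the vehicle ahead changes lanes and reveals
    # a stationary vehicle; the system may not brake in time.
    if obstacle_is_stationary and revealed_by_lane_change and speed_mph > 50:
        return True
    return False

# At highway speed, a lead car swerving to reveal a stopped vehicle is a danger case.
print(may_fail_to_brake(65, obstacle_is_stationary=True, revealed_by_lane_change=True))  # True
```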

Despite the misleading nature of the term “autopilot,” which can understandably cause drivers to operate these cars in an unsafe manner, the term is unfortunately not the only reason for fear. Seth Fiegerman with CNN reports that employees of the perceived leader in the field, Tesla, worried that the company was not taking every possible precaution to ensure the safety of the vehicles.14 Fiegerman writes that “[t]hose building autopilot were acutely aware that any shortcoming or unforeseen flaw could lead to injury or death . . . .”15 But Tesla founder and CEO Elon Musk believes that autopilot has the potential to save lives by reducing human error; a source close to Tesla says his driving force is “don’t let concerns slow progress.”16 Some Tesla employees struggled with this, telling CNN Money in interviews that they knew they were “pushing the limits” and that they were scared “someone was going to die.”17 David Keith, an assistant professor of system dynamics at MIT Sloan School of Management, says, “It’s hard to believe a Toyota or a Mercedes would make that same tradeoff . . . [b]ut the whole ethos around Tesla is completely different: they believe in the minimum viable product you get out there that’s safe.”18

In the United States, there is little legislation governing or prohibiting the use of automated cars.19 As of 2016, only eight states and the District of Columbia have enacted autonomous vehicle legislation.20 Additional states are following suit; for example, Arizona Governor Doug Ducey signed an executive order in August 2015 directing agencies to “undertake any necessary steps to support the testing and operation of self-driving vehicles on public roads within Arizona.”21

In January 2016, U.S. Transportation Secretary Anthony Foxx unveiled a new policy that updates the National Highway Traffic Safety Administration’s (NHTSA) 2013 preliminary policy statement on autonomous vehicles, along with a commitment of almost $4 billion over the next ten years to accelerate the development and adoption of safe vehicle automation.23 Nevertheless, it seems California is leading the march against vehicle automation; a Senate bill proposed at the end of 2015 aims to rein in automation efforts despite manufacturers’ promises to reduce the 94 percent of accidents caused by human error and to bring everyday destinations within reach of those who might otherwise be excluded by their inability to drive a car.22

Senate Bill 1298 first establishes “certain vehicle equipment requirements, equipment performance standards, safety certifications, and any other matters the department concludes is necessary to ensure the safe operation of autonomous vehicles on public roads, with or without the presence of a driver inside the vehicle.”23 Its second component “requires people to operate their autonomous cars.”24 In addition, “driverless car manufacturers would also need to put their vehicles through a third-party safety test and provide measures to report accidents or car software hacks.”25 In this way, California would place a strong focus on safety in its legislation surrounding autonomous cars, despite the frustration it causes manufacturers.

The Legality of Ballot Selfies

Nothing is more prevalent and controversial in our current society than the upcoming presidential election. Every day the news cycle is fraught with talk of policies, appearances, and controversies. However, one aspect of this provocative topic that many people do not immediately think about is the influence of modern technology on the election cycle. With the increased accessibility of new technology and the desire to share all aspects of our lives with others, many technological issues have arisen in the political and legal world. Specifically, ballot selfies and public postings of ballots on social media have caused several legal issues that are currently being addressed.

For example, in 2015 a lawsuit was brought against the state of New Hampshire by three citizens who were convicted under a state statute for taking photographs of their marked ballots and publishing them on social media.1 The statute made it unlawful for a person to allow his or her ballot to be seen and prohibited a person from “taking a digital image or photograph of his or her marked ballot and distributing or sharing the image via social media . . .”2 The Plaintiffs argued that the statute violated their First Amendment right to freedom of expression and was thus unconstitutional.3

The First Amendment protects freedom of speech and expression.4 In the case above, the expression at issue was the sharing of the photos. The Plaintiffs argued that posting their photos was a form of political expression that should be protected.5 They also argued that any restriction on this expression would be a content-based restriction, meaning the expression was restricted purely because of its content, and that such restrictions are subject to strict scrutiny.6 Strict scrutiny is a standard of review that “requires the Government to prove that the restriction furthers a compelling interest and is narrowly tailored to achieve that interest,” a showing the Plaintiffs argued could not be made for the New Hampshire statute.7 On the other hand, the Defendants maintained that posting photos of ballots on social media could ultimately enable voter coercion or the selling of votes and would undermine the integrity of the voting process.8 Ultimately, the judge declared the New Hampshire law invalid for the reasons cited by the Plaintiffs, thereby making ballot selfies legal in the state of New Hampshire.9

Unfortunately, this case is not the end of the debate over ballot selfies and social media posts. In fact, the case is currently on appeal before the First Circuit Court of Appeals.10 The issue of ballot selfies is prevalent throughout the nation, and more than half of the states have laws similar to New Hampshire’s.11 Further demonstrating the importance of this issue, various social media companies have joined as amici curiae, arguing in favor of allowing the photos as an exercise of First Amendment rights.12 This issue could have serious implications for the voting process, as well as for First Amendment restrictions. The New Hampshire case demonstrates that something as innocent as a ballot selfie, though seemingly unimportant, can have a large impact on our law and society.

Supreme Court Upholds Apple’s Design Patents

In perhaps one of the largest patent infringement suits in the history of intellectual property, the US Supreme Court rendered a decision against Samsung in the famous Apple v. Samsung lawsuit.1 Apple Inc. first filed suit against Samsung in 2011, alleging infringement of Apple’s intellectual property in the design and utility of its iPhones and iPads.2 Specifically, Apple contended that Samsung infringed four design patents covering how its phones and tablets look and three utility patents covering how its devices work.3 Apple won on design patent infringement.

Thanks to Apple’s design patents, Samsung is required to pay an exorbitant damages award of $548 million for its use of the rounded colorful icons, the pinch-to-zoom feature, one-finger and two-finger scrolling, and the “bounce back” effect.4 This liability flows from the protection afforded by a type of patent known as a design patent, first awarded in the US in 1842.5 This type of patent protects ornamental designs: the form of a device rather than its function.6

Of course, no one can argue that Samsung copied Apple devices completely. There are consistent differences between Samsung and Apple phones, so much so that people are almost religious in their dislike of one over the other. However, because design patents can cover small portions of a whole, companies like Apple are able to patent something as seemingly minor as their icons, and companies like Samsung can be found to infringe those patents even if they have not copied the entire device.7 In fact, the jury found across the board that Samsung infringed Apple’s patents, quickly burying Samsung’s countersuit claiming that Apple had infringed some of its patents.8

This finding and award will make it incredibly difficult for any company to mimic Apple’s products, and, even beyond the tech world, it will make similarities to design-patented products risky in any field. The very particular degree of protection afforded to design patents makes for an interesting legal environment in light of Apple’s win. Normally, the appearance of a product is protected through trademark law if it comes to represent that product, or through copyright for features that are solely about appearance rather than function. Now that design patents are in the mix, courts will have to find a balance and a distinction between what constitutes a trademark issue and what should be allowed a patent. In today’s world especially, the design of any particular product is extremely valuable to its identity and its value in the market. Apple products are no exception. People flock to these gadgets in large part because of the way they look: simple, sleek, and futuristic.

In the tech world, the decision marks a new turn in the appearance and design of smartphones and tablets created to compete with Apple. Other products will either have clumsier interfaces, adopt subtle design changes to maneuver around the patents claimed by Apple, or come up with revolutionary design interfaces altogether. Whatever the case may be, the change in the smartphone industry will bring about fascinating new technology – a new generation of smartphone.

No Patent Infringement in Apple’s GPS Apps

On Wednesday, September 20, 2016, the Patent Trial and Appeal Board (PTAB) ruled on a challenge to patent claims. Three preeminent giants of the tech world, Apple, Google, and Samsung, were each successful in their individual defenses against a claim made by American Navigation Systems, Inc. (ANS).1 The dispute concerned ANS’s U.S. Patent No. 5,902,347 (the “’347 patent”), which issued on May 11, 1999. The patent abstract describes “a hand-held navigation, mapping and positioning device containing among other features a GPS receiver, a topical map, and a user interface.”2

Since the tech boom, the prevalence of smartphones has increased exponentially, and so has the competition.3 Phones are often reviewed based on their utility and available features. One of the most notable features is the ability to locate oneself and find directions to a nearby destination. This technology has tremendously affected how people go about daily travel and exploration. The implementation of Google Maps on these smartphones is what gives companies the reliable service that consumers are demanding.

In its complaint against Apple, filed on May 12, 2014, ANS alleged that Apple infringed its patent through its use of Google Maps and its own Apple Maps app, which were accessible through Apple’s smartphones, tablets, and other handheld electronics.4 ANS alleged that these two features drove Apple’s sales success across its numerous releases dating back to the initial iPhone in 2007, and it argued that the GPS technology used in Google Maps and Apple Maps is substantially similar to the ’347 patent and therefore infringes it.5 The PTAB ultimately ruled in favor of Apple, in part because the GPS features ANS claimed were already publicly known when its patent was granted. The Board was persuaded by Apple’s defense that a previously patented concept, published in 1992, renders the ’347 patent “obvious.”6

The date of the prior art was not the only problem ANS faced; there was also the manner in which its patent is described. The purpose of patent law is to protect an idea that rightfully belongs to its owner, not to cast a proverbial wall that keeps others from implementing anything caught within a wide net. A patent application must be specific and contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains to make and use it. Because ANS failed to meet this standard, its patent claims were ultimately declared invalid.

Streaming the Illegal

For those who are old enough to have lived through the rise and fall of Napster, the fear of punishment for being caught downloading music illegally was enough to make anyone stop. For those who are not old enough to know what Napster is, Napster was one of the first music file-sharing services, allowing users to download music that someone else had on their computer.1

Napster made its debut in late 1999, and in 2000 record labels saw the first drop “in global record sales.”2 Lawsuits ensued, and the founders of Napster eventually abandoned their creation after they “had been ordered to start charging [for the music] or else close entirely.”3 While the shutdown of Napster led to the creation of programs like iTunes and Spotify, it was not the end of pirating.

The litigation that enveloped the world because of Napster and other file-sharing sites such as KaZaA has not stopped people from hosting pirating sites, on which music and movies are available for download and streaming without authorization.4 However, “[r]ecord labels, movie studios, and ISPs have joined forces for an industry-led warning system that will notify users when they are suspected of illegally downloading music, TV shows, or movies.”5 This is the essence of the Copyright Alert System. It is enforced by having internet service providers (“ISPs”) send warnings to users; if the warnings are ignored, the ISPs may “turn to ‘Mitigation Measures,’” which include “temporary reductions of Internet speeds or redirection to a landing page until you contact your ISP to discuss the matter.”6
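The escalating-notice idea can be pictured with a toy model. The Python sketch below is illustrative only; the alert threshold and the particular mitigation step are assumptions made for the example, not the Copyright Alert System’s actual rules.

```python
from dataclasses import dataclass

@dataclass
class Subscriber:
    account_id: str
    alerts: int = 0
    mitigated: bool = False

def record_alert(sub: Subscriber, mitigation_threshold: int = 5) -> str:
    """Log one infringement notice; escalate to mitigation once the threshold is hit."""
    sub.alerts += 1
    if sub.alerts < mitigation_threshold:
        return f"{sub.account_id}: warning #{sub.alerts} sent"
    sub.mitigated = True
    return f"{sub.account_id}: mitigation applied (e.g., throttled speed or landing page)"

# Hypothetical subscriber accumulating alerts until mitigation kicks in.
user = Subscriber("acct-123")
for _ in range(5):
    print(record_alert(user))
```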

The question remains: what happens when a user streams pirated copyrighted works?

Many sophisticated users believe they will not get caught because they use a virtual private network (“VPN”) server, which creates an internet connection between the user’s router and a proxy server in a different location.7 This causes the user’s visible internet protocol (“IP”) address to be linked to the proxy server.8 The internet traffic that can be seen from the IP address linked to their home is only the traffic between their home router and that proxy server.9 That traffic is encrypted, which means no one can really tell what the user is doing.10
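The practical effect, that a remote site sees the proxy’s address rather than the home IP, can be demonstrated with a short script. The snippet below is a sketch that assumes you already have a VPN or HTTP proxy endpoint to route traffic through; the proxy address is hypothetical, and api.ipify.org is just one public IP-echo service.

```python
# Minimal illustration: the destination sees the proxy's IP, not the home IP.
import requests

PROXY = "http://vpn.example.com:3128"  # hypothetical proxy/VPN gateway

def visible_ip(proxies=None):
    """Ask a public IP-echo service which address it sees for this connection."""
    return requests.get("https://api.ipify.org", proxies=proxies, timeout=10).text

print("Direct connection:", visible_ip())
print("Through the proxy:", visible_ip({"http": PROXY, "https": PROXY}))
```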

Even for users who still do not fully understand what a VPN server does, the bottom line is that the risk of being caught remains. A “site that ‘makes available or facilitates the availability’ of rights-owners’ content without their permission is unlawful.”11 For the people who stream content from these sites, the activity “is generally legal.”12 It becomes illegal “[w]hen the user downloads even part of a file . . . [and] when the user streams content as a ‘public performance.’”13 But user beware: even with no plan to do anything illegal, there is the “risk of exposure to viruses . . . poor quality, pop-up ads, and other annoyances” that may make that free show or movie not worth it.14