“Blurred Lines” between Copyright and Inspiration

The number one hit song of 2013, “Blurred Lines” by Robin Thicke, sold over seven million copies and garnered over 341 million views on YouTube.1 Thicke, who admitted that he was inspired by Marvin Gaye,2 filed a preemptive suit in the United States District Court for the Central District of California after the song’s release, seeking a declaratory judgment once Gaye’s heirs threatened to sue for copyright infringement.3 Also named in the suit were Pharrell Williams, who produced the song, and Clifford “T.I.” Harris Jr., who provided the rap on the track.4 Thicke, Williams, and Harris argued in their complaint that they had not copied Gaye’s songs, that “Blurred Lines” was original, and that the song merely had the same “feel” and “sound” as Gaye’s work, and thus did not infringe any copyright.5

Gaye’s children, Frankie and Nona Gaye, along with Bridgeport Music, countered the preemptive suit by suing the trio for copyright infringement.6 The Gayes also sued EMI, a subsidiary of Universal Music Group, arguing that the company breached its fiduciary duties by not responding to the infringement claims.7 The children inherited the copyright in the composition of “Got to Give It Up,” the song they claim Thicke copied.8 Bridgeport Music, owner of the Funkadelic song “Sexy Ways,” similarly claimed copyright infringement,9 but dropped out of the suit after the parties reached an agreement.10

Thicke principally argued that his song’s feel was the essential style of a particular genre, and that a genre’s style is not copyrightable under copyright law.11 Thicke asserted that the Gayes were effectively claiming to own an entire genre, not a particular song.12 To sustain a copyright infringement claim, a plaintiff must show: “(1) they owned a valid copyright in the work alleged to have been infringed, and (2) the infringing party copied protected elements of that work.”13 The protected element of Gaye’s work was only the written composition, which, Thicke argued, differed from the “lyrics, rhythm, and melody” of his song.14

In addition to admitting that his song was influenced by Gaye, Thicke disclosed during his deposition that he was on alcohol and drugs while collaborating on the song, and that Williams had mainly written it.15 After these revelations, Thicke tried to win over the jury by singing a number of songs in front of the twelve-member panel.16 Unpersuaded by his performance and arguments, the jury found that “Blurred Lines” infringed “Got to Give It Up” and awarded Gaye’s children $7.4 million: $4 million for what they would have received had the song been licensed, and $3.4 million in profits.17

While this is a victory for copyright holders, many musical artists and companies worry that the judgment will chill the production and release of new songs that hold to the style of an established genre.18 In response, a number of artists grouped together and submitted an amicus curiae brief to persuade the court that its ruling will impede artists’ creativity and will, in essence, “punish songwriters for creating new music that is inspired by prior works . . . .”19 More than 200 artists, a number of them well known in the industry, joined the brief.20 Because there is no bright-line rule or test for a judge or jury to apply, many fear the decision will open the floodgates to litigation whenever a new song comes out.21 This raises the question: when is a song copying another song, and when is a song merely influenced by another song?

Sex + Extortion: A Call for Federal Criminalization of a Rising Cybercrime

“I can never get that photo back. It’s out there forever.”1 These were the words of 15-year-old Amanda Todd as she documented her story of bullying, harassment, and extortion on YouTube.2 Todd used flashcards to narrate how she became a victim, detailing how her aggressor tormented her by posting a compromising photo of her on the Internet after she refused to give in to his sexual demands.3

While Todd was struggling to find an escape route from her tormentor, Luis Mijangos was being prosecuted in the United States for hacking into the computers of approximately 230 victims and blackmailing them for sexual material.4 Federal investigation revealed that the 32-year-old paraplegic had more than 15,000 webcam captures, 900 audio recordings, and 13,000 screen captures of his victims saved on his computer.5

Todd committed suicide just over a month after posting the 2012 YouTube video.6 To date, the Canadian teenager’s video imploring viewers for help and support7 has garnered more than eleven million views.8 Todd’s suspected tormentor is a 38-year-old Dutch man, who will face separate charges for blackmail and distribution of child pornography in the Netherlands before being extradited to Canada to stand trial.9 Mijangos, meanwhile, whose blackmailing and harassment scheme reached as far as New Zealand, was sentenced to six years and is scheduled to be released next year.8

Both cases highlight the devastating and egregious effects of the cybercrime phenomenon informally known as “sextortion.” Sextortion is the use of coercion to intimidate victims into satisfying demands for sexual images and favors.9 With the advent of new technologies and cyberspace, clear and defined borders have lost their meaning as human interaction over the World Wide Web has grown exponentially. Sextortion is thus a sex crime that transcends national borders: “For the first time in the history of the world, the global connectivity of the Internet means that you don’t have to be in the same country as someone to sexually menace that person.”10

Given the serious privacy issues that arise in connection with this cybercrime, one would think the subject has been thoroughly addressed. However, sextortion remains heavily understudied.11 Although the underlying conduct has always existed, sextortion has only recently garnered media attention as a growing threat,12 and a proposal for the federal criminalization of sextortion was introduced just this July.13

Although sextortion is recognized as a crime in the U.S., no state or federal statute specifically classifies it as such.14 Instead, sextortion is prosecuted under a myriad of federal and state laws concerning extortion, cyber hacking, and child pornography, producing inconsistent sentencing across jurisdictions.15 Furthermore, because of the Government’s strong interest in protecting minors, child pornography laws produce starkly different sentencing results in federal courts between sextortion cases involving minors and those involving adults.16

Although much remains to be done to tighten cyber security and online privacy, the passage of the Interstate Sextortion Prevention Act, which calls for the federal criminalization of sextortion, is a first step in that direction.17

Security Overhaul in the iPhone 7

On September 7, 2016, Apple, Inc. (“Apple”) revealed its much-anticipated new product, the iPhone 7.1 The reveal was met with mixed reviews, with much of the focus on the company’s decision to remove the 3.5mm headphone jack in an attempt to push users to wireless headphones.2 However, lost in the debate over whether Apple made a mistake in removing the headphone jack was the overhaul Apple had made to the security of its products.

In February 2016, a United States Magistrate Judge issued an order pursuant to the All Writs Act directing Apple to assist the Federal Bureau of Investigation (“FBI”) in bypassing the passcode security feature of an iPhone 5c.3 The order requested that Apple develop a new version of the iPhone’s operating system that would allow the FBI to circumvent the phone’s encryption and security systems.4 Apple declined, and its CEO, Tim Cook, released a letter to Apple’s customers addressing the order and insisting that the company would not acquiesce to the request or honor any similar requests in the future.5 Further, Cook stated, “In today’s digital world, the ‘key’ to an encrypted system is a piece of information that unlocks the data, and it is only as secure as the protections around it. Once the information is known, or a way to bypass the code is revealed, the encryption can be defeated by anyone with that knowledge.”6

As a result of Apple’s refusal to create the software, Apple and the FBI were set to appear in court on March 22, 2016. However, the FBI claimed to have found a third party capable of bypassing the phone’s security and subsequently withdrew its request.7 While the case may have ended when the request was withdrawn, the issue continues to be a hot topic in the world of technology. In response to the standoff between Apple and the FBI, many major competitors, such as Facebook and Google, pledged their support for Apple and vowed to implement their own versions of encryption in their software.

In response to the case, Apple made a major commitment to the security and encryption of its software and products.8 Apple has gone on to state that it believes it is the “most effective security organization in the world.”9 With the release of the newest iPhone and operating system, Apple has added many features aimed at better protecting its customers’ data and information. iPhones will now utilize the Apple File System (“APFS”), which “improves the way information is organized and protected to make it faster and more secure.”10 APFS will introduce many new encryption and security features that will make it much more difficult for hackers to access information stored on Apple products.

As Apple continues to introduce new innovations in the way it protects its customers’ data and information, the FBI will find it increasingly difficult to bypass those security features and access the information it desires. Undoubtedly, it will be very interesting to see where the battle over information between technology companies and the FBI goes from here.

Privacy Concerns and Fitness Trackers in the Workplace

There has been an enormous increase in recent years in the number of people utilizing “wearable technology.”1 Wearable technology can be described as devices that have the ability to “collect data [and] track activities.”2 Fitness trackers, including Fitbit3 and Jawbone4, have been part of this market growth.5

Employers have latched on to this trend by encouraging,6 and sometimes mandating,7 that employees wear these devices as part of a health and wellness program.8 Approximately ninety percent of companies offer wellness programs,9 and about forty to fifty percent use fitness trackers as part of these programs.10 Employers have growing financial incentives to encourage, or mandate, that employees wear these trackers.11 Similarly, employees have incentives to take part in health programs that utilize fitness trackers, because participation can reduce the price of their health care plans.12

There are concerns, however, due to the tracking component of these fitness devices. Fitbit, for example, enables constant tracking,13 which can benefit employees who wish to track their activity, food, and exercise.14 However, there are also concerns about employees’ privacy and the data these devices collect.15 Employers will have access to ample information about employees gathered through these tracking devices.16

Moreover, there are additional concerns that employers could factor the information gathered from the tracking devices into employment decisions, including raises and promotions.17 If employers were to use this data in job performance reviews, it could trigger a new wave of litigation from less active, or disabled, employees.18

Additionally, many of the companies that develop these fitness trackers, including Fitbit19 and Apple20, sell the data collected from these devices to employers and third parties. There are additional concerns that these wearables can track employees’ locations and may have audio and video recording features.21 Employers could potentially track employees, and their exact locations, by “spying” on them in and out of the workplace.22 This has exposed a gap in privacy law where consumer information is unprotected.23

There are many benefits for employees who use wearable technology, including sleep, activity, and health and wellness management.24 Additionally, certain professions, such as medicine, may be drastically improved by wearable technologies.25 However, there is not enough, or seemingly any, legislation to protect employees.26 Currently, the Health Insurance Portability and Accountability Act, the Americans with Disabilities Act Amendments Act, the Electronic Communications Privacy Act, and the Computer Fraud and Abuse Act do not protect employees from this very specific form of health data collection.27

The only way to protect employees who wish to use these trackers during their employment is to create specific rules, laws, and guidelines for employers, regulating what employers can and cannot do with the information they collect. Further, employees should be made aware of what data is being collected from them and who is able to access it, with the option to allow or disallow its collection.

Privacy Victory or Criminal Loophole?: Implications of Second Circuit’s Decision in Microsoft v. USA

Many are calling the recent Second Circuit decision in Microsoft Corp. v. USA “a victory for privacy.”1 The Second Circuit ruled that a Stored Communications Act (SCA) warrant does not compel production of email content stored exclusively on foreign servers.2 The Stored Communications Act was passed in 1986 with the intent “to protect the privacy of digital communications.”3 The SCA allows a customer to sue a service provider that discloses private data under the law unless the disclosure was made in “good faith reliance on a warrant, order, or subpoena.”4

After asserting that a Microsoft email account was being used to facilitate drug trafficking, federal prosecutors in New York served a search warrant on Microsoft Corporation seeking disclosure of “information associated with a particular individual’s email address, including the email contents.”1 Microsoft complied in part with the search warrant, providing basic “information about the customer that was being stored” on its United States servers.5 However, Microsoft refused to provide the email contents, asserting they were stored on its servers in Dublin, Ireland.6 Microsoft justified its refusal to fully comply with the search warrant on the grounds that the court did not have the authority to compel the production of data maintained outside of the United States.7 The Second Circuit agreed.

The court reasoned that traditional federal rules dictate that warrants issued by the courts only permit law enforcement officials to search property within the boundaries of the United States.8 Even when the property the government seeks is electronic in nature, the same rules apply. The Second Circuit’s ruling demonstrates the United States’ desire to avoid interfering with the laws of foreign countries.9 Foreign countries should be free to utilize the services and technology of American tech companies without having to answer to the United States government.10

On its face this may seem like a victory for privacy, but what are the negative implications of this decision? In theory, the decision has created a loophole that will allow criminals to conduct illegal business activity through email. Upon registering for a Microsoft email account, an individual is prompted to enter their location, and Microsoft takes this information at face value.11 Based on the location entered by the customer, Microsoft will then store the customer’s email contents in a “data center assigned to that country.”12 As long as a criminal enters a location outside of the United States, the contents of their emails will remain private even if a search warrant is issued. The only way U.S. law enforcement will be able to access the emails it seeks is by collaborating with law enforcement officials in the country where the data is being stored.13 This could significantly thwart law enforcement’s ability to investigate illicit activity. In addition, this decision could have a significant impact on U.S. service “providers’ decisions to exclusively store information abroad.”14 More U.S.-based service providers may opt to store data exclusively on foreign servers to protect their customers’ privacy rights and to decrease the likelihood of U.S. government interference with those rights.[Id.]

FinTech: Trust in the Trustless, Advancing Blockchain Regulation

‘FinTech’, short for Financial Technology, is overhauling the legal playing field. As an umbrella term, FinTech also encompasses educational technology (“Ed Tech”) and regulatory technology (“RegTech”). One of the fastest-growing sectors of FinTech involves banking software and crypto-currency. Crypto-currency, the most popular of which is “Bitcoin,” is transferred through blockchain technology: blockchain is the financial infrastructure through which Bitcoin and other crypto-currencies are exchanged. As an emerging technology, blockchain has immense potential benefits, such as “lower transaction costs,” “potential to combat poverty and oppression,” and “stimulus for financial innovation.”1 Conversely, there are many risks that warrant regulation, such as the Bitfinex hack,2 double spending,3 and anonymity.4

Currently there is a public policy debate over how to regulate blockchain technology: whether to allow “permissionless” blockchains, like Bitcoin’s, or to allow only private “permissioned” blockchains, like those being designed at many large banks.5 Bitcoin can be used as currency in select locations, bought and sold as a currency, and mined.6 The Bitcoin mining process records transactions on the blockchain public ledger while also distributing new Bitcoins to the miners.7
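To make the mining step concrete, here is a minimal sketch in Python under toy assumptions: a block is a plain dictionary, the difficulty is fixed at four leading zeros, and hashing is a single SHA-256 over the block’s JSON. (Real Bitcoin mining uses double SHA-256 over a binary block header against a dynamically adjusted target; the names below are illustrative, not any real library’s API.)

```python
import hashlib
import json

def mine(block: dict, difficulty: int = 4) -> dict:
    """Search for a nonce that makes the block's hash start with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        block["nonce"] = nonce
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith(prefix):
            block["hash"] = digest  # record the winning hash on the block
            return block
        nonce += 1

# Example: mining a block that records one hypothetical transaction
# on the public ledger. The miner's reward would be a further transaction.
block = {"transactions": [{"from": "alice", "to": "bob", "amount": 1.5}],
         "prev_hash": "0" * 64}
mined = mine(block)
print(mined["nonce"], mined["hash"])
```

The expensive part is the brute-force search for the nonce; verifying the result takes a single hash, which is what lets the rest of the network cheaply check a miner’s work.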

Examples of blockchain technology in practice can be found within FinTech alliance groups such as the R3 Consortium, a group of over 40 banks worldwide working together to innovate in the financial market.8 Currently, the R3 Consortium is targeting “secure and fast solutions for payment transactions and securities trading.”9 Simply put, a blockchain is an automated database that stores transactions: “a data-base for transactions that manages itself according to rules that have been set and is tamper proof.”10
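The “tamper proof” property in that quotation comes from hash-chaining: each block stores the hash of its predecessor, so rewriting any past transaction breaks every link after it. A minimal sketch, again under the same toy assumptions rather than any production format:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, excluding its own stored hash."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # block contents were altered after hashing
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the previous block is broken
    return True

# Build a two-block ledger, then tamper with a historical transaction.
b1 = {"transactions": [{"from": "alice", "to": "bob", "amount": 1.5}],
      "prev_hash": "0" * 64}
b1["hash"] = block_hash(b1)
b2 = {"transactions": [{"from": "bob", "to": "carol", "amount": 0.5}],
      "prev_hash": b1["hash"]}
b2["hash"] = block_hash(b2)

chain = [b1, b2]
print(is_valid(chain))                       # True
chain[0]["transactions"][0]["amount"] = 100  # rewrite history
print(is_valid(chain))                       # False: tampering is detected
```

In a real permissionless network this check is run independently by every node, which is why altering history would require out-computing the rest of the network.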

Regulation and implementation of blockchain technology is limited, although the Financial Stability Oversight Council (FSOC) has recognized blockchain technology as a “valuable mechanism for improving market transparency.”11 Policymakers have identified many potential risks and abuses of Bitcoin, including black-market transactions,12 tax evasion,13 money laundering,14 and terrorist financing.15 Policymakers are currently revisiting “complex, interwoven regulatory frameworks—primarily banking laws, commodities laws, and securities laws—to shoehorn the technology into existing frameworks and consider where new ones might be appropriate.”16 According to the IRS, virtual currencies should be treated as property for federal tax purposes.17 In short, there are still many kinks in the technology that would create large risks in mainstream financial infrastructure. Bitcoin continues to exist with or without government adoption, and it should be allowed to continue doing so until a more reliable network can be established.

YouTube Demonetization and Vlogging Woes

In the age of technology, it is not such a far-flung idea that people can make a living by filming themselves speaking about whatever comes to mind, posting that video on a website, and waiting for the money to roll in. The most prolific of these host sites is YouTube, which draws an estimated 1,000,000,000 unique monthly visitors.1 But how do content creators profit from their videos through YouTube, and what say does YouTube have in which videos are eligible to earn revenue?

YouTube content creators, or ‘vloggers’, can make their money in a number of different ways, one of which is to allow advertisements to be displayed before their videos.23 YouTube has recently come under fire from content creators after it began flagging multiple videos a day as ineligible to earn money through advertising.4 YouTube’s current guidelines for being ‘advertiser-friendly’ mean that a video cannot show things including, but not limited to, “[s]exually suggestive content […], violence, […] promotion of drugs and regulated substances, […] or controversial or sensitive subjects and events, including subjects related to war, political conflicts, natural disasters and tragedies, even if graphic imagery is not shown.”5 YouTube has assured its users that their content is not facing stricter guidelines, but that its method of alerting creators to the demonetization of their videos has been modified.6

However, just because YouTube hasn’t further restricted its ‘advertiser-friendly’ guidelines doesn’t mean that creators accept what many deem to be censorship according to vague guidelines enforced by a simple algorithm capable of making errors.7 It will be important to see what recourse a creator will have against YouTube in the event that a video is incorrectly demonetized, causing them to lose advertising revenue. A creator can earn an estimated $18.00 per 1,000 views of a video.8 If a video is demonetized for a full 24 hours, the lost revenue can run to hundreds, even thousands, of dollars, as the rough arithmetic below illustrates.

This problem may be further exacerbated by YouTube’s new YouTube Heroes initiative, “a global community of volunteer contributors who help create the best possible YouTube experience for everyone.”9 This crowdsourced censorship allows volunteer YouTube viewers to report inappropriate videos to earn points.10 Much like Twitter bashing, this platform may open an avenue for individuals to over-report videos they disagree with, leading to additional inaccurately demonetized videos. Whichever method causes videos to be incorrectly deemed not advertiser-friendly, YouTube will have to deal with content creators who demand to be compensated for lost advertising revenue, leading to litigation with thousands of individuals who have the support of the internet masses behind them.
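To put that $18-per-1,000-views figure in perspective, here is a quick back-of-the-envelope calculation; the daily view counts are hypothetical examples, not data from any particular channel:

```python
# Estimated revenue lost during a 24-hour demonetization, using the
# ~$18.00 per 1,000 views figure cited above. View counts are hypothetical.
RATE_PER_1000_VIEWS = 18.00

for daily_views in (10_000, 100_000, 1_000_000):
    lost = daily_views / 1000 * RATE_PER_1000_VIEWS
    print(f"{daily_views:>9,} views/day -> ${lost:,.2f} lost for the day")
```

A mid-sized channel drawing 100,000 views in a day would forgo roughly $1,800, consistent with the “hundreds, even thousands of dollars” estimate above.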

Weighing the Costs of Privacy and Security

Security. For most people, this means putting their money in a bank, having basic home security, and using their birthday as the password on their computer. A simple concept, and for the most part an inexpensive one.

On a national scale, however, security becomes prohibitively expensive, not only in money but in the cost to citizens’ privacy and liberty. These costs were thrown into relief when, in the wake of the San Bernardino terror attack, the Department of Justice (DOJ) demanded under the All Writs Act of 1789 that Apple create a program which would allow the DOJ to access the information on the San Bernardino terrorist’s phone.1 When Apple refused, the DOJ obtained a federal court order to compel Apple to produce such a program.2

This particular request is almost unrivaled in its audacity compared to previous advances by the United States Government.3 While national security is vital to our nation’s interests, this grab at power by our government ups the ante: it does not merely limit private citizens, corporations, and entities in a preventative sense, but affirmatively forces them to act.

There are three problems with this. First, providing the government with access to anyone’s private phones, computers, and documents at any time may violate the Fourth Amendment.4 Second, forcing a citizen or entity to act affirmatively can violate their basic freedom and liberty under the First and Fifth Amendments.5 Finally, when the executive branch gets to decide what can be demanded from a citizen in the name of homeland security, judicial oversight is limited.6 The only remaining question would then be what your country can request, or demand, from you. If you ask Stalin or Mussolini: a lot.

Until now, and even with the recent decision in Sebelius,7 the government could compel private citizens to act only in very limited circumstances. Compelling private entities to create and do things as the government wishes is a vastly enlarged scope of government power with untold consequences. Of course, anything done will be in the name of national security and the United States’ interest. But the key word is “national” security: this is for the “greater good,” not the individual citizen. Further, everything has a good reason and a real reason; all the government needs to put forth to compel citizen action under rational-basis review is the good reason.

Also, as evidenced by the debacle between Edward Snowden and the NSA,8 once the government has a power, it cannot blithely be assumed that the government will use that power appropriately and fairly. If the government can compel entities to produce programs, what else can it compel? Can the government demand that every citizen register and produce the keys and passwords to their home, car, personal locker, computer, and phone “just in case” the government needs to enter? True, in this instance the request was part of an investigative proceeding, but it was not a request that applied only to that security situation; it encompassed the privacy of millions of users. Further, it demanded that Apple affirmatively act.

In conclusion, given the amount of access millions of users sign away in phone contracts and when signing up for the latest apps, one has to wonder how much we as a society truly value our privacy, and therefore how heavily such privacy concerns should weigh against our national security needs.

Racial Profiling by the Government Sent Right to Your Cell Phone? How Can This Be Stopped?

In the new era of technology, cellphones have become an indispensable part of daily human activity. Accordingly, the government has implemented new methods of communicating with citizens in emergencies.1 Wireless Emergency Alerts (WEAs), implemented pursuant to the Warning, Alert and Response Network (WARN) Act,2 allow “national, state, or local governments [to] send alerts regarding public safety emergencies.”3 These alerts are free, and consumers have the option to block all of them through their wireless carriers except presidential warnings.4

While WEAs sound like excellent news for society, they are extremely limited.5 WEAs are capped at 90 characters, less than Twitter’s limit for tweets; no pictures may accompany the text, and no clickable images or links can be included in the alerts.6 Essentially, the government has to communicate threats, weather alerts, Amber Alerts, and other public safety emergencies in less than three lines.7 This can be problematic, especially when the names of suspects for Amber Alerts or other threats are sent via this method.

The text limit on WEAs can become a vehicle for explicit racial profiling by the authorities permitted to send these messages. For example, providing nothing but a name associated with a certain ethnicity or religion, or a skin-color description of a wanted individual, can severely impact minorities and their relationships within their communities.8 According to the American Psychological Association (APA), research has shown that the effects of racial profiling on minority groups include “post traumatic stress disorder (PTSD), perceptions of race-related threats and failure to use available community resources.”9 In addition, APA psychologists have concluded that racial profiling affects not only the individual, but “also impacts families, friends, classmates, and neighbours.”10 Allowing WEAs that potentially racially profile a minority group to continue reaching devices nationwide “means that the social and economic cost of racial profiling is widespread.”11

In New York, on September 19, 2016 at around 8:30 a.m., every phone with emergency alerts activated received a WEA that read, “WANTED: Ahmad Khan Rahami, 28-yr-old male. See media for pic. Call 9-1-1 if seen.”12 This emergency alert contained neither a picture nor a hyperlink, and it was also extremely racist.13 It potentially left open ground for people all over New York City to call 9-1-1 whenever they saw someone who might plausibly look like a Muslim man named “Ahmad.”14
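For a sense of how little room the cap leaves, here is a quick check of the alert text quoted above against the 90-character limit (a simple illustration, not part of any official WEA tooling):

```python
# The September 19, 2016 alert text, measured against the 90-character WEA cap.
WEA_CHAR_LIMIT = 90
alert = "WANTED: Ahmad Khan Rahami, 28-yr-old male. See media for pic. Call 9-1-1 if seen."

used = len(alert)  # 81 characters
print(f"{used} of {WEA_CHAR_LIMIT} characters used; {WEA_CHAR_LIMIT - used} remaining")
```

Only a handful of characters remained beyond a name, an age, and a call-to-action, leaving no room for detail that could have narrowed the description.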

Perhaps WEAs do not violate the law by sending alerts that racially profile, but a proposed solution to this issue is to amend the WARN Act to explicitly require that WEAs support hyperlinks, pictures, and more than 90 characters of text.15 In addition, if the WARN Act provided that WEAs cannot be sent to the mass population when nothing but a description, with no picture, is available, this could prevent racial profiling like that which occurred in New York on September 19, 2016. Finally, if the WARN Act were amended to require government officials to receive training on sending appropriate WEAs without harming minorities, perhaps WEAs could be more effective.

The Use of Stingrays – Constitutional?

We are not talking about the stingray found in the ocean. We are talking about a new, emerging piece of technology that can spy on people, record them, and track them down via their cell phones. This piece of technology is called the StingRay, and its widespread use has been kept under wraps for almost two decades.1 Police have been using this spy tool, most often without a proper warrant or permission from supervising officers.2

As it stands today, sixty government agencies in twenty-three states employ StingRay technology, allowing police to grab cell phone data, text messages, and more.3 Those states are Washington, California, Idaho, Arizona, Texas, Tennessee, Georgia, Florida, Louisiana, Delaware, Oklahoma, Michigan, Wisconsin, Illinois, Minnesota, North Carolina, Maryland, Pennsylvania, New York, Virginia, Massachusetts, Hawaii, and Missouri.4

This portable, high-tech scanning device, usually mounted in a police vehicle, works by first masquerading as a cell tower.5 Cell phones constantly seek the nearest cell tower, even when not in use.6 A phone near a police StingRay may connect to it and route its data through the device.7 The data is relayed to a connected laptop, which displays and translates it for the officers.8 The data is then passed on to an actual cell tower, and the phone’s user never knows the difference.9 Police can obtain the identity of the phone’s user, call records, voicemails, text messages, the location of the connected phone, and much more.10 When a StingRay is used to track a suspect’s cell phone, it also gathers information about the phones of countless bystanders who happen to be nearby, even if they are not targets of the surveillance.11 In essence, StingRays are invasive cell phone surveillance devices that mimic cell towers and send out signals to trick cell phones in the area into transmitting their locations and other identifying information.

The use of these devices by government agencies amounts to warrantless cell phone tracking, as they have frequently been used without informing the courts or obtaining a warrant.12 The Fourth Amendment protects the public from unreasonable searches.13 As such, the use of StingRays for warrantless cell phone searches should be held unconstitutional. In Kyllo v. United States, the Supreme Court ruled that thermal imaging of Kyllo’s home constituted a search within the meaning of the Fourth Amendment.14 Because the police did not have a warrant when they used the device, which was not commonly available to the public, the search was presumptively unreasonable and therefore unconstitutional.15 The majority held that a person has an expectation of privacy in his or her home, and therefore the government cannot conduct unreasonable searches, even with technology that does not enter the home.16 Justice Scalia discussed how future technology can invade one’s right of privacy, and he authored the opinion so that it would protect against more sophisticated surveillance equipment.17 Accordingly, Justice Scalia asserted that the difference between “off the wall” surveillance and “through the wall” surveillance was non-existent, because both methods physically intruded upon the privacy of the home.18

Here, the facts about the use and operation of StingRay technology are analogous to the use of thermal imaging. Justice Scalia warned the American people about the more sophisticated, invasive technology of the future,19 and here it is: StingRay devices. For the first time, a federal judge, Judge Pauley, has put everyone on notice about warrantless searches by kicking DEA StingRay evidence to the curb in court, deeming its warrantless use unconstitutional.20

At this point, the main problem going forward is that citizens subjected to StingRay technology may have a difficult time defending their right to privacy and their constitutional right against warrantless searches. The chief reason is that most people are unaware of the mere existence of this technology and thus often unknowingly fall victim to this “all-you-can-eat data buffet.” The proper call to action, then, is for all police departments to abandon this technology, as it directly violates each American citizen’s right against unreasonable searches and invades their privacy. It is, of course, hard to rule something unconstitutional if the case never reaches the Supreme Court. Our challenge then becomes: how do we as American citizens protect our rights against invasions of privacy and unreasonable searches if we do not know our rights are being violated? StingRays are so secretive, and so firmly deemed classified information by the FBI,21 that moving forward this will be an arduous task for the American people.