Privacy News Stories
Excerpts of Key Privacy News Stories in Major Media
Below are key excerpts of revealing news articles on privacy and mass surveillance issues from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Note: This comprehensive list of news stories is usually updated once a week. Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.
Larry Ellison, the billionaire cofounder of Oracle ... said AI will usher in a new era of surveillance that he gleefully said will ensure "citizens will be on their best behavior." Ellison made the comments as he spoke to investors earlier this week during an Oracle financial analysts meeting, where he shared his thoughts on the future of AI-powered surveillance tools. Ellison said AI would be used in the future to constantly watch and analyze vast surveillance systems, like security cameras, police body cameras, doorbell cameras, and vehicle dashboard cameras. "We're going to have supervision," Ellison said. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on." Ellison also expects AI drones to replace police cars in high-speed chases. "You just have a drone follow the car," Ellison said. "It's very simple in the age of autonomous drones." Ellison's company, Oracle, like almost every company these days, is aggressively pursuing opportunities in the AI industry. It already has several projects in the works, including one in partnership with Elon Musk's SpaceX. Ellison is the world's sixth-richest man with a net worth of $157 billion.
Note: As journalist Kenan Malik put it, "The problem we face is not that machines may one day exercise power over humans. It is rather that we already live in societies in which power is exercised by a few to the detriment of the majority, and that technology provides a means of consolidating that power." Read about the shadowy companies tracking and trading your personal data, which isn't just used to sell products. It's often accessed by governments, law enforcement, and intelligence agencies, often without warrants or oversight. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In an exchange this week on "All-In Podcast," Alex Karp was on the defensive. The Palantir CEO used the appearance to downplay and deny the notion that his company would engage in rights-violating surveillance work. "We are the single worst technology to use to abuse civil liberties, which is by the way the reason why we could never get the NSA or the FBI to actually buy our product," Karp said. What he didn't mention was the fact that a tranche of classified documents revealed by [whistleblower and former NSA contractor] Edward Snowden and The Intercept in 2017 showed how Palantir software helped the National Security Agency and its allies spy on the entire planet. Palantir software was used in conjunction with a signals intelligence tool codenamed XKEYSCORE, one of the most explosive revelations from the NSA whistleblower's 2013 disclosures. XKEYSCORE provided the NSA and its foreign partners with a means of easily searching through immense troves of data and metadata covertly siphoned across the entire global internet, from emails and Facebook messages to webcam footage and web browsing. A 2008 NSA presentation describes how XKEYSCORE could be used to detect "Someone whose language is out of place for the region they are in," "Someone who is using encryption," or "Someone searching the web for suspicious stuff." In May, the New York Times reported Palantir would play a central role in a White House plan to boost data sharing between federal agencies, "raising questions over whether [Trump] might compile a master list of personal information on Americans that could give him untold surveillance power."
Note: Read about Palantir's revolving door with the US government. As former NSA intelligence official and whistleblower William Binney articulated, "The ultimate goal of the NSA is total population control." For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Meta whistleblower Sarah Wynn-Williams, the former director of Global Public Policy for Facebook and author of the recently released tell-all book "Careless People," told U.S. senators ... that Meta actively targeted teens with advertisements based on their emotional state. In response to a question from Sen. Marsha Blackburn (R-TN), Wynn-Williams admitted that Meta (which was then known as Facebook) had targeted 13- to 17-year-olds with ads when they were feeling down or depressed. "It could identify when they were feeling worthless or helpless or like a failure, and [Meta] would take that information and share it with advertisers," Wynn-Williams told the senators on the subcommittee for crime and terrorism. "Advertisers understand that when people don't feel good about themselves, it's often a good time to pitch a product – people are more likely to buy something." She said the company was letting advertisers know when the teens were depressed so they could be served an ad at the best time. As an example, she suggested that if a teen girl deleted a selfie, advertisers might see that as a good time to sell her a beauty product as she may not be feeling great about her appearance. They also targeted teens with ads for weight loss when young girls had concerns around body confidence. If Meta was willing to target teens based on their emotional states, it stands to reason they'd do the same to adults. One document displayed during the hearing showed an example of just that.
Note: Facebook hid its own internal research for years showing that Instagram worsened body image issues, revealing that 13% of British teenage girls reported more frequent suicidal thoughts after using the app. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
There has been a surge of concern and interest in the threat of "surveillance pricing," in which companies leverage the enormous amount of detailed data they increasingly hold on their customers to set individualized prices for each of them – likely in ways that benefit the companies and hurt their customers. The central battle in such efforts will be around identity: do the companies whose prices you are checking or negotiating know who you are? Can you stop them from knowing who you are? Unfortunately, one day not too far in the future, you may lose the ability to do so. Many states around the country are creating digital versions of their state driver's licenses. Digital versions of IDs allow people to be tracked in ways that are not possible or practical with physical IDs – especially since they are being designed to work ... online. It will be much easier for companies to request – and eventually demand – that people share their IDs in order to engage in all manner of transactions. It will make it easier for companies to collect data about us, merge it with other data, and analyze it, all with high confidence that it pertains to the same person – and then recognize us ... and execute their price-maximizing strategy against us. Not only would digital IDs prevent people from escaping surveillance pricing, but surveillance pricing would simultaneously incentivize companies to force the presentation of digital IDs by people who want to shop.
Note: For more along these lines, read our concise summaries of news articles on corporate corruption and the disappearance of privacy.
Digital technology was sold as a liberating tool that could free individuals from state power. Yet the state security apparatus always had a different view. The Prism leaks by whistleblower Edward Snowden in 2013 revealed a deep and almost unconditional cooperation between Silicon Valley firms and security apparatuses of the state such as the National Security Agency (NSA). People realized that basically any message exchanged via Big Tech firms including Google, Facebook, Microsoft, Apple, etc. could be easily spied upon with direct backdoor access: a form of mass surveillance with few precedents ... especially in nominally democratic states. The leaks prompted outrage, but eventually most people preferred to look away. The most extreme case is the surveillance and intelligence firm Palantir. Its service is fundamentally to provide a more sophisticated version of the mass surveillance that the Snowden leaks revealed. In particular, it endeavors to support the military and police as they aim to identify and track various targets – sometimes literal human targets. Palantir is a company whose very business is to support the security state in its most brutal manifestations: in military operations that lead to massive loss of life, including of civilians, and in brutal immigration enforcement [in] the United States. Unfortunately, Palantir is but one part of a much broader military-information complex, which is becoming the axis of the new Big Tech Deep State.
Note: For more along these lines, read our concise summaries of news articles on corruption in the intelligence community and in Big Tech.
AI's promise of behavior prediction and control fuels a vicious cycle of surveillance which inevitably triggers abuses of power. The problem with using data to make predictions is that the process can be used as a weapon against society, threatening democratic values. As the lines between private and public data are blurred in modern society, many won't realize that their private lives are becoming data points used to make decisions about them. What AI does is make this a surveillance ratchet, a device that only goes in one direction, which goes something like this: To make the inferences I want to make to learn more about you, I must collect more data on you. For my AI tools to run, I need data about a lot of you. And once I've collected this data, I can monetize it by selling it to others who want to use AI to make other inferences about you. AI creates a demand for data but also becomes the result of collecting data. What makes AI prediction both powerful and lucrative is being able to control what happens next. If a bank can claim to predict what people will do with a loan, it can use that to decide whether they should get one. If an admissions officer can claim to predict how students will perform in college, they can use that to decide which students to admit. Amazon's Echo devices have been subject to warrants for the audio recordings made by the device inside our homes – recordings that were made even when the people present weren't talking directly to the device. The desire to surveil is bipartisan. It's about power, not party politics.
Note: As journalist Kenan Malik put it, "It is not AI but our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us." Read about the shadowy companies tracking and trading your personal data, which isn't just used to sell products. It's often accessed by governments, law enforcement, and intelligence agencies, often without warrants or oversight. For more, read our concise summaries of news articles on AI.
Reviewing individuals' social media to conduct ideological vetting has been a defining initiative of President Trump's second term. As part of that effort, the administration has proposed expanding the mandatory collection of social media identifiers. By linking individuals' online presence to government databases, officials could more easily identify, monitor, and penalize people based on their online self-expression, raising the risk of self-censorship. Most recently, the State Department issued a cable directing consular officers to review the social media of all student visa applicants for "any indications of hostility towards the citizens, culture, government, institutions or founding principles of the United States," as well as for any "history of political activism." This builds on earlier efforts this term, including the State Department's "Catch and Revoke" program, which promised to leverage artificial intelligence to screen visa holders' social media for ostensible "pro-Hamas" activity, and U.S. Citizenship and Immigration Services' April announcement that it would begin looking for "antisemitic activity" in the social media of scores of foreign nationals. At the border, any traveler, regardless of citizenship status, may face additional scrutiny. U.S. border agents are authorized to ... examine phones, computers, and other devices to review posts and private messages on social media, even if they do not suspect any involvement in criminal activity or have immigration-related concerns.
Note: Our news archives on censorship and the disappearance of privacy reveal how government surveillance of social media has long been conducted by all presidential administrations and all levels of government.
Data brokers are required by California law to provide ways for consumers to request their data be deleted. But good luck finding them. More than 30 of the companies, which collect and sell consumers' personal information, hid their deletion instructions from Google. This creates one more obstacle for consumers who want to delete their data. Data brokers nationwide must register in California under the state's Consumer Privacy Act, which allows Californians to request that their information be removed, that it not be sold, or that they get access to it. After reviewing the websites of all 499 data brokers registered with the state, we found 35 had code to stop certain pages from showing up in searches. While those companies might be fulfilling the letter of the law by providing a page consumers can use to delete their data, it means little if those consumers can't find the page, according to Matthew Schwartz, a policy analyst. "This sounds to me like a clever work-around to make it as hard as possible for consumers to find it," Schwartz said. Some companies that hid their privacy instructions from search engines included a small link at the bottom of their homepage. Accessing it often required scrolling multiple screens, dismissing pop-ups for cookie permissions and newsletter sign-ups, then finding a link that was a fraction the size of other text on the page. So consumers still faced a serious hurdle when trying to get their information deleted.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Tor is mostly known as the Dark Web or Dark Net, seen as an online Wild West where crime runs rampant. Yet it's partly funded by the U.S. government, and the BBC and Facebook both have Tor-only versions to allow users in authoritarian countries to reach them. At its simplest, Tor is a distributed digital infrastructure that makes you anonymous online. It is a network of servers spread around the world, accessed using a browser called the Tor Browser, which you can download for free from the Tor Project website. When you use the Tor Browser, your signals are encrypted and bounced around the world before they reach the service you're trying to access. This makes it difficult for governments to trace your activity or block access, as the network just routes you through a country where that access isn't restricted. But, because you can't protect yourself from digital crime without also protecting yourself from mass surveillance by the state, these technologies are the site of constant battles between security and law enforcement interests. The state's claim to protect the vulnerable often masks efforts to exert control. In fact, robust, well-funded, value-driven and democratically accountable content moderation – by well-paid workers with good conditions – is a far better solution than magical tech fixes to social problems ... or surveillance tools. As more of our online lives are funneled into the centralized AI infrastructures ... tools like Tor are becoming ever more important.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In California, the law explicitly protects the privacy of power customers, prohibiting public utilities from disclosing precise "smart" meter data in most cases. Despite this, Sacramento's power company and law enforcement agencies have been running an illegal mass surveillance scheme for years, using our power meters as home-mounted spies. For a decade, the Sacramento Municipal Utilities District (SMUD) has been searching through all of its customers' energy data, and has passed on more than 33,000 tips about supposedly "high" usage households to police. Ostensibly looking for homes that were growing illegal amounts of cannabis, SMUD analysts have admitted that such "high" power usage could come from houses using air conditioning or heat pumps or just being large. And the threshold of so-called "suspicion" has steadily dropped, from 7,000 kWh per month in 2014 to just 2,800 kWh a month in 2023. This scheme has targeted Asian customers. SMUD analysts deemed one home suspicious because it was "4k [kWh], Asian," and another suspicious because "multiple Asians have reported there." Sacramento police sent accusatory letters in English and Chinese, but no other language, to residents who used above-average amounts of electricity. Last week, we filed our main brief explaining how this surveillance program violates the law and why it must be stopped. This type of dragnet surveillance ... is inherently unreasonable.
Note: For more along these lines, read our concise summaries of news articles on police corruption and the disappearance of privacy.
U.S. Customs and Border Protection, flush with billions in new funding, is seeking "advanced AI" technologies to surveil urban residential areas, increasingly sophisticated autonomous systems, and even the ability to see through walls. A CBP presentation for an "Industry Day" summit with private sector vendors ... lays out a detailed wish list of tech CBP hopes to purchase. State-of-the-art, AI-augmented surveillance technologies will be central to the Trump administration's anti-immigrant campaign, which will extend deep into the interior of the North American continent. [A] reference to AI-aided urban surveillance appears on a page dedicated to the operational needs of Border Patrol's "Coastal AOR," or area of responsibility, encompassing the entire southeast of the United States. "In the best of times, oversight of technology and data at DHS is weak and has allowed profiling, but in recent months the administration has intentionally further undermined DHS accountability," explained [Spencer Reynolds, a former attorney with the Department of Homeland Security]. "Artificial intelligence development is opaque, even more so when it relies on private contractors that are unaccountable to the public – like those Border Patrol wants to hire. Injecting AI into an environment full of biased data and black-box intelligence systems will likely only increase risk and further embolden the agency's increasingly aggressive behavior."
Note: For more along these lines, read our concise summaries of news articles on AI and immigration enforcement corruption.
Wildlife activists who exposed horrific conditions at Scottish salmon farms were subjected to "Big Brother" surveillance by spies for hire working for an elite British army veteran. One of the activists believes he was with his young daughter ... when he was followed and photographed by the former paratrooper Damian Ozenbrook's operatives. The surveillance of [Corin] Smith and another wildlife activist, Don Staniford, began after they paddled out to some of the floating cages where millions of salmon are farmed every year ... and filmed what was happening inside. The footage, posted online and broadcast by the BBC in 2018, showed fish crawling with sea lice. Covert surveillance by state agencies is subject to legislation that includes independent oversight. But once highly trained operatives leave the police, military or intelligence services, the private firms that deploy them are barely regulated. Guy Vassall-Adams KC, a barrister who has worked for the targets of surveillance, including anti-asbestos activists infiltrated by private spies, believes these private firms "engage in highly intrusive investigations which often involve serious infringements of privacy." He added: "It's a wild west." One firm, run by a former special forces pilot, was found to have infiltrated Greenpeace, Friends of the Earth and other environmental groups for corporate clients in the 2000s. Another, reportedly founded by an ex-MI6 officer, was hired in 2019 by BP to spy on climate campaigners.
Note: For more along these lines, read our concise summaries of news articles on factory farming and the disappearance of privacy.
Recording memories, reading thoughts, and manipulating what another person sees through a device in their brain may seem like science fiction plots about a distant and troubled future. But a team of multi-disciplinary researchers say the first steps to inventing these technologies have already arrived. Through a concept called "neuro rights," they want to put in place safeguards for our most precious biological possessions: our minds. Headlining this growing effort today is the NeuroRights Initiative, formed by Columbia University neuroscientist Rafael Yuste. Their proposition is to stay ahead of the tech by convincing governments across the world to create "neuro rights" legal protections, in line with the Universal Declaration of Human Rights, a document announced by the United Nations in 1948 as the standard for rights that should be universally protected for all people. Neuro rights advocates propose five additions to this standard: the rights to personal identity, free will, mental privacy, equal access to mental augmentation, and protection from algorithmic bias. "This is a new frontier of privacy rights, in that the things that are inside of our heads are ours. They're intimate; we share them when we want to share them. And we don't want that to be made into a data field for experience," said Sara Goering, professor of philosophy and co-lead for the Neuroethics Group for the Center of Neurotechnology at University of Washington.
Note: Watch a new documentary titled, "Cognitive Liberty: Neuroweapons and the Fight for Brain Privacy." For more along these lines, read our concise summaries of news articles on Big Tech and mind control.
Technology already available – and already demonstrated to be effective – makes it possible for law-abiding officials, together with experienced technical people, to create a highly efficient system in which both security and privacy can be assured. Advanced technology can pinpoint and thwart corruption in the intelligence, military, and civilian domains. At its core, this requires automated analysis of attributes and transactional relationships among individuals. The large data sets in government files already contain the needed data. On the Intelligence Community side, there are ways to purge databases of irrelevant data and deny government officials the ability to spy on anyone they want. These methodologies protect the privacy of innocent people, while enhancing the ability to discover criminal threats. In order to ensure continuous legal compliance with these changes, it is necessary to establish a central technical group or organization to continuously monitor and validate compliance with the Constitution and U.S. law. Such a group would need to have the highest-level access to all agencies to ensure compliance behind the classification doors. It must be able to go into any agency to inspect its activity at any time. In addition ... it would be best to make government financial and operational transactions open to the public for review. Such an organization would go a long way toward making government truly transparent to the public.
Note: The article cites national security journalist James Risen's book on how the creation of Google was closely tied to NSA and CIA-backed efforts to privatize surveillance infrastructure. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
The Electronic Frontier Foundation (EFF) and a nonprofit privacy rights group have called on several states to investigate why "hundreds" of data brokers haven't registered with state consumer protection agencies in accordance with local laws. An analysis done in collaboration with Privacy Rights Clearinghouse (PRC) found that many data brokers have failed to register in all of the four states with laws that require it, preventing consumers in some states from learning what kinds of information these brokers collect and how to opt out. Data brokers are companies that collect and sell troves of personal information about people, including their names, addresses, phone numbers, financial information, and more. Consumers have little control over this information, posing serious privacy concerns, and attempts to address these concerns at a federal level have mostly failed. Four states – California, Texas, Oregon, and Vermont – do attempt to regulate these companies by requiring them to register with consumer protection agencies and share details about what kind of data they collect. In letters to the states' attorneys general, the EFF and PRC say they "uncovered a troubling pattern" after scraping data broker registries. They found that many data brokers didn't consistently register their businesses across all four states. The number of data brokers that appeared on one registry but not another includes 524 in Texas, 475 in Oregon, 309 in Vermont, and 291 in California.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
From facial recognition to predictive analytics to the rise of increasingly convincing deepfakes and other synthetic video, new technologies are emerging faster than agencies, lawmakers, or watchdog groups can keep up. Take New Orleans, where, for the past two years, police officers have quietly received real-time alerts from a private network of AI-equipped cameras, flagging the whereabouts of people on wanted lists. In 2022, City Council members attempted to put guardrails on the use of facial recognition. But those guidelines assume it's the police doing the searching. New Orleans police have hundreds of cameras, but the alerts in question came from a separate system: a network of 200 cameras equipped with facial recognition and installed by residents and businesses on private property, feeding video to a nonprofit called Project NOLA. Police officers who downloaded the group's app then received notifications when someone on a wanted list was detected on the camera network, along with a location. That has civil liberties groups and defense attorneys in Louisiana frustrated. "When you make this a private entity, all those guardrails that are supposed to be in place for law enforcement and prosecution are no longer there, and we don't have the tools to ... hold people accountable," Danny Engelberg, New Orleans' chief public defender, [said]. Another way departments can skirt facial recognition rules is to use AI analysis that doesn't technically rely on faces.
Note: Learn about all the high-tech tools police use to surveil protestors. For more along these lines, read our concise summaries of news articles on AI and police corruption.
When National Public Data, a company that does online background checks, was breached in 2024, criminals gained the names, addresses, dates of birth and national identification numbers such as Social Security numbers of 170 million people in the U.S., U.K. and Canada. The same year, hackers who targeted Ticketmaster stole the financial information and personal data of more than 560 million customers. In so-called stolen data markets, hackers sell personal information they illegally obtain to others, who then use the data to engage in fraud and theft for profit. Every piece of personal data captured in a data breach – a passport number, Social Security number or login for a shopping service – has inherent value. Offenders can ... assume someone else's identity, make a fraudulent purchase or steal services such as streaming media or music. Some vendors also offer distinct products such as credit reports, Social Security numbers and login details for different paid services. The price for pieces of information varies. A recent analysis found credit card data sold for US$50 on average, while Walmart logins sold for $9. However, the pricing can vary widely across vendors and markets. The rate of return can be exceptional. An offender who buys 100 cards for $500 can recoup costs if only 20 of those cards are active and can be used to make an average purchase of $30. The result is that data breaches are likely to continue as long as there is demand.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Palantir has long been connected to government surveillance. It was founded in part with CIA money, it has served as an Immigration and Customs Enforcement (ICE) contractor since 2011, and it's been used for everything from local law enforcement to COVID-19 efforts. But the prominence of Palantir tools in federal agencies seems to be growing under President Trump. "The company has received more than $113 million in federal government spending since Mr. Trump took office, according to public records, including additional funds from existing contracts as well as new contracts with the Department of Homeland Security and the Pentagon," reports The New York Times, noting that this figure "does not include a $795 million contract that the Department of Defense awarded the company last week, which has not been spent." Palantir technology has largely been used by the military, the intelligence agencies, the immigration enforcers, and the police. But its uses could be expanding. Representatives of Palantir are also speaking to at least two other agencies – the Social Security Administration and the Internal Revenue Service. Along with the Trump administration's efforts to share more data across federal agencies, this signals that Palantir's huge data analysis capabilities could wind up being wielded against all Americans. Right now, the Trump administration is using Palantir tools for immigration enforcement, but those tools could easily be applied to other ... targets.
Note: Read about Palantir's recent, first-ever AI warfare conference. For more along these lines, read our concise summaries of news articles on Big Tech and intelligence agency corruption.
The Consumer Financial Protection Bureau (CFPB) has canceled plans to introduce new rules designed to limit the ability of US data brokers to sell sensitive information about Americans, including financial data, credit history, and Social Security numbers. The CFPB proposed the new rule in early December under former director Rohit Chopra, who said the changes were necessary to combat commercial surveillance practices that "threaten our personal safety and undermine America's national security." The agency quietly withdrew the proposal on Tuesday morning. Data brokers operate within a multibillion-dollar industry built on the collection and sale of detailed personal information – often without individuals' knowledge or consent. These companies create extensive profiles on nearly every American, including highly sensitive data such as precise location history, political affiliations, and religious beliefs. Common Defense political director Naveed Shah, an Iraq War veteran, condemned the move to spike the proposed changes, accusing [acting CFPB director Russell] Vought of putting the profits of data brokers before the safety of millions of service members. Investigations by WIRED have shown that data brokers have collected and made cheaply available information that can be used to reliably track the locations of American military and intelligence personnel overseas, including in and around sensitive installations where US nuclear weapons are reportedly stored.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
The U.S. intelligence community is now buying up vast volumes of sensitive information that would have previously required a court order, essentially bypassing the Fourth Amendment. But the surveillance state has encountered a problem: There's simply too much data on sale from too many corporations and brokers. So the government has a plan for a one-stop shop. The Office of the Director of National Intelligence is working on a system to centralize and "streamline" the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept. The data portal will include information deemed by the ODNI as highly sensitive, that which can be "misused to cause substantial harm, embarrassment, and inconvenience to U.S. persons." The "Intelligence Community Data Consortium" will provide a single convenient web-based storefront for searching and accessing this data, along with a "data marketplace" for purchasing "the best data at the best price," faster than ever before. It will be designed for the 18 different federal agencies and offices that make up the U.S. intelligence community, including the National Security Agency, CIA, FBI Intelligence Branch, and Homeland Security's Office of Intelligence and Analysis – though one document suggests the portal will also be used by agencies not directly related to intelligence or defense.
Note: For more along these lines, read our concise summaries of news articles on intelligence agency corruption and the disappearance of privacy.