Excerpts of Key Privacy Media Articles in Major Media
Below are key excerpts of revealing news articles on privacy and mass surveillance issues from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.
AI's promise of behavior prediction and control fuels a vicious cycle of surveillance which inevitably triggers abuses of power. The problem with using data to make predictions is that the process can be used as a weapon against society, threatening democratic values. As the lines between private and public data are blurred in modern society, many won't realize that their private lives are becoming data points used to make decisions about them. What AI does is make this a surveillance ratchet, a device that only goes in one direction, which goes something like this: To make the inferences I want to make to learn more about you, I must collect more data on you. For my AI tools to run, I need data about a lot of you. And once I've collected this data, I can monetize it by selling it to others who want to use AI to make other inferences about you. AI creates a demand for data but also becomes the result of collecting data. What makes AI prediction both powerful and lucrative is being able to control what happens next. If a bank can claim to predict what people will do with a loan, it can use that to decide whether they should get one. If an admissions officer can claim to predict how students will perform in college, they can use that to decide which students to admit. Amazon's Echo devices have been subject to warrants for the audio recordings made by the device inside our homes–recordings that were made even when the people present weren't talking directly to the device. The desire to surveil is bipartisan. It's about power, not party politics.
Note: As journalist Kenan Malik put it, "It is not AI but our blindness to the way human societies are already deploying machine intelligence for political ends that should most worry us." Read about the shadowy companies tracking and trading your personal data, which isn't just used to sell products. It's often accessed by governments, law enforcement, and intelligence agencies, often without warrants or oversight. For more, read our concise summaries of news articles on AI.
Data brokers are required by California law to provide ways for consumers to request their data be deleted. But good luck finding them. More than 30 of the companies, which collect and sell consumers' personal information, hid their deletion instructions from Google. This creates one more obstacle for consumers who want to delete their data. Data brokers nationwide must register in California under the state's Consumer Privacy Act, which allows Californians to request that their information be removed, that it not be sold, or that they get access to it. After reviewing the websites of all 499 data brokers registered with the state, we found 35 had code to stop certain pages from showing up in searches. While those companies might be fulfilling the letter of the law by providing a page consumers can use to delete their data, it means little if those consumers can't find the page, according to Matthew Schwartz, a policy analyst. "This sounds to me like a clever work-around to make it as hard as possible for consumers to find it," Schwartz said. Some companies that hid their privacy instructions from search engines included a small link at the bottom of their homepage. Accessing it often required scrolling multiple screens, dismissing pop-ups for cookie permissions and newsletter sign-ups, then finding a link that was a fraction the size of other text on the page. So consumers still faced a serious hurdle when trying to get their information deleted.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Tor is mostly known as the Dark Web or Dark Net, seen as an online Wild West where crime runs rampant. Yet it's partly funded by the U.S. government, and the BBC and Facebook both have Tor-only versions to allow users in authoritarian countries to reach them. At its simplest, Tor is a distributed digital infrastructure that makes you anonymous online. It is a network of servers spread around the world, accessed using a browser called the Tor Browser, which you can download for free from the Tor Project website. When you use the Tor Browser, your signals are encrypted and bounced around the world before they reach the service you're trying to access. This makes it difficult for governments to trace your activity or block access, as the network just routes you through a country where that access isn't restricted. But, because you can't protect yourself from digital crime without also protecting yourself from mass surveillance by the state, these technologies are the site of constant battles between security and law enforcement interests. The state's claim to protect the vulnerable often masks efforts to exert control. In fact, robust, well-funded, value-driven and democratically accountable content moderation – by well-paid workers with good conditions – is a far better solution than magical tech fixes to social problems ... or surveillance tools. As more of our online lives are funneled into the centralized AI infrastructures ... tools like Tor are becoming ever more important.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
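The routing idea in the Tor excerpt above, where traffic is wrapped in layers so each relay learns only the next hop, can be illustrated with a toy sketch. This is purely conceptual: plain nested dictionaries stand in for the per-relay encryption real Tor uses, and the relay names and request string are made up for illustration.

```python
# Toy sketch of onion routing as described in the Tor excerpt above.
# Real Tor encrypts each layer with a different relay's key; here plain
# nested dicts stand in for encryption, purely to show the structure.

def build_onion(request, path):
    """Wrap `request` in one layer per relay; the first relay's layer is outermost."""
    onion = request
    for relay in reversed(path):
        onion = {"for": relay, "inner": onion}
    return onion

def peel(onion):
    """A relay opens only its own layer, learning just the wrapped remainder."""
    return onion["for"], onion["inner"]

path = ["relay-A", "relay-B", "relay-C"]  # hypothetical relay names
onion = build_onion("GET example-onion-service", path)

hops = []
while isinstance(onion, dict):
    relay, onion = peel(onion)
    hops.append(relay)  # each relay sees only the still-wrapped inner layers

print(hops)   # ['relay-A', 'relay-B', 'relay-C']
print(onion)  # 'GET example-onion-service' (recovered only at the exit)
```

Because each relay can open only its own layer, no single relay ever sees both who sent the request and where it is going, which is what makes the traffic hard to trace or block.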
U.S. Customs and Border Protection, flush with billions in new funding, is seeking "advanced AI" technologies to surveil urban residential areas, increasingly sophisticated autonomous systems, and even the ability to see through walls. A CBP presentation for an "Industry Day" summit with private sector vendors ... lays out a detailed wish list of tech CBP hopes to purchase. State-of-the-art, AI-augmented surveillance technologies will be central to the Trump administration's anti-immigrant campaign, which will extend deep into the interior of the North American continent. [A] reference to AI-aided urban surveillance appears on a page dedicated to the operational needs of Border Patrol's "Coastal AOR," or area of responsibility, encompassing the entire southeast of the United States. "In the best of times, oversight of technology and data at DHS is weak and has allowed profiling, but in recent months the administration has intentionally further undermined DHS accountability," explained [Spencer Reynolds, a former attorney with the Department of Homeland Security]. "Artificial intelligence development is opaque, even more so when it relies on private contractors that are unaccountable to the public – like those Border Patrol wants to hire. Injecting AI into an environment full of biased data and black-box intelligence systems will likely only increase risk and further embolden the agency's increasingly aggressive behavior."
Note: For more along these lines, read our concise summaries of news articles on AI and immigration enforcement corruption.
In California, the law explicitly protects the privacy of power customers, prohibiting public utilities from disclosing precise "smart" meter data in most cases. Despite this, Sacramento's power company and law enforcement agencies have been running an illegal mass surveillance scheme for years, using our power meters as home-mounted spies. For a decade, the Sacramento Municipal Utilities District (SMUD) has been searching through all of its customers' energy data and has passed on more than 33,000 tips about supposedly "high" usage households to police. Ostensibly looking for homes that were growing illegal amounts of cannabis, SMUD analysts have admitted that such "high" power usage could come from houses using air conditioning or heat pumps or just being large. And the threshold of so-called "suspicion" has steadily dropped, from 7,000 kWh per month in 2014 to just 2,800 kWh a month in 2023. This scheme has targeted Asian customers. SMUD analysts deemed one home suspicious because it was "4k [kWh], Asian," and another suspicious because "multiple Asians have reported there." Sacramento police sent accusatory letters in English and Chinese, but no other language, to residents who used above-average amounts of electricity. Last week, we filed our main brief explaining how this surveillance program violates the law and why it must be stopped. This type of dragnet surveillance ... is inherently unreasonable.
Note: For more along these lines, read our concise summaries of news articles on police corruption and the disappearance of privacy.
Reviewing individuals' social media to conduct ideological vetting has been a defining initiative of President Trump's second term. As part of that effort, the administration has proposed expanding the mandatory collection of social media identifiers. By linking individuals' online presence to government databases, officials could more easily identify, monitor, and penalize people based on their online self-expression, raising the risk of self-censorship. Most recently, the State Department issued a cable directing consular officers to review the social media of all student visa applicants for "any indications of hostility towards the citizens, culture, government, institutions or founding principles of the United States," as well as for any "history of political activism." This builds on earlier efforts this term, including the State Department's "Catch and Revoke" program, which promised to leverage artificial intelligence to screen visa holders' social media for ostensible "pro-Hamas" activity, and U.S. Citizenship and Immigration Services' April announcement that it would begin looking for "antisemitic activity" in the social media of scores of foreign nationals. At the border, any traveler, regardless of citizenship status, may face additional scrutiny. U.S. border agents are authorized to ... examine phones, computers, and other devices to review posts and private messages on social media, even if they do not suspect any involvement in criminal activity or have immigration-related concerns.
Note: Our news archives on censorship and the disappearance of privacy reveal how government surveillance of social media has long been conducted by all presidential administrations and all levels of government.
Technology already available – and already demonstrated to be effective – makes it possible for law-abiding officials, together with experienced technical people, to create a highly efficient system in which both security and privacy can be assured. Advanced technology can pinpoint and thwart corruption in the intelligence, military, and civilian domains. At its core, this requires automated analysis of attributes and transactional relationships among individuals. The large data sets in government files already contain the needed data. On the Intelligence Community side, there are ways to purge databases of irrelevant data and deny government officials the ability to spy on anyone they want. These methodologies protect the privacy of innocent people, while enhancing the ability to discover criminal threats. In order to ensure continuous legal compliance with these changes, it is necessary to establish a central technical group or organization to continuously monitor and validate compliance with the Constitution and U.S. law. Such a group would need to have the highest-level access to all agencies to ensure compliance behind the classification doors. It must be able to go into any agency to inspect its activity at any time. In addition ... it would be best to make government financial and operational transactions open to the public for review. Such an organization would go a long way toward making government truly transparent to the public.
Note: The article cites national security journalist James Risen's book on how the creation of Google was closely tied to NSA and CIA-backed efforts to privatize surveillance infrastructure. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Wildlife activists who exposed horrific conditions at Scottish salmon farms were subjected to "Big Brother" surveillance by spies for hire working for an elite British army veteran. One of the activists believes he was with his young daughter ... when he was followed and photographed by the former paratrooper Damian Ozenbrook's operatives. The surveillance of [Corin] Smith and another wildlife activist, Don Staniford, began after they paddled out to some of the floating cages where millions of salmon are farmed every year ... and filmed what was happening inside. The footage, posted online and broadcast by the BBC in 2018, showed fish crawling with sea lice. Covert surveillance by state agencies is subject to legislation that includes independent oversight. But once highly trained operatives leave the police, military or intelligence services, the private firms that deploy them are barely regulated. Guy Vassall-Adams KC, a barrister who has worked for the targets of surveillance, including anti-asbestos activists infiltrated by private spies, believes these private firms "engage in highly intrusive investigations which often involve serious infringements of privacy." He added: "It's a wild west." One firm, run by a former special forces pilot, was found to have infiltrated Greenpeace, Friends of the Earth and other environmental groups for corporate clients in the 2000s. Another, reportedly founded by an ex-MI6 officer, was hired in 2019 by BP to spy on climate campaigners.
Note: For more along these lines, read our concise summaries of news articles on factory farming and the disappearance of privacy.
The Electronic Frontier Foundation (EFF) and a nonprofit privacy rights group have called on several states to investigate why "hundreds" of data brokers haven't registered with state consumer protection agencies in accordance with local laws. An analysis done in collaboration with Privacy Rights Clearinghouse (PRC) found that many data brokers have failed to register in all of the four states with laws that require it, preventing consumers in some states from learning what kinds of information these brokers collect and how to opt out. Data brokers are companies that collect and sell troves of personal information about people, including their names, addresses, phone numbers, financial information, and more. Consumers have little control over this information, posing serious privacy concerns, and attempts to address these concerns at a federal level have mostly failed. Four states – California, Texas, Oregon, and Vermont – do attempt to regulate these companies by requiring them to register with consumer protection agencies and share details about what kind of data they collect. In letters to the states' attorneys general, the EFF and PRC say they "uncovered a troubling pattern" after scraping data broker registries. They found that many data brokers didn't consistently register their businesses across all four states. The number of data brokers that appeared on one registry but not another includes 524 in Texas, 475 in Oregon, 309 in Vermont, and 291 in California.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
From facial recognition to predictive analytics to the rise of increasingly convincing deepfakes and other synthetic video, new technologies are emerging faster than agencies, lawmakers, or watchdog groups can keep up. Take New Orleans, where, for the past two years, police officers have quietly received real-time alerts from a private network of AI-equipped cameras, flagging the whereabouts of people on wanted lists. In 2022, City Council members attempted to put guardrails on the use of facial recognition. But those guidelines assume it's the police doing the searching. New Orleans police have hundreds of cameras, but the alerts in question came from a separate system: a network of 200 cameras equipped with facial recognition and installed by residents and businesses on private property, feeding video to a nonprofit called Project NOLA. Police officers who downloaded the group's app then received notifications when someone on a wanted list was detected on the camera network, along with a location. That has civil liberties groups and defense attorneys in Louisiana frustrated. "When you make this a private entity, all those guardrails that are supposed to be in place for law enforcement and prosecution are no longer there, and we don't have the tools to ... hold people accountable," Danny Engelberg, New Orleans' chief public defender, [said]. Another way departments can skirt facial recognition rules is to use AI analysis that doesn't technically rely on faces.
Note: Learn about all the high-tech tools police use to surveil protestors. For more along these lines, read our concise summaries of news articles on AI and police corruption.
When National Public Data, a company that does online background checks, was breached in 2024, criminals gained the names, addresses, dates of birth and national identification numbers such as Social Security numbers of 170 million people in the U.S., U.K. and Canada. The same year, hackers who targeted Ticketmaster stole the financial information and personal data of more than 560 million customers. In so-called stolen data markets, hackers sell personal information they illegally obtain to others, who then use the data to engage in fraud and theft for profit. Every piece of personal data captured in a data breach – a passport number, Social Security number or login for a shopping service – has inherent value. Offenders can ... assume someone else's identity, make a fraudulent purchase or steal services such as streaming media or music. Some vendors also offer distinct products such as credit reports, Social Security numbers and login details for different paid services. The price for pieces of information varies. A recent analysis found credit card data sold for US$50 on average, while Walmart logins sold for $9. However, the pricing can vary widely across vendors and markets. The rate of return can be exceptional. An offender who buys 100 cards for $500 can recoup costs if only 20 of those cards are active and can be used to make an average purchase of $30. The result is that data breaches are likely to continue as long as there is demand.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
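The rate-of-return arithmetic in the stolen-data excerpt above can be checked with a quick calculation. The figures ($500 for 100 cards, a 20% active rate, $30 average purchase) are the article's illustrative numbers, not real market data:

```python
# Break-even arithmetic from the stolen-card example above.
cards_bought = 100
total_cost = 500        # $500 for the batch of 100 stolen cards
active_cards = 20       # only 20 of the 100 turn out to be usable
avg_purchase = 30       # average fraudulent purchase per active card

hit_rate = active_cards / cards_bought  # 0.2, i.e. 20% of cards work
revenue = active_cards * avg_purchase   # 20 * $30 = $600
profit = revenue - total_cost           # $100 even at a 20% hit rate

print(revenue >= total_cost)  # True: costs recouped
print(profit)                 # 100
```

Even a modest hit rate clears the purchase price, which is why the article concludes that breaches will continue as long as demand for stolen data holds.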
Palantir has long been connected to government surveillance. It was founded in part with CIA money, it has served as an Immigration and Customs Enforcement (ICE) contractor since 2011, and it's been used for everything from local law enforcement to COVID-19 efforts. But the prominence of Palantir tools in federal agencies seems to be growing under President Trump. "The company has received more than $113 million in federal government spending since Mr. Trump took office, according to public records, including additional funds from existing contracts as well as new contracts with the Department of Homeland Security and the Pentagon," reports The New York Times, noting that this figure "does not include a $795 million contract that the Department of Defense awarded the company last week, which has not been spent." Palantir technology has largely been used by the military, the intelligence agencies, the immigration enforcers, and the police. But its uses could be expanding. Representatives of Palantir are also speaking to at least two other agencies–the Social Security Administration and the Internal Revenue Service. Along with the Trump administration's efforts to share more data across federal agencies, this signals that Palantir's huge data analysis capabilities could wind up being wielded against all Americans. Right now, the Trump administration is using Palantir tools for immigration enforcement, but those tools could easily be applied to other ... targets.
Note: Read about Palantir's recent, first-ever AI warfare conference. For more along these lines, read our concise summaries of news articles on Big Tech and intelligence agency corruption.
The U.S. intelligence community is now buying up vast volumes of sensitive information that would have previously required a court order, essentially bypassing the Fourth Amendment. But the surveillance state has encountered a problem: There's simply too much data on sale from too many corporations and brokers. So the government has a plan for a one-stop shop. The Office of the Director of National Intelligence is working on a system to centralize and "streamline" the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept. The data portal will include information deemed by the ODNI as highly sensitive, that which can be "misused to cause substantial harm, embarrassment, and inconvenience to U.S. persons." The "Intelligence Community Data Consortium" will provide a single convenient web-based storefront for searching and accessing this data, along with a "data marketplace" for purchasing "the best data at the best price," faster than ever before. It will be designed for the 18 different federal agencies and offices that make up the U.S. intelligence community, including the National Security Agency, CIA, FBI Intelligence Branch, and Homeland Security's Office of Intelligence and Analysis – though one document suggests the portal will also be used by agencies not directly related to intelligence or defense.
Note: For more along these lines, read our concise summaries of intelligence agency corruption and the disappearance of privacy.
According to recent research by the Office of the eSafety Commissioner, "nearly 1 in 5 young people believe it's OK to track their partner whenever they want". Many constantly share their location with their partner, or use apps like Life360 or Find My Friends. Some groups of friends all do it together, and talk of it as a kind of digital closeness where physical distance and the busyness of life keep them apart. Others use apps to keep familial watch over older relatives – especially when their health may be in decline. When government officials or tech industry bigwigs proclaim that you should be OK with being spied on if you're not doing anything wrong, they're asking (well, demanding) that we trust them. But it's not about trust, it's about control and disciplining behaviour. "Nothing to hide; nothing to fear" is a frustratingly persistent fallacy, one we ought to be critical of when its underlying (lack of) logic creeps into how we think about interacting with one another. When it comes to interpersonal surveillance, blurring the boundary between care and control can be dangerous. Just as normalising state and corporate surveillance can lead to further erosion of rights and freedoms over time, normalising interpersonal surveillance seems to be changing the landscape of what's considered to be an expression of love – and not necessarily for the better. We ought to be very critical of claims that equate surveillance with safety.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
The Consumer Financial Protection Bureau (CFPB) has canceled plans to introduce new rules designed to limit the ability of US data brokers to sell sensitive information about Americans, including financial data, credit history, and Social Security numbers. The CFPB proposed the new rule in early December under former director Rohit Chopra, who said the changes were necessary to combat commercial surveillance practices that "threaten our personal safety and undermine America's national security." The agency quietly withdrew the proposal on Tuesday morning. Data brokers operate within a multibillion-dollar industry built on the collection and sale of detailed personal information–often without individuals' knowledge or consent. These companies create extensive profiles on nearly every American, including highly sensitive data such as precise location history, political affiliations, and religious beliefs. Common Defense political director Naveed Shah, an Iraq War veteran, condemned the move to spike the proposed changes, accusing [acting CFPB director Russell] Vought of putting the profits of data brokers before the safety of millions of service members. Investigations by WIRED have shown that data brokers have collected and made cheaply available information that can be used to reliably track the locations of American military and intelligence personnel overseas, including in and around sensitive installations where US nuclear weapons are reportedly stored.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In 2009, Pennsylvania's Lower Merion school district remotely activated its school-issued laptop webcams to capture 56,000 pictures of students outside of school, including in their bedrooms. After the Covid-19 pandemic closed US schools at the dawn of this decade, student surveillance technologies were conveniently repackaged as "remote learning tools" and found their way into virtually every K-12 school, thereby supercharging the growth of the $3bn EdTech surveillance industry. Products by well-known EdTech surveillance vendors such as Gaggle, GoGuardian, Securly and Navigate360 review and analyze our children's digital lives, ranging from their private texts, emails, social media posts and school documents to the keywords they search and the websites they visit. In 2025, wherever a school has access to a student's data – whether it be through school accounts, school-provided computers or even private devices that utilize school-associated educational apps – they also have access to the way our children think, research and communicate. As schools normalize perpetual spying, today's kids are learning that nothing they read or write electronically is private. Big Brother is indeed watching them, and negative repercussions may result from thoughts or behaviors the government does not endorse. Accordingly, kids are learning that the safest way to avoid revealing their private thoughts, and potentially subjecting themselves to discipline, may be to stop or sharply restrict their digital communications and to avoid researching unpopular or unconventional ideas altogether.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams–conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody "looking away from the screen for an extended period of time." For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Automakers are increasingly pushing consumers to accept monthly and annual fees to unlock preinstalled safety and performance features, from hands-free driving systems and heated seats to cameras that can automatically record accident situations. But the additional levels of internet connectivity this subscription model requires can increase drivers' exposure to government surveillance and the likelihood of being caught up in police investigations. Police records recently reviewed by WIRED show US law enforcement agencies regularly trained on how to take advantage of "connected cars," with subscription-based features drastically increasing the amount of data that can be accessed during investigations. Nearly all subscription-based car features rely on devices that come preinstalled in a vehicle, with a cellular connection necessary only to enable the automaker's recurring-revenue scheme. The ability of car companies to charge users to activate some features is effectively the only reason the car's systems need to communicate with cell towers. Companies often hook customers into adopting the services through free trial offers, and in some cases the devices are communicating with cell towers even when users decline to subscribe. In a letter sent in April 2024 ... US senators Ron Wyden and Edward Markey ... noted that a range of automakers, including Toyota, Nissan, and Subaru, are willing to disclose location data to the government.
Note: Automakers can collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people's "sexual activities" when drivers pair their smartphones to their vehicles. The automakers can then take that data and sell it or share it with vendors and insurance companies. For more along these lines, read our concise summaries of news articles on police corruption and the disappearance of privacy.
Data that people provide to U.S. government agencies for public services such as tax filing, health care enrollment, unemployment assistance and education support is increasingly being redirected toward surveillance and law enforcement. Originally collected to facilitate health care, eligibility for services and the administration of public services, this information is now shared across government agencies and with private companies, reshaping the infrastructure of public services into a mechanism of control. Once confined to separate bureaucracies, data now flows freely through a network of interagency agreements, outsourcing contracts and commercial partnerships built up in recent decades. Key to this data repurposing are public-private partnerships. The DHS and other agencies have turned to third-party contractors and data brokers to bypass direct restrictions. These intermediaries also consolidate data from social media, utility companies, supermarkets and many other sources, enabling enforcement agencies to construct detailed digital profiles of people without explicit consent or judicial oversight. Palantir, a private data firm and prominent federal contractor, supplies investigative platforms to agencies. These platforms aggregate data from various sources – driver's license photos, social services, financial information, educational data – and present it in centralized dashboards designed for predictive policing and algorithmic profiling. Data collected under the banner of care could be mined for evidence to justify placing someone under surveillance. And with growing dependence on private contractors, the boundaries between public governance and corporate surveillance continue to erode.
Note: For more along these lines, read our concise summaries of news articles on government corruption and the disappearance of privacy.
Ever thought of having your genome sequenced? 23andMe ... describes itself as a "genetics-led consumer healthcare and biotechnology company empowering a healthier future". Its share price had fallen precipitously following a data breach in October 2023 that harvested the profile and ethnicity data of 6.9 million users – including name, profile photo, birth year, location, family surnames, grandparents' birthplaces, ethnicity estimates and mitochondrial DNA. So on 24 March it filed for so-called Chapter 11 proceedings in a US bankruptcy court. At which point the proverbial ordure hit the fan because the bankruptcy proceedings involve 23andMe seeking authorisation from the court to commence "a process to sell substantially all of its assets". And those assets are ... the genetic data of the company's 15 million users. These assets are very attractive to many potential purchasers. The really important thing is that genetic data is permanent, unique and immutable. If your credit card is hacked, you can always get a new replacement. But you can't get a new genome. When 23andMe's data assets come up for sale the queue of likely buyers is going to be long, with health insurance and pharmaceutical giants at the front, followed by hedge-funds, private equity vultures and advertisers, with marketers bringing up the rear. Since these outfits are not charitable ventures, it's a racing certainty that they have plans for exploiting those data assets.
Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In July 2022, Morgan-Rose Hart, an aspiring vet with a passion for wildlife, died after she was found unresponsive at a mental health unit in Essex. Her death was one of four involving a hi-tech patient monitoring system called Oxevision which has been rolled out in nearly half of mental health trusts across England. Oxevision's system can measure a patient's pulse rate and breathing without the need for a person to enter the room, or disturb a patient at night, as well as momentarily relaying CCTV footage when required. Oxehealth, the company behind Oxevision, has agreements with 25 NHS mental health trusts, according to its latest accounts, which reported revenues of about £4.7m in ... 2023. But it is claimed in some cases staff rely too heavily on the infra-red camera system to monitor vulnerable patients, instead of making physical checks. There are also concerns that the system – which can glow red from the corner of the room – may worsen the distress of patients in a mental health crisis who may have heightened sensitivity to surveillance or control. Sophina, who has experience of being monitored by Oxevision while a patient ... said: "I think it was something about the camera and it always being on, and it's right above your bed. "It's the first thing you see when you open your eyes, the last thing when you go to sleep. I was just in a constant state of hypervigilance. I was completely traumatised. I still felt too scared to sleep properly."
Note: For more along these lines, read our concise summaries of news articles on Big Tech and mental health.