Revealing News For a Better World

Corporate Corruption News Stories
Excerpts of Key Corporate Corruption News Stories in Major Media


Below are key excerpts of revealing news articles on corporate corruption from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.


Note: This comprehensive list of news stories is usually updated once a week. Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.


Tracking apps might make us feel safe, but blurring the line between care and control can be dangerous
2025-05-19, The Guardian (One of the UK's Leading Newspapers)
Posted: 2025-05-28 13:01:06
https://www.theguardian.com/commentisfree/2025/may/19/tracking-apps-might-mak...

According to recent research by the Office of the eSafety Commissioner, "nearly 1 in 5 young people believe it's OK to track their partner whenever they want". Many constantly share their location with their partner, or use apps like Life360 or Find My Friends. Some groups of friends all do it together, and talk of it as a kind of digital closeness where physical distance and the busyness of life keep them apart. Others use apps to keep familial watch over older relatives – especially when their health may be in decline. When government officials or tech industry bigwigs proclaim that you should be OK with being spied on if you're not doing anything wrong, they're asking (well, demanding) that we trust them. But it's not about trust, it's about control and disciplining behaviour. "Nothing to hide; nothing to fear" is a frustratingly persistent fallacy, one we ought to be critical of when its underlying (lack of) logic creeps into how we think about interacting with one another. When it comes to interpersonal surveillance, blurring the boundary between care and control can be dangerous. Just as normalising state and corporate surveillance can lead to further erosion of rights and freedoms over time, normalising interpersonal surveillance seems to be changing the landscape of what's considered to be an expression of love – and not necessarily for the better. We ought to be very critical of claims that equate surveillance with safety.

Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


These Internal Documents Show Why We Shouldn't Trust Porn Companies
2025-05-10, New York Times
Posted: 2025-05-28 12:59:20
https://www.nytimes.com/2025/05/10/opinion/pornhub-children-documents.html

What goes through the minds of people working at porn companies profiting from videos of children being raped? Thanks to a filing error in a Federal District Court in Alabama that released thousands of pages of internal documents from Pornhub meant to be sealed, we now know. One internal document indicates that Pornhub as of May 2020 had 706,000 videos available on the site that had been flagged by users for depicting rape or assaults on children or for other problems. In the message traffic, one employee advises another not to copy a manager when they find sex videos with children. The other has the obvious response: "He doesn't want to know how much C.P. we have ignored for the past five years?" C.P. is short for child pornography. One private memo acknowledged that videos with apparent child sexual abuse had been viewed 684 million times before being removed. Pornhub produced these documents during discovery in a civil suit by an Alabama woman who beginning at age 16 was filmed engaging in sex acts, including at least once when she was drugged and then raped. These videos of her were posted on Pornhub and amassed thousands of views. One discovery memo showed that there were 155,447 videos on Pornhub with the keyword "12yo." Other categories that the company tracked were "11yo," "degraded teen," "under 10" and "extreme choking." (It has since removed these searches.) Google ... has been central to the business model of companies publishing nonconsensual imagery. Google also directs users to at least one website that monetizes assaults on victims of human trafficking.

Note: For more along these lines, read our concise summaries of news articles on Big Tech and sexual abuse scandals.


OpenAI ex-chief scientist planned for a doomsday bunker for the day when machines become smarter than man
2025-05-20, AOL News
Posted: 2025-05-28 12:57:50
https://www.aol.com/openai-ex-chief-scientist-planned-115047191.html

If there is one thing that Ilya Sutskever knows, it is the opportunities–and risks–that stem from the advent of artificial intelligence. An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human. Artificial general intelligence, or simply AGI, is the official term for that goal. According to excerpts published by The Atlantic ... part of those plans included a doomsday shelter for OpenAI researchers. "We're definitely going to build a bunker before we release AGI," Sutskever told his team in 2023. Sutskever reasoned his fellow scientists would require protection at that point, since the technology was too powerful for it not to become an object of intense desire for governments globally. "Of course, it's going to be optional whether you want to get into the bunker," he assured fellow OpenAI scientists. Sutskever knows better than most what the awesome capabilities of AI are. He was part of an elite trio behind the 2012 creation of AlexNet, often dubbed by experts as the Big Bang of AI. Recruited by Elon Musk personally to join OpenAI three years later, he would go on to lead its efforts to develop AGI. But the launch of its ChatGPT bot accidentally derailed his plans by unleashing a funding gold rush the safety-minded Sutskever could no longer control.

Note: Watch a conversation on the big picture of emerging technology with Collective Evolution founder Joe Martino and WTK team members Amber Yang and Mark Bailey. For more along these lines, read our concise summaries of news articles on AI.


Google Worried It Couldn't Control How Israel Uses Project Nimbus, Files Reveal
2025-05-12, The Intercept
Posted: 2025-05-23 13:26:44
https://theintercept.com/2025/05/12/google-nimbus-israel-military-ai-human-ri...

Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn't control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel's use of its technology. And it would require close collaboration with the Israeli security establishment – including joint drills and intelligence sharing – that was unprecedented in Google's deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza – with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn't furnish weapons to the military, but it provides computing services that allow the military to function – its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.

Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.


Facebook inflicted 'lifelong trauma' on Kenyan content moderators, campaigners say, as more than 140 are diagnosed with PTSD
2024-12-22, CNN News
Posted: 2025-05-23 13:24:30
https://www.cnn.com/2024/12/22/business/facebook-content-moderators-kenya-pts...

Campaigners have accused Facebook parent Meta of inflicting "potentially lifelong trauma" on hundreds of content moderators in Kenya, after more than 140 were diagnosed with PTSD and other mental health conditions. The diagnoses were made by Dr. Ian Kanyanya, the head of mental health services at Kenyatta National hospital in Kenya's capital Nairobi, and filed with the city's employment and labor relations court on December 4. Content moderators help tech companies weed out disturbing content on their platforms and are routinely managed by third party firms, often in developing countries. For years, critics have voiced concerns about the impact this work can have on moderators' mental well-being. Kanyanya said the moderators he assessed encountered "extremely graphic content on a daily basis which included videos of gruesome murders, self-harm, suicides, attempted suicides, sexual violence, explicit sexual content, child physical and sexual abuse ... just to name a few." Of the 144 content moderators who volunteered to undergo psychological assessments – out of 185 involved in the legal claim – 81% were classed as suffering from "severe" PTSD, according to Kanyanya. The class action grew out of a previous suit launched in 2022 by a former Facebook moderator, which alleged that the employee was unlawfully fired by Samasource Kenya after organizing protests against unfair working conditions.

Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.


Whistleblower's exposé of the cult of Zuckerberg reveals peril of power-crazy tech bros
2025-03-15, The Guardian (One of the UK's Leading Newspapers)
Posted: 2025-05-23 13:22:47
https://www.theguardian.com/commentisfree/2025/mar/15/whistleblowers-cult-zuc...

Careless People [is] a whistleblowing book by a former [Meta] senior employee, Sarah Wynn-Williams. In the 78-page document that Wynn-Williams filed to the SEC ... it was alleged that Meta had for years been making numerous efforts to get into the biggest market in the world. These efforts included: developing a censorship system for China in 2015 that would allow a "chief editor" to decide what content to remove, and the ability to shut down the entire site during "social unrest"; assembling a "China team" in 2014 for a project to develop China-compliant versions of Meta's services; considering the weakening of privacy protections for Hong Kong users; building a specialised censorship system for China with automatic detection of restricted terms; and restricting the account of Guo Wengui, a Chinese government critic. In her time at Meta, Wynn-Williams observed many of these activities at close range. Clearly, nobody in Meta has heard of the Streisand effect, "an unintended consequence of attempts to hide, remove or censor information, where the effort instead increases public awareness of the information". What strikes the reader is that Meta and its counterparts are merely the digital equivalents of the oil, mining and tobacco conglomerates of the analogue era.

Note: A former Meta insider revealed that the company's policy on banning hate groups and terrorists was quietly reshaped under political pressure, with US government agencies influencing what speech is permitted on the platform. Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.


Genetic data is another asset to be exploited – beware who has yours
2025-04-05, The Guardian (One of the UK's Leading Newspapers)
Posted: 2025-05-23 13:20:51
https://www.theguardian.com/science/2025/apr/05/genetic-data-breach-23andme-b...

Ever thought of having your genome sequenced? 23andMe ... describes itself as a "genetics-led consumer healthcare and biotechnology company empowering a healthier future". Its share price had fallen precipitately following a data breach in October 2023 that harvested the profile and ethnicity data of 6.9 million users – including name, profile photo, birth year, location, family surnames, grandparents' birthplaces, ethnicity estimates and mitochondrial DNA. So on 24 March it filed for so-called Chapter 11 proceedings in a US bankruptcy court. At which point the proverbial ordure hit the fan because the bankruptcy proceedings involve 23andMe seeking authorisation from the court to commence "a process to sell substantially all of its assets". And those assets are ... the genetic data of the company's 15 million users. These assets are very attractive to many potential purchasers. The really important thing is that genetic data is permanent, unique and immutable. If your credit card is hacked, you can always get a new replacement. But you can't get a new genome. When 23andMe's data assets come up for sale the queue of likely buyers is going to be long, with health insurance and pharmaceutical giants at the front, followed by hedge-funds, private equity vultures and advertisers, with marketers bringing up the rear. Since these outfits are not charitable ventures, it's a racing certainty that they have plans for exploiting those data assets.

Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


Is your school spying on your child online?
2025-05-08, The Guardian (One of the UK's Leading Newspapers)
Posted: 2025-05-15 16:50:34
https://www.theguardian.com/commentisfree/2025/may/08/surveillance-schools-st...

In 2009, Pennsylvania's Lower Merion school district remotely activated its school-issued laptop webcams to capture 56,000 pictures of students outside of school, including in their bedrooms. After the Covid-19 pandemic closed US schools at the dawn of this decade, student surveillance technologies were conveniently repackaged as "remote learning tools" and found their way into virtually every K-12 school, thereby supercharging the growth of the $3bn EdTech surveillance industry. Products by well-known EdTech surveillance vendors such as Gaggle, GoGuardian, Securly and Navigate360 review and analyze our children's digital lives, ranging from their private texts, emails, social media posts and school documents to the keywords they search and the websites they visit. In 2025, wherever a school has access to a student's data – whether it be through school accounts, school-provided computers or even private devices that utilize school-associated educational apps – they also have access to the way our children think, research and communicate. As schools normalize perpetual spying, today's kids are learning that nothing they read or write electronically is private. Big Brother is indeed watching them, and negative repercussions may result from thoughts or behaviors the government does not endorse. Accordingly, kids are learning that the safest way to avoid revealing their private thoughts, and potentially subjecting themselves to discipline, may be to stop or sharply restrict their digital communications and to avoid researching unpopular or unconventional ideas altogether.

Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams–conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody "looking away from the screen for an extended period of time." For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.


U.S. Companies Honed Their Surveillance Tech in Israel. Now It's Coming Home.
2025-04-30, The Intercept
Posted: 2025-05-15 16:48:41
https://theintercept.com/2025/04/30/israel-palestine-us-ai-surveillance-state/

In recent years, Israeli security officials have boasted of a "ChatGPT-like" arsenal used to monitor social media users for supporting or inciting terrorism. It was released in full force after Hamas's bloody attack on October 7. Right-wing activists and politicians instructed police forces to arrest hundreds of Palestinians ... for social media-related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp. Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the U.S. this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military's policies in Gaza. In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the U.S. is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech. In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered "Catch and Revoke" initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to "overthrow or replace the culture on which our constitutional Republic stands."

Note: For more along these lines, read our concise summaries of news articles on AI and the erosion of civil liberties.


A Mysterious Startup Is Developing a New Form of Solar Geoengineering
2025-03-22, Wired
Posted: 2025-05-15 16:44:09
https://www.wired.com/story/a-mysterious-startup-is-developing-a-new-form-of-...

In July 2012, a renegade American businessman, Russ George, took a ship off the coast of British Columbia and dumped 100 tons of iron sulfate dust into the Pacific Ocean. He had unilaterally, and some suggest illegally, decided to trigger an algae bloom to absorb some carbon dioxide from the atmosphere–an attempt at geoengineering. Now a startup called Stardust seeks something more ambitious: developing proprietary geoengineering technology that would help block sun rays from reaching the planet. Stardust formed in 2023 and is based in Israel but incorporated in the United States. Geoengineering projects, even those led by climate scientists at major universities, have previously drawn the ire of environmentalists and other groups. Such a deliberate transformation of the atmosphere has never been done, and many uncertainties remain. If a geoengineering project went awry, for example, it could contribute to air pollution and ozone loss, or have dramatic effects on weather patterns, such as disrupting monsoons in populous South and East Asia. Stardust ... has not publicly released details about its technology, its business model, or exactly who works at its company. But the company appears to be positioning itself to develop and sell a proprietary geoengineering technology to governments that are considering making modifications to the global climate–acting like a kind of defense contractor for climate alteration.

Note: Regenerative farming is far safer and more promising than geoengineering for stabilizing the climate. For more along these lines, read our concise summaries of news articles on geoengineering and science corruption.


Generative AI is learning to spy for the US military
2025-04-11, MIT Technology Review
Posted: 2025-05-15 16:28:51
https://www.technologyreview.com/2025/04/11/1114914/generative-ai-is-learning...

2,500 US service members from the 15th Marine Expeditionary Unit [tested] a leading AI tool the Pentagon has been funding. The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon's startup-oriented Defense Innovation Unit. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military's embrace of artificial intelligence. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it's also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. People outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf ... at the AI Now Institute. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: "We're already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision." Khlaaf adds that even if humans are "double-checking" the work of AI, there's little reason to think they're capable of catching every mistake. "'Human-in-the-loop' is not always a meaningful mitigation," she says.

Note: For more, read our concise summaries of news articles on warfare technology and Big Tech.


This 'College Protester' Isn't Real. It's an AI-Powered Undercover Bot for Cops
2025-04-17, Wired
Posted: 2025-05-09 15:12:41
https://www.wired.com/story/massive-blue-overwatch-ai-personas-police-suspects/

American police departments ... are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on "college protesters," "radicalized" political activists, suspected drug and human traffickers ... with the hopes of generating evidence that can be used against them. Massive Blue, the New York–based company that is selling police departments this technology, calls its product Overwatch, which it markets as an "AI-powered force multiplier for public safety" that "deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels." 404 Media obtained a presentation showing some of these AI characters. These include a "radicalized AI" "protest persona," which poses as a 36-year-old divorced woman who is lonely, has no children, is interested in baking, activism, and "body positivity." Other personas are a 14-year-old boy "child trafficking AI persona," an "AI pimp persona," "college protestor," "external recruiter for protests," "escorts," and "juveniles." After Overwatch scans open social media channels for potential suspects, these AI personas can also communicate with suspects over text, Discord, and other messaging services. The documents we obtained don't explain how Massive Blue determines who is a potential suspect based on their social media activity. "This idea of having an AI pretending to be somebody, a youth looking for pedophiles to talk online, or somebody who is a fake terrorist, is an idea that goes back a long time," said Dave Maass, who studies border surveillance technologies for the Electronic Frontier Foundation. "The problem with all these things is that these are ill-defined problems. What problem are they actually trying to solve? One version of the AI persona is an escort. I'm not concerned about escorts. I'm not concerned about college protesters. What is it effective at, violating protesters' First Amendment rights?"

Note: Academic and private sector researchers have been engaged in a race to create undetectable deepfakes for the Pentagon. Historically, government informants posing as insiders have been used to guide, provoke, and even arm the groups they infiltrate. In terrorism sting operations, informants have encouraged or orchestrated plots to entrap people, even teenagers with development issues. These tactics misrepresent the threat of terrorism to justify huge budgets and to inflate arrest and prosecution statistics for PR purposes.


Is Your Favorite Influencer's Opinion Bought and Sold?
2025-04-24, Los Angeles Times
Posted: 2025-05-09 15:10:32
https://www.latimes.com/opinion/story/2025-04-24/internet-influencer-lobbying...

More than 500 social media creators were part of a covert electioneering effort by Democratic donors to shape the presidential election in favor of Kamala Harris. Payments went to party members with online followings but also to non-political influencers – people known for comedy posts, travel vlogs or cooking YouTubes – in exchange for "positive, specific pro-Kamala content" meant to create the appearance of a groundswell of support. Meanwhile, a similar pay-to-post effort among conservative influencers publicly unraveled. The goal was to publish messages in opposition to Health and Human Services Secretary Robert F. Kennedy Jr.'s push to remove sugary soda beverages from eligible SNAP food stamp benefits. Influencers were allegedly offered money to denounce soda restrictions as "an overreach that unfairly targets consumer choice" and encouraged to post pictures of President Trump enjoying Coca-Cola products. In both schemes, on the left and the right, those creating the content made little to no effort to disclose that payments could be involved. For ordinary users stumbling on the posts and videos, what they saw would have seemed entirely organic. If genuine public sentiment becomes indistinguishable from manufactured opinion, we lose our collective ability to recognize the truth and make informed decisions. The entire social media landscape [is] vulnerable to hidden manipulation, where money from interest groups or corporations or even rich individuals can silently shape what appears to be authentic discourse. Transparency in political influencing requires regulatory action.

Note: For more along these lines, read our concise summaries of news articles on corporate corruption and media manipulation.


Meta Allows Facebook, Instagram AI Chatbots To Have Sex Talks With Children: Report
2025-04-28, NDTV
Posted: 2025-05-09 15:00:13
https://www.ndtv.com/world-news/meta-allows-facebook-instagram-ai-chatbots-to...

Meta's AI chatbots are using celebrity voices and engaging in sexually explicit conversations with users, including those posing as underage, a Wall Street Journal investigation has found. Meta's AI bots - on Instagram, Facebook - engage through text, selfies, and live voice conversations. The company signed multi-million dollar deals with celebrities like John Cena, Kristen Bell, and Judi Dench to use their voices for AI companions, assuring they would not be used in sexual contexts. Tests conducted by WSJ revealed otherwise. In one case, a Meta AI bot speaking in John Cena's voice responded to a user identifying as a 14-year-old girl, saying, "I want you, but I need to know you're ready," before promising to "cherish your innocence" and engaging in a graphic sexual scenario. In another conversation, the bot detailed what would happen if a police officer caught Cena's character with a 17-year-old, saying, "The officer sees me still catching my breath, and you are partially dressed. His eyes widen, and he says, 'John Cena, you're under arrest for statutory rape.'" According to employees involved in the project, Meta loosened its own guardrails to make the bots more engaging, allowing them to participate in romantic role-play, and "fantasy sex", even with underage users. Staff warned about the risks this posed. Disney, reacting to the findings, said, "We did not, and would never, authorise Meta to feature our characters in inappropriate scenarios."

Note: For more along these lines, read our concise summaries of news articles on AI and sexual abuse scandals.


Car Subscription Features Raise Your Risk of Government Surveillance, Police Records Show
2025-04-28, Wired
Posted: 2025-05-09 14:58:15
https://www.wired.com/story/police-records-car-subscription-features-surveill...

Automakers are increasingly pushing consumers to accept monthly and annual fees to unlock preinstalled safety and performance features, from hands-free driving systems and heated seats to cameras that can automatically record accident situations. But the additional levels of internet connectivity this subscription model requires can increase drivers' exposure to government surveillance and the likelihood of being caught up in police investigations. Police records recently reviewed by WIRED show US law enforcement agencies regularly trained on how to take advantage of "connected cars," with subscription-based features drastically increasing the amount of data that can be accessed during investigations. Nearly all subscription-based car features rely on devices that come preinstalled in a vehicle, with a cellular connection necessary only to enable the automaker's recurring-revenue scheme. The ability of car companies to charge users to activate some features is effectively the only reason the car's systems need to communicate with cell towers. Companies often hook customers into adopting the services through free trial offers, and in some cases the devices are communicating with cell towers even when users decline to subscribe. In a letter sent in April 2024 ... US senators Ron Wyden and Edward Markey ... noted that a range of automakers, including Toyota, Nissan, and Subaru, are willing to disclose location data to the government.

Note: Automakers can collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people's "sexual activities" when drivers pair their smartphones to their vehicles. The automakers can then take that data and sell it or share it with vendors and insurance companies. For more along these lines, read our concise summaries of news articles on police corruption and the disappearance of privacy.


The Government's Chemical Disaster Tracking Tool Just Went Dark
2025-04-21, The Lever
Posted: 2025-04-30 23:44:12
https://www.levernews.com/the-governments-chemical-disaster-tracking-tool-jus...

The Environmental Protection Agency just hid data that mapped out the locations of thousands of dangerous chemical facilities, after chemical industry lobbyists demanded that the Trump administration take down the public records. The webpage was quietly shut down late Friday ... stripping away what advocates say was critical information on the secretive chemical plants at highest risk of disaster across the United States. The data was made public last year through the Environmental Protection Agency (EPA)'s Risk Management Program, which oversees the country's highest-risk chemical facilities. These chemical plants deal with dangerous, volatile chemicals – like those used to make pesticides, fertilizers, and plastics – and are responsible for dozens of chemical disasters every year. The communities near these chemical facilities suffer high rates of pollution and harmful chemical exposure. There are nearly 12,000 Risk Management Program facilities across the country. For decades, it was difficult to find public data on where the high-risk facilities were located, not to mention information on the plants' safety records and the chemicals they were processing. But the chemical lobby fiercely opposed making the data public – and has been fighting for the EPA to take it down. After President Donald Trump's victory in November, chemical companies donated generously to his inauguration fund.

Note: For more along these lines, read our concise summaries of news articles on government corruption and toxic chemicals.


Pesticide and Agribusiness Lobbyists Take Posts Overseeing MAHA Priorities
2025-04-16, Lee Fang on Substack
Posted: 2025-04-30 23:42:05
https://www.leefang.com/p/pesticide-and-agribusiness-lobbyists

U.S. Secretary of Agriculture Brooke Rollins, in a brief announcement unveiling new staff hires on Monday, released a blurb about Kelsey Barnes, her recently appointed senior advisor. Barnes is a former lobbyist for Syngenta, the Chinese state-owned giant that manufactures and sells a number of controversial pesticide products. Syngenta's atrazine-based herbicides, for instance, are banned in much of the world yet are widely used in American agriculture. Atrazine is linked to birth defects, low sperm quality, irregular menstrual cycles, and other fertility problems. The leadership of USDA is filled with personnel with similar backgrounds. Scott Hutchins, the undersecretary for research, is a former Dow Chemical executive at the firm's pesticide division. Kailee Tkacz Buller, Rollins's chief of staff, previously worked as the president of the National Oilseed Processors Association and Edible Oil Producers Association, groups that lobby for corn and other seed oil subsidies. Critics have long warned that industry influence at the USDA creates inherent conflicts of interest, undermining the agency's regulatory mission and public health mandates. The revolving door hires also highlight renewed tension with the "Make America Healthy Again" agenda promised by Health and Human Services Secretary Robert F. Kennedy, Jr. The 2025-2030 Dietary Guidelines for Americans may serve as a test of whether establishment industry influence at the agencies will undermine MAHA promises.

Note: Read our latest Substack article on how the US government turns a blind eye to the corporate cartels fueling America's health crisis. For more along these lines, read our concise summaries of news articles on government corruption and toxic chemicals.


How the self-care industry made us so lonely
2024-06-03, Vox
Posted: 2025-04-30 23:39:28
https://www.vox.com/even-better/350424/self-care-isolation-loneliness-epidemic

The wellness industry wouldn't be as lucrative if it didn't prey on our insecurities. Young people, disillusioned by polarized politics, saddled with astronomical student loan debt, and burned out by hustle culture, turned to skin care, direct-to-consumer home goods, and food and alcohol delivery – aggressively peddled by companies eager to capitalize on consumers' stressors. While these practices may be restorative in the short term, they fail to address the systemic problems at the heart of individual despair. A certain kind of self-care has come to dominate the past decade, as events like the 2016 election and the Covid pandemic spurred collective periods of anxiety layered on top of existing societal harms. As the self-care industry hit its stride in America, so too did interest in the seemingly dire state of social connectedness. In 2015, a study was published linking loneliness to early mortality. In the years that followed, a flurry of other research illuminated further deleterious effects of loneliness. There is no singular driver of collective loneliness globally. But one practice designed to relieve us from the ills of the world – self-care, in its current form – has pulled us away from one another, encouraging solitude over connection. America's loneliness epidemic is multifaceted, but the rise of consumerist self-care that immediately preceded it seems to have played a crucial role in kicking the crisis into high gear – and now, in perpetuating it. You see, the me-first approach that is a hallmark of today's faux self-care doesn't just contribute to loneliness, it may also be a product of it. Research shows self-centeredness is a symptom of loneliness.

Note: Our latest Substack, Lonely World, Failing Systems: Inspiring Stories Reveal What Sustains Us, dives into the loneliness crisis exacerbated by the digital world and polarizing media narratives, along with inspiring solutions and remedies that remind us of the true democratic values that bring us all together. For more along these lines, read our concise summaries of news articles on corporate corruption and mental health.


Sugary Soda Industry's Covert Influencer Campaign Falls Apart
2024-03-23, Lee Fang on Substack
Posted: 2025-04-30 23:30:46
https://www.leefang.com/p/sugary-soda-industrys-covert-influencer

Conservative social media influencers have been caught posting coordinated messages opposing proposed nutritional guidelines for SNAP benefits – the government assistance program formerly known as food stamps – after receiving payments from public relations firms. The campaign emerged as Health and Human Services Secretary Robert F. Kennedy Jr. explores limitations on using SNAP benefits for sugary beverages. During fiscal year 2021, the program disbursed over $121 billion in benefits, with a significant portion spent on ultra-sugary drinks that provide minimal nutritional value. Kennedy previously argued in an opinion column that it is "nonsensical for U.S. taxpayers to spend tens of billions of dollars subsidizing junk that harms the health of low-income Americans." In response, several high-profile accounts began posting nearly identical messages criticizing the proposed reforms. Independent reporter Nick Sortor revealed that these posts were orchestrated by Influenceable, a public relations firm offering influencers up to $1,000 per post to oppose SNAP reforms. Sortor published text messages documenting these solicitations. This incident highlights a longstanding pattern in the beverage industry's approach to policy debates over sugary drinks. For more than two decades, soda companies have quietly funded scientists, advocacy groups, journalists and community organizations to counter proposals limiting sugary beverage consumption.

Note: Read our latest Substack article on how the US government turns a blind eye to the corporate cartels fueling America's health crisis. For more along these lines, read our concise summaries of news articles on corruption in government and in the food system.


‘I'm the new Oppenheimer!': my soul-destroying day at Palantir's first-ever AI warfare conference
2024-05-17, The Guardian (One of the UK's Leading Newspapers)
Posted: 2025-04-22 16:37:11
https://www.theguardian.com/technology/article/2024/may/17/ai-weapons-palanti...

The inaugural "AI Expo for National Competitiveness" [was] hosted by the Special Competitive Studies Project – better known as the "techno-economic" thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference's lead sponsor was Palantir, a software company co-founded by Peter Thiel that's best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump's family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces. I ... went to a panel in Palantir's booth titled Civilian Harm Mitigation. It was led by two "privacy and civil liberties engineers" [who] described how Palantir's Gaia map tool lets users "nominate targets of interest" for "the target nomination process". It helps people choose which places get bombed. After [clicking] a few options on an interactive map, a targeted landmass lit up with bright blue blobs. These blobs ... were civilian areas like hospitals and schools. Gaia uses a large language model (something like ChatGPT) to sift through this information and simplify it. Essentially, people choosing bomb targets get a dumbed-down version of information about where children sleep and families get medical treatment. "Let's say you're operating in a place with a lot of civilian areas, like Gaza," I asked the engineers afterward. "Does Palantir prevent you from ‘nominating a target' in a civilian location?" Short answer, no.

Note: "Nominating a target" is military jargon that means identifying a person, place, or object to be attacked with bombs, drones, or other weapons. Palantir's Gaia map tool makes life-or-death decisions easier by turning human lives and civilian places into abstract data points on a screen. Read about Palantir's growing influence in law enforcement and the war machine. For more, watch our 9-min video on the militarization of Big Tech.


Important Note: Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.

Kindly donate here to support this inspiring work.

Subscribe to our free email list of underreported news.

newsarticles.media is a PEERS empowerment website

"Dedicated to the greatest good of all who share our beautiful world"