Corporate Corruption News Stories
Excerpts of Key Corporate Corruption News Stories in Major Media
Below are key excerpts of revealing news articles on corporate corruption from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Note: This comprehensive list of news stories is usually updated once a week. Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.
In 2009, Pennsylvania's Lower Merion school district remotely activated its school-issued laptop webcams to capture 56,000 pictures of students outside of school, including in their bedrooms. After the Covid-19 pandemic closed US schools at the dawn of this decade, student surveillance technologies were conveniently repackaged as "remote learning tools" and found their way into virtually every K-12 school, thereby supercharging the growth of the $3bn EdTech surveillance industry. Products by well-known EdTech surveillance vendors such as Gaggle, GoGuardian, Securly and Navigate360 review and analyze our children's digital lives, ranging from their private texts, emails, social media posts and school documents to the keywords they search and the websites they visit. In 2025, wherever a school has access to a student's data – whether it be through school accounts, school-provided computers or even private devices that utilize school-associated educational apps – it also has access to the way our children think, research and communicate. As schools normalize perpetual spying, today's kids are learning that nothing they read or write electronically is private, that Big Brother is indeed watching them, and that negative repercussions may result from thoughts or behaviors the government does not endorse. Accordingly, kids are learning that the safest way to avoid revealing their private thoughts, and potentially subjecting themselves to discipline, may be to stop or sharply restrict their digital communications and to avoid researching unpopular or unconventional ideas altogether.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams–conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody "looking away from the screen for an extended period of time." For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
In recent years, Israeli security officials have boasted of a "ChatGPT-like" arsenal used to monitor social media users for supporting or inciting terrorism. It was released in full force after Hamas's bloody attack on October 7. Right-wing activists and politicians instructed police forces to arrest hundreds of Palestinians ... for social media-related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp. Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the U.S. this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military's policies in Gaza. In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the U.S. is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech. In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered "Catch and Revoke" initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to "overthrow or replace the culture on which our constitutional Republic stands."
Note: For more along these lines, read our concise summaries of news articles on AI and the erosion of civil liberties.
In July 2012, a renegade American businessman, Russ George, took a ship off the coast of British Columbia and dumped 100 tons of iron sulfate dust into the Pacific Ocean. He had unilaterally, and some suggest illegally, decided to trigger an algae bloom to absorb some carbon dioxide from the atmosphere–an attempt at geoengineering. Now a startup called Stardust seeks something more ambitious: developing proprietary geoengineering technology that would help block sun rays from reaching the planet. Stardust formed in 2023 and is based in Israel but incorporated in the United States. Geoengineering projects, even those led by climate scientists at major universities, have previously drawn the ire of environmentalists and other groups. Such a deliberate transformation of the atmosphere has never been done, and many uncertainties remain. If a geoengineering project went awry, for example, it could contribute to air pollution and ozone loss, or have dramatic effects on weather patterns, such as disrupting monsoons in populous South and East Asia. Stardust ... has not publicly released details about its technology, its business model, or exactly who works at its company. But the company appears to be positioning itself to develop and sell a proprietary geoengineering technology to governments that are considering making modifications to the global climate–acting like a kind of defense contractor for climate alteration.
Note: Regenerative farming is far safer and more promising than geoengineering for stabilizing the climate. For more along these lines, read our concise summaries of news articles on geoengineering and science corruption.
2,500 US service members from the 15th Marine Expeditionary Unit [tested] a leading AI tool the Pentagon has been funding. The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon's startup-oriented Defense Innovation Unit. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military's embrace of artificial intelligence. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it's also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. People outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf ... at the AI Now Institute. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: "We're already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision." Khlaaf adds that even if humans are "double-checking" the work of AI, there's little reason to think they're capable of catching every mistake. "'Human-in-the-loop' is not always a meaningful mitigation," she says.
Note: For more, read our concise summaries of news articles on warfare technology and Big Tech.
American police departments ... are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on "college protesters," "radicalized" political activists, suspected drug and human traffickers ... with the hopes of generating evidence that can be used against them. Massive Blue, the New York–based company that is selling police departments this technology, calls its product Overwatch, which it markets as an "AI-powered force multiplier for public safety" that "deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels." 404 Media obtained a presentation showing some of these AI characters. These include a "radicalized AI" "protest persona," which poses as a 36-year-old divorced woman who is lonely, has no children, is interested in baking, activism, and "body positivity." Other personas are a 14-year-old boy "child trafficking AI persona," an "AI pimp persona," "college protestor," "external recruiter for protests," "escorts," and "juveniles." After Overwatch scans open social media channels for potential suspects, these AI personas can also communicate with suspects over text, Discord, and other messaging services. The documents we obtained don't explain how Massive Blue determines who is a potential suspect based on their social media activity. "This idea of having an AI pretending to be somebody, a youth looking for pedophiles to talk online, or somebody who is a fake terrorist, is an idea that goes back a long time," said Dave Maass, who studies border surveillance technologies for the Electronic Frontier Foundation. "The problem with all these things is that these are ill-defined problems. What problem are they actually trying to solve? One version of the AI persona is an escort. I'm not concerned about escorts. I'm not concerned about college protesters. What is it effective at, violating protesters' First Amendment rights?"
Note: Academic and private sector researchers have been engaged in a race to create undetectable deepfakes for the Pentagon. Historically, government informants posing as insiders have been used to guide, provoke, and even arm the groups they infiltrate. In terrorism sting operations, informants have encouraged or orchestrated plots to entrap people, even teenagers with developmental issues. These tactics misrepresent the threat of terrorism to justify huge budgets and to inflate arrest and prosecution statistics for PR purposes.
More than 500 social media creators were part of a covert electioneering effort by Democratic donors to shape the presidential election in favor of Kamala Harris. Payments went to party members with online followings but also to non-political influencers – people known for comedy posts, travel vlogs or cooking YouTubes – in exchange for "positive, specific pro-Kamala content" meant to create the appearance of a groundswell of support. Meanwhile, a similar pay-to-post effort among conservative influencers publicly unraveled. The goal was to publish messages in opposition to Health and Human Services Secretary Robert F. Kennedy Jr.'s push to remove sugary soda beverages from eligible SNAP food stamp benefits. Influencers were allegedly offered money to denounce soda restrictions as "an overreach that unfairly targets consumer choice" and encouraged to post pictures of President Trump enjoying Coca-Cola products. In both schemes, on the left and the right, those creating the content made little to no effort to disclose that payments could be involved. For ordinary users stumbling on the posts and videos, what they saw would have seemed entirely organic. If genuine public sentiment becomes indistinguishable from manufactured opinion, we lose our collective ability to recognize the truth and make informed decisions. The entire social media landscape [is] vulnerable to hidden manipulation, where money from interest groups or corporations or even rich individuals can silently shape what appears to be authentic discourse. Transparency in political influencing requires regulatory action.
Note: For more along these lines, read our concise summaries of news articles on corporate corruption and media manipulation.
Meta's AI chatbots are using celebrity voices and engaging in sexually explicit conversations with users, including those posing as underage, a Wall Street Journal investigation has found. Meta's AI bots on Instagram and Facebook engage through text, selfies, and live voice conversations. The company signed multi-million dollar deals with celebrities like John Cena, Kristen Bell, and Judi Dench to use their voices for AI companions, assuring they would not be used in sexual contexts. Tests conducted by WSJ revealed otherwise. In one case, a Meta AI bot speaking in John Cena's voice responded to a user identifying as a 14-year-old girl, saying, "I want you, but I need to know you're ready," before promising to "cherish your innocence" and engaging in a graphic sexual scenario. In another conversation, the bot detailed what would happen if a police officer caught Cena's character with a 17-year-old, saying, "The officer sees me still catching my breath, and you are partially dressed. His eyes widen, and he says, 'John Cena, you're under arrest for statutory rape.'" According to employees involved in the project, Meta loosened its own guardrails to make the bots more engaging, allowing them to participate in romantic role-play and "fantasy sex", even with underage users. Staff warned about the risks this posed. Disney, reacting to the findings, said, "We did not, and would never, authorise Meta to feature our characters in inappropriate scenarios."
Note: For more along these lines, read our concise summaries of news articles on AI and sexual abuse scandals.
Automakers are increasingly pushing consumers to accept monthly and annual fees to unlock preinstalled safety and performance features, from hands-free driving systems and heated seats to cameras that can automatically record accident situations. But the additional levels of internet connectivity this subscription model requires can increase drivers' exposure to government surveillance and the likelihood of being caught up in police investigations. Police records recently reviewed by WIRED show US law enforcement agencies regularly trained on how to take advantage of "connected cars," with subscription-based features drastically increasing the amount of data that can be accessed during investigations. Nearly all subscription-based car features rely on devices that come preinstalled in a vehicle, with a cellular connection necessary only to enable the automaker's recurring-revenue scheme. The ability of car companies to charge users to activate some features is effectively the only reason the car's systems need to communicate with cell towers. Companies often hook customers into adopting the services through free trial offers, and in some cases the devices are communicating with cell towers even when users decline to subscribe. In a letter sent in April 2024 ... US senators Ron Wyden and Edward Markey ... noted that a range of automakers, including Toyota, Nissan, and Subaru, are willing to disclose location data to the government.
Note: Automakers can collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people's "sexual activities" when drivers pair their smartphones to their vehicles. The automakers can then take that data and sell it or share it with vendors and insurance companies. For more along these lines, read our concise summaries of news articles on police corruption and the disappearance of privacy.
The Environmental Protection Agency just hid data that mapped out the locations of thousands of dangerous chemical facilities, after chemical industry lobbyists demanded that the Trump administration take down the public records. The webpage was quietly shut down late Friday ... stripping away what advocates say was critical information on the secretive chemical plants at highest risk of disaster across the United States. The data was made public last year through the Environmental Protection Agency (EPA)'s Risk Management Program, which oversees the country's highest-risk chemical facilities. These chemical plants deal with dangerous, volatile chemicals – like those used to make pesticides, fertilizers, and plastics – and are responsible for dozens of chemical disasters every year. The communities near these chemical facilities suffer high rates of pollution and harmful chemical exposure. There are nearly 12,000 Risk Management Program facilities across the country. For decades, it was difficult to find public data on where the high-risk facilities were located, not to mention information on the plants' safety records and the chemicals they were processing. But the chemical lobby fiercely opposed making the data public – and has been fighting for the EPA to take it down. After President Donald Trump's victory in November, chemical companies donated generously to his inauguration fund.
Note: For more along these lines, read our concise summaries of news articles on government corruption and toxic chemicals.
U.S. Secretary of Agriculture Brooke Rollins, in a brief announcement unveiling new staff hires on Monday, released a blurb about Kelsey Barnes, her recently appointed senior advisor. Barnes is a former lobbyist for Syngenta, the Chinese state-owned giant that manufactures and sells a number of controversial pesticide products. Syngenta's atrazine-based herbicide, for instance, is banned in much of the world yet is widely used in American agriculture. It is linked to birth defects, low sperm quality, irregular menstrual cycles, and other fertility problems. The leadership of USDA is filled with personnel with similar backgrounds. Scott Hutchins, the undersecretary for research, is a former Dow Chemical executive at the firm's pesticide division. Kailee Tkacz Buller, Rollins's chief of staff, previously worked as the president of the National Oilseed Processors Association and Edible Oil Producers Association, groups that lobby for corn and other seed oil subsidies. Critics have long warned that industry influence at the USDA creates inherent conflicts of interest, undermining the agency's regulatory mission and public health mandates. The revolving door hires also highlight renewed tension with the "Make America Healthy Again" agenda promised by Health and Human Services Secretary Robert F. Kennedy, Jr. The 2025-2030 Dietary Guidelines for Americans may serve as a test of whether establishment industry influence at the agencies will undermine MAHA promises.
Note: Read our latest Substack article on how the US government turns a blind eye to the corporate cartels fueling America's health crisis. For more along these lines, read our concise summaries of news articles on government corruption and toxic chemicals.
The wellness industry wouldn't be as lucrative if it didn't prey on our insecurities. Young people, disillusioned by polarized politics, saddled with astronomical student loan debt, and burned out by hustle culture, turned to skin care, direct-to-consumer home goods, and food and alcohol delivery – aggressively peddled by companies eager to capitalize on consumers' stressors. While these practices may be restorative in the short term, they fail to address the systemic problems at the heart of individual despair. A certain kind of self-care has come to dominate the past decade, as events like the 2016 election and the Covid pandemic spurred collective periods of anxiety layered on top of existing societal harms. As the self-care industry hit its stride in America, so too did interest in the seemingly dire state of social connectedness. In 2015, a study was published linking loneliness to early mortality. In the years that followed, a flurry of other research illuminated further deleterious effects of loneliness. There is no singular driver of collective loneliness globally. But one practice designed to relieve us from the ills of the world – self-care, in its current form – has pulled us away from one another, encouraging solitude over connection. America's loneliness epidemic is multifaceted, but the rise of consumerist self-care that immediately preceded it seems to have played a crucial role in kicking the crisis into high gear – and now, in perpetuating it. You see, the me-first approach that is a hallmark of today's faux self-care doesn't just contribute to loneliness, it may also be a product of it. Research shows self-centeredness is a symptom of loneliness.
Note: Our latest Substack, Lonely World, Failing Systems: Inspiring Stories Reveal What Sustains Us, dives into the loneliness crisis exacerbated by the digital world and polarizing media narratives, along with inspiring solutions and remedies that remind us of the true democratic values that bring us all together. For more along these lines, read our concise summaries of news articles on corporate corruption and mental health.
Conservative social media influencers have been caught posting coordinated messages opposing proposed nutritional guidelines for SNAP benefits–the government assistance program formerly known as food stamps–after receiving payments from public relations firms. The campaign emerged as Health and Human Services Secretary Robert F. Kennedy Jr. explores limitations on using SNAP benefits for sugary beverages. During fiscal year 2021, the program disbursed over $121 billion in benefits, with a significant portion spent on ultra-sugary drinks that provide minimal nutritional value. Kennedy previously argued in an opinion column that it is "nonsensical for U.S. taxpayers to spend tens of billions of dollars subsidizing junk that harms the health of low-income Americans." In response, several high-profile accounts began posting nearly identical messages criticizing the proposed reforms. Independent reporter Nick Sortor revealed that these posts were orchestrated by Influenceable, a public relations firm offering influencers up to $1,000 per post to oppose SNAP reforms. Sortor published text messages documenting these solicitations. This incident highlights a longstanding pattern in the beverage industry's approach to policy debates over sugary drinks. For more than two decades, soda companies have quietly funded scientists, advocacy groups, journalists and community organizations to counter proposals limiting sugary beverage consumption.
Note: Read our latest Substack article on how the US government turns a blind eye to the corporate cartels fueling America's health crisis. For more along these lines, read our concise summaries of news articles on corruption in government and in the food system.
The inaugural "AI Expo for National Competitiveness" [was] hosted by the Special Competitive Studies Project – better known as the "techno-economic" thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference's lead sponsor was Palantir, a software company co-founded by Peter Thiel that's best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump's family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces. I ... went to a panel in Palantir's booth titled Civilian Harm Mitigation. It was led by two "privacy and civil liberties engineers" [who] described how Palantir's Gaia map tool lets users "nominate targets of interest" for "the target nomination process". It helps people choose which places get bombed. After [clicking] a few options on an interactive map, a targeted landmass lit up with bright blue blobs. These blobs ... were civilian areas like hospitals and schools. Gaia uses a large language model (something like ChatGPT) to sift through this information and simplify it. Essentially, people choosing bomb targets get a dumbed-down version of information about where children sleep and families get medical treatment. "Let's say you're operating in a place with a lot of civilian areas, like Gaza," I asked the engineers afterward. "Does Palantir prevent you from 'nominating a target' in a civilian location?" Short answer, no.
Note: "Nominating a target" is military jargon that means identifying a person, place, or object to be attacked with bombs, drones, or other weapons. Palantir's Gaia map tool makes life-or-death decisions easier by turning human lives and civilian places into abstract data points on a screen. Read about Palantir's growing influence in law enforcement and the war machine. For more, watch our 9-min video on the militarization of Big Tech.
Skydio, with more than $740m in venture capital funding and a valuation of about $2.5bn, makes drones for the military along with civilian organisations such as police forces and utility companies. The company moved away from the consumer market in 2020 and is now the largest US drone maker. Military uses touted on its website include gaining situational awareness on the battlefield and autonomously patrolling bases. Skydio is one of a number of new military technology unicorns – venture capital-backed startups valued at more than $1bn – many led by young men aiming to transform the US and its allies' military capabilities with advanced technology, be it straight-up software or software-imbued hardware. The rise of startups doing defence tech is a "big trend", says Cynthia Cook, a defence expert at the Center for Strategic and International Studies, a Washington-based thinktank. She likens it to a contagion – and the bug is going around. According to financial data company PitchBook, investors funnelled nearly $155bn globally into defence tech startups between 2021 and 2024, up from $58bn over the previous four years. The US has more than 1,000 venture capital-backed companies working on "smarter, faster and cheaper" defence, says Dale Swartz from consultancy McKinsey. The types of technologies the defence upstarts are working on are many and varied, though autonomy and AI feature heavily.
Note: For more, watch our 9-min video on the militarization of Big Tech.
Palantir is profiting from a "revolving door" of executives and officials passing between the $264bn data intelligence company and high-level positions in Washington and Westminster, creating an influence network that has guided its extraordinary growth. The US group, whose billionaire chair Peter Thiel has been a key backer of Donald Trump, has enjoyed an astonishing stock price rally on the back of a strong rise in sales from government contracts and deals with the world's largest corporations. Palantir has hired extensively from government agencies critical to its sales. Palantir has won more than $2.7bn in US contracts since 2009, including over $1.3bn in Pentagon contracts, according to federal records. In the UK, Palantir has been awarded more than £376mn in contracts, according to Tussell, a data provider. Thiel threw a celebration party for Trump's inauguration at his DC home last month, attended by Vance as well as Silicon Valley leaders like Meta's Mark Zuckerberg and OpenAI's Sam Altman. After the US election in November, Trump began tapping Palantir executives for key government roles. At least six individuals have moved between Palantir and the Pentagon's Chief Digital and Artificial Intelligence Office (CDAO), an office that oversees the defence department's adoption of data, analytics and AI. Meanwhile, [Palantir co-founder] Joe Lonsdale ... has played a central role in setting up and staffing Musk's Department of Government Efficiency.
Note: Read about Palantir's growing influence in law enforcement and the war machine. For more, read our concise summaries of news articles on corruption in the military and in the corporate world.
The US spy tech company Palantir has been in talks with the Ministry of Justice about using its technology to calculate prisoners' "reoffending risks", it has emerged. The prisons minister, James Timpson, received a letter three weeks after the general election from a Palantir executive who said the firm was one of the world's leading software companies, and was working at the forefront of artificial intelligence (AI). Palantir had been in talks with the MoJ and the Prison Service about how "secure information sharing and data analytics can alleviate prison challenges and enable a granular understanding of reoffending and associated risks", the executive added. The discussions ... are understood to have included proposals by Palantir to analyse prison capacity, and to use data held by the state to understand trends relating to reoffending. This would be based on aggregating data to identify and act on trends, factoring in drivers such as income or addiction problems. However, Amnesty International UK's business and human rights director, Peter Frankental, has expressed concern. "It's deeply worrying that Palantir is trying to seduce the new government into a so-called brave new world where public services may be run by unaccountable bots at the expense of our rights," he said. "Ministers need to push back against any use of artificial intelligence in the criminal justice, prison and welfare systems that could lead to people being discriminated against."
Note: Read about Palantir's growing influence in law enforcement and the war machine. For more, read our concise summaries of news articles on corruption in the prison system and in the corporate world.
Outer space is no longer just for global superpowers and large multinational corporations. Developing countries, start-ups, universities, and even high schools can now gain access to space. In 2024, a record 2,849 objects were launched into space. The commercial satellite industry saw global revenue rise to $285 billion in 2023, driven largely by the growth of SpaceX's Starlink constellation. While the democratization of space is a positive development, it has introduced ... an ethical quandary that I call the "double dual-use dilemma." The double dual-use dilemma refers to how private space companies themselves–not just their technologies–can become militarized and integrated into national security while operating commercially. Space companies fluidly shift between civilian and military roles. Their expertise in launch systems, satellites, and surveillance infrastructure allows them to serve both markets, often without clear regulatory oversight. Companies like Walchandnagar Industries in India, SpaceX in the United States, and the private Chinese firms that operate under a national strategy of the Chinese Communist Party called Military-Civil Fusion exemplify this trend, maintaining commercial identities while actively supporting defense programs. This blurring of roles, including the possibility that private space companies may develop their own weapons, raises concerns over unchecked militarization and calls for stronger oversight.
Note: For more along these lines, read our concise summaries of news articles on corruption in the military and in the corporate world.
On March 21, Treasury Secretary Scott Bessent announced that U.S. shell companies and their owners can once again conceal their identities – a move critics warn could weaken national security and spur illicit financial activity that puts the American public at risk. Treasury's initial beneficial ownership information (BOI) disclosure requirement for all companies with fewer than 20 employees garnered bipartisan support and Trump's approval during his first administration, but it was short-lived. Officially brought into force in January 2024, and then stymied by lawsuits, the requirement passed its final legal roadblock in February 2025 – only to be shelved a month later by the administration. Now, when a U.S. citizen sets up a shell company in the U.S., they do not have to disclose their identity or the identities of the company's "beneficial owners," or the individuals who profit from the company or control its activities. American beneficial owners of foreign shell companies that register in the U.S. have been granted the same anonymity. Under the latest limited regulation, only non-American owners will be required to register with the U.S. government. U.S. shell companies have been successfully used as cover for illegal arms sales for decades. Hints of a business's true breadth and depth only emerge when a trafficker is apprehended, such as the case of Pierre Falcone, who used secret accounts in Arizona to hide his proceeds from arms trafficking to Angola.
Note: For more along these lines, read our concise summaries of news articles on corruption in government and in the corporate world.
The U.S. Food and Drug Administration on Thursday launched an online searchable database listing contaminant levels in human foods, reflecting Health Secretary Robert F. Kennedy Jr.'s ongoing efforts to reduce chemicals in food since taking office. The FDA said if a food product has contaminants exceeding established levels, the agency may find the food to be unsafe. However, it added these levels do not represent "permissible levels of contamination". The Health Secretary has often stressed reducing chemicals in food and, in the previous week, directed the FDA to revise safety rules to help eliminate a provision allowing companies to self-affirm food ingredient safety. RFK Jr. also told food companies ... that the Trump administration wanted artificial dyes out of the food supply. The FDA said it is establishing an online database called "Chemical Contaminants Transparency Tool" to provide a list of contaminant levels called "tolerances, action levels and guidance levels" to evaluate the potential health risks of these contaminants in human foods. "Ideally there would be no contaminants in our food supply, but chemical contaminants may occur in food when they are present in the growing, storage or processing environments," said Acting FDA Commissioner Sara Brenner. The online database also provides information such as the contaminant name, commodity and contaminant level type.
Note: Read more about the growing list of toxic chemicals banned in other countries but not the US. For more along these lines, read our concise summaries of news articles on food system corruption and toxic chemicals.
Consultants assessing Covid vaccine damage claims on behalf of the NHS have been paid millions more than the victims, it has emerged. Freedom of Information requests made by The Telegraph show that US-based Crawford and Company has carried out nearly 13,000 medical assessments, but dismissed more than 98 per cent of cases. Just 203 claimants have been notified they are entitled to a one-off payment of £120,000 through the Vaccine Damage Payment Scheme (VDPS) amounting to £24,360,000. Yet Crawford and Company has received £27,264,896 for its services. Prof Richard Goldberg, chair in law at Durham University, with a special interest in vaccine liability and compensation, said: "The idea that this would be farmed out to a private company to make a determination is very odd. It's taxpayers' money and money is tight at the moment. "The lack of transparency is not helpful and there is a terrible sense of secrecy about all of this. One gets the sense that their main objective is for these cases not to succeed. "There are no stats available so we don't know the details about how these claims are being decided or whether previous judgments are being taken into account." The Hart (Health Advisory and Recovery Team) group, which was set up by medical professionals and scientists during the pandemic, has warned that Crawford and Company has a "troubling reputation with numerous reports of mismanagement and claims denials across various sectors".
Note: COVID vaccine manufacturers have total immunity from liability if people die or become injured as a result of the vaccine. Our Substack dives into the complex world of COVID vaccines with nuance and balanced investigation. For more along these lines, read our concise summaries of news articles on COVID vaccine problems.