Corporate Corruption News Articles
Below are key excerpts of revealing news articles on corporate corruption from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Note: Explore our full index to revealing excerpts of key major media news articles on dozens of engaging topics. And read excerpts from 20 of the most revealing news articles ever published.
A new study of defense department spending previewed exclusively to the Guardian shows that most of the Pentagon's discretionary spending from 2020 to 2024 has gone to outside military contractors, providing a $2.4tn boon in public funds to private firms in what was described as a "continuing and massive transfer of wealth from taxpayers to fund war and weapons manufacturing". The report from the Quincy Institute for Responsible Statecraft and Costs of War project at Brown University said that the Trump administration's new Pentagon budget will push annual US military spending past the $1tn mark. That will deliver a projected windfall of more than half a trillion dollars that will be shared among top arms firms such as Lockheed Martin and Raytheon as well as a growing military tech sector with close allies in the administration such as JD Vance, the report said. The US military budget will have nearly doubled this century, increasing 99% since 2000. "The US withdrawal from Afghanistan in September 2021 did not result in a peace dividend," the authors of the report wrote. "Instead, President Biden requested, and Congress authorized, even higher annual budgets for the Pentagon, and President Trump is continuing that same trajectory of escalating military budgets." The growth in spending will increasingly benefit firms in the "military tech" sector, represented by companies like SpaceX, Palantir and Anduril.
Note: Learn more about arms industry corruption in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on military corruption.
Unhealthy food and beverage companies powerfully undermine the eating habits of young people by deploying ubiquitous ads that encourage poor dietary choices and increase the risk of serious disease and premature death, according to a sweeping new study published in Obesity Reviews. The first-of-its-kind summary highlights a clear cumulative pattern: The more high-fat, high-sugar, and salty food ads young people see, the more of those products they consume–and the higher the risk that they may develop obesity, type 2 diabetes, and other diet-related diseases. Companies also disproportionately target adolescents, lower-income communities, and Black and Latino youth with the marketing of health-harming food and beverages. The review summarizes 25 years of scientific evidence and findings from 108 empirical studies and 19 systematic reviews of unhealthy food marketing to adolescents (13-17) and young adults (18-25). One study showed that children who watched just five minutes of food ads ate about 130 more calories that day. Only 19% of studies examined health impacts, but most of those found links between unhealthy food marketing and higher BMI, weight gain, or increased obesity risk–especially from ultra-processed foods and sugary drinks. One U.S. study ... found that children who could recall more food ads chose more food items and consumed more calories after exposure.
Note: For more along these lines, read our concise summaries of news articles on health and food system corruption.
Trust in academic research is crucial. This trust, however, could be affected by the presence of conflicts of interest (CoIs), situations where a specific interest of the researcher could compromise the researcher's impartiality. Academic research in fields such as economics, medicine, and many others is becoming more costly and often depends on funding or access to databases controlled by private parties. To what extent do these relationships undermine trust in research? In our new NBER working paper, we address this ... by examining how different types of CoIs shape perceptions of the trustworthiness of economic research. Trust in the results declined across all groups (on average by 30%) following the disclosure of a CoI, despite the research being peer-reviewed and published in a prestigious academic journal. This decline was moderated by expertise, with average Americans experiencing greater declines in trust than "elite" economists (who publish in the top journals). Nonetheless, even elite economists experienced a drop in trust. Financial incentives (such as funding) were not the sole or the most significant factor influencing trust. Instead, privileged access to data had the most pronounced effect. When research utilized private data aligned with the interests of the data provider, trust in the results decreased by over 20%. Trust dropped by approximately 50% if the data provider retained review rights over the research outcomes.
Note: "Trust the science" sounds noble–until you realize that even top editors of world-renowned journals have warned that much of published medical research is unreliable, distorted by fraud, corporate influence, and conflicts of interest. For more along these lines, read our concise summaries of news articles on corruption in science.
In 2022, three U.S. inspectors showed up unannounced at a massive pharmaceutical plant. For two weeks, they scrutinized humming production lines and laboratories spread across the dense industrial campus, peering over the shoulders of workers. Much of the factory was supposed to be as sterile as an operating room. But the inspectors discovered what appeared to be metal shavings on drugmaking equipment, and records that showed vials of medication that were "blackish" from contamination had been sent to the United States. Quality testing in some cases had been put off for more than six months, according to their report, and raw materials tainted with unknown "extraneous matter" were used anyway, mixed into batches of drugs. Sun Pharma's transgressions were so egregious that the Food and Drug Administration [banned] the factory from exporting drugs to the United States. But ... a secretive group inside the FDA gave the global manufacturer a special pass to continue shipping more than a dozen drugs to the United States even though they were made at the same substandard factory that the agency had officially sanctioned. Pills and injectable medications that otherwise would have been banned went to unsuspecting patients. The same small cadre at the FDA granted similar exemptions to more than 20 other factories that had violated critical standards in drugmaking, nearly all in India.
Note: For more along these lines, read our concise summaries of news articles on Big Pharma corruption.
Haiti could be Erik Prince's deadliest gambit yet. Prince's Blackwater reigned during the Global War on Terror, but left a legacy of disastrous mishaps, most infamously the 2007 Nisour Square massacre in Iraq, where Blackwater mercenaries killed 17 civilians. This, plus his willingness in recent years to work for foreign governments in conflicts and for law enforcement across the globe, have made Prince one of the world's most controversial entrepreneurs. A desperate Haiti has now hired him to "conduct lethal operations" against armed groups, who control about 85% of the Haitian capital, Port-au-Prince. Prince will send about 150 private mercenaries to Haiti over the summer. He will advise Haiti's police force on countering the armed groups; some Prince-hired mercenaries are already operating attack drones there. The Prince deal is occurring within the context of extensive ongoing American intervention in Haiti. Currently the U.S.-backed, Kenyan-led multinational police force operating in Haiti to combat the armed groups is largely seen as a failure. Previously, a U.N. peacekeeping mission aimed at stabilizing Haiti from 2004 through 2017 was undermined by scandal, where U.N. officials were condemned for killing civilians during efforts aimed at armed groups, sexually assaulting Haitians, and introducing cholera to Haiti. Before that, the U.S. was accused of ousting Haitian leader Jean-Bertrand Aristide in 2004 after he proved obstructive to U.S. foreign policy goals.
Note: This article doesn't mention the US-backed death squads that recently terrorized Haiti. For more along these lines, read our concise summaries of news articles on corruption in the military and in the corporate world.
Palantir has long been connected to government surveillance. It was founded in part with CIA money, it has served as an Immigration and Customs Enforcement (ICE) contractor since 2011, and it's been used for everything from local law enforcement to COVID-19 efforts. But the prominence of Palantir tools in federal agencies seems to be growing under President Trump. "The company has received more than $113 million in federal government spending since Mr. Trump took office, according to public records, including additional funds from existing contracts as well as new contracts with the Department of Homeland Security and the Pentagon," reports The New York Times, noting that this figure "does not include a $795 million contract that the Department of Defense awarded the company last week, which has not been spent." Palantir technology has largely been used by the military, the intelligence agencies, the immigration enforcers, and the police. But its uses could be expanding. Representatives of Palantir are also speaking to at least two other agencies–the Social Security Administration and the Internal Revenue Service. Along with the Trump administration's efforts to share more data across federal agencies, this signals that Palantir's huge data analysis capabilities could wind up being wielded against all Americans. Right now, the Trump administration is using Palantir tools for immigration enforcement, but those tools could easily be applied to other ... targets.
Note: Read about Palantir's recent, first-ever AI warfare conference. For more along these lines, read our concise summaries of news articles on Big Tech and intelligence agency corruption.
As a sales rep for drug manufacturer Questcor, Lisa Pratta always suspected the company's business practices weren't just immoral but illegal, too, as she explains in "False Claims – One Insider's Impossible Battle Against Big Pharma Corruption." Pratta began working for Questcor in 2010 as the sales rep in the Northeast region for Acthar, a drug which helped relieve autoimmune and inflammatory disorders. "If prescribed correctly, Acthar could help people walk again. And talk again," writes Pratta. But, she adds, "Questcor made more money when it was prescribed incorrectly." They would do anything to sell Acthar. From paying doctors to prescribe it to using bogus research studies proclaiming its miraculous efficacy, they were so successful that Acthar's price rose from $40 per vial in 2000 to nearly $39,000 in 2019 – an increase of 97,000%. Some sales reps were making up to $4 million a year and, in turn, kept the physicians doing their bidding in a life of luxury. "They took them on scuba diving trips and bought clothes and shoes for their wives. One guy bought his doctor a brand new Armani suit and expensed it to Questcor," she recalls. In March 2019, the Department of Justice filed a 100-page lawsuit against Mallinckrodt, alleging illegal marketing of Acthar, bribing doctors to boost sales and defrauding government health care programs. It also mentioned Pratta's role in the case, meaning her long-held anonymity was now public knowledge.
Note: For more along these lines, read our concise summaries of news articles on corruption in science and Big Pharma profiteering.
If there is one thing that Ilya Sutskever knows, it is the opportunities–and risks–that stem from the advent of artificial intelligence. An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human. Artificial general intelligence, or simply AGI, is the official term for that goal. According to excerpts published by The Atlantic ... part of those plans included a doomsday shelter for OpenAI researchers. "We're definitely going to build a bunker before we release AGI," Sutskever told his team in 2023. Sutskever reasoned his fellow scientists would require protection at that point, since the technology was too powerful for it not to become an object of intense desire for governments globally. "Of course, it's going to be optional whether you want to get into the bunker," he assured fellow OpenAI scientists. Sutskever knows better than most what the awesome capabilities of AI are. He was part of an elite trio behind the 2012 creation of AlexNet, often dubbed by experts as the Big Bang of AI. Recruited by Elon Musk personally to join OpenAI three years later, he would go on to lead its efforts to develop AGI. But the launch of its ChatGPT bot accidentally derailed his plans by unleashing a funding gold rush the safety-minded Sutskever could no longer control.
Note: Watch a conversation on the big picture of emerging technology with Collective Evolution founder Joe Martino and WTK team members Amber Yang and Mark Bailey. For more along these lines, read our concise summaries of news articles on AI.
According to recent research by the Office of the eSafety Commissioner, "nearly 1 in 5 young people believe it's OK to track their partner whenever they want". Many constantly share their location with their partner, or use apps like Life360 or Find My Friends. Some groups of friends all do it together, and talk of it as a kind of digital closeness where physical distance and the busyness of life keeps them apart. Others use apps to keep familial watch over older relatives – especially when their health may be in decline. When government officials or tech industry bigwigs proclaim that you should be OK with being spied on if you're not doing anything wrong, they're asking (well, demanding) that we trust them. But it's not about trust, it's about control and disciplining behaviour. "Nothing to hide; nothing to fear" is a frustratingly persistent fallacy, one we ought to be critical of when its underlying (lack of) logic creeps into how we think about interacting with one another. When it comes to interpersonal surveillance, blurring the boundary between care and control can be dangerous. Just as normalising state and corporate surveillance can lead to further erosion of rights and freedoms over time, normalising interpersonal surveillance seems to be changing the landscape of what's considered to be an expression of love – and not necessarily for the better. We ought to be very critical of claims that equate surveillance with safety.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
The inaugural "AI Expo for National Competitiveness" [was] hosted by the Special Competitive Studies Project – better known as the "techno-economic" thinktank created by the former Google CEO and current billionaire Eric Schmidt. The conference's lead sponsor was Palantir, a software company co-founded by Peter Thiel that's best known for inspiring 2019 protests against its work with Immigration and Customs Enforcement (Ice) at the height of Trump's family separation policy. Currently, Palantir is supplying some of its AI products to the Israel Defense Forces. I ... went to a panel in Palantir's booth titled Civilian Harm Mitigation. It was led by two "privacy and civil liberties engineers" [who] described how Palantir's Gaia map tool lets users "nominate targets of interest" for "the target nomination process". It helps people choose which places get bombed. After [clicking] a few options on an interactive map, a targeted landmass lit up with bright blue blobs. These blobs ... were civilian areas like hospitals and schools. Gaia uses a large language model (something like ChatGPT) to sift through this information and simplify it. Essentially, people choosing bomb targets get a dumbed-down version of information about where children sleep and families get medical treatment. "Let's say you're operating in a place with a lot of civilian areas, like Gaza," I asked the engineers afterward. "Does Palantir prevent you from 'nominating a target' in a civilian location?" Short answer, no.
Note: "Nominating a target" is military jargon that means identifying a person, place, or object to be attacked with bombs, drones, or other weapons. Palantir's Gaia map tool makes life-or-death decisions easier by turning human lives and civilian places into abstract data points on a screen. Read about Palantir's growing influence in law enforcement and the war machine. For more, watch our 9-min video on the militarization of Big Tech.
The Consumer Financial Protection Bureau (CFPB) has canceled plans to introduce new rules designed to limit the ability of US data brokers to sell sensitive information about Americans, including financial data, credit history, and Social Security numbers. The CFPB proposed the new rule in early December under former director Rohit Chopra, who said the changes were necessary to combat commercial surveillance practices that "threaten our personal safety and undermine America's national security." The agency quietly withdrew the proposal on Tuesday morning. Data brokers operate within a multibillion-dollar industry built on the collection and sale of detailed personal information–often without individuals' knowledge or consent. These companies create extensive profiles on nearly every American, including highly sensitive data such as precise location history, political affiliations, and religious beliefs. Common Defense political director Naveed Shah, an Iraq War veteran, condemned the move to spike the proposed changes, accusing acting CFPB director Russell Vought of putting the profits of data brokers before the safety of millions of service members. Investigations by WIRED have shown that data brokers have collected and made cheaply available information that can be used to reliably track the locations of American military and intelligence personnel overseas, including in and around sensitive installations where US nuclear weapons are reportedly stored.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
BlackRock Inc.'s annual proxy statement devotes more than 50 pages to executive pay. How many of those are useful in understanding why Chief Executive Officer Larry Fink was compensated to the tune of $37 million for 2024? Not enough. The asset manager's latest remuneration report has heightened significance because BlackRock's shareholders delivered a rare and large protest vote against its pay framework at last year's annual meeting. That followed recommendations ... to withhold support for the so-called say-on-pay motion. In the wake of the rebuke, a board committee responsible for pay and perks took to the phones and hit the road to hear shareholders' gripes. Investors wanted more explanation of how the committee members used their considerable discretion in arriving at awards. There was also an aversion to one-time bonuses absent tough conditions. Incentive pay is 50% tied to BlackRock's financial performance, with the remainder split equally between objectives for "business strength" and "organizational strength." That financial piece was previously described using a non-exhaustive list of seven financial metrics. Now there are eight, gathered under three priorities: "drive shareholder value creation," "accelerate organic revenue growth" and "enhance operating leverage." There's no weighting given to the three financial priorities. The pay committee says Fink "far exceeded" expectations, but those expectations weren't quantified.
Note: For more along these lines, read our concise summaries of news articles on financial industry corruption.
Automakers are increasingly pushing consumers to accept monthly and annual fees to unlock preinstalled safety and performance features, from hands-free driving systems and heated seats to cameras that can automatically record accident situations. But the additional levels of internet connectivity this subscription model requires can increase drivers' exposure to government surveillance and the likelihood of being caught up in police investigations. Police records recently reviewed by WIRED show US law enforcement agencies regularly trained on how to take advantage of "connected cars," with subscription-based features drastically increasing the amount of data that can be accessed during investigations. Nearly all subscription-based car features rely on devices that come preinstalled in a vehicle, with a cellular connection necessary only to enable the automaker's recurring-revenue scheme. The ability of car companies to charge users to activate some features is effectively the only reason the car's systems need to communicate with cell towers. Companies often hook customers into adopting the services through free trial offers, and in some cases the devices are communicating with cell towers even when users decline to subscribe. In a letter sent in April 2024 ... US senators Ron Wyden and Edward Markey ... noted that a range of automakers, including Toyota, Nissan, and Subaru, are willing to disclose location data to the government.
Note: Automakers can collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people's "sexual activities" when drivers pair their smartphones to their vehicles. The automakers can then take that data and sell it or share it with vendors and insurance companies. For more along these lines, read our concise summaries of news articles on police corruption and the disappearance of privacy.
U.S. Secretary of Agriculture Brooke Rollins, in a brief announcement unveiling new staff hires on Monday, released a blurb about Kelsey Barnes, her recently appointed senior advisor. Barnes is a former lobbyist for Syngenta, the Chinese state-owned giant that manufactures and sells a number of controversial pesticide products. Syngenta's atrazine-based herbicide, for instance, is banned in much of the world yet is widely used in American agriculture. It is linked to birth defects, low sperm quality, irregular menstrual cycles, and other fertility problems. The leadership of USDA is filled with personnel with similar backgrounds. Scott Hutchins, the undersecretary for research, is a former Dow Chemical executive at the firm's pesticide division. Kailee Tkacz Buller, Rollins's chief of staff, previously worked as the president of the National Oilseed Processors Association and Edible Oil Producers Association, groups that lobby for corn and other seed oil subsidies. Critics have long warned that industry influence at the USDA creates inherent conflicts of interest, undermining the agency's regulatory mission and public health mandates. The revolving door hires also highlight renewed tension with the "Make America Healthy Again" agenda promised by Health and Human Services Secretary Robert F. Kennedy, Jr. The 2025-2030 Dietary Guidelines for Americans may serve as a test of whether establishment industry influence at the agencies will undermine MAHA promises.
Note: Read our latest Substack article on how the US government turns a blind eye to the corporate cartels fueling America's health crisis. For more along these lines, read our concise summaries of news articles on government corruption and toxic chemicals.
Skydio, with more than $740m in venture capital funding and a valuation of about $2.5bn, makes drones for the military along with civilian organisations such as police forces and utility companies. The company moved away from the consumer market in 2020 and is now the largest US drone maker. Military uses touted on its website include gaining situational awareness on the battlefield and autonomously patrolling bases. Skydio is one of a number of new military technology unicorns – venture capital-backed startups valued at more than $1bn – many led by young men aiming to transform the US and its allies' military capabilities with advanced technology, be it straight-up software or software-imbued hardware. The rise of startups doing defence tech is a "big trend", says Cynthia Cook, a defence expert at the Center for Strategic and International Studies, a Washington-based-thinktank. She likens it to a contagion – and the bug is going around. According to financial data company PitchBook, investors funnelled nearly $155bn globally into defence tech startups between 2021 and 2024, up from $58bn over the previous four years. The US has more than 1,000 venture capital-backed companies working on "smarter, faster and cheaper" defence, says Dale Swartz from consultancy McKinsey. The types of technologies the defence upstarts are working on are many and varied, though autonomy and AI feature heavily.
Note: For more, watch our 9-min video on the militarization of Big Tech.
In July 2012, a renegade American businessman, Russ George, took a ship off the coast of British Columbia and dumped 100 tons of iron sulfate dust into the Pacific Ocean. He had unilaterally, and some suggest illegally, decided to trigger an algae bloom to absorb some carbon dioxide from the atmosphere–an attempt at geoengineering. Now a startup called Stardust seeks something more ambitious: developing proprietary geoengineering technology that would help block sun rays from reaching the planet. Stardust formed in 2023 and is based in Israel but incorporated in the United States. Geoengineering projects, even those led by climate scientists at major universities, have previously drawn the ire of environmentalists and other groups. Such a deliberate transformation of the atmosphere has never been done, and many uncertainties remain. If a geoengineering project went awry, for example, it could contribute to air pollution and ozone loss, or have dramatic effects on weather patterns, such as disrupting monsoons in populous South and East Asia. Stardust ... has not publicly released details about its technology, its business model, or exactly who works at its company. But the company appears to be positioning itself to develop and sell a proprietary geoengineering technology to governments that are considering making modifications to the global climate–acting like a kind of defense contractor for climate alteration.
Note: Regenerative farming is far safer and more promising than geoengineering for stabilizing the climate. For more along these lines, read our concise summaries of news articles on geoengineering and science corruption.
Consultants assessing Covid vaccine damage claims on behalf of the NHS have been paid millions more than the victims, it has emerged. Freedom of Information requests made by The Telegraph show that US-based Crawford and Company has carried out nearly 13,000 medical assessments, but dismissed more than 98 per cent of cases. Just 203 claimants have been notified they are entitled to a one-off payment of £120,000 through the Vaccine Damage Payment Scheme (VDPS) amounting to £24,360,000. Yet Crawford and Company has received £27,264,896 for its services. Prof Richard Goldberg, chairman in law at Durham University, with a special interest in vaccine liability and compensation, said: "The idea that this would be farmed out to a private company to make a determination is very odd. It's taxpayers' money and money is tight at the moment. "The lack of transparency is not helpful and there is a terrible sense of secrecy about all of this. One gets the sense that their main objective is for these cases not to succeed. "There are no stats available so we don't know the details about how these claims are being decided or whether previous judgments are being taken into account." The Hart (Health Advisory and Recovery Team) group, which was set up by medical professionals and scientists during the pandemic, has warned that Crawford and Company has a "troubling reputation with numerous reports of mismanagement and claims denials across various sectors".
Note: COVID vaccine manufacturers have total immunity from liability if people die or become injured as a result of the vaccine. Our Substack dives into the complex world of COVID vaccines with nuance and balanced investigation. For more along these lines, read our concise summaries of news articles on COVID vaccine problems.
A WIRED investigation into the inner workings of Google's advertising ecosystem reveals that a wealth of sensitive information on Americans is being openly served up to some of the world's largest brands despite the company's own rules against it. Experts say that when combined with other data, this information could be used to identify and target specific individuals. Display & Video 360 (DV360), one of the dominant marketing platforms offered by the search giant, is offering companies globally the option of targeting devices in the United States based on lists of internet users believed to suffer from chronic illnesses and financial distress, among other categories of personal data that are ostensibly banned under Google's public policies. Among a list of 33,000 audience segments obtained by the ICCL, WIRED identified several that aimed to identify people working sensitive government jobs. One, for instance, targets US government employees who are considered "decision makers" working "specifically in the field of national security." Another targets individuals who work at companies registered with the State Department to manufacture and export defense-related technologies, from missiles and space launch vehicles to cryptographic systems that house classified military and intelligence data. In the wrong hands, sensitive insights gained through [commercially available information] could facilitate blackmail, stalking, harassment, and public shaming.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Meta CEO Mark Zuckerberg announced Tuesday that his social media platforms – which include Facebook and Instagram – will be getting rid of fact-checking partners and replacing them with a "community notes" model like that found on X. For a decade now, liberals have wrongly treated Trump's rise as a problem of disinformation gone wild, and one that could be fixed with just enough fact-checking. Disinformation, though, has been a convenient narrative for a Democratic establishment unwilling to reckon with its own role in upholding anti-immigrant narratives, repeating baseless fearmongering over crime rates, and failing to support the multiracial working class. Long dead is the idea that social media platforms like X or Instagram are either trustworthy news publishers, sites for liberatory community building, or hubs for digital democracy. "The internet may once have been understood as a commons of information, but that was long ago," wrote media theorist Rob Horning in a recent newsletter. "Now the main purpose of the internet is to place its users under surveillance, to make it so that no one does anything without generating data, and to assure that paywalls, rental fees, and other sorts of rents can be extracted for information that may have once seemed free but perhaps never wanted to be." Social media platforms are huge corporations for which we, as users, produce data to be mined as a commodity to sell to advertisers – and government agencies. The CEOs of these corporations are craven and power-hungry.
Note: Read a former senior NPR editor's nuanced take on how challenging official narratives became so politicized that "politics were blotting out the curiosity and independence that should have been guiding our work." Opportunities for award winning journalism were lost on controversial issues like COVID, the Hunter Biden laptop story, and more. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
When Bank of America alerted financial regulators in 2020 to potentially suspicious payments from Leon Black, the billionaire investor, to Jeffrey Epstein, the disgraced financier, the bank was following a routine practice. The bank filed two "suspicious activity reports," or SARs, which are meant to alert law enforcement to potential criminal activities like money laundering, terrorism financing or sex trafficking. One was filed in February 2020 and the other eight months later, according to a congressional memorandum. SARs are expected to be filed within 60 days of a bank spotting a questionable transaction. But the warnings in this case ... were not filed until several years after the payments, totaling $170 million, had been made. By the time of the first filing, Mr. Epstein had already been dead for six months. The delayed filings have led congressional investigators to question if Bank of America violated federal laws against money laundering. Bank of America is not the only big bank to have been questioned about suspicious transactions involving Mr. Epstein. In litigation involving hundreds of Mr. Epstein's sexual abuse victims, it was disclosed that JPMorgan Chase had filed several SARs after the bank kicked him out as a client in 2013. Deutsche Bank, which subsequently became Mr. Epstein's primary banker, paid a $150 million fine to New York bank regulators, in part because of its due diligence failures in monitoring Mr. Epstein's financial affairs.
Note: Read about the connection between Epstein's child sex trafficking ring and intelligence agency sexual blackmail operations. For more along these lines, read our concise summaries of news articles on financial industry corruption and Jeffrey Epstein's trafficking and blackmail ring.