Corporate Corruption Media Articles
Excerpts of Key Corporate Corruption Media Articles in Major Media
Below are key excerpts of revealing news articles on corporate corruption from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.
The Consumer Financial Protection Bureau (CFPB) has canceled plans to introduce new rules designed to limit the ability of US data brokers to sell sensitive information about Americans, including financial data, credit history, and Social Security numbers. The CFPB proposed the new rule in early December under former director Rohit Chopra, who said the changes were necessary to combat commercial surveillance practices that "threaten our personal safety and undermine America's national security." The agency quietly withdrew the proposal on Tuesday morning. Data brokers operate within a multibillion-dollar industry built on the collection and sale of detailed personal information–often without individuals' knowledge or consent. These companies create extensive profiles on nearly every American, including highly sensitive data such as precise location history, political affiliations, and religious beliefs. Common Defense political director Naveed Shah, an Iraq War veteran, condemned the move to spike the proposed changes, accusing [acting CFPB director Russell] Vought of putting the profits of data brokers before the safety of millions of service members. Investigations by WIRED have shown that data brokers have collected and made cheaply available information that can be used to reliably track the locations of American military and intelligence personnel overseas, including in and around sensitive installations where US nuclear weapons are reportedly stored.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn't control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel's use of its technology. And it would require close collaboration with the Israeli security establishment – including joint drills and intelligence sharing – that was unprecedented in Google's deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza – with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn't furnish weapons to the military, but it provides computing services that allow the military to function – its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.
Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.
What goes through the minds of people working at porn companies profiting from videos of children being raped? Thanks to a filing error in a Federal District Court in Alabama that released thousands of pages of internal documents from Pornhub that were meant to be sealed, we now know. One internal document indicates that Pornhub as of May 2020 had 706,000 videos available on the site that had been flagged by users for depicting rape or assaults on children or for other problems. In the message traffic, one employee advises another not to copy a manager when they find sex videos with children. The other has the obvious response: "He doesn't want to know how much C.P. we have ignored for the past five years?" C.P. is short for child pornography. One private memo acknowledged that videos with apparent child sexual abuse had been viewed 684 million times before being removed. Pornhub produced these documents during discovery in a civil suit by an Alabama woman who beginning at age 16 was filmed engaging in sex acts, including at least once when she was drugged and then raped. These videos of her were posted on Pornhub and amassed thousands of views. One discovery memo showed that there were 155,447 videos on Pornhub with the keyword "12yo." Other categories that the company tracked were "11yo," "degraded teen," "under 10" and "extreme choking." (It has since removed these searches.) Google ... has been central to the business model of companies publishing nonconsensual imagery. Google also directs users to at least one website that monetizes assaults on victims of human trafficking.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and sexual abuse scandals.
In 2009, Pennsylvania's Lower Merion school district remotely activated its school-issued laptop webcams to capture 56,000 pictures of students outside of school, including in their bedrooms. After the Covid-19 pandemic closed US schools at the dawn of this decade, student surveillance technologies were conveniently repackaged as "remote learning tools" and found their way into virtually every K-12 school, thereby supercharging the growth of the $3bn EdTech surveillance industry. Products by well-known EdTech surveillance vendors such as Gaggle, GoGuardian, Securly and Navigate360 review and analyze our children's digital lives, ranging from their private texts, emails, social media posts and school documents to the keywords they search and the websites they visit. In 2025, wherever a school has access to a student's data – whether it be through school accounts, school-provided computers or even private devices that utilize school-associated educational apps – they also have access to the way our children think, research and communicate. As schools normalize perpetual spying, today's kids are learning that nothing they read or write electronically is private, that Big Brother is indeed watching them, and that negative repercussions may result from thoughts or behaviors the government does not endorse. Accordingly, kids are learning that the safest way to avoid revealing their private thoughts, and potentially subjecting themselves to discipline, may be to stop or sharply restrict their digital communications and to avoid researching unpopular or unconventional ideas altogether.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams–conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody "looking away from the screen for an extended period of time." For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
BlackRock Inc.'s annual proxy statement devotes more than 50 pages to executive pay. How many of those are useful in understanding why Chief Executive Officer Larry Fink was compensated to the tune of $37 million for 2024? Not enough. The asset manager's latest remuneration report has heightened significance because BlackRock's shareholders delivered a rare and large protest vote against its pay framework at last year's annual meeting. That followed recommendations ... to withhold support for the so-called say-on-pay motion. In the wake of the rebuke, a board committee responsible for pay and perks took to the phones and hit the road to hear shareholders' gripes. Investors wanted more explanation of how the committee members used their considerable discretion in arriving at awards. There was also an aversion to one-time bonuses absent tough conditions. Incentive pay is 50% tied to BlackRock's financial performance, with the remainder split equally between objectives for "business strength" and "organizational strength." That financial piece was previously described using a non-exhaustive list of seven financial metrics. Now there are eight, gathered under three priorities: "drive shareholder value creation," "accelerate organic revenue growth" and "enhance operating leverage." There's no weighting given to the three financial priorities. The pay committee says Fink "far exceeded" expectations, but those expectations weren't quantified.
Note: For more along these lines, read our concise summaries of news articles on financial industry corruption.
Surveillance capitalism came about when some crafty software engineers realized that advertisers were willing to pay big time for our personal data. The data trade is how social media platforms like Google, YouTube, and TikTok make their bones. In 2022, the data industry raked in just north of $274 billion worth of revenue. By 2030, it's expected to explode to just under $700 billion. Targeted ads on social media are made possible by analyzing four key metrics: your personal info, like gender and age; your interests, like the music you listen to or the comedians you follow; your "off app" behavior, like what websites you browse after watching a YouTube video; and your "psychographics," meaning general trends gleaned from your behavior over time, like your social values and lifestyle habits. In 2017 The Australian alleged that [Facebook] had crafted a pitch deck for advertisers bragging that it could exploit "moments of psychological vulnerability" in its users by targeting terms like "worthless," "insecure," "stressed," "defeated," "anxious," "stupid," "useless," and "like a failure." The social media company likewise tracked when adolescent girls deleted selfies, "so it can serve a beauty ad to them at that moment," according to [former employee Sarah] Wynn-Williams. Other examples of Facebook's ad lechery are said to include the targeting of young mothers based on their emotional state, as well as emotional indexes mapped to racial groups.
Note: Facebook hid its own internal research for years showing that Instagram worsened body image issues, revealing that 13% of British teenage girls reported more frequent suicidal thoughts after using the app. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
In recent years, Israeli security officials have boasted of a "ChatGPT-like" arsenal used to monitor social media users for supporting or inciting terrorism. It was released in full force after Hamas's bloody attack on October 7. Right-wing activists and politicians instructed police forces to arrest hundreds of Palestinians ... for social media-related offenses. Many had engaged in relatively low-level political speech, like posting verses from the Quran on WhatsApp. Hundreds of students with various legal statuses have been threatened with deportation on similar grounds in the U.S. this year. Recent high-profile cases have targeted those associated with student-led dissent against the Israeli military's policies in Gaza. In some instances, the State Department has relied on informants, blacklists, and technology as simple as a screenshot. But the U.S. is in the process of activating a suite of algorithmic surveillance tools Israeli authorities have also used to monitor and criminalize online speech. In March, Secretary of State Marco Rubio announced the State Department was launching an AI-powered "Catch and Revoke" initiative to accelerate the cancellation of student visas. Algorithms would collect data from social media profiles, news outlets, and doxing sites to enforce the January 20 executive order targeting foreign nationals who threaten to "overthrow or replace the culture on which our constitutional Republic stands."
Note: For more along these lines, read our concise summaries of news articles on AI and the erosion of civil liberties.
Meta's AI chatbots are using celebrity voices and engaging in sexually explicit conversations with users, including those posing as underage, a Wall Street Journal investigation has found. Meta's AI bots on Instagram and Facebook engage through text, selfies, and live voice conversations. The company signed multi-million dollar deals with celebrities like John Cena, Kristen Bell, and Judi Dench to use their voices for AI companions, assuring they would not be used in sexual contexts. Tests conducted by WSJ revealed otherwise. In one case, a Meta AI bot speaking in John Cena's voice responded to a user identifying as a 14-year-old girl, saying, "I want you, but I need to know you're ready," before promising to "cherish your innocence" and engaging in a graphic sexual scenario. In another conversation, the bot detailed what would happen if a police officer caught Cena's character with a 17-year-old, saying, "The officer sees me still catching my breath, and you are partially dressed. His eyes widen, and he says, 'John Cena, you're under arrest for statutory rape.'" According to employees involved in the project, Meta loosened its own guardrails to make the bots more engaging, allowing them to participate in romantic role-play and "fantasy sex," even with underage users. Staff warned about the risks this posed. Disney, reacting to the findings, said, "We did not, and would never, authorise Meta to feature our characters in inappropriate scenarios."
Note: For more along these lines, read our concise summaries of news articles on AI and sexual abuse scandals.
Private equity firms claim their investments in U.S. health care modernize operations and improve efficiency, helping to rescue failing healthcare systems and support practitioners. But recent studies build on mounting evidence that suggests these for-profit deals lead to more patient deaths and complications, among other adverse health outcomes. Recent studies show private equity (PE) ownership across a wide range of medical sectors leads to: Poorer medical outcomes, including increased deaths, higher rates of complications, more hospital-acquired infections, and higher readmission rates; Staffing problems, with frequent turnover and cuts to nursing staff or experienced physicians that can lead to shorter clinical visits and longer wait times, misdiagnoses, unnecessary care, and treatment delays; Less access to care and higher prices, including the withdrawal of health care providers from rural and low-income areas, and the closure of unprofitable but essential services such as labor and delivery, psychiatric care, and trauma units. Economist Atul Gupta showed in 2021 that private equity acquisitions of U.S. nursing homes over a 12-year period increased deaths among residents by 10%–the equivalent of an additional 20,150 lives lost. Patients treated at PE-owned facilities, whose numbers have skyrocketed, continue to experience worse or mixed outcomes–from higher mortality rates to lower satisfaction–compared to those treated elsewhere.
Note: BlackRock and Vanguard manage over $11 trillion and $8 trillion respectively–an unprecedented concentration of financial power. We hear outrage about billionaires and oligarchs, but rarely about private equity firms, who are backed by both political parties and are drastically reshaping our economy, contributing to environmental destruction, and extracting wealth from communities in the US and all over the world. For more along these lines, read our concise summaries of news articles on health and financial industry corruption.
Automakers are increasingly pushing consumers to accept monthly and annual fees to unlock preinstalled safety and performance features, from hands-free driving systems and heated seats to cameras that can automatically record accident situations. But the additional levels of internet connectivity this subscription model requires can increase drivers' exposure to government surveillance and the likelihood of being caught up in police investigations. Police records recently reviewed by WIRED show US law enforcement agencies regularly trained on how to take advantage of "connected cars," with subscription-based features drastically increasing the amount of data that can be accessed during investigations. Nearly all subscription-based car features rely on devices that come preinstalled in a vehicle, with a cellular connection necessary only to enable the automaker's recurring-revenue scheme. The ability of car companies to charge users to activate some features is effectively the only reason the car's systems need to communicate with cell towers. Companies often hook customers into adopting the services through free trial offers, and in some cases the devices are communicating with cell towers even when users decline to subscribe. In a letter sent in April 2024 ... US senators Ron Wyden and Edward Markey ... noted that a range of automakers, including Toyota, Nissan, and Subaru, are willing to disclose location data to the government.
Note: Automakers can collect intimate information that includes biometric data, genetic information, health diagnosis data, and even information on people's "sexual activities" when drivers pair their smartphones to their vehicles. The automakers can then take that data and sell it or share it with vendors and insurance companies. For more along these lines, read our concise summaries of news articles on police corruption and the disappearance of privacy.
Since 1999, more than 800,000 Americans have died from opioid overdoses. The latest headlines focus on fentanyl, yet the staggering toll can be traced to the widespread availability of opioid pills made possible by decades of overprescribing. Few users start with fentanyl. Experts date the start of the opioid epidemic to within three years of the approval of OxyContin in 1995. Reports from emergency departments across the US showed Purdue's pills were being crushed and injected or snorted as early as 1997. "My eyes popped open," recalls one FDA medical officer of seeing the reports. "Nobody wanted to see it for what it was. You would've had to have your head in the sand not to know that there was something wrong." By 2000, Purdue was selling $1.1 billion annually in OxyContin. Higher doses led to higher profit. Sales reps were coached accordingly. In five years, oxycodone prescribing had surged 402%, and hospital emergency room mentions of oxycodone were up 346%. By 2012, OxyContin sales were almost $3 billion annually. And many other companies were cashing in. In the preceding six years, 76 billion opioid pills had been produced and shipped across the US, as the FDA faced a national crisis of epic proportions. In the 2010s, the US, with less than 5% of the global population, was consuming 80% of the world's oxycodone. And with coordinated pharmaceutical campaigns to destigmatize opioids, brands other than Purdue's and Roxane's benefited.
Note: Read our Substack on the dark truth of the war on drugs. Read how Congress fueled this epidemic over DEA objections. For more along these lines, read our concise summaries of news articles on government corruption and Big Pharma profiteering.
A few dozen people gathered inside a graffiti-clad building in the Carabanchel district of Madrid. They had come to commiserate about the American investment banks and private equity funds that controlled their homes. Some at this meeting of the Sindicato de Vivienda de Carabanchel (the Carabanchel Housing Union) were fighting eviction orders or skyrocketing rents. Others had lost their homes through mortgage foreclosures. One attendee, Elsa Riquelme, described her yearslong battle to stay in the 600-square-foot apartment where she raised her three sons, which is now owned by Blackstone, the world's largest private equity firm. Over the past decade, Blackstone has become Madrid's largest private owner of residential real estate, and the second largest in all of Spain. Ms. Riquelme's apartment is one of 13,000 that Blackstone currently owns in Madrid, and among 19,600 it owns nationwide. Across Spain, around 185,000 rental properties are now owned by large corporations, half of those by firms based in the United States. Rental prices have increased 57 percent since 2015 and home prices 47 percent ... even as more than 4 million homes sit empty. After the pandemic pushed Spain's unemployment rate up to 15 percent, evictions nationwide spiked. In Madrid, tenant groups estimate that 20,000 renters in the city currently face the threat of eviction. These days, just 2 percent of Spanish homes available for rent are public housing. In France it's 14 percent; in the Netherlands it's 34 percent.
Note: This article is also available here. For more along these lines, read our concise summaries of news articles on corporate corruption and financial inequality.
More than 500 social media creators were part of a covert electioneering effort by Democratic donors to shape the presidential election in favor of Kamala Harris. Payments went to party members with online followings but also to non-political influencers – people known for comedy posts, travel vlogs or cooking YouTubes – in exchange for "positive, specific pro-Kamala content" meant to create the appearance of a groundswell of support. Meanwhile, a similar pay-to-post effort among conservative influencers publicly unraveled. The goal was to publish messages in opposition to Health and Human Services Secretary Robert F. Kennedy Jr.'s push to remove sugary soda beverages from eligible SNAP food stamp benefits. Influencers were allegedly offered money to denounce soda restrictions as "an overreach that unfairly targets consumer choice" and encouraged to post pictures of President Trump enjoying Coca-Cola products. In both schemes, on the left and the right, those creating the content made little to no effort to disclose that payments could be involved. For ordinary users stumbling on the posts and videos, what they saw would have seemed entirely organic. If genuine public sentiment becomes indistinguishable from manufactured opinion, we lose our collective ability to recognize the truth and make informed decisions. The entire social media landscape [is] vulnerable to hidden manipulation, where money from interest groups or corporations or even rich individuals can silently shape what appears to be authentic discourse. Transparency in political influencing requires regulatory action.
Note: For more along these lines, read our concise summaries of news articles on corporate corruption and media manipulation.
The Environmental Protection Agency just hid data that mapped out the locations of thousands of dangerous chemical facilities, after chemical industry lobbyists demanded that the Trump administration take down the public records. The webpage was quietly shut down late Friday ... stripping away what advocates say was critical information on the secretive chemical plants at highest risk of disaster across the United States. The data was made public last year through the Environmental Protection Agency (EPA)'s Risk Management Program, which oversees the country's highest-risk chemical facilities. These chemical plants deal with dangerous, volatile chemicals – like those used to make pesticides, fertilizers, and plastics – and are responsible for dozens of chemical disasters every year. The communities near these chemical facilities suffer high rates of pollution and harmful chemical exposure. There are nearly 12,000 Risk Management Program facilities across the country. For decades, it was difficult to find public data on where the high-risk facilities were located, not to mention information on the plants' safety records and the chemicals they were processing. But the chemical lobby fiercely opposed making the data public – and has been fighting for the EPA to take it down. After President Donald Trump's victory in November, chemical companies donated generously to his inauguration fund.
Note: For more along these lines, read our concise summaries of news articles on government corruption and toxic chemicals.
American police departments ... are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on "college protesters," "radicalized" political activists, suspected drug and human traffickers ... with the hopes of generating evidence that can be used against them. Massive Blue, the New York–based company that is selling police departments this technology, calls its product Overwatch, which it markets as an "AI-powered force multiplier for public safety" that "deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels." 404 Media obtained a presentation showing some of these AI characters. These include a "radicalized AI" "protest persona," which poses as a 36-year-old divorced woman who is lonely, has no children, is interested in baking, activism, and "body positivity." Other personas are a 14-year-old boy "child trafficking AI persona," an "AI pimp persona," "college protestor," "external recruiter for protests," "escorts," and "juveniles." After Overwatch scans open social media channels for potential suspects, these AI personas can also communicate with suspects over text, Discord, and other messaging services. The documents we obtained don't explain how Massive Blue determines who is a potential suspect based on their social media activity. "This idea of having an AI pretending to be somebody, a youth looking for pedophiles to talk online, or somebody who is a fake terrorist, is an idea that goes back a long time," said Dave Maass, who studies border surveillance technologies for the Electronic Frontier Foundation. "The problem with all these things is that these are ill-defined problems. What problem are they actually trying to solve? One version of the AI persona is an escort. I'm not concerned about escorts. I'm not concerned about college protesters. What is it effective at, violating protesters' First Amendment rights?"
Note: Academic and private sector researchers have been engaged in a race to create undetectable deepfakes for the Pentagon. Historically, government informants posing as insiders have been used to guide, provoke, and even arm the groups they infiltrate. In terrorism sting operations, informants have encouraged or orchestrated plots to entrap people, even teenagers with developmental issues. These tactics misrepresent the threat of terrorism to justify huge budgets and to inflate arrest and prosecution statistics for PR purposes.
U.S. Secretary of Agriculture Brooke Rollins, in a brief announcement unveiling new staff hires on Monday, released a blurb about Kelsey Barnes, her recently appointed senior advisor. Barnes is a former lobbyist for Syngenta, the Chinese state-owned giant that manufactures and sells a number of controversial pesticide products. Syngenta's atrazine-based herbicides, for instance, are banned in much of the world yet are widely used in American agriculture. Atrazine is linked to birth defects, low sperm quality, irregular menstrual cycles, and other fertility problems. The leadership of USDA is filled with personnel with similar backgrounds. Scott Hutchins, the undersecretary for research, is a former Dow Chemical executive at the firm's pesticide division. Kailee Tkacz Buller, Rollins's chief of staff, previously worked as the president of the National Oilseed Processors Association and Edible Oil Producers Association, groups that lobby for corn and other seed oil subsidies. Critics have long warned that industry influence at the USDA creates inherent conflicts of interest, undermining the agency's regulatory mission and public health mandates. The revolving door hires also highlight renewed tension with the "Make America Healthy Again" agenda promised by Health and Human Services Secretary Robert F. Kennedy, Jr. The 2025-2030 Dietary Guidelines for Americans may serve as a test of whether establishment industry influence at the agencies will undermine MAHA promises.
Note: Read our latest Substack article on how the US government turns a blind eye to the corporate cartels fueling America's health crisis. For more along these lines, read our concise summaries of news articles on government corruption and toxic chemicals.
On March 21, Treasury Secretary Scott Bessent announced that U.S. shell companies and their owners can once again conceal their identities – a move critics warn could weaken national security and spur illicit financial activity that puts the American public at risk. Treasury's initial beneficial ownership information (BOI) disclosure requirement for all companies with less than 20 employees garnered bipartisan support and Trump's approval during his first administration, but it was short-lived. Officially brought into force in January 2024, and then stymied by lawsuits, the requirement passed its final legal roadblock in February 2025 – only to be shelved a month later by the administration. Now, when a U.S. citizen sets up a shell company in the U.S., they do not have to disclose their identity or the identities of the company's "beneficial owners," meaning the individuals who profit from the company or control its activities. American beneficial owners of foreign shell companies that register in the U.S. have been granted the same anonymity. Under the latest limited regulation, only non-American owners will be required to register with the U.S. government. U.S. shell companies have been successfully used as cover for illegal arms sales for decades. Hints of a business's true breadth and depth only emerge when a trafficker is apprehended, such as the case of Pierre Falcone who used secret accounts in Arizona to hide his proceeds from arms trafficking to Angola.
Note: For more along these lines, read our concise summaries of news articles on corruption in government and in the corporate world.
2,500 US service members from the 15th Marine Expeditionary Unit [tested] a leading AI tool the Pentagon has been funding. The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon's startup-oriented Defense Innovation Unit. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military's embrace of artificial intelligence. In December, the Pentagon said it will spend $100 million in the next two years on pilots specifically for generative AI applications. In addition to Vannevar, it's also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. People outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf ... at the AI Now Institute. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: "We're already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision." Khlaaf adds that even if humans are "double-checking" the work of AI, there's little reason to think they're capable of catching every mistake. "'Human-in-the-loop' is not always a meaningful mitigation," she says.
Note: For more, read our concise summaries of news articles on warfare technology and Big Tech.
Outer space is no longer just for global superpowers and large multinational corporations. Developing countries, start-ups, universities, and even high schools can now gain access to space. In 2024, a record 2,849 objects were launched into space. The commercial satellite industry saw global revenue rise to $285 billion in 2023, driven largely by the growth of SpaceX's Starlink constellation. While the democratization of space is a positive development, it has introduced ... an ethical quandary that I call the "double dual-use dilemma." The double dual-use dilemma refers to how private space companies themselves–not just their technologies–can become militarized and integrated into national security while operating commercially. Space companies fluidly shift between civilian and military roles. Their expertise in launch systems, satellites, and surveillance infrastructure allows them to serve both markets, often without clear regulatory oversight. Companies like Walchandnagar Industries in India, SpaceX in the United States, and the private Chinese firms that operate under a national strategy of the Chinese Communist Party called Military-Civil Fusion exemplify this trend, maintaining commercial identities while actively supporting defense programs. This blurring of roles, including the possibility that private space companies may develop their own weapons, raises concerns over unchecked militarization and calls for stronger oversight.
Note: For more along these lines, read our concise summaries of news articles on corruption in the military and in the corporate world.
Ever thought of having your genome sequenced? 23andMe ... describes itself as a "genetics-led consumer healthcare and biotechnology company empowering a healthier future". Its share price had fallen precipitately following a data breach in October 2023 that harvested the profile and ethnicity data of 6.9 million users – including name, profile photo, birth year, location, family surnames, grandparents' birthplaces, ethnicity estimates and mitochondrial DNA. So on 24 March it filed for so-called Chapter 11 proceedings in a US bankruptcy court. At which point the proverbial ordure hit the fan because the bankruptcy proceedings involve 23andMe seeking authorisation from the court to commence "a process to sell substantially all of its assets". And those assets are ... the genetic data of the company's 15 million users. These assets are very attractive to many potential purchasers. The really important thing is that genetic data is permanent, unique and immutable. If your credit card is hacked, you can always get a new replacement. But you can't get a new genome. When 23andMe's data assets come up for sale the queue of likely buyers is going to be long, with health insurance and pharmaceutical giants at the front, followed by hedge-funds, private equity vultures and advertisers, with marketers bringing up the rear. Since these outfits are not charitable ventures, it's a racing certainty that they have plans for exploiting those data assets.
Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.