News Stories: Excerpts of Key News Stories in Major Media
Note: This comprehensive list of news stories is usually updated once a week. Explore our full index to revealing excerpts of key major media news stories on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.
Uncle Sam conducted several pointless and destructive experiments on his own people during the Cold War. The most infamous was MKUltra, the CIA's project to develop procedures for mind control using psychedelic drugs and psychological torture. During Operation Sea-Spray, the U.S. Navy secretly sprayed San Francisco with bacteria to simulate a biological attack. San Francisco was also the site of a series of radiation experiments by the U.S. Navy. A 2024 investigation by the San Francisco Public Press and The Guardian revealed that the city's U.S. Naval Radiological Defense Laboratory had exposed at least 1,073 people to radiation over 24 experiments between 1946 and 1963. The tests came during a time when the effects of nuclear radiation were a pressing concern, and were conducted without ethical safeguards. Conscripted soldiers and civilian volunteers were sent into radioactive conditions or purposely dosed with radiation without their informed consent. The lab didn't bother following up. The Radiological Defense Laboratory ... closed in 1969. In 2013, whistleblowers brought a lawsuit against a decontamination contractor for cutting corners and faking results; in January 2025, the contractor agreed to pay a $97 million settlement. Scientists [there had] developed "synthetic fallout"–dirt laced with radioactive isotopes to simulate the waste created by a nuclear war. They had test subjects practice cleaning it up, rub it on their skin, or crawl around in it.
Note: Read about the long history of humans being treated like guinea pigs in science experiments. Learn more about the MKUltra Program in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on military corruption.
The US has been accused of hiding evidence of UFOs from the public during a bipartisan congressional hearing on Thursday (May 1). A number of experts spoke to government officials ... at the briefing on unidentified anomalous phenomena (UAP) - as it is becoming more commonly referred to these days - about the possibility of alien life. The House Committee on Oversight and Accountability ... held the event called 'Understanding UAP: Science, National Security & Innovation.' Scientists told the members of Congress that they required a bigger role in investigating UAPs and other unexplained phenomena. Key speakers included former Pentagon official turned UAP whistleblower Luis Elizondo and Harvard astrophysicist Avi Loeb. Research physicist Dr Eric Davis said the US government has been operating a secret program recovering crashed UFOs since the 1950s when Dwight Eisenhower was president. Dr Davis has worked as a subcontractor and then a consultant for the Pentagon UFO program since 2007. He claimed that the program began after the discovery of a crashed UAP in 1944. Since then, he said, much of the technology recovered over the years from these wreckages has been secretly moved to Wright Patterson Air Force Base in Ohio, without any congressional oversight or approval. He once concluded that "we couldn't make it ourselves" after seeing some of the recovered materials himself.
Note: A 2019 leak revealed a top secret conversation in 2002 between astrophysicist and consultant for the Pentagon UFO program Dr. Eric Davis and Director of Defense Intelligence Agency (DIA) Admiral Thomas R. Wilson. They discuss the existence of deeply classified black budget programs dealing with technology of non-human origin. For more, explore the comprehensive resources provided in our UFO Information Center.
Matthew Brown, who claims the Pentagon has a secret program to collect footage of UFOs, is the first of several whistleblowers who will soon come forward to release new information about unexplained technology, podcaster and journalist Jeremy Corbell says. Brown, a former national security professional, appeared this week on Corbell and George Knapp's "Weaponized" podcast to say the Defense Department has covertly amassed an archive of UFO videos and pictures from military sources under a program called "Immaculate Constellation." Secure government servers purportedly contain UFO images and lengthy clips the public has not seen. Brown says the objects he saw in the videos aren't necessarily extraterrestrial or nonhuman, but he described them as "exotic and unexplainable" and beyond conventional technology. "We are talking thousands of videos and photos ... that the American public hasn't seen of these advanced craft we call UAP that are of unknown origin," Corbell said. Corbell said he hopes Brown will be allowed to testify before Congress about what he discovered. In the meantime, the podcaster said, he and Knapp have already interviewed several other whistleblowers who have new information about the craft. "We have already recorded with other firsthand whistleblowers that we have yet to release. So, I can promise you, people are coming forward, an army of people are coming forward," he said.
Note: For more along these lines, read our concise summaries of news articles on UFOs. Then explore the comprehensive resources provided in our UFO Information Center.
Amber Scorah knows only too well that powerful stories can change society–and that powerful organizations will try to undermine those who tell them. While working at a media outlet that connects whistleblowers with journalists, she noticed parallels in the coercive tactics used by groups trying to suppress information. "There is a sort of playbook that powerful entities seem to use over and over again," she says. "You expose something about the powerful, they try to discredit you, people in your community may ostracize you." In September 2024, Scorah cofounded Psst, a nonprofit that helps people in the tech industry or the government share information of public interest with extra protections–with lots of options for specifying how the information gets used and how anonymous a person stays. Psst's main offering is a "digital safe"–which users access through an anonymous end-to-end encrypted text box hosted on Psst.org, where they can enter a description of their concerns. What makes Psst unique is something it calls its "information escrow" system–users have the option to keep their submission private until someone else shares similar concerns about the same company or organization. Combining reports from multiple sources defends against some of the isolating effects of whistleblowing and makes it harder for companies to write off a story as the grievance of a disgruntled employee, says Psst cofounder Jennifer Gibson.
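The "information escrow" idea described above can be modeled as a simple matching rule: a report stays sealed until a second, independent report names the same organization, at which point both are released together. The following is a minimal illustrative sketch of that rule only; the class and method names are invented here, and this is not Psst's actual implementation, which involves encryption and anonymity protections the sketch omits.

```python
from collections import defaultdict


class InformationEscrow:
    """Toy model of an information escrow: each report is held sealed
    until a threshold of independent reports names the same
    organization, then the whole batch is released at once.
    (Hypothetical sketch; names invented for illustration.)"""

    def __init__(self, threshold: int = 2):
        self.threshold = threshold
        # organization name -> list of sealed reports awaiting a match
        self._sealed = defaultdict(list)

    def submit(self, org: str, report: str) -> list[str]:
        """File a report about `org`. Returns the released batch once
        the threshold is reached, or an empty list while sealed."""
        self._sealed[org].append(report)
        if len(self._sealed[org]) >= self.threshold:
            # Release and clear every sealed report about this org.
            return self._sealed.pop(org)
        return []


escrow = InformationEscrow()
first = escrow.submit("AcmeCorp", "report A")   # stays sealed
second = escrow.submit("AcmeCorp", "report B")  # triggers release
```

The design point the sketch captures is the one Gibson describes: no single report is published on its own, so a story can't be dismissed as one disgruntled employee's grievance.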
Note: For more along these lines, read our concise summaries of news articles on Big Tech and media manipulation.
The Consumer Financial Protection Bureau (CFPB) has canceled plans to introduce new rules designed to limit the ability of US data brokers to sell sensitive information about Americans, including financial data, credit history, and Social Security numbers. The CFPB proposed the new rule in early December under former director Rohit Chopra, who said the changes were necessary to combat commercial surveillance practices that "threaten our personal safety and undermine America's national security." The agency quietly withdrew the proposal on Tuesday morning. Data brokers operate within a multibillion-dollar industry built on the collection and sale of detailed personal information–often without individuals' knowledge or consent. These companies create extensive profiles on nearly every American, including highly sensitive data such as precise location history, political affiliations, and religious beliefs. Common Defense political director Naveed Shah, an Iraq War veteran, condemned the move to spike the proposed changes, accusing acting CFPB director Russell Vought of putting the profits of data brokers before the safety of millions of service members. Investigations by WIRED have shown that data brokers have collected and made cheaply available information that can be used to reliably track the locations of American military and intelligence personnel overseas, including in and around sensitive installations where US nuclear weapons are reportedly stored.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
The U.S. intelligence community is now buying up vast volumes of sensitive information that would have previously required a court order, essentially bypassing the Fourth Amendment. But the surveillance state has encountered a problem: There's simply too much data on sale from too many corporations and brokers. So the government has a plan for a one-stop shop. The Office of the Director of National Intelligence is working on a system to centralize and "streamline" the use of commercially available information, or CAI, like location data derived from mobile ads, by American spy agencies, according to contract documents reviewed by The Intercept. The data portal will include information deemed by the ODNI as highly sensitive, that which can be "misused to cause substantial harm, embarrassment, and inconvenience to U.S. persons." The "Intelligence Community Data Consortium" will provide a single convenient web-based storefront for searching and accessing this data, along with a "data marketplace" for purchasing "the best data at the best price," faster than ever before. It will be designed for the 18 different federal agencies and offices that make up the U.S. intelligence community, including the National Security Agency, CIA, FBI Intelligence Branch, and Homeland Security's Office of Intelligence and Analysis – though one document suggests the portal will also be used by agencies not directly related to intelligence or defense.
Note: For more along these lines, read our concise summaries of intelligence agency corruption and the disappearance of privacy.
According to recent research by the Office of the eSafety Commissioner, "nearly 1 in 5 young people believe it's OK to track their partner whenever they want". Many constantly share their location with their partner, or use apps like Life360 or Find My Friends. Some groups of friends all do it together, and talk of it as a kind of digital closeness where physical distance and the busyness of life keeps them apart. Others use apps to keep familial watch over older relatives – especially when their health may be in decline. When government officials or tech industry bigwigs proclaim that you should be OK with being spied on if you're not doing anything wrong, they're asking (well, demanding) that we trust them. But it's not about trust, it's about control and disciplining behaviour. "Nothing to hide; nothing to fear" is a frustratingly persistent fallacy, one we ought to be critical of when its underlying (lack of) logic creeps into how we think about interacting with one another. When it comes to interpersonal surveillance, blurring the boundary between care and control can be dangerous. Just as normalising state and corporate surveillance can lead to further erosion of rights and freedoms over time, normalising interpersonal surveillance seems to be changing the landscape of what's considered to be an expression of love – and not necessarily for the better. We ought to be very critical of claims that equate surveillance with safety.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.
What goes through the minds of people working at porn companies profiting from videos of children being raped? Thanks to a filing error in a Federal District Court in Alabama, releasing thousands of pages of internal documents from Pornhub that were meant to be sealed, we now know. One internal document indicates that Pornhub as of May 2020 had 706,000 videos available on the site that had been flagged by users for depicting rape or assaults on children or for other problems. In the message traffic, one employee advises another not to copy a manager when they find sex videos with children. The other has the obvious response: "He doesn't want to know how much C.P. we have ignored for the past five years?" C.P. is short for child pornography. One private memo acknowledged that videos with apparent child sexual abuse had been viewed 684 million times before being removed. Pornhub produced these documents during discovery in a civil suit by an Alabama woman who beginning at age 16 was filmed engaging in sex acts, including at least once when she was drugged and then raped. These videos of her were posted on Pornhub and amassed thousands of views. One discovery memo showed that there were 155,447 videos on Pornhub with the keyword "12yo." Other categories that the company tracked were "11yo," "degraded teen," "under 10" and "extreme choking." (It has since removed these searches.) Google ... has been central to the business model of companies publishing nonconsensual imagery. Google also directs users to at least one website that monetizes assaults on victims of human trafficking.
Note: For more along these lines, read our concise summaries of news articles on Big Tech and sexual abuse scandals.
If there is one thing that Ilya Sutskever knows, it is the opportunities–and risks–that stem from the advent of artificial intelligence. An AI safety researcher and one of the top minds in the field, he served for years as the chief scientist of OpenAI. There he had the explicit goal of creating deep learning neural networks so advanced they would one day be able to think and reason just as well as, if not better than, any human. Artificial general intelligence, or simply AGI, is the official term for that goal. According to excerpts published by The Atlantic ... part of those plans included a doomsday shelter for OpenAI researchers. "We're definitely going to build a bunker before we release AGI," Sutskever told his team in 2023. Sutskever reasoned his fellow scientists would require protection at that point, since the technology was too powerful for it not to become an object of intense desire for governments globally. "Of course, it's going to be optional whether you want to get into the bunker," he assured fellow OpenAI scientists. Sutskever knows better than most what the awesome capabilities of AI are. He was part of an elite trio behind the 2012 creation of AlexNet, often dubbed by experts as the Big Bang of AI. Recruited by Elon Musk personally to join OpenAI three years later, he would go on to lead its efforts to develop AGI. But the launch of its ChatGPT bot accidentally derailed his plans by unleashing a funding gold rush the safety-minded Sutskever could no longer control.
Note: Watch a conversation on the big picture of emerging technology with Collective Evolution founder Joe Martino and WTK team members Amber Yang and Mark Bailey. For more along these lines, read our concise summaries of news articles on AI.
When Rodolfo "Rudy" Reyes went diving in the Cayman Islands in 2015, the experience changed his life. The highly decorated veteran had logged thousands of dives as a Special Ops Force Recon Marine in 18 years of service. But, as Reyes recalls, "As combat divers we operate at night, pushing 200 pounds of equipment, carrying massive weapons. It's very stressful and we focus on the mission – taking on the enemy." In the Caribbean, Reyes dove for the first time during daytime at his own pace, guided by his friend Jim Ritterhoff, who worked with the Central Caribbean Marine Institute. At the time, Reyes was struggling with depression, post-traumatic stress and substance abuse. "I had a really hard drug habit after all these intense combat tours," he admits, but diving in the Caymans, surrounded by vibrant marine life, reignited a sense of wonder. "It brought me back to life. It inspired the same kind of protective spirit and willingness to go fight in the battlefield that I used in the Marine Corps, but now I wanted to use that passion to fight for ocean conservation." In 2016, Reyes, Ritterhoff and Keith Sahm co-founded Force Blue, a nonprofit that recruits veterans – especially Navy SEALs and Special Operations divers with military dive training – to channel their skills into marine conservation. "We're learning to transfer combat diving expertise into protecting and providing refuge for this incredible aquatic environment," Reyes explains.
Note: Explore more positive human interest stories and stories on healing the Earth.
Rappler, founded by a group of journalists in 2012, has evolved over time to become one of the leading, most trusted news outlets in the Philippines. In December, the organization launched Rappler Communities, a trailblazing mobile app that it built and connected directly to its news feed. Built on the open source, secure, decentralized Matrix protocol, the app has the potential to become a global independent news distribution outlet, and promises to pave the way for a "shared reality" – a call [founder and Nobel laureate Maria] Ressa has been making to counter "the cascading failures of a corrupted public information ecosystem." "At this moment, if news journalism doesn't come together with communities and civil society that cares about a shared reality, democracy cannot survive," [said Ressa]. "Most important, is that we really have a shared reality. Once our community is set at the matrix protocol chat app, it can then work with other news organizations and become a trusted news distributor. So we will own our distribution, and we could strengthen our communities. Before the end of the year, we aim to have four other new sites, in different parts of the world, federated on this protocol. The matrix protocol is end-to-end encrypted; it is decentralized, similar to the Internet Governance Forum – like having a co-op, it isn't individually owned, or profit-driven. It's, literally, a place where we have a shared reality."
Note: Explore more positive stories like this on people-powered alternative systems.
In the final bay of an old, mustard-colored mechanic's garage in the middle of the Hoopa Valley Reservation's main settlement is the headquarters of Acorn Wireless. This small, relatively young Internet service provider is owned and operated by the tribe's public utilities department–an unusual arrangement in the United States, where Internet service is more often the purview of predatory corporations like AT&T and Verizon, whose regional monopolies enable them to charge exorbitant rates for uneven service. Before the launch of Acorn, residents had to choose between a HughesNet satellite connection (more than $100 per month), a bare-bones Starlink kit ($600), unreliable wireless hot spots–or, as was often the case, nothing. Download speeds are nearly 75 percent slower in tribal areas, yet the lowest price for basic Internet service is, on average, 11 percent higher. Acorn's operation is based on the idea that local, democratic ownership can help address the coverage disparity by eliminating the profit motive. Because it is owned by the tribe and administered by the tribe's public utilities department, Acorn can focus on equity instead of revenue. Its premium service package is set at $75 a month, [but] most Acorn customers can get service at no personal cost. Hoopa's experiment in public broadband remains a work in progress, embodying hopes (and facing hurdles) that are shared on tribal lands all over the country.
Note: Explore more positive stories like this on people-powered alternative systems.
In April of 1972, Russell Targ, a Columbia-trained physicist with an unusual interest in the paranormal, met with the Office of Scientific Intelligence, a secretive branch of the CIA that monitored biological warfare, nuclear weapons and guided missiles during the Cold War. Their Soviet enemies, who had likely been experimenting with drugs, hypnotism, yoga and black magic, were now reportedly moving inanimate objects with their minds. From a military standpoint, the implications were horrifying. So the U.S. government brokered a deal: For an initial investment of $874, or just under $7,000 in today's dollars, Targ and his colleague, fellow physicist Harold Puthoff, would test the feasibility of using psychic spies at their Menlo Park lab. The operation, called Stargate, would go on to explore whether ordinary civilians could locate clandestine military facilities across the world using their hidden third eye. According to archived news reports, in total, officials spent $20 million on the secret program. Almost immediately, "curious" data started to emerge: Subjects began describing secret locations thousands of miles away with frightening accuracy. Others reportedly levitated small weights with their minds, while some allegedly controlled temperatures and read information inside sealed envelopes. One man, Patrick Price, who later became known as the SRI's "psychic treasure," was especially prolific. "We want to make it clear," [Targ] told reporters in 1976, "that the functioning is ordinary, rather than extraordinary. It is a regular human capability."
Note: Read more about the CIA's psychic spies. For more along these lines, read our concise summaries of news articles on intelligence agency corruption and the mysterious nature of reality.
Department of Defense spending is increasingly going to large tech companies including Microsoft, Google parent company Alphabet, Oracle, and IBM. OpenAI recently brought former U.S. Army general and National Security Agency Director Paul M. Nakasone onto its Board of Directors. The U.S. military discreetly, yet frequently, collaborated with prominent tech companies through thousands of subcontractors through much of the 2010s, obfuscating the extent of the two sectors' partnership from tech employees and the public alike. The long-term, deep-rooted relationship between the institutions, spurred by massive Cold War defense and research spending and bound ever tighter by the sectors' revolving door, ensures that advances in the commercial tech sector benefit the defense industry's bottom line. Military tech spending has produced myriad landmark inventions. The internet, for example, began as an Advanced Research Projects Agency (ARPA, now known as Defense Advanced Research Projects Agency, or DARPA) research project called ARPANET, the first network of computers. Decades later, graduate students Sergey Brin and Larry Page received funding from DARPA, the National Science Foundation, and U.S. intelligence community-launched development program Massive Digital Data Systems to create what would become Google. Other prominent DARPA-funded inventions include transit satellites, a precursor to GPS, and the iPhone Siri app, which, instead of being picked up by the military, was ultimately adapted to consumer ends by Apple.
Note: Watch our latest video on the militarization of Big Tech. For more, read our concise summaries of news articles on AI, warfare technology, and Big Tech.
The US military may soon have an army of faceless suicide bombers at its disposal, as an American defense contractor has revealed its newest war-fighting drone. AeroVironment unveiled the Red Dragon in a video on its YouTube page, the first in a new line of 'one-way attack drones.' This new suicide drone can reach speeds up to 100 mph and can travel nearly 250 miles. The new drone takes just 10 minutes to set up and launch and weighs just 45 pounds. Once the small tripod the Red Dragon takes off from is set up, AeroVironment said soldiers would be able to launch up to five per minute. Since the suicide robot can choose its own target in the air, the US military may soon be taking life-and-death decisions out of the hands of humans. Once airborne, its AVACORE software architecture functions as the drone's brain, managing all its systems and enabling quick customization. Red Dragon's SPOTR-Edge perception system acts like smart eyes, using AI to find and identify targets independently. Simply put, the US military will soon have swarms of bombs with brains that don't land until they've chosen a target and crash into it. Despite Red Dragon's ability to choose a target with 'limited operator involvement,' the Department of Defense (DoD) has said it's against the military's policy to allow such a thing to happen. The DoD updated its own directives to mandate that 'autonomous and semi-autonomous weapon systems' always have the built-in ability to allow humans to control the device.
Note: Drones create more terrorists than they kill. For more, read our concise summaries of news articles on warfare technology and Big Tech.
In 2003 [Alexander Karp] – together with Peter Thiel and three others – founded a secretive tech company called Palantir. And some of the initial funding came from the investment arm of – wait for it – the CIA! The lesson that Karp and his co-author draw [in their book The Technological Republic: Hard Power, Soft Belief and the Future of the West] is that "a more intimate collaboration between the state and the technology sector, and a closer alignment of vision between the two, will be required if the United States and its allies are to maintain an advantage that will constrain our adversaries over the longer term. The preconditions for a durable peace often come only from a credible threat of war." Or, to put it more dramatically, maybe the arrival of AI makes this our "Oppenheimer moment". For those of us who have for decades been critical of tech companies, and who thought that the future for liberal democracy required that they be brought under democratic control, it's an unsettling moment. If the AI technology that giant corporations largely own and control becomes an essential part of the national security apparatus, what happens to our concerns about fairness, diversity, equity and justice as these technologies are also deployed in "civilian" life? For some campaigners and critics, the reconceptualisation of AI as essential technology for national security will seem like an unmitigated disaster – Big Brother on steroids, with resistance being futile, if not criminal.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and intelligence agency corruption.
Before signing its lucrative and controversial Project Nimbus deal with Israel, Google knew it couldn't control what the nation and its military would do with the powerful cloud-computing technology, a confidential internal report obtained by The Intercept reveals. The report makes explicit the extent to which the tech giant understood the risk of providing state-of-the-art cloud and machine learning tools to a nation long accused of systemic human rights violations. Not only would Google be unable to fully monitor or prevent Israel from using its software to harm Palestinians, but the report also notes that the contract could obligate Google to stonewall criminal investigations by other nations into Israel's use of its technology. And it would require close collaboration with the Israeli security establishment – including joint drills and intelligence sharing – that was unprecedented in Google's deals with other nations. The rarely discussed question of legal culpability has grown in significance as Israel enters the third year of what has widely been acknowledged as a genocide in Gaza – with shareholders pressing the company to conduct due diligence on whether its technology contributes to human rights abuses. Google doesn't furnish weapons to the military, but it provides computing services that allow the military to function – its ultimate function being, of course, the lethal use of those weapons. Under international law, only countries, not corporations, have binding human rights obligations.
Note: For more along these lines, read our concise summaries of news articles on AI and government corruption.
Campaigners have accused Facebook parent Meta of inflicting "potentially lifelong trauma" on hundreds of content moderators in Kenya, after more than 140 were diagnosed with PTSD and other mental health conditions. The diagnoses were made by Dr. Ian Kanyanya, the head of mental health services at Kenyatta National hospital in Kenya's capital Nairobi, and filed with the city's employment and labor relations court on December 4. Content moderators help tech companies weed out disturbing content on their platforms and are routinely managed by third party firms, often in developing countries. For years, critics have voiced concerns about the impact this work can have on moderators' mental well-being. Kanyanya said the moderators he assessed encountered "extremely graphic content on a daily basis which included videos of gruesome murders, self-harm, suicides, attempted suicides, sexual violence, explicit sexual content, child physical and sexual abuse ... just to name a few." Of the 144 content moderators who volunteered to undergo psychological assessments – out of 185 involved in the legal claim – 81% were classed as suffering from "severe" PTSD, according to Kanyanya. The class action grew out of a previous suit launched in 2022 by a former Facebook moderator, which alleged that the employee was unlawfully fired by Samasource Kenya after organizing protests against unfair working conditions.
Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and mental health.
Careless People [is] a whistleblowing book by a former [Meta] senior employee, Sarah Wynn-Williams. In the 78-page document that Wynn-Williams filed to the SEC ... it was alleged that Meta had for years been making numerous efforts to get into the biggest market in the world. These efforts included: developing a censorship system for China in 2015 that would allow a "chief editor" to decide what content to remove, and the ability to shut down the entire site during "social unrest"; assembling a "China team" in 2014 for a project to develop China-compliant versions of Meta's services; considering the weakening of privacy protections for Hong Kong users; building a specialised censorship system for China with automatic detection of restricted terms; and restricting the account of Guo Wengui, a Chinese government critic. In her time at Meta, Wynn-Williams observed many of these activities at close range. Clearly, nobody in Meta has heard of the Streisand effect, "an unintended consequence of attempts to hide, remove or censor information, where the effort instead increases public awareness of the information". What strikes the reader is that Meta and its counterparts are merely the digital equivalents of the oil, mining and tobacco conglomerates of the analogue era.
Note: A former Meta insider revealed that the company's policy on banning hate groups and terrorists was quietly reshaped under political pressure, with US government agencies influencing what speech is permitted on the platform. Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on censorship and Big Tech.
Ever thought of having your genome sequenced? 23andMe ... describes itself as a "genetics-led consumer healthcare and biotechnology company empowering a healthier future". Its share price had fallen precipitously following a data breach in October 2023 that harvested the profile and ethnicity data of 6.9 million users – including name, profile photo, birth year, location, family surnames, grandparents' birthplaces, ethnicity estimates and mitochondrial DNA. So on 24 March it filed for so-called Chapter 11 proceedings in a US bankruptcy court. At which point the proverbial ordure hit the fan because the bankruptcy proceedings involve 23andMe seeking authorisation from the court to commence "a process to sell substantially all of its assets". And those assets are ... the genetic data of the company's 15 million users. These assets are very attractive to many potential purchasers. The really important thing is that genetic data is permanent, unique and immutable. If your credit card is hacked, you can always get a new replacement. But you can't get a new genome. When 23andMe's data assets come up for sale the queue of likely buyers is going to be long, with health insurance and pharmaceutical giants at the front, followed by hedge-funds, private equity vultures and advertisers, with marketers bringing up the rear. Since these outfits are not charitable ventures, it's a racing certainty that they have plans for exploiting those data assets.
Note: Watch our new video on the risks and promises of emerging technologies. For more along these lines, read our concise summaries of news articles on Big Tech and the disappearance of privacy.