Media Articles: Excerpts of Key Media Articles in Major Media
Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.
The Pentagon is in the midst of a massive $2 trillion multiyear plan to build a new generation of nuclear-armed missiles, bombers and submarines. A large chunk of that funding will go to major nuclear weapons contractors like Bechtel, General Dynamics, Honeywell, Lockheed Martin and Northrop Grumman. And they will do everything in their power to keep that money flowing. This January, a review of the Sentinel intercontinental ballistic missile program under the Nunn-McCurdy Act – a congressional provision designed to rein in cost overruns of Pentagon weapons programs – found that the missile, the crown jewel of the nuclear overhaul plan involving 450 missile-holding silos spread across five states, is already 81% over its original budget. It is now estimated that it will cost a total of nearly $141 billion to develop and purchase, a figure only likely to rise in the future. That Pentagon review had the option of canceling the Sentinel program because of such a staggering cost increase. Instead, it doubled down on the program, asserting that it would be an essential element of any future nuclear deterrent and must continue. Considering the rising tide of nuclear escalation globally, is it really the right time for this country to invest a fortune of taxpayer dollars in a new generation of devastating "use them or lose them" weapons? The American public has long said no, according to a 2020 poll by the University of Maryland's Program for Public Consultation.
Note: Learn more about unaccountable military spending in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
If you rent your home, there's a good chance your landlord uses RealPage to set your monthly payment. The company describes itself as merely helping landlords set the most profitable price. But a series of lawsuits says it's something else: an AI-enabled price-fixing conspiracy. The late Justice Antonin Scalia once called price-fixing the "supreme evil" of antitrust law. Agreeing to fix prices is punishable with up to 10 years in prison and a $100 million fine. Property owners feed RealPage's "property management software" their data, including unit prices and vacancy rates, and the algorithm–which also knows what competitors are charging–spits out a rent recommendation. If enough landlords use it, the result could look the same as a traditional price-fixing cartel: lockstep price increases instead of price competition, no secret handshake or clandestine meeting needed. Algorithmic price-fixing appears to be spreading to more and more industries. And existing laws may not be equipped to stop it. In more than 40 housing markets across the United States, 30 to 60 percent of multifamily-building units are priced using RealPage. The plaintiffs suing RealPage, including the Arizona and Washington, D.C., attorneys general, argue that this has enabled a critical mass of landlords to raise rents in concert, making an existing housing-affordability crisis even worse. The lawsuits also argue that RealPage pressures landlords to comply with its pricing suggestions.
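The dynamic described above, in which many competitors independently follow the same recommender, can be sketched with a toy model. The pricing rule below is entirely hypothetical (RealPage's actual algorithm is proprietary); it only illustrates how lockstep pricing can emerge with no secret handshake.

```python
# Toy model of algorithmic lockstep pricing. The rule here is hypothetical,
# not RealPage's; the point is only that a shared recommendation tool can
# synchronize prices without any explicit agreement among landlords.

def recommend(competitor_rents, vacancy_rate):
    """Price toward the market average, nudging upward when vacancies are scarce."""
    market_avg = sum(competitor_rents) / len(competitor_rents)
    nudge = 1.02 if vacancy_rate < 0.05 else 0.99
    return market_avg * nudge

rents = [1500.0, 1550.0, 1600.0]   # three landlords, initially competing on price
for _ in range(12):                # each period, every landlord follows the tool
    rents = [
        recommend([r for j, r in enumerate(rents) if j != i], vacancy_rate=0.03)
        for i in range(len(rents))
    ]

print([round(r, 2) for r in rents])  # near-identical rents, well above the start
```

Because every landlord prices toward the same market average with the same upward nudge, the spread between them collapses while the overall level ratchets up: the lockstep pattern the plaintiffs allege, produced with no communication between landlords at all.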
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
There is a massive battery right under your feet. Unlike a flammable lithium ion battery, though, this one is perfectly stable, free to use, and ripe for sustainable exploitation: the Earth itself. While temperatures above ground fluctuate throughout the year, the ground stays a stable temperature, meaning that it is humming with geothermal energy. "Every building sits on a thermal asset," said Cameron Best, director of business development at Brightcore Energy in New York, which deploys geothermal systems. "I really don't think there's any more efficient or better way to heat and cool our homes." A couple of months ago Eversource Energy commissioned the US's first networked geothermal neighbourhood operated by a utility, in Framingham, Massachusetts. Pipes run down boreholes 600-700ft (about 180-215 metres) deep, where the temperature of the rock is consistently 55F (13C). A mixture of water and propylene glycol ... pumps through the piping, absorbing that geothermal energy. Heat pumps use the liquid to either heat or cool a space. If deployed across the country, these geothermal systems could go a long way in helping decarbonise buildings, which are responsible for about a third of total greenhouse gas emissions in the US. Once a system is in place, buildings can draw heat from water pumped from below their foundations, instead of burning natural gas. The networks ... can be set up almost anywhere.
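The article's numbers make it easy to see why a stable 55F (13C) source matters so much to a heat pump. The sketch below uses the idealized Carnot coefficient of performance, a textbook upper bound rather than the performance of any real system (real heat pumps typically achieve well under half of it); the indoor and winter-air temperatures are illustrative assumptions.

```python
# Idealized (Carnot) coefficient of performance for heating:
#   COP = T_hot / (T_hot - T_cold), with temperatures in kelvin.
# This is a theoretical bound, not a real unit's rating; it shows how
# strongly efficiency depends on the temperature of the heat source.

def carnot_cop_heating(indoor_c, source_c):
    t_hot = indoor_c + 273.15
    t_cold = source_c + 273.15
    return t_hot / (t_hot - t_cold)

indoor = 20.0      # target indoor temperature, degrees C (assumed)
ground = 13.0      # stable ground temperature from the article
cold_air = -10.0   # a cold winter air temperature, for contrast (assumed)

print(carnot_cop_heating(indoor, ground))    # ~41.9
print(carnot_cop_heating(indoor, cold_air))  # ~9.8
```

The smaller the gap between source and target temperature, the less work the pump must do per unit of heat delivered, which is why a borehole that sits at 13C all winter beats drawing heat from subfreezing air.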
Note: Explore more positive stories like this on technology for good.
The eruption of racist violence in England and Northern Ireland raises urgent questions about the responsibilities of social media companies, and how the police use facial recognition technology. While social media isn't the root of these riots, it has allowed inflammatory content to spread like wildfire and helped rioters coordinate. The great elephant in the room is the wealth, power and arrogance of the big tech emperors. Silicon Valley billionaires are richer than many countries. That mature modern states should allow them unfettered freedom to regulate the content they monetise is a gross abdication of duty, given their vast financial interest in monetising insecurity and division. In recent years, [facial recognition] has been used on our streets without any significant public debate. We wouldn't dream of allowing telephone taps, DNA retention or even stop and search and arrest powers to be so unregulated by the law, yet this is precisely what has happened with facial recognition. Our facial images are gathered en masse via CCTV cameras, the passport database and the internet. At no point were we asked about this. Individual police forces have entered into direct contracts with private companies of their choosing, making opaque arrangements to trade our highly sensitive personal data with private companies that use it to develop proprietary technology. There is no specific law governing how the police, or private companies ... are authorised to use this technology. Experts at Big Brother Watch believe the inaccuracy rate for live facial recognition since the police began using it is around 74%, and there are many cases pending about false positive IDs.
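A short base-rate calculation helps explain how live facial recognition can produce mostly false matches. The numbers below are hypothetical illustrations, not Big Brother Watch's data: when nearly everyone scanned is innocent, even a system with low per-face error rates generates alerts that are overwhelmingly wrong.

```python
# Illustrative base-rate arithmetic (hypothetical numbers): with a rare
# watchlist population, false matches on the innocent majority swamp the
# true matches, so most alerts are wrong.

def share_of_alerts_that_are_false(watchlist_prevalence, true_match_rate,
                                   false_match_rate):
    true_alerts = watchlist_prevalence * true_match_rate
    false_alerts = (1 - watchlist_prevalence) * false_match_rate
    return false_alerts / (true_alerts + false_alerts)

# Assume 1 person in 10,000 scanned is actually on a watchlist; the system
# catches 90% of them and wrongly flags just 0.1% of everyone else.
print(share_of_alerts_that_are_false(1 / 10_000, 0.90, 0.001))  # ~0.92
```

Under these assumptions, roughly nine out of ten matches would be false positives, which is consistent in spirit with the high real-world failure rates campaigners report.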
Note: Many US states are not required to reveal that they used face recognition technology to identify suspects, even though misidentification is a common occurrence. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
In 2021, parents in South Africa with children between the ages of 5 and 13 were offered an unusual deal. For every photo of their child's face, a London-based artificial intelligence firm would donate 20 South African rands, about $1, to their children's school as part of a campaign called "Share to Protect." With promises of protecting children, a little-known group of companies in an experimental corner of the tech industry known as "age assurance" has begun engaging in a massive collection of faces, opening the door to privacy risks for anyone who uses the web. The companies say their age-check tools could give parents ... peace of mind. But by scanning tens of millions of faces a year, the tools could also subject children – and everyone else – to a level of inspection rarely seen on the open internet and boost the chances their personal data could be hacked, leaked or misused. Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that's created a gold mine. But ... Alex Stamos, the former security chief of Facebook, which uses Yoti, said "most age verification systems range from 'somewhat privacy violating' to 'authoritarian nightmare.'" Some also fear that lawmakers could use the tools to bar teens from content they dislike, including First Amendment-protected speech.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams–conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody "looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
My insurance broker left a frantic voicemail telling me that my homeowner's insurance had lapsed. When I finally reached my insurance broker, he told me the reason Travelers revoked my policy: AI-powered drone surveillance. My finances were imperiled, it seemed, by a bad piece of code. As my broker revealed, the ominous threat that canceled my insurance was nothing more than moss. Travelers not only uses aerial photography and AI to monitor its customers' roofs, but also wrote patents on the technology – nearly 50 patents actually. And it may not be the only insurer spying from the skies. No one can use AI to know the future; you're training the technology to make guesses based on changes in roof color and grainy aerial images. But even the best AI models will get a lot of predictions wrong, especially at scale and particularly where you're trying to make guesses about the future of radically different roof designs across countless buildings in various environments. For the insurance companies designing the algorithms, that means a lot of questions about when to put a thumb on the scale in favor of, or against, the homeowner. And insurance companies will have huge incentives to choose against the homeowner every time. When Travelers flew a drone over my house, I never knew. When it decided I was too much of a risk, I had no way of knowing why or how. As more and more companies use more and more opaque forms of AI to decide the course of our lives, we're all at risk.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.
The 43,000 tons of radioactive waste and soil came from a top-secret initiative: The Manhattan Project, which built the atomic bombs America dropped on Japan in 1945. In 1973, that waste ended up in an unlined landfill in Bridgeton, Missouri, a St. Louis suburb. Workers spread it to cover trash and construction debris. In 1990, the U.S. Environmental Protection Agency declared the West Lake Landfill one of the nation's most contaminated sites requiring cleanup. [In 2012], residents mobilized, spotlighting stories of children dying from cancer. And they pressed waste-management giant Republic Services, the dump's owner, to remove the radioactive waste. In rebutting neighbors' complaints, Republic tapped an unlikely ally that U.S. corporations have leaned on for decades: a federal health agency set up to protect people from environmental hazards just like the West Lake dump. A 2015 report by that small bureaucracy, the Agency for Toxic Substances and Disease Registry (ATSDR) ... declared that the landfill posed no health risk to the community. Deborah Mitchell grew up ... less than a mile from the dump. She lost both parents to cancer and battled the disease herself. Dozens of neighbors have similar stories. Three cancer researchers told Reuters the number of cases in the neighborhood is worrisome. "You just feel like you're being gaslighted by your own government," Mitchell said of the ATSDR's role.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and toxic chemicals from reliable major media sources.
The Pentagon is in the midst of a massive $2 trillion, multiyear plan to build a new generation of nuclear-armed missiles, bombers, and submarines. A large chunk of that funding will go to major nuclear weapons contractors like Bechtel, General Dynamics, Honeywell, Lockheed Martin, and Northrop Grumman. And they will do everything in their power to keep that money flowing. This January, a review of the Sentinel Intercontinental Ballistic Missile (ICBM) program under the Nunn-McCurdy Act ... found that the missile, the crown jewel of the nuclear overhaul plan involving 450 missile-holding silos spread across five states, is already 81 percent over its original budget. It is now estimated that it will cost a total of nearly $141 billion to develop and purchase, a figure only likely to rise in the future. The Bulletin of Atomic Scientists' "Doomsday Clock" – an estimate of how close the world may be at any moment to a nuclear conflict – is now set at ninety seconds to midnight, the closest it's been since that tracker was first created in 1947. Considering the rising tide of nuclear escalation globally, is it really the right time for this country to invest a fortune of taxpayer dollars in a new generation of devastating "use them or lose them" weapons? The American public has long said no, according to a 2020 poll by the University of Maryland's Program for Public Consultation, which showed that 61 percent of us actually support phasing out ICBM systems like the Sentinel.
Note: Learn more about arms industry corruption in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Liquid capital, growing market dominance, slick ads, and fawning media made it easy for giants like Google, Microsoft, Apple, and Amazon to expand their footprint and grow their bottom lines. Yet ... these companies got lazy, entitled, and demanding. They started to care less about the foundations of their business – like having happy customers and stable products – and more about making themselves feel better by reinforcing their monopolies. Big Tech has decided the way to keep customers isn't to compete or provide them with a better service but instead make it hard to leave, trick customers into buying things, or eradicate competition so that it can make things as profitable as possible, even if the experience is worse. After two decades of consistent internal innovation, Big Tech got addicted to acquisitions in the 2010s: Apple bought Siri; Meta bought WhatsApp, Instagram, and Oculus; Amazon bought Twitch; Google bought Nest and Motorola's entire mobility division. Over time, the acquisitions made it impossible for these companies to focus on delivering the features we needed. Google, Meta, Amazon, and Apple are simply no longer forces for innovation. Generative AI is the biggest, dumbest attempt that tech has ever made to escape the fallout of building companies by acquiring other companies, taking their eyes off actually inventing things, and ignoring the most important part of their world: the customer.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
It is often said that autonomous weapons could help minimize the needless horrors of war. Their vision algorithms could be better than humans at distinguishing a schoolhouse from a weapons depot. Some ethicists have long argued that robots could even be hardwired to follow the laws of war with mathematical consistency. And yet for machines to translate these virtues into the effective protection of civilians in war zones, they must also possess a key ability: They need to be able to say no. Human control sits at the heart of governments' pitch for responsible military AI. Giving machines the power to refuse orders would cut against that principle. Meanwhile, the same shortcomings that hinder AI's capacity to faithfully execute a human's orders could cause them to err when rejecting an order. Militaries will therefore need to either demonstrate that it's possible to build ethical, responsible autonomous weapons that don't say no, or show that they can engineer a safe and reliable right-to-refuse that's compatible with the principle of always keeping a human "in the loop." If they can't do one or the other ... their promises of ethical and yet controllable killer robots should be treated with caution. The killer robots that countries are likely to use will only ever be as ethical as their imperfect human commanders. They would only promise a cleaner mode of warfare if those using them seek to hold themselves to a higher standard.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, read our concise summaries of news articles on AI and military corruption.
The bedrock of Google's empire sustained a major blow on Monday after a judge found its search and ad businesses violated antitrust law. The ruling, made by the District of Columbia's Judge Amit Mehta, sided with the US Justice Department and a group of states in a set of cases alleging the tech giant abused its dominance in online search. "Google is a monopolist, and it has acted as one to maintain its monopoly," Mehta wrote in his ruling. The findings, if upheld, could outlaw contracts that for years all but assured Google's dominance. Judge Mehta ruled that Google violated antitrust law in the markets for "general search" and "general search text" ads, which are the ads that appear at the top of the search results page. Apple, Amazon, and Meta are defending themselves against a series of other federal- and state-led antitrust suits, some of which make similar claims. Google's disputed behavior revolved around contracts it entered into with manufacturers of computer devices and mobile devices, as well as with browser services, browser developers, and wireless carriers. These contracts, the government claimed, violated antitrust laws because they made Google the mandatory default search provider. Companies that entered into those exclusive contracts have included Apple, LG, Samsung, AT&T, T-Mobile, Verizon, and Mozilla. Those deals are why smartphones ... come preloaded with Google's various apps.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
In 2017, hundreds of artificial intelligence experts signed the Asilomar AI Principles for how to govern artificial intelligence. I was one of them. So was OpenAI CEO Sam Altman. The signatories committed to avoiding an arms race on the grounds that "teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards." The stated goal of OpenAI is to create artificial general intelligence, a system that is as good as expert humans at most tasks. It could have significant benefits. It could also threaten millions of lives and livelihoods if not developed in a provably safe way. It could be used to commit bioterrorism, run massive cyberattacks or escalate nuclear conflict. Given these dangers, a global arms race to unleash artificial general intelligence (AGI) serves no one's interests. The true power of AI lies ... in its potential to bridge divides. AI might help us identify fundamental patterns in global conflicts and human behavior, leading to more profound solutions. AI's ability to process vast amounts of data could help identify patterns in global conflicts by suggesting novel approaches to resolution that human negotiators might overlook. Advanced natural language processing could break down communication barriers, allowing for more nuanced dialogue between nations and cultures. Predictive AI models could identify early signs of potential conflicts, allowing for preemptive diplomatic interventions.
Note: Learn more about emerging warfare technology in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on AI from reliable major media sources.
The average American today spends nearly 90 percent of their time indoors. Yet research indicates that children benefit greatly from time spent in nature; that not only does it improve their cognition, mood, self-esteem and social skills, but it can also make them physically healthier and less anxious. "Outdoor time for children is beneficial not just for physical health but also mental health for a multitude of reasons," says Janine Domingues, a senior psychologist in the Anxiety Disorders Center at the Child Mind Institute. "It fosters curiosity and independence. It helps kids get creative about what they can do … and then just moving around and expending energy has a lot of physical health benefits." [A] 2022 systematic review found that time outdoors can improve prosocial behaviors, including sharing, cooperating and comforting others. Research has found that nature can be particularly helpful for those who've had adverse childhood experiences. Such experiences can include growing up with poverty, abuse or violence. One 2023 study published in the Journal of Environmental Psychology looked at how making art in nature affected about 100 children in a low-income neighborhood in England. Their confidence, self-esteem and agency all improved. For all these reasons, it's important for even very young children to have access to nature where they already are, says Nilda Cosco, a research professor.
Note: Explore more positive stories like this about reimagining education.
More than half of Americans believe the First Amendment can go too far in the rights it guarantees, according to a new survey from the Foundation for Individual Rights and Expression (FIRE), a First Amendment–focused nonprofit. The survey, released on Thursday, asked 1,000 American adults a range of questions about the First Amendment, free speech, and the security of those rights. Fifty-three percent of respondents agreed with the statement "The First Amendment goes too far in the rights it guarantees" to at least some degree, with 28 percent reporting that it "mostly" or "completely" describes their thoughts. Americans were further divided along partisan lines. Over 60 percent of Democrats thought the First Amendment could go too far, compared to 52 percent of Republicans. "Evidently, one out of every two Americans wishes they had fewer civil liberties," Sean Stevens, FIRE's chief research adviser, said. "Many of them reject the right to assemble, to have a free press, and to petition the government. This is a dictator's fantasy." Further, 1 in 5 respondents said they were "somewhat" or "very" worried about losing their job if someone complains about something they said. Eighty-three percent reported self-censoring in the past month, with 23 percent doing so "fairly" or "very" often. Just 22 percent of respondents said they believed the right to free speech was "very" or "completely" secure.
Note: For more along these lines, see concise summaries of deeply revealing news articles on censorship and the erosion of civil liberties from reliable major media sources.
Mr. Chen Si, known as the Angel of Nanjing, has volunteered to patrol the Yangtze Bridge every day, and over a 21-year career, he has saved 469 people from committing suicide. One of the most famous bridges in the country, it is also the world's most popular location to commit suicide. Almost daily there are people lingering alone or wandering aimlessly along its sidewalk, and Chen engages them in conversation to test whether or not they are prospective jumpers. It started for Chen back in 2000, when he saw a desperate-looking girl wandering on the bridge. He was worried something might happen to her, so he brought lunch for them to share and started to chat with her. He eventually paid for a bus ticket for her to go home, but realized that this was something that must happen all the time. For the past 21 years, he's crossed the bridge 10 times a day on his electric scooter wearing his red jacket with the words "cherish all life" written across the back. He's charismatic, he's determined, he can be almost rude, in a certain Chinese way, in his efforts to save people's lives, and he's become an expert. "People with an extreme internal struggle don't have relaxed body movements, their bodies look heavy," Chen [said]. He's caught suicidal people who've been cheated on by their spouses, those who can't afford school, and people struggling with many other hardships. He has spare rooms in his house to keep those he pulls off the bridge in a safe environment.
Note: Watch the trailer for a 2015 documentary about Chen. Explore more positive human interest stories focused on solutions and bridging divides.
On July 16, the S&P 500 index, one of the most widely cited benchmarks in American capitalism, reached its highest-ever market value: $47 trillion. Just 1.4 percent of those companies were worth more than $16 trillion combined, the greatest concentration of capital in the smallest number of companies in the history of the U.S. stock market. The names are familiar: Microsoft, Apple, Amazon, Nvidia, Meta, Alphabet, and Tesla. All of them, too, have made giant bets on artificial intelligence. For all their similarities, these trillion-dollar-plus companies have been grouped together under a single banner: the Magnificent Seven. In the past month, though, these giants of the U.S. economy have been faltering. A recent rout led to a collapse of $2.6 trillion in their market value. Earlier this year, Goldman Sachs issued a deeply skeptical report on the industry, calling it too expensive, too clunky, and just simply not as useful as it has been chalked up to be. "There's not a single thing that this is being used for that's cost-effective at this point," Jim Covello, an influential Goldman analyst, said on a company podcast. AI is not going away, and it will surely become more sophisticated. This explains why, even with the tempering of the AI-investment thesis, these companies are still absolutely massive. When you talk with Silicon Valley CEOs, they love to roll their eyes at their East Coast skeptics. Banks, especially, are too cautious, too concerned with short-term goals, too myopic to imagine another world.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
The Federal Bureau of Investigation has a long and checkered history of letting confidential informants run wild. Joshua Caleb Sutter firmly fits into this framework. A longtime occultist and neo-Nazi, Sutter became an FBI informant roughly 20 years ago. Since then, he's earned at least $140,000 infiltrating a range of far-right organizations, most notoriously the Atomwaffen Division (AWD) starting in 2017. WIRED found evidence of Sutter's extensive influence on and promotion of an international child abuse network that goes alternatively by "com" or "764." 764, as WIRED reported in March along with The Washington Post ... is the target of an international law enforcement investigation, with more than a dozen members arrested in the United States, Europe, and Brazil. Participants in 764 and its affiliated splinter groups like CLVT, 7997, H3ll, and Harm Nation extort minors into sexually exploiting or harming themselves. They find minors via Instagram, Roblox, Minecraft, and other popular games and social media apps where children congregate online. "The informant market is run on this tacit, uncomfortable understanding that the cure sometimes might be worse than the disease," [Harvard Law School professor Alexandra] Natapoff tells WIRED. By utilizing people with criminal or extremist histories to infiltrate hard-to-penetrate milieus like gangs, organized crime, or terrorist groups ... the US government rewards such people for continuing to swim in the same waters. "Baked into that arrangement is the well-understood, avoidable phenomenon that these individuals are going to commit criminal acts," Natapoff says. According to a New York University Law School study, 41 percent of all federal terrorism cases after 9/11 involve the use of a confidential source.
Note: US agencies used at least 1,000 ex-Nazis as spies and informants during the Cold War. Nazi doctors were also used to teach mind control methods to the CIA. For more along these lines, a Human Rights Watch report found that nearly all of the highest-profile domestic terrorism plots in the US since 9/11 featured the direct involvement of government agents or informants. The FBI has even targeted vulnerable minors, some of them with brain development issues.
Dr. Amit Goswami [is the] founder of the Center for Quantum Activism and former professor at the University of Oregon. [He explains that] the quantum revolution, which started at the beginning of the last century, has put us in the position of an unfinished revolution, otherwise known as a kind of suspended paradigm shift, where the world is shifting from a situation of materialist reductionism, as it's called, where everything is regarded as based on material particles, to a world where everything is based on energy and cannot be understood apart from consciousness. And that's important for us because nonviolence does not operate materially. Nonviolence operates spiritually in the domain of consciousness. "Matter is just a possibility in consciousness," [said Goswami]. "So, consciousness chooses out of the matter waves the actual events that we experience. In the process, consciousness identifies with our brain, the observer's brain. We can talk about nonviolence in a very scientific way. If we are all originating from the same source, if we are ultimately the same consciousness that works through us, then it is complete ignorance to be violent to each other. So, the issue of nonviolence is basically a challenge of transformation. How do we transform using creativity, using the archetype of goodness, bringing that into the equation of power, and learn to be nonviolent with each other?" There's a lot of mental violence going on. But the point is that meeting violence with violence only begets more violence. So, it never stops. The answer, of course, is that nonviolence has to grow from inside of us. It has to be an intuition that happens not just once or twice during the day, but becomes a conviction, a faith that I cannot be violent to my fellow human.
Note: Explore more positive stories like this about healing social division.
Google and a few other search engines are the portal through which several billion people navigate the internet. Many of the world's most powerful tech companies, including Google, Microsoft, and OpenAI, have recently spotted an opportunity to remake that gateway with generative AI, and they are racing to seize it. Nearly two years after the arrival of ChatGPT, and with users growing aware that many generative-AI products have effectively been built on stolen information, tech companies are trying to play nice with the media outlets that supply the content these machines need. The start-up Perplexity ... announced revenue-sharing deals with Time, Fortune, and several other publishers. These publishers will be compensated when Perplexity earns ad revenue from AI-generated answers that cite partner content. The site does not currently run ads, but will begin doing so in the form of sponsored "related follow-up questions." OpenAI has been building its own roster of media partners, including News Corp, Vox Media, and The Atlantic. Google has purchased the rights to use Reddit content to train future AI models, and ... appears to be the only major search engine that Reddit is permitting to surface its content. The default was once that you would directly consume work by another person; now an AI may chew and regurgitate it first, then determine what you see based on its opaque underlying algorithm. Many of the human readers whom media outlets currently show ads and sell subscriptions to will have less reason to ever visit publishers' websites. Whether OpenAI, Perplexity, Google, or someone else wins the AI search war might not depend entirely on their software: Media partners are an important part of the equation. AI search will send less traffic to media websites than traditional search engines. The growing number of AI-media deals, then, amounts to a shakedown. AI is scraping publishers' content whether they want it to or not: Media companies can be chumps or get paid.
Note: The AI search war has nothing to do with journalists and content creators getting paid and acknowledged for their work. It's all about big companies doing deals with each other to control our information environment and capture more consumer spending. For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable sources.
Texas Attorney General Ken Paxton has won a $1.4 billion settlement from Facebook parent Meta over charges that it captured users' facial and biometric data without properly informing them it was doing so. Paxton said that starting in 2011, Meta, then known as Facebook, rolled out a "tag" feature that involved software that learned how to recognize and sort faces in photos. In doing so, it automatically turned on the feature without explaining how it worked, Paxton said – something that violated a 2009 state statute governing the use of biometric data, as well as running afoul of the state's deceptive trade practices act. "Unbeknownst to most Texans, for more than a decade Meta ran facial recognition software on virtually every face contained in the photographs uploaded to Facebook, capturing records of the facial geometry of the people depicted," he said in a statement. As part of the settlement, Meta did not admit to wrongdoing. Facebook discontinued how it had previously used face-recognition technology in 2021, in the process deleting the face-scan data of more than one billion users. The settlement amount, which Paxton said is the largest ever obtained by a single state against a business, will be paid out over five years. "This historic settlement demonstrates our commitment to standing up to the world's biggest technology companies and holding them accountable for breaking the law and violating Texans' privacy rights," Paxton said.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.