Excerpts of Key Corporate Corruption Media Articles in Major Media
Below are key excerpts of revealing news articles on corporate corruption from reliable news media sources. If any link fails to function, a paywall blocks full access, or the article is no longer available, try these digital tools.
Note: Explore our full index to key excerpts of revealing major media news articles on several dozen engaging topics. And don't miss amazing excerpts from 20 of the most revealing news articles ever published.
Tech companies have outfitted classrooms across the U.S. with devices and technologies that allow for constant surveillance and data gathering. Firms such as Gaggle, Securly and Bark (to name a few) now collect data from tens of thousands of K-12 students. They are not required to disclose how they use that data, or guarantee its safety from hackers. In their new book, Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools, Nolan Higdon and Allison Butler show how all-encompassing surveillance is now all too real, and everything from basic privacy rights to educational quality is at stake. The tech industry has done a great job of convincing us that their platforms – like social media and email – are "free." But the truth is, they come at a cost: our privacy. These companies make money from our data, and all the content and information we share online is basically unpaid labor. So, when the COVID-19 lockdowns hit, a lot of people just assumed that using Zoom, Canvas and Moodle for online learning was a "free" alternative to in-person classes. In reality, we were giving up even more of our labor and privacy to an industry that ended up making record profits. Your data can be used against you ... or taken out of context, such as sarcasm being used to deny you a job or admission to a school. Data breaches happen all the time, which could lead to identity theft or other personal information becoming public.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams–conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody "looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
A MintPress News investigation into the funding sources of U.S. foreign policy think tanks has found that they are sponsored to the tune of millions of dollars every year by weapons contractors. Arms manufacturing companies donated at least $7.8 million last year to the top fifty U.S. think tanks, which, in turn, pump out reports demanding more war and higher military spending, significantly increasing their sponsors' profits. The only losers in this closed, circular system are the American public, saddled with higher taxes, and the tens of millions of people around the world who are victims of the U.S. war machine. The think tanks receiving the most tainted cash were, in order, the Atlantic Council, CSIS, CNAS, the Hudson Institute, and the Council on Foreign Relations, while the weapons manufacturers most active on K Street were Northrop Grumman, Lockheed Martin, and General Atomics. There is obviously a massive conflict of interest if groups advising the U.S. government on military policy are awash with cash from the arms industry. The Atlantic Council alone is funded by 22 weapons companies, totaling at least $2.69 million last year. Even a group like the Carnegie Endowment for Peace, established in 1910 as an organization dedicated to reducing global conflict, is sponsored by corporations making weapons of war, including Boeing and Leonardo, who donate tens of thousands of dollars annually.
Note: Learn more about arms industry corruption in our comprehensive Military-Intelligence Corruption Information Center. For more, see concise summaries of deeply revealing news articles on military corruption from reliable major media sources.
Drug ads have been ubiquitous on TV since the late 1990s and have spilled onto the internet and social media. The United States and New Zealand are the only countries that legally allow direct-to-consumer pharmaceutical advertising. Manufacturers have spent more than $1 billion a month on ads in recent years. Last year, three of the top five spenders on TV advertising were drug companies. A 2023 study found that, among top-selling drugs, those with the lowest levels of added benefit tended to spend more on advertising to patients than doctors. "I worry that direct-to-consumer advertising can be used to drive demand for marginally effective drugs or for drugs with more affordable or more cost-effective alternatives," the study's author, Michael DiStefano ... said. Indeed, more than 50% of what Medicare spent on drugs from 2016 through 2018 was for drugs that were advertised. The government has, in recent years, tried to ensure that prescription-drug advertising gives a more accurate and easily understood picture of benefits and harms. But the results have been disappointing. When President Donald Trump's administration tried to get drugmakers to list the price of any treatments costing over $35 on TV ads, for example, the industry took it to federal court, saying the mandate violated drugmakers' First Amendment rights. Big Pharma won. With a bit of common-sense truth-in-advertising enforcement, many of the ads would disappear.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Pharma corruption from reliable major media sources.
The pharmaceutical industry, as a whole and by its nature, is conflicted and significantly driven by the mighty dollar, rather than altruism. A 2020 peer-reviewed article published in the Journal of the American Medical Association outlines the extent of the problem. The group studied both the type of illegal activity and financial penalties imposed on pharma companies between the years 2003 and 2016. Of the companies studied, 85 percent (22 of 26) had received financial penalties for illegal activities with a total combined dollar value of $33 billion. The illegal activities included manufacturing and distributing adulterated drugs, misleading marketing, failure to disclose negative information about a product (i.e. significant side effects including death), bribery of foreign officials, fraudulently delaying market entry of competitors, pricing and financial violations, and kickbacks. The highest penalties were imposed on Schering-Plough, GlaxoSmithKline (GSK), Allergan, and Wyeth. The biggest overall fines have been paid by GSK (almost $10 billion), Pfizer ($2.9 billion), Johnson & Johnson ($2.6 billion), and other familiar names including AstraZeneca, Novartis, Merck, Eli Lilly, Schering-Plough, Sanofi Aventis, and Wyeth. Five US states – Texas, Kansas, Mississippi, Louisiana, and Utah – are taking Pfizer to court for withholding information, and misleading and deceiving the public through statements made in marketing its Covid-19 injection.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Pharma corruption from reliable major media sources.
Big tech companies have spent vast sums of money honing algorithms that gather their users' data and scour it for patterns. One result has been a boom in precision-targeted online advertisements. Another is a practice some experts call "algorithmic personalized pricing," which uses artificial intelligence to tailor prices to individual consumers. The Federal Trade Commission uses a more Orwellian term for this: "surveillance pricing." In July the FTC sent information-seeking orders to eight companies that "have publicly touted their use of AI and machine learning to engage in data-driven targeting," says the agency's chief technologist Stephanie Nguyen. Consumer surveillance extends beyond online shopping. "Companies are investing in infrastructure to monitor customers in real time in brick-and-mortar stores," [Nguyen] says. Some price tags, for example, have become digitized, designed to be updated automatically in response to factors such as expiration dates and customer demand. Retail giant Walmart–which is not being probed by the FTC–says its new digital price tags can be remotely updated within minutes. When personalized pricing is applied to home mortgages, lower-income people tend to pay more–and algorithms can sometimes make things even worse by hiking up interest rates based on an inadvertently discriminatory automated estimate of a borrower's risk rating.
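A toy sketch can make the mechanism concrete. Everything below is hypothetical and illustrative, not any company's actual model: a pricing function takes crude per-shopper signals and returns an individualized price for the very same item.

```python
# Toy illustration of "algorithmic personalized pricing" (hypothetical signals
# and multipliers, not any retailer's real system): the same item gets a
# different price depending on what the algorithm infers about each shopper.

BASE_PRICE = 100.0

def personalized_price(profile: dict) -> float:
    """Adjust a base price using inferred demand signals (illustrative only)."""
    price = BASE_PRICE
    if profile.get("browsing_urgency", 0) > 0.7:  # e.g. repeated product-page visits
        price *= 1.15                             # inferred willingness to pay more
    if profile.get("price_sensitive", False):     # e.g. a history of coupon use
        price *= 0.90
    if profile.get("premium_device", False):      # device type as a wealth proxy
        price *= 1.05
    return round(price, 2)

shoppers = [
    {"browsing_urgency": 0.9, "premium_device": True},
    {"price_sensitive": True},
]
for s in shoppers:
    print(personalized_price(s))
```

Real systems use machine-learned models over thousands of signals rather than three hand-written rules, but the structure is the same: the price is a function of who the algorithm thinks you are.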
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
Ford Motor Company is just one of many automakers advancing technology that weaponizes cars for mass surveillance. The ... company is currently pursuing a patent for technology that would allow vehicles to monitor the speed of nearby cars, capture images, and transmit data to law enforcement agencies. This would effectively turn vehicles into mobile surveillance units, sharing detailed information with both police and insurance companies. Ford's initiative is part of a broader trend among car manufacturers, where vehicles are increasingly used to spy on drivers and harvest data. In today's world, a smartphone can produce up to 3 gigabytes of data per hour, but recently manufactured cars can churn out up to 25 gigabytes per hour–and the cars of the future will generate even more. These vehicles now gather biometric data such as voice, iris, retina, and fingerprint recognition. In 2022, Hyundai patented eye-scanning technology to replace car keys. This data isn't just stored locally; much of it is uploaded to the cloud, a system that has proven time and again to be incredibly vulnerable. Toyota recently announced that a significant amount of customer information was stolen and posted on a popular hacking site. Imagine a scenario where hackers gain control of your car. As cybersecurity threats become more advanced, the possibility of a widespread attack is not far-fetched.
Note: FedEx is helping the police build a large AI surveillance network to track people and vehicles. Michael Hastings, a journalist investigating U.S. military and intelligence abuses, was killed in a 2013 car crash that may have been the result of a hack. For more along these lines, explore summaries of news articles on the disappearance of privacy from reliable major media sources.
In December of 2002, Sharyl Attkisson, an Emmy-winning investigative reporter for CBS News, had an unsettling interview with smallpox expert Jonathan Tucker. In a post-9/11 world, with fears of terrorists using a long-eradicated disease like smallpox as a bioweapon, the US was preparing to bring back the smallpox inoculation program. But to Tucker, the very idea was "agonizing," writes Attkisson. Why? Because it involved "weighing the risk of a possible terrorist use of smallpox ... against the known risks of the vaccine," Tucker told the author. "A 'toxic' vaccine?" she writes. "Didn't the smallpox vaccine save the world?" But as she soon discovered, it had serious side effects, including a surprisingly high possibility of death. Attkisson witnessed firsthand how deadly the vaccine could be in April of 2003, when a colleague at NBC, journalist David Bloom, died from deep vein thrombosis while on assignment in Iraq. He'd also recently been vaccinated for smallpox, and ... thrombosis was a possible side effect of the inoculation. The majority of scientific studies are funded and even dictated by drug companies. "Studies that could stand to truly solve our most consequential health problems aren't done if they don't ultimately advance a profitable pill or injection," Attkisson writes. "These aren't necessarily drugs designed to make us well, but ones we'll 'need' for life," writes Attkisson. Some [drug companies] hire "ghostwriters" to author studies promoting a new drug, exaggerating benefits and downplaying risks, then pay a doctor or medical expert to sign their name to it. "We exist largely in an artificial reality brought to you by the makers of the latest pill or injection," she writes. "It's a reality where invisible forces work daily to hype fears about certain illnesses, and exaggerate the supposed benefits of treatments and cures."
Note: Top leaders in the field of medicine and science have spoken out about the rampant corruption and conflicts of interest in those industries. For more along these lines, see concise summaries of deeply revealing news articles on Big Pharma corruption from reliable major media sources.
[Don] Poldermans was a prolific medical researcher at Erasmus Medical Center in the Netherlands, where he analyzed the standards of care for cardiac events after surgery, publishing a series of definitive studies from 1999 until the early 2010s. One crucial question he studied: Should you give patients a beta blocker, which lowers blood pressure, before certain surgeries? Poldermans's research said yes. European medical guidelines (and to a lesser extent US guidelines) recommended it accordingly. The problem? Poldermans's data was reportedly fake. A 2012 inquiry by Erasmus Medical School, his employer, into allegations of misconduct found that he "used patient data without written permission, used fictitious data and ... submitted to conferences [reports] which included knowingly unreliable data." Poldermans admitted the allegations and apologized. After the revelations, a new meta-analysis was published in 2014, evaluating whether to use beta blockers before non-cardiac surgery. It found that a course of beta blockers made it 27 percent more likely that someone would die within 30 days of their surgery. Millions of surgeries were conducted across the US and Europe during the years from 2009 to 2013 when those misguided guidelines were in place. One provocative analysis ... estimated 800,000 excess deaths compared with what would have occurred had best practices been established five years sooner.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corruption in science and in Big Pharma from reliable major media sources.
Nearly half of the AI-based medical devices approved by the US Food and Drug Administration (FDA) have not been trained on real patient data, according to a new study. The study, published in Nature Medicine, finds that 226 of the 521 devices authorised by the FDA lack published clinical validation data. "Although AI device manufacturers boast of the credibility of their technology with FDA authorisation, clearance does not mean that the devices have been properly evaluated for clinical effectiveness using real patient data," says first author Sammy Chouffani El Fassi. The US team of researchers examined the FDA's official "Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices" database. "Using these hundreds of devices in this database, we wanted to determine what it really means for an AI medical device to be FDA-authorised," says Professor Gail Henderson, a researcher at the University of North Carolina's Department of Social Medicine. Of the 521 devices in this database, just 22 were validated using the "gold standard" – randomised controlled trials, while 43% (226) didn't have any published clinical validation. Some of these devices used "phantom images" instead – computer-generated images that didn't come from real patients. The rest of the devices used retrospective or prospective validation – tests based on patient data from the past or in real-time, respectively.
Note: For more along these lines, see concise summaries of deeply revealing news articles on health and artificial intelligence from reliable major media sources.
Almost two-thirds of supermarket baby food is unhealthy while nearly all baby food labels contain misleading marketing claims designed to "trick" parents. Those are the conclusions of an eyebrow-raising study in which researchers at Australia's George Institute for Global Health analyzed 651 foods marketed for children ages 6 months to 36 months at 10 supermarket chains in the United States. The study ... found that 60% of the foods failed to meet nutritional standards set by the World Health Organization. In addition, 70% of the baby food failed to meet protein requirements, 44% exceeded total sugar recommendations, 25% failed to meet calorie recommendations, and 20% exceeded recommended sodium limits set by the WHO. The most concerning products were snack foods and pouches. "Research shows 50% of the sugar consumed from infant foods comes from pouches, and we found those were some of the worst offenders," said Dr. Elizabeth Dunford, senior study author. Sales of such convenient baby food pouches soared 900% in the U.S. in the past 13 years. Consumption of processed foods in early childhood can set lifelong habits of poor eating that could lead to obesity, diabetes, and some cancers. The study also found that 99.4% of the baby food analyzed had misleading marketing claims on the labels that violated the WHO's promotional guidelines. On average, products contained four misleading marketing claims; some had as many as eleven.
Note: Big Food profits immensely as American youth face a growing health crisis, with close to 30% prediabetic, one in six youth obese, and over half of children facing a chronic illness. Nearly 40% of conventional baby food contains toxic pesticides. For more along these lines, explore concise summaries of news articles on food system corruption from reliable major media sources.
In almost every country on Earth, the digital infrastructure upon which the modern economy was built is owned and controlled by a small handful of monopolies, based largely in Silicon Valley. This system is looking more and more like neo-feudalism. Just as the feudal lords of medieval Europe owned all of the land ... the US Big Tech monopolies of the 21st century act as corporate feudal lords, controlling all of the digital land upon which the digital economy is based. A monopolist in the 20th century would have loved to control a country's supply of, say, refrigerators. But the Big Tech monopolists of the 21st century go a step further and control all of the digital infrastructure needed to buy those fridges – from the internet itself to the software, cloud hosting, apps, payment systems, and even the delivery service. These corporate neo-feudal lords don't just dominate a single market or a few related ones; they control the marketplace. They can create and destroy entire markets. Their monopolistic control extends well beyond just one country, to almost the entire world. If a competitor does manage to create a product, US Big Tech monopolies can make it disappear. Imagine you are an entrepreneur. You develop a product, make a website, and offer to sell it online. But then you search for it on Google, and it does not show up. Instead, Google promotes another, similar product in the search results. This is not a hypothetical; this already happens.
Note: For more along these lines, see concise summaries of deeply revealing news articles on Big Tech from reliable major media sources.
Surveillance technologies have evolved at a rapid clip over the last two decades – as has the government's willingness to use them in ways that are genuinely incompatible with a free society. The intelligence failures that allowed for the attacks on September 11 poured the concrete of the surveillance state foundation. The gradual but dramatic construction of this surveillance state is something that Republicans and Democrats alike are responsible for. Our country cannot build and expand a surveillance superstructure and expect that it will not be turned against the people it is meant to protect. The data that's being collected reflect intimate details about our closely held beliefs, our biology and health, daily activities, physical location, movement patterns, and more. Facial recognition, DNA collection, and location tracking represent three of the most pressing areas of concern and are ripe for exploitation. Data brokers can use tens of thousands of data points to develop a detailed dossier on you that they can sell to the government (and others). Essentially, the data broker loophole allows a law enforcement agency or other government agency such as the NSA or Department of Defense to give a third party data broker money to hand over the data from your phone – rather than get a warrant. When pressed by the intelligence community and administration, policymakers on both sides of the aisle failed to draw upon the lessons of history.
Note: For more along these lines, see concise summaries of deeply revealing news articles on government corruption and the disappearance of privacy from reliable major media sources.
A US federal appeals court ruled last week that so-called geofence warrants violate the Fourth Amendment's protections against unreasonable searches and seizures. Geofence warrants allow police to demand that companies such as Google turn over a list of every device that appeared at a certain location at a certain time. The US Fifth Circuit Court of Appeals ruled on August 9 that geofence warrants are "categorically prohibited by the Fourth Amendment" because "they never include a specific user to be identified, only a temporal and geographic location where any given user may turn up post-search." In other words, they're the unconstitutional fishing expedition that privacy and civil liberties advocates have long asserted they are. Google ... is the most frequent target of geofence warrants, vowed late last year that it was changing how it stores location data in such a way that geofence warrants may no longer return the data they once did. Legally, however, the issue is far from settled: The Fifth Circuit decision applies only to law enforcement activity in Louisiana, Mississippi, and Texas. Plus, because of weak US privacy laws, police can simply purchase the data and skip the pesky warrant process altogether. As for the appellants in the case heard by the Fifth Circuit, well, they're no better off: The court found that the police used the geofence warrant in "good faith" when it was issued in 2018, so they can still use the evidence they obtained.
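The mechanics the court objected to can be sketched in a few lines. This is a hypothetical data model, not Google's actual Location History schema: the query names no suspect, only a box on the map and a time window, and returns every device that falls inside both.

```python
# Toy sketch of the kind of query a geofence warrant compels (hypothetical
# schema, illustrative only): return every device seen inside a bounding box
# during a time window, with no specific user identified in advance.

from dataclasses import dataclass

@dataclass
class Ping:
    device_id: str
    lat: float
    lon: float
    ts: int  # unix seconds

def geofence_query(pings, lat_min, lat_max, lon_min, lon_max, t_start, t_end):
    """Return the sorted IDs of all devices inside the fence during the window."""
    return sorted({
        p.device_id for p in pings
        if lat_min <= p.lat <= lat_max
        and lon_min <= p.lon <= lon_max
        and t_start <= p.ts <= t_end
    })

pings = [
    Ping("A", 29.760, -95.369, 1000),
    Ping("B", 29.761, -95.370, 1500),
    Ping("C", 30.000, -95.000, 1200),  # outside the fence
]
print(geofence_query(pings, 29.75, 29.77, -95.38, -95.36, 900, 1600))  # → ['A', 'B']
```

Note that the query's inputs are a place and a time, never a person, which is exactly the "fishing expedition" structure the Fifth Circuit found unconstitutional.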
Note: Read more about the rise of geofence warrants and their threat to privacy rights. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
Data breaches are a seemingly endless scourge with no simple answer, but the breach in recent months of the background-check service National Public Data illustrates just how dangerous and intractable they have become. In April, a hacker known for selling stolen information, known as USDoD, began hawking a trove of data on cybercriminal forums for $3.5 million that they said included 2.9 billion records and impacted "the entire population of USA, CA and UK." As the weeks went on, samples of the data started cropping up as other actors and legitimate researchers worked to understand its source and validate the information. By early June, it was clear that at least some of the data was legitimate and contained information like names, emails, and physical addresses in various combinations. When information is stolen from a single source, like Target customer data being stolen from Target, it's relatively straightforward to establish that source. But when information is stolen from a data broker and the company doesn't come forward about the incident, it's much more complicated to determine whether the information is legitimate and where it came from. Typically, people whose data is compromised in a breach–the true victims–aren't even aware that National Public Data held their information in the first place. Every trove of information that attackers can get their hands on ultimately fuels scamming, cybercrime, and espionage.
Note: Clearview AI scraped billions of faces off of social media without consent. At least 600 law enforcement agencies were tapping into its database of 3 billion facial images. During this time, Clearview was hacked and its entire client list – which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments – was leaked to hackers.
Peregrine ... is essentially a super-powered Google for police data. Enter a name or address into its web-based app, and Peregrine quickly scans court records, arrest reports, police interviews, body cam footage transcripts – any police dataset imaginable – for a match. It's taken data siloed across an array of older, slower systems, and made it accessible in a simple, speedy app that can be operated from a web browser. To date, Peregrine has scored 57 contracts across a wide range of police and public safety agencies in the U.S., from Atlanta to L.A. Revenue tripled in 2023, from $3 million to $10 million. [That will] triple again to $30 million this year, bolstered by $60 million in funding from the likes of Friends & Family Capital and Founders Fund. Privacy advocates [are] concerned about indiscriminate surveillance. "We see a lot of police departments of a lot of different sizes getting access to Real Time Crime Centers now, and it's definitely facilitating a lot more general access to surveillance feeds for some of these smaller departments that would have previously found it cost prohibitive," said Beryl Lipton ... at the Electronic Frontier Foundation (EFF). "These types of companies are inherently going to have a hard time protecting privacy, because everything that they're built on is basically privacy damaging." Peregrine technology can also enable "predictive policing," long criticized for unfairly targeting poorer, non-white neighborhoods.
Note: Learn more about Palantir's involvement in domestic surveillance and controversial military technologies. For more along these lines, see concise summaries of deeply revealing news articles on police corruption and the disappearance of privacy from reliable major media sources.
If you appeared in a photo on Facebook any time between 2011 and 2021, it is likely your biometric information was fed into DeepFace – the company's controversial deep-learning facial recognition system that tracked the face scan data of at least a billion users. That's where Texas Attorney General Ken Paxton comes in. His office secured a $1.4 billion settlement from Meta over its alleged violation of a Texas law that bars the capture of biometric data without consent. Meta is on the hook to pay $275 million within the next 30 days and the rest over the next four years. Why did Paxton wait until 2022 – a year after Meta announced it would suspend its facial recognition technology and delete its database – to go up against the tech giant? If our AG truly prioritized privacy, he'd focus on the lesser-known companies that law enforcement agencies here in Texas are paying to scour and store our biometric data. In 2017, [Clearview AI] launched a facial recognition app that ... could identify strangers from a photo by searching a database of faces scraped without consent from social media. In 2020, news broke that at least 600 law enforcement agencies were tapping into a database of 3 billion facial images. Clearview was hit with lawsuit after lawsuit. That same year, the company was hacked and its entire client list – which included the Department of Justice, U.S. Immigration and Customs Enforcement, Interpol, retailers and hundreds of police departments – was leaked.
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and Big Tech from reliable major media sources.
Once upon a time, you could have yourself a nice little Saturday of stocking up at Costco (using your sister's membership card, naturally), before hitting up a museum (free admission with your 15-year-old expired student ID) or settling into a reality TV binge sesh (streaming on your college roommate's ex-boyfriend's Netflix login). Thanks to the fine-tuning of the tech that Corporate America uses to police subscriptions, those freeloading days are over. Costco and Disney this month took a page from the Netflix playbook and announced they are cracking down on account sharers. Want to put on "Frozen" for the kids so you can have two hours to do literally anything else? You're going to need a Disney+ login associated with your household. The tech that tracks your IP address and can read your face has gotten more sophisticated. Retailers and streaming services are increasingly turning to status-verification tech that makes it harder for folks to claim student discounts on services like Amazon Prime or Spotify beyond graduation. Cracking down on sharing was hugely successful for Netflix. For years, the streaming giant turned a blind eye to password sharing because doing so allowed more people to experience the product and, crucially, come to rely on it. Netflix kept growing and growing until 2022, when [it] cashed in on its brand loyalty, betting that it had made itself indispensable to enough viewers that they'd be willing to cough up $7-$15 a month to keep their access.
Note: For more along these lines, see concise summaries of deeply revealing news articles on corporate corruption from reliable major media sources.
If you rent your home, there's a good chance your landlord uses RealPage to set your monthly payment. The company describes itself as merely helping landlords set the most profitable price. But a series of lawsuits says it's something else: an AI-enabled price-fixing conspiracy. The late Justice Antonin Scalia once called price-fixing the "supreme evil" of antitrust law. Agreeing to fix prices is punishable with up to 10 years in prison and a $100 million fine. Property owners feed RealPage's "property management software" their data, including unit prices and vacancy rates, and the algorithm–which also knows what competitors are charging–spits out a rent recommendation. If enough landlords use it, the result could look the same as a traditional price-fixing cartel: lockstep price increases instead of price competition, no secret handshake or clandestine meeting needed. Algorithmic price-fixing appears to be spreading to more and more industries. And existing laws may not be equipped to stop it. In more than 40 housing markets across the United States, 30 to 60 percent of multifamily-building units are priced using RealPage. The plaintiffs suing RealPage, including the Arizona and Washington, D.C., attorneys general, argue that this has enabled a critical mass of landlords to raise rents in concert, making an existing housing-affordability crisis even worse. The lawsuits also argue that RealPage pressures landlords to comply with its pricing suggestions.
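A minimal sketch (a hypothetical recommender, not RealPage's actual software) shows how a shared algorithm fed with pooled competitor data can produce lockstep increases without any explicit agreement among landlords:

```python
# Illustrative sketch of algorithmic price coordination (hypothetical formula):
# each landlord feeds in their own rent, and the shared recommender nudges
# everyone toward a markup over the pooled market average.

def recommend(own_rent: float, all_rents: list, markup: float = 0.05) -> float:
    """Recommend the higher of the landlord's rent or market average plus markup."""
    market_avg = sum(all_rents) / len(all_rents)  # pooled competitor data
    return round(max(own_rent, market_avg * (1 + markup)), 2)

rents = [1500.0, 1550.0, 1600.0]
for rnd in range(3):  # landlords adopt the recommendations each round
    rents = [recommend(r, rents) for r in rents]
    print(rnd, rents)
```

After a single round every landlord lands on the same recommended rent, and each subsequent round ratchets the shared figure up by the markup: lockstep increases instead of price competition, with no secret handshake needed, which is the dynamic the lawsuits describe.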
Note: For more along these lines, see concise summaries of deeply revealing news articles on AI and corporate corruption from reliable major media sources.
The eruption of racist violence in England and Northern Ireland raises urgent questions about the responsibilities of social media companies, and how the police use facial recognition technology. While social media isn't the root of these riots, it has allowed inflammatory content to spread like wildfire and helped rioters coordinate. The great elephant in the room is the wealth, power and arrogance of the big tech emperors. Silicon Valley billionaires are richer than many countries. That mature modern states should allow them unfettered freedom to regulate the content they monetise is a gross abdication of duty, given their vast financial interest in monetising insecurity and division. In recent years, [facial recognition] has been used on our streets without any significant public debate. We wouldn't dream of allowing telephone taps, DNA retention or even stop and search and arrest powers to be so unregulated by the law, yet this is precisely what has happened with facial recognition. Our facial images are gathered en masse via CCTV cameras, the passport database and the internet. At no point were we asked about this. Individual police forces have entered into direct contracts with private companies of their choosing, making opaque arrangements to trade our highly sensitive personal data with private companies that use it to develop proprietary technology. There is no specific law governing how the police, or private companies ... are authorised to use this technology. Experts at Big Brother Watch believe the inaccuracy rate for live facial recognition since the police began using it is around 74%, and there are many cases pending about false positive IDs.
Note: Many US states are not required to reveal that they used face recognition technology to identify suspects, even though misidentification is a common occurrence. For more along these lines, see concise summaries of deeply revealing news articles on Big Tech and the disappearance of privacy from reliable major media sources.
In 2021, parents in South Africa with children between the ages of 5 and 13 were offered an unusual deal. For every photo of their child's face, a London-based artificial intelligence firm would donate 20 South African rands, about $1, to their children's school as part of a campaign called "Share to Protect." With promises of protecting children, a little-known group of companies in an experimental corner of the tech industry known as "age assurance" has begun engaging in a massive collection of faces, opening the door to privacy risks for anyone who uses the web. The companies say their age-check tools could give parents ... peace of mind. But by scanning tens of millions of faces a year, the tools could also subject children – and everyone else – to a level of inspection rarely seen on the open internet and boost the chances their personal data could be hacked, leaked or misused. Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that's created a gold mine. But ... Alex Stamos, the former security chief of Facebook, which uses Yoti, said "most age verification systems range from 'somewhat privacy violating' to 'authoritarian nightmare.'" Some also fear that lawmakers could use the tools to bar teens from content they dislike, including First Amendment-protected speech.
Note: Learn about Proctorio, an AI surveillance anti-cheating software used in schools to monitor children through webcams–conducting "desk scans," "face detection," and "gaze detection" to flag potential cheating and to spot anybody "looking away from the screen for an extended period of time." For more along these lines, see concise summaries of deeply revealing news articles on AI and the disappearance of privacy from reliable major media sources.