Ars Technica - All content

“Outrageously” priced weight-loss drugs could bankrupt US health care

by: Beth Mole

Packaging for Wegovy, manufactured by Novo Nordisk, is seen in this illustration photo.

With the debut of remarkably effective weight-loss drugs, America's high obesity rate and its uniquely astronomical prescription drug pricing appear to be set on a catastrophic collision course—one that threatens to "bankrupt our entire health care system," according to a new Senate report that modeled the economic impact of the drugs in different uptake scenarios.

If just half of the adults in the US with obesity start taking a new weight-loss drug, such as Wegovy, the collective cost would total an estimated $411 billion per year, the analysis found. That's more than the $406 billion Americans spent in 2022 on all prescription drugs combined.

While the bulk of the spending on weight-loss drugs will occur in the commercial market—which could easily lead to spikes in health insurance premiums—taxpayer-funded Medicare and Medicaid programs will also see an extraordinary financial burden. In the scenario that half of adults with obesity go on the drug, the cost to those federal programs would total $166 billion per year, rivaling the programs' total 2022 drug costs of $175 billion.

In all, by 2031, total US spending on prescription drugs is poised to reach over $1 trillion per year due to weight-loss drugs. Without them, the baseline projected spending on all prescription drugs would be just under $600 billion.

The analysis was put together by the Senate's Health, Education, Labor, and Pensions (HELP) Committee, chaired by staunch drug-pricing critic Bernie Sanders (I-Vt.). And it's quick to knock down a common argument for the high prices of smash-hit weight-loss drugs: that, given their high effectiveness, the drugs will improve people's health in wide-ranging ways, including controlling diabetes, improving cardiovascular health, and potentially more, and that, with those improvements, people will need less health care generally, lowering health care costs across the board.

But, while the drugs do appear to have wide-ranging, life-altering benefits for overall health, the prices of the drugs are still set too high to be entirely offset by any savings in health care use. The HELP committee analysis cited a March Congressional Budget Office (CBO) report that found: "at their current prices, [anti-obesity medicines] would cost the federal government more than it would save from reducing other health care spending—which would lead to an overall increase in the deficit over the next 10 years." Moreover, in April, the head of the CBO said that the drugmakers would have to slash prices of their weight-loss drugs by 90 percent to "get in the ballpark" of not increasing the national deficit.

The HELP committee report offered a relatively simple solution to the problem: Drugmakers should set their US prices to match the relatively low prices they've set in other countries. The report focused on Wegovy because it currently accounts for the most US prescriptions in the new class of weight-loss drugs (GLP-1 drugs). Wegovy is made by Denmark-based Novo Nordisk.

In the US, the estimated net price (after rebates) of Wegovy is $809 per month. In Denmark, the price is $186 per month. A study by researchers at Yale estimated that drugs like Wegovy can be profitably manufactured for less than $5 per month.

If Novo Nordisk set its US prices for Wegovy to match the Danish price, spending to treat half of US adults with obesity would drop from $411 billion to $94.5 billion, a roughly $316.5 billion savings.
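The report's headline numbers are internally consistent, as a quick back-of-the-envelope check shows. This is a minimal sketch using only the figures quoted above; the report's actual model of uptake scenarios is more detailed.

```python
# Sanity-check the Senate report's arithmetic from the quoted figures.
US_NET_PRICE_PER_MONTH = 809   # estimated US net price of Wegovy, after rebates
DK_PRICE_PER_MONTH = 186       # Danish price
US_TOTAL_SPEND = 411e9         # annual cost if half of US adults with obesity take the drug

# Back out the patient count implied by the $411 billion figure.
implied_patients = US_TOTAL_SPEND / (US_NET_PRICE_PER_MONTH * 12)
print(f"Implied patients: {implied_patients / 1e6:.1f} million")    # ~42.3 million

# Re-price the same population at the Danish rate.
dk_total = implied_patients * DK_PRICE_PER_MONTH * 12
print(f"Spending at Danish price: ${dk_total / 1e9:.1f} billion")   # ~$94.5 billion
print(f"Savings: ${(US_TOTAL_SPEND - dk_total) / 1e9:.1f} billion") # ~$316.5 billion
```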

Without a dramatic price cut, Americans will likely face either losing access to the drugs or shouldering higher overall health care costs, or some combination of the two. The HELP committee report highlighted how this recently played out in North Carolina. In January, the board of trustees for the state employee health plan voted to end all coverage of Wegovy and other GLP-1 drugs due to the cost. Estimates found that if the plan continued to cover the drugs, the state would need to nearly double health insurance premiums to offset the costs.

The Apple TV is coming for the Raspberry Pi’s retro emulation box crown

by: Andrew Cunningham

The RetroArch app installed in tvOS.
Andrew Cunningham

Apple’s initial pitch for tvOS and the Apple TV as it currently exists was centered on apps. No longer a mere streaming box, the Apple TV would also be a destination for general-purpose software and games, piggybacking off of the iPhone's vibrant app and game library.

That never really panned out, and the Apple TV is still mostly a box for streaming TV shows and movies. But the same App Store rule change that recently allowed Delta, PPSSPP, and other retro console emulators onto the iPhone and iPad could also make the Apple TV appeal to people who want a small, efficient, no-fuss console emulator for their TVs.

So far, few of the emulators that have made it to the iPhone have been ported to the Apple TV. But earlier this week, the streaming box got an official port of RetroArch, the sprawling collection of emulators that runs on everything from the PlayStation Portable to the Raspberry Pi. RetroArch could be sideloaded onto iOS and tvOS before this, but only using awkward workarounds that took a lot more work and know-how than downloading an app from the App Store.

Downloading and using RetroArch on the Apple TV is a lot like using it on any other platform it supports, for better or worse. ROM files can be uploaded using a browser pointed at the Apple TV's IP address or hostname, which pops up the first time you launch the RetroArch app. From there, you're only really limited by the list of emulators that the Apple TV version of the app supports.

The main benefit of using the Apple TV hardware for emulation is that even older models have substantially better CPU and GPU performance than any Raspberry Pi; the first-gen Apple TV 4K and its Apple A10X chip date back to 2017 and still outperform a Pi 5 released in 2023. Even these older models should be more than fast enough to support advanced features like Run Ahead, which reduces input latency, and higher-than-native-resolution rendering to make 3D games look a bit more modern.

Beyond the hardware, tvOS is also a surprisingly capable gaming platform. Apple has done a good job adding and maintaining support for new Bluetooth gamepads in recent releases, and even Nintendo's official Switch Online controllers for the NES, SNES, and N64 are all officially supported as of late 2022. Apple may have added this gamepad support primarily to help support its Apple Arcade service, but all of those gamepads work equally well with RetroArch.

At the risk of stating the obvious, another upside of using the Apple TV for retro gaming is that you can also still use it as a modern 4K video streaming box when you're finished playing your games. It has well-supported apps from just about every streaming provider, and it supports all the DRM that these providers insist on when you're trying to stream high-quality 4K video with modern codecs. Most Pi gaming distributions offer the Kodi streaming software, but the long list of caveats and add-ons you'd need just to attempt using the same streaming services the Apple TV can access is frankly outside the scope of this article.

Obviously, there are trade-offs. Pis have been running retro games for a decade, and the Apple TV is just starting to be able to do it now. Even with the loosened App Store restrictions, Apple still has other emulation limitations relative to a Raspberry Pi or a PC.

The biggest one is that emulators on Apple's platforms can’t use just-in-time (JIT) code compilation, needed for 3D console emulators like Dolphin. These restrictions make the Apple TV a less-than-ideal option for emulating newer consoles—the Nintendo 64, Nintendo DS, Sony PlayStation, PlayStation Portable, and Sega Saturn are the newest consoles RetroArch supports on the Apple TV, cutting out newer things like the GameCube and Wii, Dreamcast, and PlayStation 2 that are all well within the capabilities of Apple's chips. Apple also insists nebulously that emulators must be for "retro" consoles rather than modern ones, which could limit the types of emulators that are available.

With respect to RetroArch specifically, there are other limitations. Though RetroArch describes itself as a front-end for emulators, its user interface is tricky to navigate and cluttered with tons of overlapping settings that make it easy to break things if you don't know what you're doing. Most Raspberry Pi gaming distros use RetroArch but install a front-end-for-a-front-end like EmulationStation to make RetroArch a bit more accessible and easier to learn. A developer could release an app that included RetroArch plus a separate front-end, but Apple's sandboxing restrictions would likely prevent anyone from releasing an app that just served as a more user-friendly front-end for the RetroArch app.

Regardless, it's still pretty cool to be able to play retro games on an Apple TV's more advanced hardware. As more emulators make their way to the App Store, the Apple TV’s less-fussy software and the power of its hardware could make it a compelling alternative to a more effort-intensive Raspberry Pi setup.

OpenAI will use Reddit posts to train ChatGPT under new deal

by: Scharon Harding

An image of a woman holding a cell phone in front of the Reddit logo displayed on a computer screen, on April 29, 2024, in Edmonton, Canada.

Stuff posted on Reddit is getting incorporated into ChatGPT, Reddit and OpenAI announced on Thursday. The new partnership grants OpenAI access to Reddit’s Data API, giving the generative AI firm real-time access to Reddit posts.

Reddit content will be incorporated into ChatGPT "and new products," Reddit's blog post said. The social media firm claims the partnership will "enable OpenAI’s AI tools to better understand and showcase Reddit content, especially on recent topics." OpenAI will also start advertising on Reddit.
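The partnership's actual endpoints and terms aren't public, but Reddit's ordinary public JSON listings give a rough sense of the kind of structured, real-time post data the Data API exposes. A minimal sketch, with an arbitrary subreddit and User-Agent string:

```python
# Fetch recent posts from a subreddit as structured JSON. This uses
# Reddit's public, unauthenticated listings; the OpenAI deal runs through
# the official Data API under unpublished terms, so treat this only as an
# illustration of the data shape.
import json
import urllib.request

url = "https://www.reddit.com/r/technology/new.json?limit=5"
req = urllib.request.Request(url, headers={"User-Agent": "demo-script/0.1"})

with urllib.request.urlopen(req) as resp:
    listing = json.load(resp)

for child in listing["data"]["children"]:
    post = child["data"]
    print(post["created_utc"], post["title"])
```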

The deal is similar to one that Reddit struck with Google in February that allows the tech giant to make "new ways to display Reddit content" and provide "more efficient ways to train models," Reddit said at the time. Neither Reddit nor OpenAI disclosed the financial terms of their partnership, but Reddit's partnership with Google was reportedly worth $60 million.

Under the OpenAI partnership, Reddit also gains access to OpenAI's large language models (LLMs) to build features for Reddit, including tools for its volunteer moderators.

Reddit’s data licensing push

The news comes about a year after Reddit launched an API war by starting to charge for access to its data API. This resulted in many beloved third-party Reddit apps closing and a massive user protest. Reddit, which would soon become a public company and hadn't turned a profit yet, said one of the reasons for the sudden change was to prevent AI firms from using Reddit content to train their LLMs for free.

Earlier this month, Reddit published a Public Content Policy stating: "Unfortunately, we see more and more commercial entities using unauthorized access or misusing authorized access to collect public data in bulk, including Reddit public content. Worse, these entities perceive they have no limitation on their usage of that data, and they do so with no regard for user rights or privacy, ignoring reasonable legal, safety, and user removal requests."

In its blog post on Thursday, Reddit said that deals like OpenAI's are part of an "open" Internet. It added that "part of being open means Reddit content needs to be accessible to those fostering human learning and researching ways to build community, belonging, and empowerment online."

Reddit has been vocal about its interest in pursuing data licensing deals as a core part of its business. But its AI partnerships have sparked debate about using user-generated content to fuel AI models when the users themselves aren't compensated, and when some never considered that their social media posts would be used this way. OpenAI and Stack Overflow faced pushback earlier this month when integrating Stack Overflow content with ChatGPT. Some of Stack Overflow's user community responded by sabotaging their own posts.

OpenAI also faces the challenge of working with Reddit data that, like much of the Internet, can be filled with inaccuracies and inappropriate content. Some of the biggest opponents of Reddit's API rule changes were volunteer mods. Some have exited the platform since, and following the rule changes, Ars Technica spoke with longtime Redditors who were concerned about the quality of Reddit content moving forward.

Regardless, generative AI firms are keen to tap into Reddit's access to real-time conversations from a variety of people discussing a nearly endless range of topics. And Reddit seems equally eager to license the data from its users' posts.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.

Cats playing with robots proves a winning combo in novel art installation

by: Jennifer Ouellette

A kitty named Clover prepares to play with a robot arm in the Cat Royale "multi-species" science/art installation.
Blast Theory - Stephen Daly

Cats and robots are a winning combination, as evidenced by all those videos of kitties riding on Roombas. And now we have Cat Royale, a "multispecies" live installation in which three cats regularly "played" with a robot over 12 days, carefully monitored by human operators. Created by computer scientists from the University of Nottingham in collaboration with artists from a group called Blast Theory, the installation debuted at the World Science Festival in Brisbane, Australia, last year and is now a touring exhibit. The accompanying YouTube video series recently won a Webby Award, and a paper outlining the insights gleaned from the experience was similarly voted best paper at the recent Conference on Human Factors in Computing Systems (CHI '24).

"At first glance, the project is about designing a robot to enrich the lives of a family of cats by playing with them," said co-author Steve Benford of the University of Nottingham, who led the research, "Under the surface, however, it explores the question of what it takes to trust a robot to look after our loved ones and potentially ourselves." While cats might love Roombas, not all animal encounters with robots are positive: Guide dogs for the visually impaired can get confused by delivery robots, for example, while the rise of lawn mowing robots can have a negative impact on hedgehogs, per Benford et al.

Blast Theory and the scientists first held a series of exploratory workshops to ensure the installation and robotic design would take into account the welfare of the cats. "Creating a multispecies system—where cats, robots, and humans are all accounted for—takes more than just designing the robot," said co-author Eike Schneiders of Nottingham's Mixed Reality Lab about the primary takeaway from the project. "We had to ensure animal well-being at all times, while simultaneously ensuring that the interactive installation engaged the (human) audiences around the world. This involved consideration of many elements, including the design of the enclosure, the robot, and its underlying systems, the various roles of the humans-in-the-loop, and, of course, the selection of the cats.”

Based on those discussions, the team set about building the installation: a bespoke enclosure that would be inhabited by three cats for six hours a day over 12 days. The lucky cats were named Ghostbuster, Clover, and Pumpkin—a parent and two offspring to ensure the cats were familiar with each other and comfortable sharing the enclosure. The enclosure was tricked out to essentially be a "utopia for cats," per the authors, with perches, walkways, dens, a scratching post, a water fountain, several feeding stations, a ball run, and litter boxes tucked away in secluded corners.

(l-r) Clover, Pumpkin, and Ghostbuster spent six hours a day for 12 days in the installation.
E. Schneiders et al., 2024

As for the robot, the team chose the Kinova Gen3 lite robot arm, and the associated software was trained on over 7,000 videos of cats. A decision engine gave the robot autonomy, proposing activities for specific cats; a human operator then used an interface control system to instruct the robot to execute the movements. The robotic arm's two-finger gripper was augmented with custom 3D-printed attachments so that the robot could manipulate various cat toys and accessories.
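In outline, that control flow is a propose-then-approve loop. The sketch below is hypothetical; the actual Cat Royale decision engine is not public code, and the function names are invented for illustration (only the cat and toy names come from the article):

```python
# Hypothetical sketch of the human-in-the-loop control flow described above.
import random

GAMES = ["helicopter prey game", "feather boa", "orange bird toy"]

def propose_activity(cat: str) -> str:
    """Decision engine: autonomously propose a game for a specific cat."""
    return random.choice(GAMES)

def operator_approves(cat: str, game: str) -> bool:
    """Human operator vets every proposal before the robot moves."""
    answer = input(f"Offer '{game}' to {cat}? [y/n] ")
    return answer.strip().lower() == "y"

for cat in ["Ghostbuster", "Clover", "Pumpkin"]:
    game = propose_activity(cat)
    if operator_approves(cat, game):
        print(f"Robot executes: {game} for {cat}")  # robot arm would move here
    else:
        print(f"Proposal for {cat} vetoed by operator")
```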

Each cat/robot interaction was evaluated for a "happiness score" based on the cat's level of engagement, body language, and so forth. Eight cameras monitored the cat and robot activities, and that footage was subsequently remixed and edited into daily YouTube highlight videos and, eventually, an eight-hour film.

A typical interaction looked something like this: On the ninth day, the decision engine directed the robot to engage with Ghostbuster, offering a "helicopter prey game" (a three-winged propeller toy with feathers at the end of a string). The robot removed the toy from the rack and began rotating it toward the center of the room while all three cats watched intently. Pumpkin pounced first and was soon joined by Ghostbuster (the intended target) while Clover watched them play from an elevated platform. Pumpkin soon lost interest, but Ghostbuster continued to bat at the toy for several minutes. When Ghostbuster also lost interest, the robot returned the toy to the rack.

Overall, the experimental installation proved to be a success, although the authors cautioned that the size, cost, and need for humans in supporting roles means such installations are unlikely to end up in the average home. The cats stayed in the enclosure for the full 12 days without being injured or becoming so stressed that they had to be removed. The cats voluntarily chose to play with the robot for several minutes at a time when games were offered, and their body language indicated they enjoyed it—based on assessments by both an animal welfare officer and the cats' owner. And humans seemed to enjoy watching the cats play during the Brisbane installation.

Cat Royale public installation as presented during Curiocity Brisbane World Science Festival in 2023.
E. Schneiders et al., 2024

There were a few wrinkles that demonstrated the importance of keeping humans in the loop to intervene when necessary. For instance, the robot was instructed to offer a simple game with the feather boa targeted at Pumpkin, who was particularly fond of that toy. This involved the robot moving the boa counter-clockwise toward the center of the room by rotating on its base. The boa passed the ball run system and collided with one of the pipes. The human operator activated the kill switch, since the boa could have broken, or the robot arm could have broken the tubes and possibly injured a cat. The team shortened the feather boa string for future sessions to resolve the issue.

And as any cat owner will tell you, cats can learn new tricks and complicate matters in unpredictable ways. Notably, 10 days into the experiment, Clover figured out that she could physically overpower the robot's joints by pulling from a particular angle. When the robot tried to take away her favorite orange bird toy before she was ready to relinquish it, Clover pulled on the joint and unlocked it so that the human operator lost control. Clover then took the orange bird toy away from the robot and dragged it away. The string connecting the toy and the stick the robot was holding then got stuck in the water fountain, tipping the fountain over.

While the incident required human intervention to relock the joint and clean up the tipped water fountain, the authors noted that it was ultimately a good experience for Clover, whose ingenuity was rewarded by acquiring her favorite toy. "Unlike many digital interactions, the physical embodiment of the robot allowed the cats to 'disassemble' it by taking the toys from it," they wrote. "This allowed the interaction to fuel the cat's biological drive stimulated by the robot (i.e. hunting), allowing them to grab, manipulate, and drag objects (i.e., prey), this positively impacting Clover's welfare."

Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024. DOI: 10.1145/3613904.3642115  (About DOIs).

Leaks from Valve’s Deadlock look like a pressed sandwich of every game around

by: Kevin Purdy

Shelves at Valve's offices, as seen in a 2018 tour, with a mixture of artifacts from Half-Life, Portal, Dota 2, and other games.
Sam Machkovech

"Basically, fast-paced interesting ADHD gameplay. Combination of Dota 2, Team Fortress 2, Overwatch, Valorant, Smite, Orcs Must Die."

That's how notable Valve leaker "Gabe Follower" describes Deadlock, a Valve game that is seemingly in playtesting at the moment, for which a few screenshots have leaked out.

The game has been known as "Neon Prime" and "Citadel" at prior points. It's a "Competitive third-person hero-based shooter," with six-on-six battles across a map with four "lanes." That allows for some of the "Tower defense mechanics" mentioned by Gabe Follower, along with "fast travel using floating rails, similar to Bioshock Infinite." The maps reference a "modern steampunk European city (little bit like Half-Life)," after "bad feedback" about a sci-fi theme pushed the development team toward fantasy.

Valve doesn't release games often, and the games it does release are often in development for long periods. Deadlock purportedly started development in 2018, two years before Half-Life: Alyx was released. That the game has now seemingly reached a closed (though not closed enough) "alpha" playtesting phase, with players in the "hundreds," could suggest a release within a reasonable time frame. Longtime Valve watcher (and modder, and code examiner) Tyler McVicker suggests in a related video that Deadlock has hundreds of people playing in this closed test and that a release is "about to happen."

McVicker adds to the descriptor pile-on by noting that it's "team-based," "hero-based," "class-based," and "personality-driven." It's an attempt, he says, to "bring together all of their communities under one umbrella."

Video: https://www.youtube.com/watch?v=k8MqhEz6N_k
Tyler McVicker's discussion of the leaked Deadlock content, featuring ... BioShock Infinite footage.

Many of Valve's games do something notable to push gaming technology and culture forward. Half-Life brought advanced scripting, physics, and atmosphere to the "Doom clones" field and forever changed it. Counter-Strike and Team Fortress 2 led the way in team multiplayer dynamics. Dota 2 solidified and popularized MOBAs, and Half-Life: Alyx gave VR on PC its killer app. Yes, there are Artifact moments, but they're more exception than rule.

Following any of those games seems like a tall order, but Valve's track record speaks for itself. I think players like me, who never took to Valorant or Overwatch or the like, should reserve judgment until the game can be seen in its whole. I have to imagine that there's more to Deadlock than a pile of very familiar elements.

“Unprecedented” Google Cloud event wipes out customer account and its backups

by: Ron Amadeo


Buried under the news from Google I/O this week is one of Google Cloud's biggest blunders ever: Google's Amazon Web Services competitor accidentally deleted a giant customer account for no reason. UniSuper, an Australian pension fund that manages $135 billion worth of funds and has 647,000 members, had its entire account wiped out at Google Cloud, including all its backups that were stored on the service. UniSuper thankfully had some backups with a different provider and was able to recover its data, but according to UniSuper's incident log, downtime started May 2, and a full restoration of services didn't happen until May 15.

UniSuper's website is now full of must-read admin nightmare fuel about how this all happened. First is a wild page posted on May 8 titled "A joint statement from UniSuper CEO Peter Chun, and Google Cloud CEO, Thomas Kurian." This statement reads, "Google Cloud CEO, Thomas Kurian has confirmed that the disruption arose from an unprecedented sequence of events whereby an inadvertent misconfiguration during provisioning of UniSuper’s Private Cloud services ultimately resulted in the deletion of UniSuper’s Private Cloud subscription. This is an isolated, ‘one-of-a-kind occurrence’ that has never before occurred with any of Google Cloud’s clients globally. This should not have happened. Google Cloud has identified the events that led to this disruption and taken measures to ensure this does not happen again."

In the next section, titled "Why did the outage last so long?" the joint statement says, "UniSuper had duplication in two geographies as a protection against outages and loss. However, when the deletion of UniSuper’s Private Cloud subscription occurred, it caused deletion across both of these geographies." Every cloud service keeps full backups, which you would presume are meant for worst-case scenarios: a hacker takes over your server, or the building housing your data collapses. But no, the actual worst-case scenario is "Google deletes your account," which means all those backups are gone, too. Google Cloud is supposed to have safeguards that prevent account deletion, but apparently none of them worked, and the only option was a restore from a separate cloud provider (shoutout to the hero at UniSuper who chose a multi-cloud solution).
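The lesson generalizes: a backup only counts for the worst case if it lives outside the account, and ideally the provider, that could be deleted. Here's a minimal sketch of that habit; the bucket names and backup file are placeholders, and you'd substitute whatever tooling your providers offer:

```python
# Ship every backup to two administratively independent destinations, so
# no single account deletion can reach both copies.
import subprocess

backup = "nightly-backup-2024-05-01.tar.gz"  # placeholder artifact name

destinations = [
    ["gcloud", "storage", "cp", backup, "gs://primary-backups/"],  # primary cloud
    ["aws", "s3", "cp", backup, "s3://secondary-backups/"],        # independent provider
]

for cmd in destinations:
    # Fail loudly if either copy fails, rather than silently running
    # with a single copy.
    subprocess.run(cmd, check=True)
```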

UniSuper is an Australian "superannuation fund"—the US equivalent would be a 401(k). It's a retirement fund that employers pay into as part of an employee's paycheck; in Australia, some amount of superannuation payment is required by law for all employed people. Managing $135 billion worth of funds makes UniSuper a big enough company that, if something goes wrong, it gets the Google Cloud CEO on the phone instead of customer service.

A June 2023 press release touted UniSuper's big cloud migration to Google, with Sam Cooper, UniSuper's Head of Architecture, saying, “With Google Cloud VMware Engine, migrating to the cloud is streamlined and extremely easy. It’s all about efficiencies that help us deliver highly competitive fees for our members.”

The many stakeholders in the service meant service restoration wasn't just about restoring backups but also processing all the requests and payments that still needed to happen during the two weeks of downtime.

Highlights from the outage timeline

The second must-read document in this whole saga is the outage update page, which contains 12 statements as the cloud devs worked through this catastrophe. The first update is May 2 with the ominous statement, "You may be aware of a service disruption affecting UniSuper’s systems." UniSuper immediately seemed to have the problem nailed down, saying, "The issue originated from one of our third-party service providers, and we’re actively partnering with them to resolve this." On May 3, Google Cloud publicly entered the picture with a joint statement from UniSuper and Google Cloud saying that the outage was not the result of a cyberattack.

Monday, May 6, is when things started to heat up. First was a morning statement saying both teams had worked through the weekend to try to fix the problem, but the next two outage page updates were lengthy statements/apologies signed by Chun. The UniSuper CEO assured members that "member accounts are safe," "no data was exposed to unauthorized third parties," and that "pension payments have not been disrupted." When your service is close to being a bank, several days of unexplained downtime are going to cause a lot of panic.

The CEO's update also stated that "While a full root cause analysis is ongoing, Google Cloud has confirmed this is an isolated one-of-a-kind issue that has not previously arisen elsewhere. Google Cloud has confirmed that they are taking measures to ensure this issue does not happen again." Chun also mentioned that UniSuper had a second cloud provider, and it would work to "minimize" data loss. On May 7, the CEO added, "Google Cloud has issued a statement today which confirms again that the fault originated within their service as a ‘one of its kind,’ unprecedented occurrence" and that "Google Cloud sincerely apologizes for the inconvenience this has caused."

Seven days after the outage, on May 9, we saw the first signs of life again for UniSuper. Logins started working for "online UniSuper accounts" (I think that only means the website), but the outage page noted that "account balances shown may not reflect transactions which have not yet been processed due to the outage." An earlier update pegged "April 29" as the planned data rollback for balances. The next seven days of updates log progressive restorations of various features of the website and app. May 13 is the first mention of the mobile app beginning to work again. This update noted that balances still weren't up to date and that "We are processing transactions as quickly as we can." The last update, on May 15, states, "UniSuper can confirm that all member-facing services have been fully restored, with our retirement calculators now available again."

The joint statement and the outage updates are still not a technical post-mortem of what happened, and it's unclear if we'll get one. Google PR confirmed in multiple places it signed off on the statement, but a great breakdown from software developer Daniel Compton points out that the statement is not just vague, it's also full of terminology that doesn't align with Google Cloud products. The imprecise language makes it seem like the statement was written entirely by UniSuper. It would be nice to see a real breakdown of what happened from Google Cloud's perspective, especially when other current or potential customers are going to keep a watchful eye on how Google handles the fallout from this.

Anyway, don't put all your eggs in one cloud basket.

Financial institutions have 30 days to disclose breaches under new rules

by: Dan Goodin


The Securities and Exchange Commission (SEC) will require some financial institutions to disclose security breaches within 30 days of learning about them.

On Wednesday, the SEC adopted changes to Regulation S-P, which governs the treatment of the personal information of consumers. Under the amendments, institutions must notify individuals whose personal information was compromised “as soon as practicable, but not later than 30 days” after learning of unauthorized network access or use of customer data. The new requirements will be binding on broker-dealers (including funding portals), investment companies, registered investment advisers, and transfer agents.

"Over the last 24 years, the nature, scale, and impact of data breaches has transformed substantially," SEC Chair Gary Gensler said. "These amendments to Regulation S-P will make critical updates to a rule first adopted in 2000 and help protect the privacy of customers’ financial data. The basic idea for covered firms is if you’ve got a breach, then you’ve got to notify. That’s good for investors."

Notifications must detail the incident, what information was compromised, and how those affected can protect themselves. In what appears to be a loophole, covered institutions don’t have to issue notices if they establish that the personal information has not been, and is not likely to be, used in a way that would result in “substantial harm or inconvenience.”

The amendments will require covered institutions to “develop, implement, and maintain written policies and procedures” that are “reasonably designed to detect, respond to, and recover from unauthorized access to or use of customer information.” The amendments also:

• Expand and align the safeguards and disposal rules to cover both nonpublic personal information that a covered institution collects about its own customers and nonpublic personal information it receives from another financial institution about customers of that financial institution;
• Require covered institutions, other than funding portals, to make and maintain written records documenting compliance with the requirements of the safeguards rule and disposal rule;
• Conform Regulation S-P’s annual privacy notice delivery provisions to the terms of an exception added by the FAST Act, which provide that covered institutions are not required to deliver an annual privacy notice if certain conditions are met; and
• Extend both the safeguards rule and the disposal rule to transfer agents registered with the Commission or another appropriate regulatory agency.

The requirements also broaden the scope of nonpublic personal information covered beyond what the firm itself collects. The new rules will also cover personal information the firm has received from another financial institution.

SEC Commissioner Hester M. Peirce voiced concern that the new requirements may go too far.

"Today’s Regulation S-P modernization will help covered institutions appropriately prioritize safeguarding customer information," she https://www.sec.gov/news/statement/peirce-statement-reg-s-p-051624 wrote. "Customers will be notified promptly when their information has been compromised so they can take steps to protect themselves, like changing passwords or keeping a closer eye on credit scores. My reservations stem from the breadth of the rule and the likelihood that it will spawn more consumer notices than are helpful."

Regulation S-P hadn't been substantially updated since its adoption in 2000.

Last year, the SEC adopted new regulations requiring publicly traded companies to disclose security breaches that materially affect or are reasonably likely to materially affect business, strategy, or financial results or conditions.

The amendments take effect 60 days after publication in the Federal Register, the official journal of the federal government that publishes regulations, notices, orders, and other documents. Larger organizations will have 18 months to comply after modifications are published. Smaller organizations will have 24 months.

Public comments on the amendments are available on the SEC's website.

Using vague language about scientific facts misleads readers

by: John Timmer


Anyone can do a simple experiment. Navigate to a search engine that offers suggested completions for what you type, and start typing "scientists believe." When I did it, I got suggestions about the origin of whales, the evolution of animals, the root cause of narcolepsy, and more. The search results contained a long list of topics, like "How scientists believe the loss of Arctic sea ice will impact US weather patterns" or "Scientists believe Moon is 40 million years older than first thought."

What do these all have in common? They're misleading, at least in terms of how most people understand the word "believe." In all these examples, scientists have become convinced via compelling evidence; these are more than just hunches or emotional compulsions. Given that difference, using "believe" isn't really an accurate description. Yet all these examples come from searching Google News, and so are likely to come from journalistic outlets that care about accuracy.

Does the difference matter? A recent study suggests that it does. People who were shown headlines that used subjective verbs like "believe" tended to view the issue being described as a matter of opinion—even if that issue was solidly grounded in fact.

Fact vs. opinion

The new work was done by three researchers at Stanford University: Aaron Chuey, Yiwei Luo, and Ellen Markman. "Media consumption is central to how we form, maintain, and spread beliefs in the modern world," they write. "Moreover, how content is presented may be as important as the content itself." The presentation they're interested in involves what they term "epistemic verbs," or those that convey information about our certainty regarding information. To put that in concrete terms, “'Know' presents [a statement] as a fact by presupposing that it is true, 'believe' does not," they argue.

So, while it's accurate to say, "Scientists know the Earth is warming, and that warming is driven by human activity," replacing "know" with "believe" presents an inaccurate picture of the state of our knowledge. Yet, as noted above, "scientists believe" is heavily used in the popular press. Chuey, Luo, and Markman decided to see whether this makes a difference.

They were interested in two related questions. One is whether the use of verbs like believe and think influences how readers view whether the concepts they're associated with are subjective issues rather than objective, factual ones. The second is whether using that phrasing undercuts the readers' willingness to accept something as a fact.

To answer those questions, the researchers used a subject-recruiting service called Prolific to recruit over 2,700 participants who took part in a number of individual experiments focused on these issues. In each experiment, participants were given a series of headlines and asked about what inferences they drew about the information presented in them.

Beliefs vs. facts

All the experiments were variations on a basic procedure. Participants were given headlines about topics like climate change that differed in terms of their wording. Some of them used wording that implied factual content, like "know" or "understand." Others used terms that implied subjective opinion, like "believe" or "think." In some cases, the concepts were presented without attribution, using verbs like "are" (i.e., instead of "scientists think drought conditions are worsening," these sentences simply stated "drought conditions are worsening").

In the first experiment, the researchers asked participants to rate the factual truth of the statement in the headline and also assess whether the issue in question was a matter of opinion or a statement of fact. Both were rated on a 0–100 scale.
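To make the manipulation concrete, here is a sketch of how such stimuli and rating scales might be organized. The exact wording, item lists, and scale anchors used by Chuey, Luo, and Markman may differ; this only illustrates the design described above:

```python
# Build headline stimuli for the three wording conditions.
FACT_VERBS = ["know", "understand"]    # imply objective fact
OPINION_VERBS = ["believe", "think"]   # imply subjective opinion

claim = "drought conditions are worsening"

headlines = {
    "fact": [f"Scientists {v} {claim}" for v in FACT_VERBS],
    "opinion": [f"Scientists {v} {claim}" for v in OPINION_VERBS],
    "no_attribution": [claim.capitalize()],  # "Drought conditions are worsening"
}

# Each participant rates every headline on two 0-100 scales (anchors are
# an assumption here): how likely the statement is to be true, and whether
# the issue is a matter of opinion (0) or a statement of fact (100).
for condition, items in headlines.items():
    print(condition, items)
```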

The results showed two effects. First, using terms that didn't imply facts, like "believe," led people to rate the information as less likely to be true. Statements without attribution were rated as the most likely to be factual.

Second, participants rated issues in statements that implied facts, like "know" and "understand," as more likely to be objective conclusions rather than matters of opinion.

However, the design of the experiment made a difference to one of those outcomes. When participants were asked only one of these questions, the phrasing of the statements no longer had an impact on whether people rated the statements as true. Yet it still mattered in terms of whether they felt the issue was one of fact or opinion. So, it appeared that asking people to think about whether something is being stated as a fact influenced their rating of the statement's truthfulness.

In the remaining experiments, which used real headlines and examined the effect of preexisting ideas on the subject at issue, the impact of phrasing on people's ratings of truthfulness varied considerably. So, there's no indication that using terminology like "scientists believe" causes problems in understanding whether something is true. But it consistently caused people to rate the issue to be more likely to be a matter of opinion.

Opinionated

Overall, the researchers conclude that the use of fact-implying terminology had a limited effect on whether people actually did consider something a fact—the effect was "weak and varied between studies." So, using something like "scientists believe" doesn't consistently influence whether people think that those beliefs are true. But it does influence whether people view a subject as a matter where different opinions are reasonable, or one where facts limit what can be considered reasonable.

While this seems to be a minor issue here, it could be a problem in the long term. The more people feel that they can reject evidence as a matter of opinion, the more it opens the door to what the authors describe as "the rise of 'post-truth' politics and the dissemination of 'alternative facts.'" And that has the potential to undercut the acceptance of science in a wide variety of contexts.

Perhaps the worst part is that the press as a whole is an active participant, as reading science reporting regularly will expose you to countless instances of evidence-based conclusions being presented as beliefs.

PNAS, 2024.  DOI: 10.1073/pnas.2314091121

Slack users horrified to discover messages used for AI training

by: Ashley Belanger


After launching Slack AI in February, Slack appears to be digging in its heels, defending its vague policy that by default sucks up customers' data—including messages, content, and files—to train Slack's global AI models.

According to Slack engineer Aaron Maurer, Slack has explained in a blog that the Salesforce-owned chat service does not train its large language models (LLMs) on customer data. But Slack's policy may need updating "to explain more carefully how these privacy principles play with Slack AI," Maurer wrote on Threads, partly because the policy "was originally written about the search/recommendation work we've been doing for years prior to Slack AI."

Maurer was responding to a Threads post from engineer and writer Gergely Orosz, who called for companies to opt out of data sharing until the policy is clarified, not by a blog, but in the actual policy language.

"An ML engineer at Slack says they don’t use messages to train LLM models," Orosz wrote. "My response is that the current terms allow them to do so. I’ll believe this is the policy when it’s in the policy. A blog post is not the privacy policy: every serious company knows this."

The tension for users becomes clearer if you compare Slack's privacy principles with how the company touts Slack AI.

Slack's privacy principles specifically say that "Machine Learning (ML) and Artificial Intelligence (AI) are useful tools that we use in limited ways to enhance our product mission. To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as other information (including usage information) as defined in our privacy policy and in your customer agreement."

Meanwhile, Slack AI's page says, "Work without worry. Your data is your data. We don't use it to train Slack AI."

Because of this incongruity, users called on Slack to update the privacy principles to make it clear how data is used for Slack AI or any future AI updates. According to a Salesforce spokesperson, the company has agreed that an update is needed.

"Yesterday, some Slack community members asked for more clarity regarding our privacy principles," Salesforce's spokesperson told Ars. "We’ll be updating those principles today to better explain the relationship between customer data and generative AI in Slack."

The spokesperson told Ars that the policy updates will clarify that Slack does not "develop LLMs or other generative models using customer data," "use customer data to train third-party LLMs" or "build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data." The update will also clarify that "Slack AI uses off-the-shelf LLMs where the models don't retain customer data," ensuring that "customer data never leaves Slack's trust boundary, and the providers of the LLM never have any access to the customer data."

These changes, however, do not seem to address a key concern for users who never explicitly consented to sharing chats and other Slack content for use in AI training.

Users opting out of sharing chats with Slack

This controversial policy is not new. Wired warned about it in April, and TechCrunch reported that the policy has been in place since at least September 2023.

But widespread backlash began swelling last night on Hacker News, where Slack users called out the chat service for seemingly failing to notify users about the policy change, instead quietly opting them in by default. To critics, it felt like there was no benefit to opting in for anyone but Slack.

From there, the backlash spread to social media, where SlackHQ hastened to clarify Slack's terms with explanations that did not seem to address all the criticism.

"I'm sorry Slack, you're doing fucking WHAT with user DMs, messages, files, etc?" Corey Quinn, the chief cloud economist for a cost management company called Duckbill Group, posted on X. "I'm positive I'm not reading this correctly."

SlackHQ responded to Quinn after the economist declared, "I hate this so much," and confirmed that he had opted out of data sharing in his paid workspace.

"To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results," SlackHQ posted. "And yes, customers can exclude their data from helping train those (non-generative) ML models. Customer data belongs to the Slack AI—which is our generative AI experience natively built in Slack—[and] is a separately purchased add-on that uses Large Language Models (LLMs) but does not train those LLMs on customer data."

Opting out is not necessarily straightforward, and individuals currently cannot opt out unless their entire organization opts out.

"You can always quit your job, right?" a Hacker News commenter joked.

And rather than adding a button to immediately turn off the firehose, Slack instructs customers to use a very specific subject line and contact Slack directly to stop sharing data:

Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. To opt out, please have your org, workspace owners or primary owner contact our Customer Experience team at feedback@slack.com with your workspace/org URL and the subject line ‘Slack global model opt-out request’. We will process your request and respond once the opt-out has been completed.
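For what it's worth, the required email is simple enough to script. A minimal sketch that drafts the message per Slack's quoted instructions; the sender address, workspace URL, and SMTP host below are placeholders:

```python
# Draft the opt-out email exactly as Slack's instructions specify:
# mail feedback@slack.com with the workspace URL and the required subject.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "feedback@slack.com"
msg["From"] = "owner@example.com"                      # must be an org/workspace owner
msg["Subject"] = "Slack global model opt-out request"  # exact subject Slack requires
msg.set_content("Please opt out our workspace: https://example-org.slack.com")

with smtplib.SMTP("smtp.example.com") as server:       # placeholder SMTP host
    server.send_message(msg)
```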

"Where is the opt-out button?" one Threads user asked Maurer.

Many commenters on Hacker News, Threads, and X confirmed that they were opting out after reading Slack's policy, as well as urging their organizations to consider using other chat services. Ars also chose to opt out today.

However, it remains unclear what exactly happens when users opt out. Commenters on Hacker News slammed Slack for failing to explain whether opting out deletes data from the models or "what exactly does the customer support rep do on their end to opt you out."

"You can't exactly go into the model and 'erase' parts of the corpus post-hoc," one commenter suggested.

Slack's privacy principles state that "if you opt out, Customer Data on your workspace will only be used to improve the experience on your own workspace and you will still enjoy all of the benefits of our globally trained AI/ML models without contributing to the underlying models."

Slack’s consent model seems to conflict with GDPR

Slack's privacy policy, terms, and security documentation supposedly spell out how it uses customer data. However, The Stack reported that none of those legal documents mention AI or machine learning, despite Slack debuting machine-learning features in 2016.

There's no telling yet if Slack will make any additional changes as more customers opt out. What is clear from Slack's documents is that Slack knows that its customers "have high expectations around data ownership" and that it has "an existential interest in protecting" that data.

It's possible that lawmakers will force Slack to be more transparent about changes in its data collection as the chat service continues experimenting with AI.

It's also possible that Slack already refrains from opting some customers into data collection for ML training by default. The European Union's General Data Protection Regulation (GDPR) requires informed and specific consent before companies can collect data.

"Consent cannot be implied and must always be given through an opt-in," the strict privacy law says. And companies must be prepared to demonstrate that they've received consent through opt-ins, the law says.

In the United Kingdom, the Information Commissioner's Office (ICO) requires explicit consent, specifically directing companies to note that "consent requires a positive opt-in."

"Don’t use pre-ticked boxes or any other method of default consent," ICO said. "Keep your consent requests separate from other terms and conditions."

Salesforce's spokesperson declined to comment on how Slack's policy complies with the GDPR. But Slack has said that it's committed to complying with the GDPR, promising to "update our product features and contractual commitments accordingly." That did not seem to happen when Slack AI was launched in February.

Orosz warned that any chief technology officer (CTO) or chief information officer (CIO) who lets Slack slide for defaulting customers into sharing AI training data should recognize that the precedent Slack is setting could quickly become a slippery slope that other companies take advantage of.

"If you are a CTO or a CIO at your company and paying for Slack: why are you still opted in?" Orosz asked on Threads. "This is the type of thing where Slack should collect this data from free customers. Paying would be the perk that your messages don’t end up in AI training data. What company will try to pull this next with customers trusting them with confidential information/data?"

Twitter URLs redirect to x.com as Musk gets closer to killing the Twitter name

by: Jon Brodkin

An app icon and logo for Elon Musk's X service.
Getty Images | Kirill Kudryavtsev

Twitter.com links are now redirecting to the x.com domain as Elon Musk gets closer to wiping out the Twitter brand name more than a year and a half after buying the company.

"All core systems are now on X.com," Musk wrote in an X post today. X also displayed a message to users that said, "We are letting you know that we are changing our URL, but your privacy and data protection settings remain the same."

Musk bought Twitter in October 2022 and turned it into X Corp. in April 2023, but the social network continued to use Twitter.com as its primary domain for more than another year. X.com links redirected to Twitter.com during that time.

There were still remnants of Twitter after today's change. This morning, I noticed a support link took me to a help.twitter.com page. The link subsequently redirected to a help.x.com page after I sent a message to X's public relations email, though the timing could be coincidence. After sending that message to press@x.com, I got the standard auto-reply from press+noreply@twitter.com, just as I have in the past.

You might still encounter Twitter links that don't redirect to x.com, depending on which browser you use. The Verge said it is "seeing a mix of results depending upon browser choice and whether you're logged in or not."

I had no trouble accessing x.com on desktop browsers today. But in Safari on iPhone, I received error messages when trying to access either twitter.com or x.com without first logging in. I eventually succeeded in logging in and was able to view content, but I remained at twitter.com in the iPhone browser instead of being redirected to x.com.

This will presumably be sorted out, but the awkward Twitter-to-X transition has previously been accompanied by technical problems. In early April, Musk's service started automatically changing "twitter.com" to "x.com" in links posted by users in the iOS app. But the automatic text replacement initially applied to any URL ending in "twitter.com" even if it wasn't actually a twitter.com link, which meant that phishers could have taken advantage by registering misleading domain names.
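X hasn't published the buggy code, but the class of mistake is easy to reconstruct: matching on a hostname suffix instead of the exact registered domain. A hypothetical sketch of the difference:

```python
# Why naive suffix matching is dangerous: endswith("twitter.com") also
# matches lookalike domains, which is how phishers could have exploited
# the iOS app's early link-rewriting behavior.
def naive_rewrite(url: str) -> str:
    host = url.split("/")[2]          # crude hostname extraction
    if host.endswith("twitter.com"):  # BUG: matches sketchytwitter.com too
        return url.replace(host, "x.com", 1)
    return url

def safer_rewrite(url: str) -> str:
    host = url.split("/")[2]
    # Exact domain match, plus true subdomains only.
    if host == "twitter.com" or host.endswith(".twitter.com"):
        return url.replace(host, "x.com", 1)
    return url

print(naive_rewrite("https://sketchytwitter.com/login"))  # wrongly becomes x.com
print(safer_rewrite("https://sketchytwitter.com/login"))  # left untouched
print(safer_rewrite("https://twitter.com/ars"))           # correctly rewritten
```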