Clickbaity or genius? 'BF cheated on you' QR codes pop up across UK

A new wave of QR codes has popped up across the UK claiming to share a video of a boyfriend who "cheated" on a girl named Emily last night. Clickbaity or genius?...

Crime and Courts Read on Bleeping Computer
Microsoft taps Three Mile Island nuclear plant to power AI

The data centers that train the large language models behind AI consume unimaginable amounts of energy, and the stakes are high for big tech companies to ensure they have enough power to run those plants. That’s why Microsoft is now throwing its weight behind nuclear power. The tech giant on Friday signed a major deal […]

Environment Read on TechCrunch
Enkhuizen man accused of victimizing hundreds of boys for online child sex abuse content

This week, police arrested a 26-year-old man from Enkhuizen for sexually abusing a child and producing abuse material of hundreds of minor boys spread across the Netherlands.

Crime and Courts Read on NL Times
Arcane's Final Season Will Be Split Into Three Parts

The second and final chapter of Netflix and Fortiche's excellent League of Legends prequel will spread itself across November.

Entertainment Read on Gizmodo
RFK Jr. Was Having a 'Digital and Emotional' Affair With a New York Magazine Writer

The magazine placed star reporter Olivia Nuzzi on leave pending a review of her work.

Crime and Courts Read on Gizmodo
Unlock the Secret of a Gravity-Defying Parkour Stunt—With Physics!

Yes, you really can climb a building by jumping back and forth between two opposing walls. Thank you, Isaac Newton.

Entertainment Read on WIRED Science
Movie Theater Chains Will Invest $2.2 Billion to Bring You Ziplines and Pickleball

Your local movie theater is about to get a whole lot nicer.

Business Read on Gizmodo
Re-opened Three Mile Island will power AI data centers under new deal

The Three Mile Island Nuclear Plant is seen in the early morning hours of March 28, 2011, in Middletown, Penn.

Microsoft and Constellation Energy have announced a deal that would re-open Pennsylvania's shuttered Three Mile Island nuclear plant. The agreement would let Microsoft purchase the entirety of the plant's roughly 835 megawatts of energy generation—enough to power approximately 800,000 homes—for a span of 20 years starting in 2028, pending regulatory approval.

The actual electricity from the Three Mile Island plant—which would be renamed Crane Clean Energy Center—wouldn't be earmarked for any specific use and would go to local interconnections rather than directly to Microsoft facilities. But the deal comes as Microsoft and large swaths of the tech industry seek new energy sources for data centers that power everything from generative AI models to cloud computing and streaming services.

Pennsylvania's Three Mile Island plant rose to infamy in 1979 when a partial meltdown in Unit 2 helped ignite panic over nuclear safety across the country. The new Microsoft deal would re-open the adjacent Unit 1, which was shuttered in 2019 "due to poor economics," according to Constellation. If and when the plant reaches its planned 2028 re-opening, it would be among the first wave of shuttered nuclear plants being put back into service.

Politics Read on Ars Technica
Best Apple Watch (2024): Which Model Should You Buy?

Should you splurge for the new Series 10 or stick with the SE? Let us help you figure out which version to get (and which to avoid).

Business Read on WIRED Top Stories
Elon Musk is navigating Brazil's X ban — and flirting with its far right

For more than two weeks, Brazilians have been without access to X. Brazil's Supreme Court blocked the platform after Elon Musk failed to comply with court rulings. As X evades the ban and Musk's companies work slowly toward a resolution, the real concern for many isn't just the absence of social media. It's Musk's power play over the government as he backs Brazil's far right.

X was banned on August 30th after months of back-and-forth between Musk and Supreme Court Justice Alexandre de Moraes. The conflict began in April when Musk publicized government requests for information and then removed all restrictions imposed on X profiles by Brazilian court orders. Moraes responded by including Musk in an investigation over organized political...

Crime and Courts Read on The Verge Tech
Suspect reportedly shouted "Allahu akbar" in Erasmus Bridge stabbing; one dead, one hurt

A man attacked two people with knives at the Erasmus Bridge in Rotterdam on Thursday evening, killing one and leaving the other seriously injured. The police have arrested the man.

Crime and Courts Read on NL Times
United Nations wants to treat AI with same urgency as climate change

A United Nations report released Thursday proposes having the international body oversee the first truly global effort for monitoring and governing artificial intelligence. The report, produced by the UN secretary general's High Level Advisory Body on AI, recommends the creation of a body similar to the Intergovernmental Panel on Climate Change to gather up-to-date information on AI and its risks.

The report calls for a new policy dialog on AI so that the UN's 193 members can discuss risks and agree upon actions. It further recommends that the UN take steps to empower poorer nations, especially those in the global south, to benefit from AI and contribute to its governance. These should include, it says, creating an AI fund to back projects in these nations, establishing AI standards and data-sharing systems, and creating resources such as training to help nations with AI governance.

Some of the report's recommendations could be facilitated by the Global Digital Compact, an existing plan to address digital and data divides between nations. It finally suggests creating an AI office within the UN dedicated to coordinating existing efforts within the UN to meet the report's goals.

Environment Read on Ars Technica
Updates From Nosferatu, Spider-Noir, and More

Plus, Superman & Lois' final season gears up for its big death moment.

Entertainment Read on Gizmodo
What it means that new AIs can "reason"

An underappreciated fact about large language models (LLMs) is that they produce "live" answers to prompts. You prompt them and they start talking in response, and they talk until they're done. The result is like asking a person a question and getting a monologue back in which they improv their answer sentence by sentence.

This explains several of the ways in which large language models can be so frustrating. The model will sometimes contradict itself even within a paragraph, saying something and then immediately following up with the exact opposite because it's just "reasoning aloud" and sometimes adjusts its impression on the fly. As a result, AIs need a lot of hand-holding to do any complex reasoning.

One well-known way to solve this is called chain-of-thought prompting, where you ask the large language model to effectively "show its work" by "thinking" out loud about the problem and giving an answer only after it has laid out all of its reasoning, step by step (see the sketch after this piece). Chain-of-thought prompting makes language models behave much more intelligently, which isn't surprising. Compare how you'd answer a question if someone shoves a microphone in your face and demands that you answer immediately to how you'd answer if you had time to compose a draft, review it, and then hit "publish."

OpenAI's latest model, o1 (nicknamed Strawberry), is the first major LLM release with this "think, then answer" approach built in. Unsurprisingly, the company reports that the method makes the model a lot smarter. In a blog post, OpenAI said o1 "performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. We also found that it excels in math and coding. In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o correctly solved only 13 percent of problems, while the reasoning model scored 83 percent."

This major improvement in the model's ability to think also intensifies some of the dangerous capabilities that leading AI researchers have long been on the lookout for. Before release, OpenAI tests its models for their capabilities with chemical, biological, radiological, and nuclear weapons, the abilities that would be most sought-after by terrorist groups that don't have the expertise to build them with current technology. As my colleague Sigal Samuel wrote recently, OpenAI o1 is the first model to score "medium" risk in this category. That means that while it's not capable enough to walk, say, a complete beginner through developing a deadly pathogen, the evaluators found that it "can help experts with the operational planning of reproducing a known biological threat."

These capabilities are one of the most clear-cut examples of AI as a dual-use technology: a more intelligent model becomes more capable in a wide array of uses, both benign and malign. If future AI does get good enough to tutor any college biology major through the steps involved in recreating, say, smallpox in the lab, it could potentially cause catastrophic casualties. At the same time, AIs that can tutor people through complex biology projects will do an enormous amount of good by accelerating lifesaving research. It is intelligence itself, artificial or otherwise, that is the double-edged sword. The point of doing AI safety work to evaluate these risks is to figure out how to mitigate them with policy so we can get the good without the bad.
Every time OpenAI or one of its competitors (Meta, Google, Anthropic) releases a new model, we retread the same conversations. Some people find a question on which the AI performs very impressively, and awed screenshots circulate. Others find a question on which the AI bombs — say, "how many 'r's are there in 'strawberry'" or "how do you cross a river with a goat" — and share those as proof that AI is still more hype than product.

Part of this pattern is driven by the lack of good scientific measures of how capable an AI system is. We used to have benchmarks that were meant to describe AI language and reasoning capabilities, but the rapid pace of AI improvement has gotten ahead of them, with benchmarks often "saturated." This means AI performs as well as a human on these benchmark tests, and as a result they're no longer useful for measuring further improvements in skill.

I strongly recommend trying AIs out yourself to get a feel for how well they work. (OpenAI o1 is only available to paid subscribers for now, and even then is very rate-limited, but there are new top model releases all the time.) It's still too easy to fall into the trap of trying to prove a new release "impressive" or "unimpressive" by selectively mining for tasks where they excel or where they embarrass themselves, instead of looking at the big picture.

The big picture is that, across nearly all tasks we've invented for them, AI systems are continuing to improve rapidly, but the incredible performance on almost every test we can devise hasn't yet translated into many economic applications. Companies are still struggling to identify how to make money off LLMs. A big obstacle is the inherent unreliability of the models, and in principle an approach like OpenAI o1's — in which the model gets more of a chance to think before it answers — might be a way to drastically improve reliability without the expense of training a much bigger model.

In all likelihood, there isn't going to be a silver bullet that suddenly fixes the longstanding limitations of large language models. Instead, I suspect they'll be gradually eroded over a series of releases, with the unthinkable becoming achievable and then mundane over the course of a few years — which is precisely how AI has proceeded so far.

But as ChatGPT — which itself was only a moderate improvement over OpenAI's previous chatbots but which reached hundreds of millions of people overnight — demonstrates, technical progress being incremental doesn't mean societal impact is incremental. Sometimes the grind of improvements to various parts of how an LLM operates — or improvements to its UI so that more people will try it, like the chatbot itself — pushes us across the threshold from "party trick" to "essential tool."

And while OpenAI has come under fire recently for ignoring the safety implications of their work and silencing whistleblowers, its o1 release seems to take the policy implications seriously, including collaborating with external organizations to check what their model can do. I'm grateful that they're making that work possible, and I have a feeling that as models keep improving, we will need such conscientious work more than ever.

A version of this story originally appeared in the Future Perfect newsletter.
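Not part of the Vox piece: a minimal sketch of the chain-of-thought idea described above, assuming the OpenAI Python SDK and an illustrative choice of the gpt-4o model (neither is specified by the article). The same question is asked once directly and once with an instruction to reason step by step before committing to an answer.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"

# Direct prompt: the model answers immediately, "reasoning aloud" as it goes.
direct = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice, not specified by the article
    messages=[{"role": "user", "content": question}],
)

# Chain-of-thought prompt: ask the model to lay out its reasoning first and
# only then give a final answer.
cot = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": question
        + "\nThink through the problem step by step, then give the final answer on its own line.",
    }],
)

print(direct.choices[0].message.content)
print(cot.choices[0].message.content)
```

Reasoning-tuned models such as o1 effectively build this "think, then answer" step into the model itself, so the extra step-by-step instruction no longer has to be added to the prompt.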

Education Read on Vox
Welcome to the new and improved Verge Deals newsletter

Hey, folks! Every week for the past four years, the team behind Verge Deals has combed the web, looking for the best deals and discounts on the tech we love most at The Verge. We pride ourselves on having tried and tested every product we recommend — well, almost every product — and we continue to share those deals with our readers via our daily deal coverage and Verge Deals newsletter. That being said, everyone could use a little change every now and again. No, Verge Deals is not going away — quite the contrary, actually. We’ve given our newsletter a fresh coat of virtual paint to reflect our new(ish) colors and design language, and we plan to continue to deliver a fresh batch of deals to your inbox every Friday afternoon. This time,...

Business Read on The Verge Tech
Want to Get Into Founder Mode? You Should Be So Lucky

Paul Graham’s viral essay explains why Brian Chesky and Steve Jobs ruled and professional managers stink. But if a manager is smart and the founder is meh, who’s better?

Education Read on WIRED Business