With help from Derek Robertson
It’s conventional wisdom by now that the future of war is unfolding in real time in Russia’s conflict with Ukraine.
For years, the simmering hostilities between Vladimir Putin’s government and his NATO-friendly neighbor have created a testing ground for information warfare and cybercrime.
And now that the conflict is an open military battle, it has played out on Twitter and in satellite images, using new technology in a way no other modern war has.
But the future can also be maddeningly slow to arrive — especially when the U.S. Defense Department’s acquisition system is involved.
As my colleague Lee Hudson and I reported in a piece out today, defense executives and bureaucrats are complaining that many of their efforts to provide new technologies to help Ukraine beat back the Russian invasion are bogged down in a Pentagon aid process that remains too lumbering and bureaucratic. An effort that began in April with an old-fashioned request-for-information process now looks likely to drag well into the summer.
Technologies waiting in the wings include promising new satellite communications tools to connect Ukrainian soldiers, radars that enhance the Ukrainian military’s ability to track and defeat Russian missiles or tanks, and the next level of drones that are easier to deploy to gather intelligence and target Russian formations.
One Pentagon organization that has expressed frustration at the pace is the Defense Innovation Unit, the Silicon Valley outpost established in 2015 to bypass the traditional acquisition system and tap into the most innovative technologies.
And our reporting found that despite its desire for new warfighting tech, DOD is reluctant to send unproven technology that hasn’t been used by U.S. forces or lacks proper military testing or training manuals.
One big fear: the Pentagon will rush new systems into Ukraine — spending untold sums of taxpayer dollars on high-tech gear that ends up being discarded because it is of little use or falls into the wrong hands.
“Systems that require significant training — or don’t have training packages developed — are more challenging,” said Jessica Maxwell, a Pentagon spokesperson. “Similarly, the intensity of this fight creates significant maintenance challenges, especially for systems that have not been ruggedized for combat.”
Scientists and philosophers have debated whether artificial intelligence can become “sentient” since before the technology even really existed, but there’s a much nearer-term question: What impact does today’s human-like AI have on our inner lives?
The Washington Post’s recent interview with a Google engineer who has become convinced that the company’s AI has come to life is just as much about the second question as it is about the first.
AI researchers have long warned that among the technology’s most potentially dangerous features might be its psychological effect on humans, ever keen to project our own values, fears, and biases onto otherwise neutral technology and media. It may not matter whether large language models like Google’s LaMDA (the one in question here) are sentient if they can effectively persuade us otherwise and alter our behavior accordingly.
“Machine learning is kind of boring,” said Ben Recht, a University of California, Berkeley professor with extensive experience in the field. “It’s just matching patterns, but matching patterns can be really powerful,” as in the case of using databases of human text to produce new and “original” content. “There’s been plenty of talk for decades about the harms of video games, and I’m not sure a conversational agent is that much different.”
The sweeping AI Act currently being devised in Europe contains numerous measures aimed at mitigating the technology’s potential harms, but ultimately how we choose to perceive the technology is beyond regulators’ reach. — Derek Robertson
This week’s panic in the crypto market is leading to yet another form of anxiety on the industry side, as big trading platforms fear a potential regulatory crackdown after withdrawals were halted from crypto lender Celsius.
POLITICO’s Bjarke Smith-Meyer and Sam Sutton have the story, writing that “at an Amsterdam fintech conference last week, attendees working for crypto companies were whispering concerns about Celsius and fearing a regulatory backlash.”
John Reed Stark, a former chief of the SEC’s Office of Internet Enforcement, told them that the lack of regulation is “mind-boggling,” adding that lending platforms have become “a plague with no regulatory oversight, no consumer protections. No fiduciary infrastructure of any type.”
The word choice of “plague” doesn’t leave much to the imagination when it comes to how regulators might approach the Wild West-like network of crypto lenders and trading platforms. That network has sprawled amid a lack of regulatory clarity around crypto — but that’s rapidly changing, as the SEC steadily makes a patchwork case that cryptocurrencies are securities, and crypto-friendly senators are putting forth a competing vision that’s largely self-regulatory. — Derek Robertson
Stay in touch with the whole team: Ben Schreckinger ([email protected]); Derek Robertson ([email protected]); Konstantin Kakaes ([email protected]); and Heidi Vogt ([email protected]). Follow us on Twitter @DigitalFuture.