We live in a world where technologies are specifically designed to spread moral outrage and moral panic to enhance shareholder returns. We call these technologies "social media" but in fact "anti-social media" is a more appropriate name. Bullies and shouty people dominate these platforms with simplistic, meme-friendly hate-takes on the issue du jour.
Real moral thinking is not like that, and I'm going to engage in some here.
The morally optimum state, Aristotle told us, is a mean between extremes. That is, it's in the middle of things, and in particular, it's in the messy middle of things. Not at some "pure" outlying point, but somewhere in the vicinity of various awkward compromises.
Human beings are biological systems, and the Darwinian processes that produced us generate adequate, not optimal, solutions to the problem of the day. There is no uniquely defined, deep, narrow evolutionary optimum in anything: diet, dress, sleep cycle, pro-social vs anti-social behaviour, and so on...
To claim that in any case there is a single, deep, clear, narrow optimum for one person, much less all people, is to claim that human beings did not arise from evolution by variation and natural selection. Maybe we were created by god or sprang fully formed from the head of Zeus... I dunno. All I know is anyone claiming that there is ever any simple, obvious, clear, unequivocal, deep, narrow, moral or behavioural optimum is an evolutionary denialist.
This is not to say that there aren't things that are unequivocally better and things that are unequivocally worse, or even bad. Think of human life in all things as a broad valley, a shallow bowl caught on all sides between mountains, with rolling hills and dells scattered throughout, and the odd crag or deep gully here or there. There are a hell of a lot of places where humans can live comfortably in such a valley. Insisting that everyone be herded and penned into one particular spot and that anyone who wants to live elsewhere is evil does not reflect the reality of this landscape.
Which is why--although I have my preferred little craggy bit--I try to talk to people who prefer to live elsewhere, even while the rushing mob of shouting trolls and angry outragers can make that difficult.
This is because anti-social media is, like all technology, a mixed blessing. It empowers the trolls at least as much as it empowers the people of good will who want to have honest and difficult conversations rather than sharing memes about how much--and whom--they hate.
The benefits and hazards of technology are something I've given a lot of thought to: I'm a nuclear physicist who still believes in the value of nuclear power as a replacement for base load coal in our fight against climate change, especially now that we have solutions to so many of the problems of early reactor designs. But it has some downsides, too, like giving us the power to blow ourselves up.
I'm also the inventor of an entire class of image registration algorithms that now dominates the field, and it is very likely that the drones that are killing Russian tank crews in Ukraine are using derivatives of this algorithm in their terminal phase guidance systems. When I invented the algorithm--called pseudo-correlation--I was working on a medical imaging problem, but I was aware of the potential military uses right away. Cruise missiles were at that time part of a major strategic shift in the West, because their precise terminal guidance allowed them to destroy ICBMs in their silos, creating the possibility of a non-nuclear first-strike: when your targeting accuracy is a metre or so, you don't need a nuclear warhead to ruin a missile launch officer’s day.
At the time, implementing such a system--which requires matching in real-time what the camera on the nose of the missile is seeing to a target image--took extremely expensive specialized hardware. Pseudo-correlation allowed the same job to be done on commodity hardware that cost a few thousand dollars in the early 1990s. Today you could do it on your watch, or a five dollar micro-controller.
I published the algorithm--it's my most cited paper in applied physics--and watched over the decades as it popped up here and there in patents and whatnot. It has a more famous younger cousin called "mutual information" that came along three or four years later, and independent (re)invention was always a possibility. Using Monte Carlo methods to evaluate a cross-correlation integral was bound to occur to someone else eventually. The history of science and engineering is rife with examples of the same thing being discovered or invented multiple times at around the same time.
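To make the general idea concrete, here is a toy sketch of what "Monte Carlo evaluation of a cross-correlation integral" can look like in image registration: instead of summing the correlation over every pixel of a template, you estimate it from a random subset, which is what makes the job cheap enough for commodity hardware. This is an illustrative sketch of the sampling idea only--the function names and details are my own, not the published pseudo-correlation algorithm.

```python
import numpy as np

def sampled_correlation(image, template, offset, n_samples=500, seed=0):
    """Estimate the normalized correlation between `template` and the patch
    of `image` at `offset`, using a random subset of pixel positions.

    A toy Monte Carlo estimate: sampling n_samples pixels instead of
    summing over the whole template keeps the cost roughly constant
    regardless of template size.
    """
    rng = np.random.default_rng(seed)
    th, tw = template.shape
    oy, ox = offset
    patch = image[oy:oy + th, ox:ox + tw]
    # Draw a random subset of pixel positions (without replacement).
    idx = rng.choice(th * tw, size=min(n_samples, th * tw), replace=False)
    a = patch.ravel()[idx].astype(float)
    b = template.ravel()[idx].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def best_match(image, template, n_samples=500, seed=0):
    """Scan every offset and return the one with the highest estimated
    correlation -- i.e. where the template best matches the image."""
    th, tw = template.shape
    ih, iw = image.shape
    scores = {
        (y, x): sampled_correlation(image, template, (y, x), n_samples, seed)
        for y in range(ih - th + 1)
        for x in range(iw - tw + 1)
    }
    return max(scores, key=scores.get)
```

A real guidance or medical-registration system would search far more cleverly than this brute-force scan, but the core trade-off is visible even here: the estimate from a few hundred sampled pixels is usually good enough to rank candidate offsets correctly, at a fraction of the cost of the full sum.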
So when we evaluate what "should" be done in the face of any new technological capability we should keep that in mind: if the current discoverer doesn't develop it, someone else almost certainly will. And everything we invent will get used for everything it is able to be used for.
All power gets used.
We're just not necessarily very good at predicting what it will be used for: pseudo-correlation and its relatives have had a big positive impact on image-guided surgery, which didn't even exist when I invented the algorithm. How any given discovery will interact with other developments makes the full scope of any power we create genuinely unpredictable.
So when it comes to new work in biotechnology, where algorithms created to find new drugs could as easily be used to find new toxins (link may or may not work) the most we can say is: all power gets used.
Whatever can be done with a new technology will be done with that new technology, and we don't know what that entails.
We should go ahead regardless.
The problem is simple: since we don't know what the risks and benefits of any technology will be, if we are going to be really risk-averse we won't develop any technology at all. Some technologies are so inherently risky that we have entire government departments dedicated to countering daily instances of those technologies getting out of control. One technology that falls into that category is fire, which has been around for longer than humans have been human (and, unsurprisingly, "far earlier than had been previously thought".)
Think about that: after a million years we're still working on balancing the benefits and risks of fire.
Now tell me that any modern technological question about risks and benefits has a simple, obvious, one-sided answer.
Drone warfare, biohacking, and more are going to have the kind of impact on human life in the 21st century and beyond that fossil fuels and steam had in the 19th and 20th centuries. While I'm far from a technological determinist, there's no question that the material conditions of destruction are as important to the shape of viable civilizations as the material conditions of production.
We don't know where these new technologies will take us... we only know they will happen, and we'd better be prepared for a flexible, resilient response, not rigid insistence that there is One True Way to respond to any new technological power or possibility.