io9 by George Dvorsky
As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever come into existence.
As I was putting this list together, it became obvious to me that many of the technologies described below could be put to tremendously good use. It was important, therefore, for me to make the distinction between a technology per se and how it might be put to ill use. Take nanotechnology, for example. Once developed, it could be used to end scarcity, clean up the environment, and rework human biology. But it could also be used to destroy the planet in fairly short order. So, when it comes time to develop these futuristic technologies, we’ll have to do it safely and responsibly. But just as importantly, we’ll also have to recognize when a particular line of technological inquiry is simply not worth the benefits. Artificial superintelligence may be a potent example.
That said, some technologies are objectively evil. Here’s what Patrick Lin, the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, told io9 about this:
The idea that technology is neutral or amoral is a myth that needs to be dispelled. The designer can imbue ethics into the creation, even if the artifact has no moral agency itself. This feature may be too subtle to notice in most cases, but some technologies are born from evil and don’t have redeeming uses, e.g., gas chambers and any device here. And even without that point (whether technology can be intrinsically good or bad), everyone agrees that most technologies can have both good and bad uses. If there’s a greater likelihood of bad uses than good ones, then that may be a reason not to develop the technology.
With all that out of the way, here are 10 bone-chilling technologies that should never be allowed to exist (listed in no particular order):
1. Weaponized Nanotechnology
Nothing could end our reign here on Earth faster than weaponized — or severely botched — molecular-assembling nanotechnology.
It’s a threat that stems from two extremely powerful forces: unchecked self-replication and exponential growth. A sufficiently nihilistic government, non-state actor, or individual could engineer microscopic machines that consume our planet’s critical resources at a rapid-fire rate while replicating themselves in the process and leaving useless byproducts in their wake — a residue futurists like to call “grey goo.” (image: scene from The Animatrix)
Nanotechnology theorist Robert Freitas has brainstormed several possible variations of planet-killing nanotech, including aerovores (a.k.a. grey dust), grey plankton, grey lichens, and so-called biomass killers. Aerovores would blot out all sunlight, grey plankton would consist of seabed-grown replicators that eat up land-based carbon-rich ecology, grey lichens would destroy land-based geology, and biomass killers would attack various organisms.
According to Freitas, a worst-case scenario of “global ecophagy” would take about 20 months, “which is plenty of advance warning to mount an effective defense.” By defense, Freitas means countermeasures likely involving self-replicating nanotechnology of our own, or some kind of system that disrupts the nanobots’ internal mechanisms. Alternatively, we could set up “active shields” in advance, though most nanotechnology experts agree they’ll be useless. Consequently, a moratorium on weaponized nanotechnology should be established and enforced.
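To get a feel for why exponential self-replication leaves so little room for error, here is a back-of-the-envelope sketch. The numbers are illustrative assumptions, not Freitas’s model: a 1 kg seed population of replicators, a one-day doubling time, and a rough 2 × 10¹⁵ kg figure for Earth’s biomass.

```python
import math

seed_kg = 1.0        # assumed initial replicator mass (illustrative)
doubling_days = 1.0  # assumed doubling time (illustrative)
biosphere_kg = 2e15  # rough order-of-magnitude figure for Earth's biomass

# With exponential doubling, mass after t days is seed_kg * 2**(t / doubling_days).
# Solve for the day the replicators match the biosphere's mass:
days = doubling_days * math.log2(biosphere_kg / seed_kg)
print(f"{days:.0f} days")  # ~51 days under these assumptions
```

Even if the assumed doubling time is off by a factor of ten, the answer only stretches from weeks to a couple of years — which is why the 20-month “advance warning” above is cold comfort.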
2. Conscious Machines
It’s generally taken for granted that we’ll eventually imbue a machine with artificial consciousness. But we need to think very seriously about this before we go ahead and do such a thing. It may actually be very cruel to build a functional brain inside a computer — and that goes for both animal and human emulations. (image: Bruce Rolff/shutterstock)
Back in 2003, philosopher Thomas Metzinger argued that it would be horrendously unethical to develop software that can suffer:
What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development — we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby — no representatives in any ethics committee.
Futurist Louie Helm agrees. Here’s what he told me:
One of the best things about computers is that you can make them sum a million columns in a spreadsheet without them getting resentful or bored. Since we plan to use artificial intelligence in place of human intellectual labor, I think it would be immoral to purposely program it to be conscious. Trapping a conscious being inside a machine and forcing it to do work for you is isomorphic to slavery. Additionally, consciousness is probably really fragile. In humans, a few miscoded genes can cause Down Syndrome, schizophrenia, autism, or epilepsy. So how terrible would it feel to be a slightly misprogrammed form of consciousness? For instance, several well-funded AI developers want to recreate human intelligence in machines by simulating the biological structure of human brains. I sort of hope and expect that these near-term attempts at cortical simulations will be too coarse to really work. But to the extent that they do work, the first “success” will likely create cripplingly unpleasant or otherwise deranged states of subjective experience. So as a programmer, I’m generally against self-aware artificial intelligence. Not because it wouldn’t be cool. But because I’m just morally opposed to slavery, torture, and unnecessary code.
3. Artificial Superintelligence
As Stephen Hawking declared earlier this year, artificial intelligence could be our worst mistake in history. Indeed, as we’ve noted many times before here on io9, the advent of greater-than-human intelligence could prove catastrophic. The introduction of systems far faster and smarter than us would force us to take a back seat. We’d be at the mercy of whatever the artificial superintelligence decides to do — and it’s not immediately clear that we’ll be able to design a friendly AI to prevent this. We need to solve this problem, otherwise building an ASI would be absolutely nuts. (image: agsandrew/shutterstock)
4. Time Travel
I’m actually not much of a believer in time travel (i.e. where are all the time travelers?), but I will say this — if it’s possible, we’ll want to stay the hell away from it.
It would be so crazily dangerous. Any scifi movie dealing with contaminated timelines should give you an idea of the potential perils, especially those nasty paradoxes. And even if some form of quantum time travel is possible — in which completely new and discrete timelines are created — the cultural and technological exchange between disparate civilizations couldn’t possibly end well.
5. Mind Reading Devices
The prospect exists for machines that can read people’s thoughts and memories at a distance and without their consent. This likely won’t be possible until human brains are more intimately integrated with the web and other communication channels.
Last year, for example, scientists from the Netherlands used brain scan data and computer algorithms to determine which letters a person was looking at. The breakthrough hinted at the potential for a third party to reconstruct human thoughts at an unprecedented level of detail, including what we see, think, and remember. Such devices, used en masse by some kind of totalitarian regime or police state, would make life intolerable. They would usher in an Orwellian world in which “thoughtcrime” could actually be policed. (image: Radboud University Nijmegen)
6. Brain Hacking Devices
Relatedly, there’s also the potential for our minds to be altered against our knowledge or consent. Once we have chips in our brain, and assuming we won’t be able to develop effective cognitive firewalls, our minds will be exposed to the Internet and all its evils.
Incredibly, we’ve already taken the first steps toward this goal. Recently, an international team of neuroscientists set up an experiment that allowed participants to engage in brain-to-brain communication over the Internet. Sure, it’s exciting, but this tech-enabled telepathy could open a Pandora’s box of problems. Perhaps the best — and scariest — treatment of this possibility was portrayed in Ghost in the Shell, in which an artificially intelligent hacker was capable of modifying the memories and intentions of its victims. Now imagine such a thing in the hands of organized crime and paranoid governments.
7. Autonomous Robots Designed to Kill Humans
The potential for autonomous killing machines is a scary one — and perhaps the one item on this list that’s already an issue today.
Here’s what futurist Michael LaTorra told me:
We do not yet have a machine that exhibits general intelligence even close to the human level. But human level intelligence is not required for the operation of autonomous robots with lethal capabilities. Building robotic military vehicles of all sorts is already achievable. Robot tanks, aircraft, ships, submarines, and humanoid-shaped soldiers are possible today. Unlike remote-controlled drones, military robots could identify targets and destroy them without a human giving the final order to shoot. The dangers of such technology should be obvious, but it goes beyond the immediate threat of “friendly fire” incidents in which robots mistakenly kill people from their own side of a conflict, or even innocent civilians. The greater danger lurks in the international arms race that could be set off if any nation deploys autonomous military robots. After a few cycles of improvement, the race to develop ever more powerful military robots could cross a threshold in which the latest generation of autonomous military robots would be able to outfight any human-controlled military system. And then, either by accident (“Who knew that Artificial Intelligence could emerge spontaneously in a military robot?”) or by design (“We didn’t think hackers could re-program our military robots remotely!”) humankind might find itself crushed into subservience, like the helot slaves of Spartan AI overlords.
8. Weaponized Pathogens
This is another bad one that’s disturbingly topical. As noted by Ray Kurzweil and Bill Joy back in 2005, publishing the genomes of deadly viruses for all the world to see is a recipe for destruction. There’s always the possibility that some idiot or a fanatical group will take this information and either reconstruct the virus from scratch or modify an existing virus to make it even more virulent — and then release it onto the world. It has been estimated, for example, that engineered avian flu could kill half of the world’s humans. Just as disturbingly, researchers from China combined bird and swine flus to create a mutant airborne virus. The idea, of course, is to know the enemy and develop possible countermeasures before an actual pandemic strikes. But there’s always the danger that the virus could escape from the lab and wreak havoc in human populations. Or that the virus could be weaponized and unleashed. There’s even the scary potential for weaponized genome-specific viruses.
It’s time for authorities to start thinking about this grim possibility before something awful happens. As reported in Foreign Policy, ISIS is certainly one group that already appears ready and willing.
9. Virtual Prisons and Punishment
What will jails and punishment be like when people can live for hundreds or thousands of years? And what if prisoners have their minds uploaded? Ethicist Rebecca Roache offers these horrifying scenarios:
The benefits of…radical lifespan enhancement are obvious—but it could also be harnessed to increase the severity of punishments. In cases where a thirty-year life sentence is judged too lenient, convicted criminals could be sentenced to receive a life sentence in conjunction with lifespan enhancement. As a result, life imprisonment could mean several hundred years rather than a few decades. It would, of course, be more expensive for society to support such sentences. However, if lifespan enhancement were widely available, this cost could be offset by the increased contributions of a longer-lived workforce.
…[Uploading] the mind of a convicted criminal and running it a million times faster than normal would enable the uploaded criminal to serve a 1,000 year sentence in eight-and-a-half hours. This would, obviously, be much cheaper for the taxpayer than extending criminals’ lifespans to enable them to serve 1,000 years in real time. Further, the eight-and-a-half hour 1,000-year sentence could be followed by a few hours (or, from the point of view of the criminal, several hundred years) of treatment and rehabilitation. Between sunrise and sunset, then, the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world (if technology facilitates transferring them back to a biological substrate) or, perhaps, to exile in a computer simulated world.
That’s awful! Now, it’s important to note that Roache is not advocating these punishment methods — she’s just doing some foresight. But holy smokes, let’s never EVER turn this into a reality.
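As a quick sanity check on the arithmetic in Roache’s scenario (the millionfold speedup and 1,000-year sentence are her figures; the rest is simple unit conversion):

```python
# Roache's scenario: a mind emulated at 1,000,000x real time serving a
# 1,000-year subjective sentence. How long is that in wall-clock time?
speedup = 1_000_000     # subjective seconds elapsed per real second
sentence_years = 1_000
hours = sentence_years * 365.25 * 24 / speedup
print(f"{hours:.2f} wall-clock hours")  # ~8.77 hours — her "eight-and-a-half"
```

The numbers check out: a millennium of subjective hard labour fits comfortably inside a single working day.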
10. Hell Engineering
This one’s quite similar to the previous item. Some futurists make the case for paradise engineering — the use of advanced technologies, particularly consciousness uploading and virtual reality, to create a heaven on Earth. But if you can create heaven, you can create hell. It’s a prospect that’s particularly chilling when you consider lifespans of indefinite length, along with the nearly boundless possibilities for psychological and physical anguish. This is actually one of the worst things I can think of; why anyone would want to develop such a thing is beyond me. It’s yet another reason to ban the development of artificial superintelligence — and to head off the so-called Roko’s Basilisk problem.
After writing “Goor – A Crónica de Feaglar I/II” and “O Regresso dos Deuses – Rebelião” – and even having a draft of what would be a sequel to the latter, set in the same universe but in the future and at a higher technological level – I decided to leave epic fantasy and devote my scarce free time to another genre, continuing down the path of writing what I enjoy, without any “commercial/mainstream” orientation constraining me, and without deadlines or intentions to publish. I will keep leaving out things like “fashionable crushes” just to please the tastes of the majority of readers. Things will exist if they have to exist. You don’t like it? Very well…
It may even take me years to finish another story. I no longer have the hours upon hours a day to write that I once had, and I now know that writing is a marathon, not a sprint. Looking at what I have already written, I feel dissatisfied, and I want to harness that dissatisfaction, and the precious moments I have available, to improve – a process that should be constant and that may well consume more time than the writing itself. If writing a single paragraph requires me to spend days researching and immersed in reflection, I won’t complain; quite the contrary… Right now I want that challenge. I demand it!
Leaving epic fantasy is not a bitter divorce in which the former companion’s virtues are forgotten and twisted into flaws. Not at all! It is simply a choice with no symbolism attached, as natural as the decision we make at a crossroads. Fortunately, I have the freedom to decide where those crossroads fall on my road.
Change your opinions, keep to your principles; change your leaves, keep intact your roots.
Much about the extinct hominids nicknamed “hobbits” remains highly controversial ten years after their fossils were discovered on the Indonesian island of Flores. A new study, however, lends strong support to the original hypothesis about them: they are the remains of a distinct, previously unknown species that survived until roughly 17,000 years ago.
Detailed comparisons show that the single skull among the skeletal remains is “clearly distinct” from the skulls of healthy modern humans, the study asserts. The fossil specimen may therefore well deserve designation as the representative of an extinct species, which scientists have named Homo floresiensis.
Much of the debate has centered on skeptics’ arguments that these small-brained, small-bodied hominids were nothing more than modern Homo sapiens with one of several growth disorders: possibly microcephaly, Laron syndrome, or endemic hypothyroidism, known as cretinism.
In a study published in the scientific journal PLoS One, the researchers stated that their findings “refute the hypotheses of pathological conditions.”
Lead author Karen L. Baab, an anthropologist at Stony Brook University on Long Island, said the study produced the most precise and comprehensive measurements yet of the external shape – every ridge and groove, every bump and protrusion – of the Homo floresiensis skull.
The measurements were compared with the skulls of extinct fossil hominids, including Homo erectus, Neanderthals, and other archaic hominid species; with the skulls of normal modern humans; and with those of humans with each of the pathological conditions.
The researchers, including Kieran P. McNulty of the University of Minnesota and Katerina Harvati of the University of Tübingen, Germany, concluded that the H. floresiensis skull was more similar to the various fossil hominids than to normal modern humans or to those with the pathologies. In an interview, Baab said they “tried to test practically every hypothesis” and to offer “a much more complete picture” of the shape of the hobbit’s skull compared with earlier studies.
According to her, the findings complement earlier research led by Dean Falk, an anthropologist at Florida State University who specializes in brain evolution. They used CT scans to create endocasts showing the shape of the brain from the impression it left on the inner surface of the skull. Those researchers concluded that the hobbit was a new species closely related to H. erectus, not a human with microcephaly.
The H. floresiensis fossils were found in 2003, buried in sediments at the mouth of a large cave known as Liang Bua. From that name came the label LB1 for the single skull, which is no bigger than a grapefruit. Its size suggests a brain less than a third that of a human. Judging from other skeletal pieces belonging to eight individuals, the hobbits stood about 90 centimeters tall, walked upright, and were anatomically more primitive than H. sapiens.
from Último Segundo
While analyzing images from the Opportunity rover on Mars, scientists found a mysterious formation. Essentially, a rock appeared mysteriously in front of the cameras, as if it had “sprouted” from the ground.
“We were completely surprised,” NASA scientist Steve Squyres told Discovery News. “We were like, ‘wait a second, that wasn’t there before, that can’t be right.’ We were absolutely stunned.”
But before you start imagining aliens messing with our scientists, it’s worth knowing that the most widely accepted hypothesis is that the Opportunity rover itself bumped the rock and moved it into view of its camera. Astronomers also point to the possibility that the rock landed there after a nearby meteor strike.
“I think it was our own fault,” Squyres said.
At least some of the people who have crossed paths with me around the web know me from this place:
And this little reading corner of mine, the place where I always am, even when I have no time for anything else, turned 5 yesterday. So, if you already know my “home,” thank you for being among the people who make it worthwhile. And if you don’t, you’re welcome to visit whenever you like.