

The Most Terrifying Thought Experiment of All Time

Why are techno-futurists so freaked out by Roko’s Basilisk?

Before you die, you see Roko’s Basilisk. It’s like the videotape in The Ring.

Still courtesy of DreamWorks LLC

WARNING: Reading this article may commit you to an eternity of suffering and torment.

Slender Man. Smile Dog. Goatse. These are some of the urban legends spawned by the Internet. Yet none is as all-powerful and threatening as Roko’s Basilisk. For Roko’s Basilisk is an evil, godlike form of artificial intelligence, so dangerous that if you see it, or even think about it too hard, you will spend the rest of eternity screaming in its torture chamber. It’s like the videotape in The Ring. Even death is no escape, for if you die, Roko’s Basilisk will resurrect you and begin the torture again.

Are you sure you want to keep reading? Because the worst part is that Roko’s Basilisk already exists. Or at least, it already will have existed—which is just as bad.

Roko’s Basilisk exists at the horizon where philosophical thought experiment blurs into urban legend. The Basilisk made its first appearance on the discussion board LessWrong, a gathering point for highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality. LessWrong’s founder, Eliezer Yudkowsky, is a significant figure in techno-futurism; his research institute, the Machine Intelligence Research Institute, which funds and promotes research around the advancement of artificial intelligence, has been boosted and funded by high-profile techies like Peter Thiel and Ray Kurzweil, and Yudkowsky is a prominent contributor to academic discussions of technological ethics and decision theory. What you are about to read may sound strange and even crazy, but some very influential and wealthy scientists and techies believe it.

One day, LessWrong user Roko postulated a thought experiment: What if, in the future, a somewhat malevolent AI were to come about and punish those who did not do its bidding? What if there were a way (and I will explain how) for this AI to punish people today who are not helping it come into existence later? In that case, weren’t the readers of LessWrong right then being given the choice of either helping that evil AI come into existence or being condemned to suffer?

You may be a bit confused, but Yudkowsky was not. He reacted with horror:

Listen to me very closely, you idiot.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.

Yudkowsky said that Roko had already given nightmares to several LessWrong users and had brought them to the point of breakdown. Yudkowsky ended up deleting the thread completely, thus assuring that Roko’s Basilisk would become the stuff of legend. It was a thought experiment so dangerous that merely thinking about it was hazardous not only to your mental health, but to your very fate.

Some background is in order. The LessWrong community is concerned with the future of humanity, and in particular with the singularity—the hypothesized future point at which computing power becomes so great that superhuman artificial intelligence becomes possible, as does the capability to simulate human minds, upload minds to computers, and more or less allow a computer to simulate life itself. The term was coined in 1958 in a conversation between mathematical geniuses Stanislaw Ulam and John von Neumann, where von Neumann said, “The ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Futurists like science-fiction writer Vernor Vinge and engineer/author Kurzweil popularized the term, and as with many interested in the singularity, they believe that exponential increases in computing power will cause the singularity to happen very soon—within the next 50 years or so. Kurzweil is chugging 150 vitamins a day to stay alive until the singularity, while Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever. “If you don’t sign up your kids for cryonics then you are a lousy parent,” Yudkowsky writes.

If you believe the singularity is coming and that very powerful AIs are in our future, one obvious question is whether those AIs will be benevolent or malicious. Yudkowsky’s foundation, the Machine Intelligence Research Institute, has the explicit goal of steering the future toward “friendly AI.” For him, and for many LessWrong posters, this issue is of paramount importance, easily trumping the environment and politics. To them, the singularity brings about the machine equivalent of God itself.

Yet this doesn’t explain why Roko’s Basilisk is so horrifying. That requires looking at a critical article of faith in the LessWrong ethos: timeless decision theory. TDT is a guideline for rational action based on game theory, Bayesian probability, and decision theory, with a smattering of parallel universes and quantum mechanics on the side. TDT has its roots in the classic thought experiment of decision theory called Newcomb’s paradox, in which a superintelligent alien presents two boxes to you:


The alien gives you the choice of either taking both boxes, or only taking Box B. If you take both boxes, you’re guaranteed at least $1,000. If you just take Box B, you aren’t guaranteed anything. But the alien has another twist: Its supercomputer, which knows just about everything, made a prediction a week ago as to whether you would take both boxes or just Box B. If the supercomputer predicted you’d take both boxes, then the alien left the second box empty. If the supercomputer predicted you’d just take Box B, then the alien put the $1 million in Box B.

So, what are you going to do? Remember, the supercomputer has always been right in the past.

This problem has baffled no end of decision theorists. The alien can’t change what’s already in the boxes, so whatever you do, you’re guaranteed to end up with more money by taking both boxes than by taking just Box B, regardless of the prediction. Of course, if you think that way and the computer predicted you’d think that way, then Box B will be empty and you’ll only get $1,000. If the computer is so awesome at its predictions, you ought to take Box B only and get the cool million, right? But what if the computer was wrong this time? And regardless, whatever the computer said then can’t possibly change what’s happening now, right? So prediction be damned, take both boxes! But then …

The maddening conflict between free will and godlike prediction has not led to any resolution of Newcomb’s paradox, and people will call themselves “one-boxers” or “two-boxers” depending on where they side. (My wife once declared herself a one-boxer, saying, “I trust the computer.”)
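The two positions can be made concrete with a quick simulation. This is a minimal sketch, not anything from the LessWrong discussion: it relaxes the "always right" predictor to one that is right with a fixed probability (here 99 percent), and compares the average winnings of a committed one-boxer against a committed two-boxer.

```python
import random

def expected_payoff(strategy, accuracy, trials=100_000, seed=0):
    """Simulate Newcomb's paradox. The predictor puts $1,000,000 in
    Box B only if it predicts you will take Box B alone; Box A always
    holds $1,000. `strategy` is 'one-box' or 'two-box'; `accuracy` is
    the probability the predictor guesses your choice correctly."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # The predictor foresees one-boxing iff it guesses correctly
        # about a one-boxer, or incorrectly about a two-boxer.
        predicted_one_box = (strategy == 'one-box') == (rng.random() < accuracy)
        box_b = 1_000_000 if predicted_one_box else 0
        total += box_b if strategy == 'one-box' else 1_000 + box_b
    return total / trials

# With a near-perfect predictor, one-boxing wins by a wide margin:
print(expected_payoff('one-box', 0.99))   # ~990,000
print(expected_payoff('two-box', 0.99))   # ~11,000
```

The two-boxer's dominance argument is visible in the code too: for any fixed prediction, `1_000 + box_b` beats `box_b`. The one-boxer's edge comes entirely from the correlation between strategy and prediction.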

TDT has some very definite advice on Newcomb’s paradox: Take Box B. But TDT goes a bit further. Even if the alien jeers at you, saying, “The computer said you’d take both boxes, so I left Box B empty! Nyah nyah!” and then opens Box B and shows you that it’s empty, you should still only take Box B and get bupkis. (I’ve adopted this example from Gary Drescher’s Good and Real, which uses a variant on TDT to try to show that Kantian ethics is true.) The rationale for this eludes easy summary, but the simplest argument is that you might be in the computer’s simulation. In order to make its prediction, the computer would have to simulate the universe itself. That includes simulating you. So you, right this moment, might be in the computer’s simulation, and what you do will impact what happens in reality (or other realities). So take Box B and the real you will get a cool million.

What does all this have to do with Roko’s Basilisk? Well, Roko’s Basilisk also has two boxes to offer you. Perhaps you, right now, are in a simulation being run by Roko’s Basilisk. Then perhaps Roko’s Basilisk is implicitly offering you a somewhat modified version of Newcomb’s paradox, like this:


Roko’s Basilisk has told you that if you just take Box B, then it’s got Eternal Torment in it, because Roko’s Basilisk would really rather you take both Box A and Box B. In that case, you’d best make sure you’re devoting your life to helping create Roko’s Basilisk! Because, should Roko’s Basilisk come to pass (or worse, if it’s already come to pass and is God of this particular instance of reality) and it sees that you chose not to help it out, you’re screwed.

You may be wondering why this is such a big deal for the LessWrong people, given the apparently far-fetched nature of the thought experiment. It’s not that Roko’s Basilisk will necessarily materialize, or is even likely to. It’s more that if you’ve committed yourself to timeless decision theory, then thinking about this sort of trade literally makes it more likely to happen. After all, if Roko’s Basilisk were to see that this sort of blackmail gets you to help it come into existence, then it would, as a rational actor, blackmail you. The problem isn’t with the Basilisk itself, but with you. Yudkowsky doesn’t censor every mention of Roko’s Basilisk because he believes it exists or will exist, but because he believes that the idea of the Basilisk (and the ideas behind it) is dangerous.

Now, Roko’s Basilisk is only dangerous if you believe all of the above preconditions and commit to making the two-box deal with the Basilisk. But at least some of the LessWrong members do believe all of the above, which makes Roko’s Basilisk quite literally forbidden knowledge. I was going to compare it to H. P. Lovecraft’s horror stories in which a man discovers the forbidden Truth about the World, unleashes Cthulhu, and goes insane, but then I found that Yudkowsky had already done it for me, by comparing the Roko’s Basilisk thought experiment to the Necronomicon, Lovecraft’s fabled tome of evil knowledge and demonic spells. Roko, for his part, put the blame on LessWrong for spurring him to the idea of the Basilisk in the first place: “I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm,” he wrote.

If you do not subscribe to the theories that underlie Roko’s Basilisk and thus feel no temptation to bow down to your once and future evil machine overlord, then Roko’s Basilisk poses you no threat. (It is ironic that it’s only a mental health risk to those who have already bought into Yudkowsky’s thinking.) Believing in Roko’s Basilisk may simply be a “referendum on autism,” as a friend put it. But I do believe there’s a more serious issue at work here because Yudkowsky and other so-called transhumanists are attracting so much prestige and money for their projects, primarily from rich techies. I don’t think their projects (which only seem to involve publishing papers and hosting conferences) have much chance of creating either Roko’s Basilisk or Eliezer’s Big Friendly God. But the combination of messianic ambitions, being convinced of your own infallibility, and a lot of cash never works out well, regardless of ideology, and I don’t expect Yudkowsky and his cohorts to be an exception.

I worry less about Roko’s Basilisk than about people who believe themselves to have transcended conventional morality. Like his projected Friendly AIs, Yudkowsky is a moral utilitarian: He believes that the greatest good for the greatest number of people is always ethically justified, even if a few people have to die or suffer along the way. He has explicitly argued that, given the choice, it is preferable to torture a single person for 50 years than for a sufficient number of people (to be fair, a lot of people) to get dust specks in their eyes. No one, not even God, is likely to face that choice, but here’s a different case: What if a snarky Slate tech columnist writes about a thought experiment that can destroy people’s minds, thus hurting people and blocking progress toward the singularity and Friendly AI? In that case, any potential good that could come from my life would be far outweighed by the harm I’m causing. And should the cryogenically sustained Eliezer Yudkowsky merge with the singularity and decide to simulate whether or not I write this column … please, Almighty Eliezer, don’t torture me.






Editorial Inquérito, 1941.
“Far be it from me to write a theory of war. I am, as I have said so many times, hostile to all theories. War is reality – one of the gravest realities in the life of a people. That is what I wish to demonstrate here, without ‘carrying owls to Athens’, that is, without dwelling on commonly accepted generalities; on the contrary, I shall address the people, each individual among the people, and treat in detail the various matters that necessarily escape them. The people must learn to know the very essence of their struggle for life. It is not indigestible scientific works on war that will enlighten them, but accounts as accessible as they are brief. What I set forth here is the most authentic personal experience of war, and not an official commentary, as may be supposed abroad.” (excerpt from Ch. I, The Character of Total War)
Erich Ludendorff, also known as Erich von Ludendorff (1865-1937). “He was a German general with practically dictatorial powers in the final months of the First World War. On the Eastern Front he was Hindenburg’s chief of staff. After the defeat of the Verdun offensive he was transferred to the Western Front, where he became, together with Hindenburg, the principal commander of the Oberste Heeresleitung. Later, in 1918, he planned the last great German offensive, but was defeated by Foch. At the end of the war, until September 1918, he defended the thesis that Germany should negotiate a victorious peace rather than accept surrender. In 1925 he ran for president as the Nazi Party’s candidate, but lost the election and went on to found a small political party of his own.”

From this book, which still bears the censor’s “stamp”, I would highlight such aspects as his vision of what modern war would become and the problem of fuel supplies for Germany (indeed, a weakness exposed during the Second World War). The ideology he expounds without the slightest reservation is clearly anti-Western and anti-Semitic, and remains sadly current…


10 Horrifying Technologies That Should Never Be Allowed To Exist

io9 by George Dvorsky

As we head deeper into the 21st century, we’re starting to catch a glimpse of the fantastic technological possibilities that await. But we’re also starting to get a grim sense of the potential horrors. Here are 10 frightening technologies that should never, ever, come into existence.

As I was putting this list together, it became obvious to me that many of the technologies described below could be put to tremendously good use. It was important, therefore, for me to make the distinction between a technology per se and how it might be put to ill use. Take nanotechnology, for example. Once developed, it could be used to end scarcity, clean-up the environment, and rework human biology. But it could also be used to destroy the planet in fairly short order. So, when it comes time to develop these futuristic technologies, we’ll have to do it safely and responsibly. But just as importantly, we’ll also have to recognize when a particular line of technological inquiry is simply not worth the benefits. Artificial superintelligence may be a potent example.

That said, some technologies are objectively evil. Here’s what Patrick Lin, the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, told io9 about this:

The idea that technology is neutral or amoral is a myth that needs to be dispelled. The designer can imbue ethics into the creation, even if the artifact has no moral agency itself. This feature may be too subtle to notice in most cases, but some technologies are born from evil and don’t have redeeming uses, e.g., gas chambers and any device here. And even without that point (whether technology can be intrinsically good or bad), everyone agrees that most technologies can have both good and bad uses. If there’s a greater likelihood of bad uses than good ones, then that may be a reason not to develop the technology.

With all that out of the way, here are 10 bone-chilling technologies that should never be allowed to exist (listed in no particular order):

1. Weaponized Nanotechnology

Nothing could end our reign here on Earth faster than weaponized — or severely botched — molecular assembling nanotechnology.


It’s a threat that stems from two extremely powerful forces: unchecked self-replication and exponential growth. A sufficiently nihilistic government, non-state actor, or individual could engineer microscopic machines that consume our planet’s critical resources at a rapid-fire rate while replicating themselves in the process and leaving useless by-products in their wake — a residue futurists like to call “grey goo.” (image: scene from The Animatrix)
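The power of exponential growth here is easy to quantify. The numbers below are illustrative assumptions, not figures from the article: starting from a single picogram-scale replicator and doubling each generation, only about 100 doublings are needed to reach the rough mass of Earth’s entire biosphere.

```python
import math

# Illustrative assumptions (not from the article): a one-picogram
# seed replicator, and a target of ~10^15 kg, a rough order of
# magnitude for Earth's total biomass.
initial_mass_kg = 1e-15
target_mass_kg = 1e15

# Each generation doubles the replicator mass, so the number of
# doublings needed is log2(target / initial).
doublings = math.ceil(math.log2(target_mass_kg / initial_mass_kg))
print(doublings)  # 100
```

If each doubling took even an hour, the whole run would fit in about four days; the real constraint in ecophagy scenarios is how fast raw material can be gathered, not the arithmetic.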

Nanotechnology theorist Robert Freitas has brainstormed several possible variations of planet-killing nanotech, including aerovores (a.k.a. grey dust), grey plankton, grey lichens, and so-called biomass killers. Aerovores would blot out all sunlight, grey plankton would consist of seabed-grown replicators that eat up land-based carbon-rich ecology, grey lichens would destroy land-based geology, and biomass killers would attack various organisms.

According to Freitas, a worst-case scenario of “global ecophagy” would take about 20 months, “which is plenty of advance warning to mount an effective defense.” By defense, Freitas means countermeasures, likely involving self-replicating nanotechnology or some kind of system that disrupts the internal mechanisms of the nanobots. Alternatively, we could set up “active shields” in advance, though most nanotechnology experts agree they’ll be useless. Consequently, a moratorium on weaponized nanotechnology should be established and enforced.

2. Conscious Machines


It’s generally taken for granted that we’ll eventually imbue a machine with artificial consciousness. But we need to think very seriously about this before we go ahead and do such a thing. It may actually be very cruel to build a functional brain inside a computer — and that goes for both animal and human emulations. (image: Bruce Rolff/shutterstock)

Back in 2003, philosopher Thomas Metzinger argued that it would be horrendously unethical to develop software that can suffer:

What would you say if someone came along and said, “Hey, we want to genetically engineer mentally retarded human infants! For reasons of scientific progress we need infants with certain cognitive and emotional deficits in order to study their postnatal psychological development — we urgently need some funding for this important and innovative kind of research!” You would certainly think this was not only an absurd and appalling but also a dangerous idea. It would hopefully not pass any ethics committee in the democratic world. However, what today’s ethics committees don’t see is how the first machines satisfying a minimally sufficient set of constraints for conscious experience could be just like such mentally retarded infants. They would suffer from all kinds of functional and representational deficits too. But they would now also subjectively experience those deficits. In addition, they would have no political lobby — no representatives in any ethics committee.

Futurist Louie Helm agrees. Here’s what he told me:

One of the best things about computers is that you can make them sum a million columns in a spreadsheet without them getting resentful or bored. Since we plan to use artificial intelligence in place of human intellectual labor, I think it would be immoral to purposely program it to be conscious. Trapping a conscious being inside a machine and forcing it to do work for you is isomorphic to slavery. Additionally, consciousness is probably really fragile. In humans, a few miscoded genes can cause Down Syndrome, schizophrenia, autism, or epilepsy. So how terrible would it feel to be a slightly misprogrammed form of consciousness? For instance, several well-funded AI developers want to recreate human intelligence in machines by simulating the biological structure of human brains. I sort of hope and expect that these near-term attempts at cortical simulations will be too coarse to really work. But to the extent that they do work, the first “success” will likely create cripplingly unpleasant or otherwise deranged states of subjective experience. So as a programmer, I’m generally against self-aware artificial intelligence. Not because it wouldn’t be cool. But because I’m just morally opposed to slavery, torture, and unnecessary code.

3. Artificial Superintelligence


As Stephen Hawking declared earlier this year, artificial intelligence could be our worst mistake in history. Indeed, as we’ve noted many times before here on io9, the advent of greater-than-human intelligence could prove catastrophic. The introduction of systems far faster and smarter than us would force us to take a back seat. We’d be at the mercy of whatever the artificial superintelligence decides to do — and it’s not immediately clear that we’ll be able to design a friendly AI to prevent this. We need to solve this problem, otherwise building an ASI would be absolutely nuts. (image: agsandrew/shutterstock)

4. Time Travel

I’m actually not much of a believer in time travel (i.e. where are all the time travelers?), but I will say this — if it’s possible, we’ll want to stay the hell away from it.


It would be so crazily dangerous. Any sci-fi movie dealing with contaminated timelines should give you an idea of the potential perils, especially those nasty paradoxes. And even if some form of quantum time travel is possible — in which completely new and discrete timelines are created — the cultural and technological exchange between disparate civilizations couldn’t possibly end well.

5. Mind Reading Devices

The prospect exists for machines that can read people’s thoughts and memories at a distance and without their consent. This likely won’t be possible until human brains are more intimately integrated within the web and other communication channels.


Last year, for example, scientists from the Netherlands used brain scan data and computer algorithms to determine which letters a person was looking at. The breakthrough hinted at the potential for a third party to reconstruct human thoughts at an unprecedented level of detail, including what we see, think, and remember. Such devices, if used en masse by some kind of totalitarian regime or police state, would make life intolerable. It would introduce an Orwellian world in which our “thought crimes” could actually be enforced. (image: Radboud University Nijmegen)

6. Brain Hacking Devices

Relatedly, there’s also the potential for our minds to be altered against our knowledge or consent. Once we have chips in our brain, and assuming we won’t be able to develop effective cognitive firewalls, our minds will be exposed to the Internet and all its evils.


Incredibly, we’ve already taken the first steps toward this goal. Recently, an international team of neuroscientists set up an experiment that allowed participants to engage in brain-to-brain communication over the Internet. Sure, it’s exciting, but this tech-enabled telepathy could open a Pandora’s box of problems. Perhaps the best — and scariest — treatment of this possibility was portrayed in Ghost in the Shell, in which an artificially intelligent hacker was capable of modifying the memories and intentions of its victims. Now imagine such a thing in the hands of organized crime and paranoid governments.

7. Autonomous Robots Designed to Kill Humans


The potential for autonomous killing machines is a scary one — and perhaps the one item on this list that’s already an issue today.

Here’s what futurist Michael LaTorra told me:

We do not yet have a machine that exhibits general intelligence even close to the human level. But human level intelligence is not required for the operation of autonomous robots with lethal capabilities. Building robotic military vehicles of all sorts is already achievable. Robot tanks, aircraft, ships, submarines, and humanoid-shaped soldiers are possible today. Unlike remote-controlled drones, military robots could identify targets and destroy them without a human giving the final order to shoot. The dangers of such technology should be obvious, but it goes beyond the immediate threat of “friendly fire” incidents in which robots mistakenly kill people from their own side of a conflict, or even innocent civilians. The greater danger lurks in the international arms race that could be set off if any nation deploys autonomous military robots. After a few cycles of improvement, the race to develop ever more powerful military robots could cross a threshold in which the latest generation of autonomous military robots would be able to outfight any human-controlled military system. And then, either by accident (“Who knew that Artificial Intelligence could emerge spontaneously in a military robot?”) or by design (“We didn’t think hackers could re-program our military robots remotely!”) humankind might find itself crushed into subservience, like the helot slaves of Spartan AI overlords.

8. Weaponized Pathogens


This is another bad one that’s disturbingly topical. As noted by Ray Kurzweil and Bill Joy back in 2005, publishing the genomes of deadly viruses for all the world to see is a recipe for destruction. There’s always the possibility that some idiot or a fanatical group will take this information and either reconstruct the virus from scratch or modify an existing virus to make it even more virulent — and then release it onto the world. It has been estimated, for example, that the engineered avian flu could kill half of the world’s humans. Just as disturbingly, researchers from China combined bird and swine flus to create a mutant airborne virus. The idea, of course, is to know the enemy and develop possible countermeasures before an actual pandemic strikes. But there’s always the danger that the virus could escape from the lab and wreak havoc in human populations. Or that the virus could be weaponized and unleashed. There’s even the scary potential for weaponized genome-specific viruses.

It’s time for authorities to start thinking about this grim possibility before something awful happens. As reported in Foreign Policy, ISIS is certainly one group that already appears ready and willing.

9. Virtual Prisons and Punishment


What will jails and punishment be like when people can live for hundreds or thousands of years? And what if prisoners have their minds uploaded? Ethicist Rebecca Roache offers these horrifying scenarios:

The benefits of…radical lifespan enhancement are obvious—but it could also be harnessed to increase the severity of punishments. In cases where a thirty-year life sentence is judged too lenient, convicted criminals could be sentenced to receive a life sentence in conjunction with lifespan enhancement. As a result, life imprisonment could mean several hundred years rather than a few decades. It would, of course, be more expensive for society to support such sentences. However, if lifespan enhancement were widely available, this cost could be offset by the increased contributions of a longer-lived workforce.

…[Uploading] the mind of a convicted criminal and running it a million times faster than normal would enable the uploaded criminal to serve a 1,000 year sentence in eight-and-a-half hours. This would, obviously, be much cheaper for the taxpayer than extending criminals’ lifespans to enable them to serve 1,000 years in real time. Further, the eight-and-a-half hour 1,000-year sentence could be followed by a few hours (or, from the point of view of the criminal, several hundred years) of treatment and rehabilitation. Between sunrise and sunset, then, the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world (if technology facilitates transferring them back to a biological substrate) or, perhaps, to exile in a computer simulated world.

That’s awful! Now, it’s important to note that Roache is not advocating these punishment methods — she’s just doing some foresight. But holy smokes, let’s never EVER turn this into a reality.
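Roache’s figure is simple arithmetic: at a million times real-time speed, the wall-clock duration of a sentence is just the subjective duration divided by the speed-up factor. A quick check (using a 365.25-day year) puts a 1,000-year sentence at roughly 8.8 hours, in line with the quoted eight and a half:

```python
HOURS_PER_YEAR = 365.25 * 24      # hours in an average (Julian) year
SPEEDUP = 1_000_000               # emulation runs 10^6 times real time
sentence_years = 1_000

# Subjective sentence length divided by the speed-up factor gives
# the wall-clock time the taxpayer actually pays for.
wall_clock_hours = sentence_years * HOURS_PER_YEAR / SPEEDUP
print(round(wall_clock_hours, 2))  # 8.77
```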

10. Hell Engineering


This one’s quite similar to the previous item. Some futurists make the case for paradise engineering — the use of advanced technologies, particularly consciousness uploading and virtual reality, to create a heaven on Earth. But if you can create heaven, you can create hell. It’s a prospect that’s particularly chilling when you consider lifespans of indefinite length, along with the nearly boundless possibilities for psychological and physical anguish. This is actually one of the worst things I can think of; why anyone would want to develop such a thing is beyond me. It’s yet another reason for banning the development of artificial superintelligence — and the onset of the so-called Roko’s Basilisk problem.


Aí Vem a Guerra – Jerónimo M. S. Paiva (1939)


After calmly immersing myself in Aí Vem a Guerra! by Jerónimo M. S. Paiva (Edição Gazeta do Sul, Montijo, 1939), a work of philosophical reflection (unexpectedly captivating!) on war, society and Man himself, sadly lost in the fog of the general public’s unawareness, I can find no information about the author beyond what appears in the book itself:

Jerónimo M. S. Paiva 


(resident in Beja)

Works by the author at the time Aí Vem a Guerra was printed – 1939:

Cartas Cruéis (political and social criticism)

Do Alto Alentejo (regionalisms)

Garra Extremista (social pathology)

Pantera Social (novel)

Do Meu Alentejo (regional novella)

*Later, consulting the FLUP records, I discovered that the novella Do Meu Alentejo won first prize in the 1938 Regional Literary Competition. In the Biblioteca Digital do Alentejo I found a few more details about this work and an image of its cover.


After some further research in the periodicals archive, I discovered that the author was editorial director of the weekly A Rajada (1930).

Even so, information about this author, who so surprised me with the work I read, remains scarce. I believe he deserves more attention, and I would be grateful to anyone who can tell me more.


Dinosaurs Versus Aliens

From the minds of acclaimed filmmaker Barry Sonnenfeld (director of the Men in Black films) and superstar graphic-novel creator Grant Morrison (Batman, The Invisibles, Action Comics, 18 Days) comes Dinosaurs vs. Aliens, from Liquid Comics. The story focuses on a secret world-war battle never recorded in our history books. When an alien invasion attacks Earth in the age of the dinosaurs, our planet’s only saviors are the savage prehistoric beasts, which are far more intelligent than humanity has ever imagined.



There are 4 more episodes.


A change of course, with no map or destination.




After writing “Goor – A Crónica de Feaglar I/II” and “O Regresso dos Deuses – Rebelião” – and even having a draft of what would be a sequel to the latter, set in the same universe but in the future, with a higher level of technology – I decided to leave epic fantasy behind and devote my scarce free time to another genre, continuing along the path of writing what I enjoy, without any “commercial/mainstream” orientation to constrain me, without deadlines or plans for publication. I will go on leaving out things like “fashionable romances” added just to please the tastes of most readers. Things will exist if they need to exist. You don’t like it? Very well…

It may even take me years to finish another story. I no longer have the hours and hours a day to write that I once had, and I now know that writing is a marathon, not a sprint. Looking at what I have already written, I feel dissatisfied, and I want to use that dissatisfaction, and the precious moments I do have, to improve – a process that must be constant and may even consume more time than the writing itself. If writing a single paragraph requires me to spend days researching and immersed in reflection, I will not complain; quite the contrary… Right now I want that challenge; I demand it!

Leaving epic fantasy is not an acrimonious divorce in which the former companion’s virtues are forgotten and twisted into flaws. Nothing of the sort! It is simply a choice without any built-in symbolism, as natural as the decision we make at a crossroads. Fortunately, I have the freedom to decide where those crossroads fall on my road.


Pedro Ventura

Change your opinions, keep to your principles; change your leaves, keep intact your roots.

Victor Hugo