One of the main open questions in the discussion of x-risks is the timescale of a possible global catastrophe: the time from now within which it will either happen or be permanently prevented. Two main opinions exist: decades or centuries. If we take into account the many predictions of continuing exponential or even hyperbolic development of new technologies, we should conclude that superhuman AI and the ability to create super-deadly biological viruses will arrive between 2030 (Vinge) and 2045 (Kurzweil). We are writing this in 2014, so that is just 15-30 years from now. Likewise, predictions about runaway global warming, limits to growth, peak oil, and some versions of the Doomsday argument all center around the year 2030. Such a prediction could easily be falsified, because 2030 is rather soon. It also leaves us hopeless, because in such a short timeframe it is unlikely that we could do anything to prevent x-risks, especially given how small previous efforts have been.

But if we take a one-hundred-year timeframe, we as authors gain several advantages. We signal that we are more respectable and conservative. It will almost never be proved during our lifetime that we are wrong. We have roughly ten times the chance of being right simply because the timeframe is larger. We have plenty of time to implement defense measures, or rather to believe that such measures will be implemented (they will not). We may also think that we are correcting for an optimism bias: it is well known that predictions about AI have tended to be overoptimistic.
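The "ten times the chance of being right" point can be illustrated with a toy calculation (our own sketch, not from the sources cited above): if the catastrophe arrives at some unknown but roughly constant annual hazard rate, the probability that it falls inside a prediction window grows almost linearly with the window's length when the rate is small, so a 100-year window is close to ten times more likely to contain the event than a 10-year one. The 0.1% annual rate below is purely illustrative.

```python
import math

def p_within(years, annual_rate):
    """Probability the event occurs within `years`, assuming a
    constant annual hazard rate (simple exponential model)."""
    return 1 - math.exp(-annual_rate * years)

rate = 0.001  # hypothetical 0.1% hazard per year, for illustration only
p_short = p_within(10, rate)    # ~10-year prediction window
p_long = p_within(100, rate)    # ~100-year prediction window
print(p_long / p_short)         # ratio is close to 10 for small rates
```

For larger hazard rates the ratio drops below ten, since the long window starts to saturate toward certainty; the linear approximation holds only while the cumulative probability stays small.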