Sunday, 7 December 2014

Bantokfomoki in space, part 2

In the first entry of this rebuttal, we went over two of the four videos released by bantokfomoki which attempt to refute the concept of relativistic space travel. Homing in on what he calls the apex reaction engine, an antimatter drive used in the Avatar film, bantokfomoki (otherwise known as EBTX) sets forth a premise: ''If it cannot be made to work in principle, then no reaction rocket of any sort is suitable for interstellar space travel.'' Variations of this theme are restated numerous times throughout his video series. EBTX's argument is that in order to accelerate to fractions of light speed, the engines will emit so much heat that they would inadvertently vaporise themselves and whatever ship they are attached to. It is the same observation made by Arthur C. Clarke in his book The Exploration of Space, and it holds a grain of truth.
While EBTX has indeed managed to show that the ISV Venture Star would be inoperable under the circumstances given in the movie, that does not imply it couldn't work under a different set of circumstances (even though he frequently insists this is the case). For instance, when the ship's velocity and acceleration are significantly decreased, the problems with venting waste heat become easier to manage, and mass ratios become more practical. And why should anyone care so much about this subject? The starship depicted in Avatar is a viable merger between two of the best propulsion concepts known to science: Charles Pellegrino's Valkyrie and Robert Forward's photon sail. If they turn out to work in some shape or form, then humanity will have a real shot at becoming a space-faring species.
Video #3
[1] If you lower anything at all that resembles an interstellar engine into the sun with a magic rope, and you pull it up 6 months later, all you will have is a loop in your magic rope with a knot... This goes for any materials anywhere in the universe. There are no chemical bonds that can indefinitely withstand solar temperatures without breaking.
This is true, but also somewhat irrelevant. EBTX seems to be confusing temperature with heat, which is a common mistake. The two are not the same. For example, while the corona of the sun can reach a temperature of a million degrees Celsius, the gas has a density of just 0.0000000001 times that of the Earth's atmosphere at sea level. The thermal energy of a cubic meter of corona gas is on the order of a few thousandths of a joule.
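As a rough illustration of the temperature-versus-heat distinction, the thermal energy of an ideal gas can be computed directly. The coronal particle density of 10^14 per cubic meter is an assumption on my part (real coronal densities vary by orders of magnitude with altitude); the point is only that a hot but tenuous gas holds very little heat.

```python
# Thermal energy stored in one cubic meter of gas,
# modeled as an ideal monatomic gas: E = (3/2) * n * k_B * T.
K_B = 1.380649e-23   # Boltzmann constant, J/K

def thermal_energy_per_m3(n_particles, temperature_k):
    """Thermal energy (J) in 1 m^3 at the given particle density and temperature."""
    return 1.5 * n_particles * K_B * temperature_k

# Corona: ~10^14 particles/m^3 (assumed) at a million kelvin.
corona = thermal_energy_per_m3(n_particles=1e14, temperature_k=1e6)
# Sea-level air: ~2.5e25 molecules/m^3 at 288 K, for comparison.
sea_level_air = thermal_energy_per_m3(n_particles=2.5e25, temperature_k=288)

print(f"corona: {corona:.4f} J, air: {sea_level_air:.0f} J")
```

Despite being thousands of times hotter, the cubic meter of corona holds tens of millions of times less thermal energy than the cubic meter of ordinary air.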
[2] With that in mind, we can make some calculations about the venture stars anti-matter propulsion system. And because anti-matter is the most efficient fuel possible, we can be sure that no other type of engine would do any better.
That's hard to say for certain, since nobody knows what the ship's exhaust velocity is. There are many different kinds of AMAT (antimatter) engines on the books, and it's never stated which type is used in the movie.
[3] To proceed, we need to know what the diameter of the rocket nozzle is on the venture. From the picture, we measure about 13.5 meters. The area of the hole is Pi r squared, or 143 square meters. Theres two engines, so that gives us about 300 square meters total.
The nozzle actually appears to be less than 13.5 meters across, going by this picture. Not that it really matters, given how collimated the matter stream is.
[4] The accumulated energy of the 125,000 metric ton venture star at 70% of light velocity is 2.75 x 10^24 joules. Dividing that by 15,768,000 seconds (that's six months) gives us an engine power output of 174 petawatts, which is the solar output to the earth.
EBTX is rounding up the mass of the Venture Star so that the energy requirements for its propulsion equal the solar energy received by Earth. This is a useful fiction which apparently makes the calculations easier, although it contradicts the mass of 100,000 tons he gave in earlier videos.
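For anyone who wants to check the figure, the calculation runs as follows. Note that EBTX's 2.75 x 10^24 joules is the Newtonian kinetic energy; the true relativistic figure at 0.7 C is about 60% higher, so if anything his estimate is charitable.

```python
# Kinetic energy and average engine power for a 125,000-tonne ship
# reaching 0.7 c in six months of constant burn.
C = 299_792_458.0          # speed of light, m/s
MASS = 1.25e8              # 125,000 metric tons, in kg
V = 0.7 * C
BURN_SECONDS = 15_768_000  # six months

ke_newtonian = 0.5 * MASS * V**2                 # the figure EBTX uses
gamma = 1.0 / (1.0 - (V / C)**2) ** 0.5
ke_relativistic = (gamma - 1.0) * MASS * C**2    # the true requirement

power_w = ke_newtonian / BURN_SECONDS
print(f"KE (Newtonian):    {ke_newtonian:.2e} J")
print(f"KE (relativistic): {ke_relativistic:.2e} J")
print(f"Average power:     {power_w:.2e} W")   # ~1.7e17 W, i.e. ~174 petawatts
```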
[5] Now we want to know how much of the suns surface area is devoted to lighting up the earth with its bountiful energy. The surface area of a sphere with a radius equal to the distance of the earth to the sun is 2.8 x 10^23 square meters. The area of the earth that blocks sunlight is Pi r squared, where r is the earths radius, at 6,371,000 meters. So the earth subtends 1.275 x 10^14 square meters. Dividing one by the other shows that the earth takes up... 2.73 x 10^9 square meters.
That's a long chain of numbers, but they check out. The incident area of the sun comes out at about 2,730,000,000 square meters. Another interesting note: the sun radiates 63 megawatts per square meter at its surface, and by the time this energy reaches Earth, it has thinned out to just 1360 watts per square meter thanks to the inverse square law.
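The whole chain of geometry can be reproduced in a few lines, using standard values for the orbital radius and the solar and terrestrial radii:

```python
import math

AU = 1.496e11            # Earth-sun distance, m
R_EARTH = 6.371e6        # m
R_SUN = 6.957e8          # m
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

sphere_at_1_au = 4 * math.pi * AU**2             # ~2.8e23 m^2
earth_cross_section = math.pi * R_EARTH**2       # ~1.275e14 m^2
fraction = earth_cross_section / sphere_at_1_au  # share of sunlight Earth intercepts

# The patch of the sun's surface "devoted" to lighting the Earth:
incident_area = fraction * 4 * math.pi * R_SUN**2
# The sun's surface flux, for the 63 MW/m^2 figure:
surface_flux = SIGMA * 5778**4

print(f"incident area: {incident_area:.2e} m^2")   # ~2.7e9
print(f"surface flux:  {surface_flux:.2e} W/m^2")  # ~6.3e7
```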
[6] So, for an interstellar ship like that envisioned in avatar, if it had a mass of 125,000 metric tons and accelerated to .7 C in 6 months, with exhaust holes from whence comes the products of anti-matter annihilation that are about 300 square meters total, this means that the energy output from those holes are approximately 9,100,000 times hotter than the surface of the sun.
That's a very pessimistic stance to take. First, the engines are being supported by heat radiators with a surface area at least 500 times greater than theirs. Dividing the 2,730,000,000 square meters of the sun's incident area by the 150,000 square meters of the ship's radiators gives a ratio of 18,200, not the 9,100,000 EBTX claimed. Second, because the engines could have an efficiency rating of up to 99%, the actual waste heat that needs to be vented into space drops by two orders of magnitude. Unfortunately, that still leaves the radiators handling 182 times the flux at the sun's surface, and since blackbody temperature scales as the fourth root of flux, that works out to an equilibrium temperature of roughly 21,000 kelvin. Far less than the millions implied, but still hopeless for any known material.
No matter how you twist the numbers here, one cannot help but conclude that the circumstances given in the movie are impossible to work with. 174 petawatts is far too much power for the Venture Star to safely cope with, since it would need to radiate many, many times more energy per unit area than the sun does! Luckily, there is a solution in sight. We can turn back to the parameters used in the first entry of this article, which specify accelerating for 60 months up to 7% of light velocity. By accepting a significantly longer travel time (67.5 years, to be exact), we can decrease the ship's power draw by three orders of magnitude, down to just 174 terawatts. With this change to the mission parameters, the radiators would be handling only about 0.18 times the sun's surface flux, which by the same fourth-root scaling corresponds to an equilibrium temperature of roughly 3,800 kelvin. Still a demanding figure, but within striking distance of refractory materials like graphite and tungsten. YMMV, of course.
[7] Of course, if the venture star were much lighter, the energy output of its engines will be proportionately less. So that a ship 1/10th the mass will have an orifice energy output of only 910,000 times more than an area of the sun the same size as that orifice. And a ship lighter by 1/100th, or about 1250 metric tons, will have an orifice energy output of only 91,000 times greater than that of 300 square meters on the surface of the sun.
Our host is no doubt aware that decreasing the velocity by a factor of 10 will decrease the kinetic energy (and hence the energy needed to propel the ship) by a factor of 100. And since the ship will be in transit for a longer period of time, its acceleration could also be decreased by a factor of 10 without any penalties. This neatly circumvents the need to play around with the ship's mass, which wouldn't work anyway: even if you build a miniature Venture Star, its radiating surface area shrinks along with its volume. Not by a proportionate amount, but by more than enough to make such a strategy futile.
[8] If you understand that an engine cannot in principle sit submerged under the surface of the sun for 6 months without being destroyed, you can also understand that an engine cannot last 6 months in an environment thats millions of times hotter still. The case is closed for anti-matter engines that propel a ship to .7 C in a mere six months.
While that may be strictly true, it doesn't imply that humanity will never achieve interstellar travel under different speed regimes. As demonstrated in this article, a velocity of .07 C reached after 60 months keeps mass ratios and heating problems to a minimum. It is a TALL order to get to nearby stars with currently known technologies, but it can be done. Not this century, and maybe not even the next, but someday...
Video #4
[9] So the engines must put out energy equivalent to all the energy coming out of the surface of the sun from an area of 2730 square kilometers, and it must do so for 6 months to get to 70% of light velocity. But in fact, that was being overly generous. The engines must actually output over twice that energy, or 18,200,000 times the suns output per unit area.
Twice your original estimate? What will you pull out of the magician's hat next?
[10] In the rocket venue, at the beginning of acceleration, nearly all the energy is wasted out the rear exhaust, with a small amount being invested in the kinetic energy of the payload. At the end of the acceleration, and if that acceleration is some reasonable fraction of light velocity (and the exhaust velocity approximates light velocity), much more energy is deposited in the craft and less in the exhaust, simply by the nature of the energy book keeping process.
EBTX is referring to the energy bookkeeping of rockets, a close cousin of the Oberth effect: propellant expelled while the ship is already moving fast carries kinetic energy of its own, so a larger share of the reaction energy ends up in the payload and a smaller share in the exhaust. Whether or not this matters depends on just how high your starship's delta-v is: slow ships barely get to take advantage of it, but fast ships like the Venture Star do. With this understood, the effect cannot be construed as something which forces starships to output more energy in order to attain their cruising speed.
[11] Because temperature is related to energy output in the star as 4th root, we can estimate the avatar engines temperature by its output at 18 million times the solar output. The 4th root of 18,200,000 is 65, while the surface temperature of the sun is 5778 degrees kelvin. So the internal engine temperature of the venture engine would be 65 times 5778, which is equal to 375,570 degrees kelvin.
If you completely ignore the role played by the heat radiators and engine efficiency, then yes, that would be a reasonable conclusion. The Stefan-Boltzmann law states that the total energy radiated per unit time by a blackbody is proportional to the 4th power of its temperature. The flip side is how gently temperature responds to flux: halving a radiator's surface area at a fixed power output doubles the flux through it, yet raises its temperature by only a factor of the fourth root of 2, about 19%. The devil is in the details!
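To make the fourth-root behaviour concrete, here is a minimal sketch. It reproduces EBTX's own engine temperature from his flux ratio, and shows how little the temperature moves when a radiator's area is halved:

```python
T_SUN = 5778.0   # surface temperature of the sun, K

def temp_from_flux_ratio(ratio):
    """Blackbody temperature corresponding to `ratio` times the sun's surface flux,
    per the Stefan-Boltzmann law (T proportional to flux**0.25)."""
    return T_SUN * ratio ** 0.25

# EBTX's engine estimate: 18.2 million times the solar surface flux.
t_engine = temp_from_flux_ratio(18_200_000)   # ~377,000 K (he rounds to 375,570)

# Halving a radiator's area at fixed power doubles the flux,
# but raises the temperature by only 2**0.25, i.e. ~19%.
t_halved = temp_from_flux_ratio(2)

print(f"engine: {t_engine:.0f} K, half-area radiator: {t_halved:.0f} K")
```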
[12] There are only a couple of things that can happen in the engine context when a proton and anti-proton annihilate. Of the two gamma rays formed, one must exit out the exhaust port, thats the wasted half. The other gamma ray can then go through the engine compartment, unimpeded, in which case it is useless as propellant.
EBTX neglects to mention that gamma rays are merely the byproducts of electron and positron collisions: when the protons and anti-protons themselves annihilate, they produce neutral and charged pions travelling at relativistic speeds, and the charged ones can be magnetically deflected to produce thrust. There are various efficiency losses associated with this process, since these particles have short half-lives and decay into lighter particles, but they are not major. It's worth mentioning also that a proton is 1836 times heavier than an electron, and hence carries far more of the kinetic energy (so this commentary on gamma rays comes off as intentionally misleading).
[13] Or it can collide with the engine compartment and thereby plasmify whatever it hits, thus contributing to the propulsion of the ship at the expense of its disintegration. Or, it can collide with the engine compartment and somehow reflect back out the rear exhaust hole, thus giving up the maximum energy of the ship without destroying it.
Again, protecting the craft from ionizing radiation is not a major dilemma. We already know how to build efficient shields using a sheet of tungsten with a v-shaped cross section to deflect neutrons and x-rays (although gamma rays are a tougher proposition). In addition, some regions of the Venture Star were specifically described as using almost no metal, so as to reduce the possibility of gamma and x-rays ablating the hull and producing secondary radiation.
[14] A fair approximation for the venture is a cylinder around each radiator. The radiators appear to be about 300 meters long, and maybe 80 meters wide, so the effective radiative surface is about Pi times 80 times 300 times 2 radiators, equals about 150,000 square meters. The maximum temperature they can glow at is the temperature of the surface of the sun, that is they glow white hot when running for six months. Don't ask how they glow without melting, we'll just give them that.
That's a rather conservative estimate of the radiators' area, but even so, the ship would never reach such high temperatures (assuming that the measures adopted here are viable). EBTX tries to create the illusion of impossibility by focusing on the individual problems faced by each system, ignoring how those systems work together. If you study things in isolation, then of course the individual parts will seem absurd and inadequate. In a combustion engine, pistons are impractical without a radiator to carry away the heat, and pointless if you don't have spark plugs to ignite the fuel.
[15] So the engine energy output is equivalent to 2730 square kilometers on the surface of the sun, or 2,730,000,000 square meters, while the radiators give off the suns energy at 150,000 square meters. This means that the efficiency of the engine is minimally, just about theoretically perfect. Or for every erg wasted as excess heat in the engine, 18,199 ergs go directly into the kinetic energy of the ship. Wow.
If anyone designed the Venture Star in such a foolish way, with an engine that wastes only one part in 18,200 of its output as heat, they would practically have built a perpetual motion machine of the second kind.

This rebuttal should wrap up most of the loose ends raised by EBTX and get the ball of critical thought rolling again. All in all he made some valid observations, and mapped out a lot of unexplored territory, to the benefit of his audience. With that said, it's interesting to note which problems didn't warrant a mention from him. Most obvious is the unimaginable cost of the energy required just to power the ship out of the solar system. Even if we meter the usage at the same rate as BC Hydro, which is 11.27 cents per kilowatt-hour, the Venture Star still has a power draw of 1.4 x 10^17 watts. That is 1.4 x 10^14 kilowatt-hours every hour, so the bill for the first hour alone comes out at a staggering $15.8 trillion, a sizeable fraction of the entire world GDP, and the burn lasts six months! Even under the more modest speed regime of .07 C, the full 60-month burn would total somewhere around $690 trillion. The economy would need to grow orders of magnitude larger before such an undertaking could even begin to be considered.
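For readers who want to check the metering arithmetic, here it is as a short script (treating a starship drive as a metered utility load is, of course, a fiction; the 43,800-hour figure is simply 60 months of 730 hours each):

```python
RATE_PER_KWH = 0.1127   # BC Hydro residential rate, CAD per kWh

def cost_of_burn(power_watts, hours):
    """Electricity bill for running a load of `power_watts` for `hours`."""
    kwh = power_watts / 1000.0 * hours
    return kwh * RATE_PER_KWH

first_hour_fast = cost_of_burn(1.4e17, 1)        # 0.7 c mission, first hour only
full_burn_slow = cost_of_burn(1.4e14, 43_800)    # 0.07 c mission, entire 60-month burn

print(f"first hour at 0.7 c:   ${first_hour_fast:.2e}")   # ~1.6e13, i.e. ~$16 trillion
print(f"full burn at 0.07 c:   ${full_burn_slow:.2e}")    # ~6.9e14, i.e. ~$690 trillion
```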
Here's another problem that EBTX didn't foresee. A natural consequence of the ship's tensile truss is that it puts the crew habitat behind the engines, which means the exhaust flare passes within 100 meters of it on both sides. For some idea of the dangers of this engine plume, consider that a welder emits light at an intensity of around 50,000 lux (enough to damage the retina if observed for more than a few seconds), whereas the exhaust flare was described as being 'an incandescent plasma a million times brighter than a welding arc, and over thirty kilometers long.' The further an object juts out from the centerline of the truss, the more radiant heat it will be exposed to. There was no excuse not to provide the crew habitat with a cone-shaped canopy for protection. Another problem is that the Whipple shield stack can only offer coverage to the Venture Star during the departure to Alpha Centauri: when the ship returns to Sol, the shield is forced to drag behind it, leaving everyone exposed to micrometeorites.
Six months is a helluva long time to remain unprotected while your ship is getting up to 70% of light velocity! There is a throwaway mention of the shield stack being detached and moved by thrusters onto a vector ahead of the Venture Star, but this method is only used after cruise speed has been reached. Theoretically, you could just bolt it onto the front of the ship, but there doesn't seem to be a convenient spot where it could be mounted. There is another reason why the shield stack cannot be left at the back: it is so large that even with the engines canted several degrees outward, the exhaust plume cannot help but wash against it! The dangers presented by this should be obvious. As one source put it: 'Melt isn't the proper word for what happens to a solid substance bombarded by a relativistic particle stream. Spallation is more like it. Chemical bonds simply aren't strong enough to prevent relativistic particles from stripping away affected atoms.'

Friday, 31 October 2014

Bantokfomoki in space, part 1

This is a response to the series of videos released by bantokfomoki on the subject of space travel as depicted in the Avatar movie by James Cameron. Bantokfomoki has made four such videos so far, all centered on the performance of the starship known as the Venture Star. There is a lot of good science in them, but they suffer from a certain absolutism. The message he consistently tries to push is that without some kind of reactionless drive, travel to other stars within a human life span is fundamentally impossible. Bantokfomoki is better known by his website EBTX, which stands for Evidence Based Treatment X. He is an expert generalist able to contribute insights on a variety of different topics. With regards to the Avatar film, though, he may have jumped the shark.
Bantokfomoki (who we'll call EBTX from now on) released his latest video on October 4th, 2014. Referring to the Venture Star's anti-matter engine, he lays down a premise: ''If it cannot be made to work in principle, then no reaction rocket of any sort is suitable for interstellar space travel.'' Geesh, talk about throwing down the gauntlet. EBTX assumes that because this one starship may not be feasible under one set of mission parameters, it cannot be feasible under any other set of parameters. That's going to create a lot of problems for him down the road: even if the original specifications are at odds with the laws of physics, they can be altered and rearranged. Playing with the numbers until you get a workable result is an integral part of what rocket science is!
Now, since this latest video builds upon the work of the others in his series, we're going to do a quick review of them to get up to speed. For the best outcome, readers should watch the videos in their entirety before going over the rebuttal here. The reason for EBTX's dismissal of relativistic space travel is that, in order to accelerate to fractions of light speed, the engines will emit so much heat that they would inadvertently vaporise themselves and whatever ship they are attached to. He comes to this conclusion using a different approach in each video, building a cumulative argument that has befuddled most space-exploration advocates. We will check his work for consistency and see whether or not there are easy alternatives to this dilemma.
Video #1
[1] ...Travel to other star systems in time frames on the scale of a human life span are pure drivel.
That's a very strong conviction, as we will see.

[2] Lets first get a picture of what 4 light years really are. Imagine the sun is a pea on a plate in the middle of your dinner table. Then the earth will be a grain of salt at the edge of the table. Jupiter will be about as big as the printed letter o (small case) in a newspaper on the wall of your dining room.
The sheer distances involved in interstellar travel are often underappreciated. Sweden has a series of monuments representing a scale model of the solar system, centered on the Ericsson Globe: even at 1:20,000,000 scale, Neptune is located a breathtaking 229 kilometers away!
[3] Since the avatar website doesn't specify a mass, lets assume a mass of 100,000 metric tons for the mass of the craft. Thats the weight of a large aircraft carrier.
If EBTX is referring to its wet mass (i.e., after it is loaded with propellant), that seems like a pretty safe guess. The Venture Star was, after all, based on the design philosophy laid down by Project Valkyrie. You can tell because the engines are mounted on the front of the ship, while the payload is dragged behind it on a truss. This allows designers to use flimsier materials and skimp on radiation shielding, which results in a very lightweight craft weighing a fraction of what conventional starship designs would. What we cannot say with certainty is what the Venture Star's dry mass would be...
[4] The kinetic energy of this craft at 70% of light velocity is 2.2 x 10^24 joules. One half of a year is 15,768,000 seconds. Dividing 2.2 x 10^24 joules by that gives us the average power output we need to get from our matter anti-matter annihilation engine. This gives us 1.4 x 10^17 watts.
EBTX is going over the ship's mission profile, which specifies an acceleration of 1.5 g for half a year. This leads to a delta-v (change in velocity) of 210,000 km/s, and a voyage lasting 6.75 years from start to finish. His estimates of the energy consumption are an absolute minimum, since they do not factor in the various inefficiencies that come with reaction engines. With this established, he then moves to the crux of the matter.
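That 6.75-year figure can be cross-checked with a non-relativistic sketch of the trip: a half-year ramp up to cruise speed, a coast at 0.7 C, and a matching half-year ramp down. The 4.37 light-year distance to Alpha Centauri is an assumption from published astronomy, not from the videos.

```python
C = 299_792_458.0
YEAR_S = 31_557_600          # seconds in a Julian year
LY = C * YEAR_S              # one light-year, m
D = 4.37 * LY                # Sol to Alpha Centauri (assumed)

v_cruise = 0.7 * C
t_burn = 0.5                 # years, for each of the accel and decel ramps

# Each ramp covers v*t/2, i.e. half the distance coasting would cover
# in the same time, so the total trip time is d/v plus one ramp's
# worth of "lost" time (t_burn/2 per ramp, times two ramps).
t_total_years = D / v_cruise / YEAR_S + t_burn
print(f"{t_total_years:.2f} years")
```

The result lands within a rounding error of the 6.75 years quoted for the voyage.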
[5] But lets suppose that our engine is unbelievably 99.99% efficient, and only 1/10,000th part of our energy will end up as heat to effect our engine. But 1/10,000th part is 1/6th of the energy released by a hiroshima bomb per second. That amount of heat would destroy any conceivable engine in the 1st second.
EBTX doesn't provide any calculations on what the heat capacity of the ship might be, so he's basically just speculating here. We'll need to do some basic math to find out how much heat the engines can handle. Let's start off by assuming that the engines weigh 10,000 tons, and are made of mere carbon steel. This substance has a specific heat of 490 joules per kilogram per degree, i.e., you have to input 490 joules to raise the temperature of one kilogram by 1 Celsius. If the temperature before ignition is 0 Celsius, and the safe operating limit is 600 Celsius, this gives the engines a heat capacity of 2.94 x 10^12 joules. Unfortunately, the heat load from the engines (even at 99.99% efficiency) is on the order of 10^13 watts, enough to blow through that capacity in well under a second. It would seem that EBTX has a valid point. But wait, he didn't merely say that the Venture Star was unworkable under the circumstances given in the movie: he made the much stronger claim that it was unworkable under any circumstances whatsoever! According to him, a journey from Sol to Alpha Centauri in a human lifespan is out of the question. Well, let's put that assumption to the test.
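Here is the same estimate as a short script. The 10,000-ton engine mass, the carbon-steel construction, and the 600-Celsius operating limit are all assumptions made for this sketch:

```python
ENGINE_MASS_KG = 1.0e7    # assumed 10,000 metric tons
SPECIFIC_HEAT = 490.0     # carbon steel, J/(kg K)
DELTA_T = 600.0           # from 0 C up to an assumed 600 C safe limit

# Total heat the engine structure can soak up before exceeding its limit.
heat_capacity_j = ENGINE_MASS_KG * SPECIFIC_HEAT * DELTA_T   # ~2.94e12 J

# Waste heat from a 1.4e17 W drive at 99.99% efficiency.
waste_heat_w = 1.4e17 * 1e-4

seconds_to_limit = heat_capacity_j / waste_heat_w
print(f"engines hit their thermal limit in {seconds_to_limit:.2f} s")
```

Even under EBTX's absurdly generous efficiency figure, the engines cook themselves in a fraction of a second, which is exactly his point.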
What we have to do is reduce the velocity of the ship to something more realistic. Instead of .7 C, let's try just .07 C, which is 21,000 km/s. By decreasing the velocity by a factor of 10, the kinetic energy (and hence the energy needed to propel the ship) is decreased by a factor of 100. Since the ship is going to be in transit for a longer period of time, we might as well reduce its acceleration by a factor of 10, too. Instead of a 6 month engine burn, the Venture Star will be accelerating for 60 months. So now, the total energy budget of the craft has been reduced to 2.2 x 10^22 joules, spent at a rate of 1.4 x 10^14 watts. That's more like it. And while we're at it, we'll need to ditch the 99.99% efficiency rating EBTX suggested (as a reductio ad absurdum) and determine a more plausible figure. Anti-matter engines are normally credited with efficiencies of over 90%, but when you throw room-temperature superconductors into the equation, you can probably get up to about 99%. If so, that implies a waste heat load of only about 1.4 x 10^12 watts. That's low enough that we don't need to worry about the engines melting, especially when they are being assisted by the radiator panels. Voila, problem solved!
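The revised regime can be sketched in a few lines; the 99% engine efficiency is a guess on my part, as discussed above:

```python
C = 299_792_458.0
MASS = 1.0e8                    # 100,000 t wet mass, kg
V = 0.07 * C                    # ~21,000 km/s
BURN_S = 60 * 730 * 3600        # 60 months of seconds, ~1.58e8

ke = 0.5 * MASS * V**2          # ~2.2e22 J: 100x less than the 0.7 c mission
power = ke / BURN_S             # ~1.4e14 W: 1000x less, thanks to the longer burn
waste_heat = power * 0.01       # at an assumed 99% engine efficiency

print(f"KE: {ke:.2e} J, power: {power:.2e} W, waste heat: {waste_heat:.2e} W")
```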

[6] In point of fact, an anti-matter propulsion system is simply ludicrous for high-g accelerations or even moderate ones, because though propulsion by means of electromagnetic radiation alone is the most efficient possible, it does not provide the acceleration produced by throwing out gobs of matter at high speeds.
That's quite true, which is why the acceleration phase should be drawn out as long as practical. AMAT engines cannot provide nearly as much thrust as chemical or nuclear engines, and they must be given as much time as possible to get a starship up to speed. 60 months is better than 6, in this case. But it's curious that EBTX limits his criticism to the Venture Star's engine (which is only used to decelerate upon reaching Alpha Centauri, and then to accelerate back to Sol), and not to the titanic laser batteries that propel it at the same rate. Does he even know that the ship uses a photon sail?
[7] There is no plan for going to other star systems under the presently known conservation laws in a human lifespan time frame as anything but idiotic.
Such an optimist we are :o)
Video #2
This (silent) video starts off from a shaky premise. Whereas his previous estimate for the Venture Star's wet mass was a slim 100,000 tons, EBTX tries to up the scale by comparing it to an aircraft carrier. Bizarrely, he suggests that the radiator panels each have as much mass as the ocean-going behemoth, whereas the two engines combined are only as massive as one aircraft carrier. Who knows how he came to that conclusion... Adding in the propellant tanks and the tensile truss, he arrives at a final mass of 500,000 tons. But wait, that's not all: EBTX actually has the gall to claim that this is merely the ship's dry mass! There are so many things wrong with that, it's hard to articulate. An aircraft carrier is a dense lump of metal that floats on water via buoyancy, whereas the Venture Star is just a spindly rope of carbon nanotubes and containers flying through space. The designs have nothing in common, and it's obvious he is just trying to inflate the ship's mass. Complementing this deception, EBTX then uses a hopelessly backward method to try and determine how much propellant is needed to accelerate the ship up to 70% light velocity.
Using an exhaust velocity of .693 C (a figure seemingly made up on the spot) and a dry mass of 500,000 tons, he concludes that 500,000 tons of propellant would be enough to accelerate the ship to .7 C. But if you input all the numbers EBTX used, this bloated ship would only reach a final velocity of .48 C. A word of caution: in order to correctly use Tsiolkovsky's rocket equation, you need to know the ship's mass ratio and exhaust velocity. You can't just pull numbers out of your a$$. If we return to our previous assumptions about a wet mass of 100,000 tons and a delta-v of 7% light velocity, we can proceed with far more clarity than our host. Using an exhaust velocity of 59,000 kilometers per second (which is what the Valkyrie engine was supposed to max out at), we get a dry mass of about 70,000 tons. This means that just 30,000 tons of matter-antimatter propellant are required to get underway, a very comfortable mass ratio that leaves lots of safety room. This is important because some of the byproducts of a matter-antimatter reaction cannot be redirected through the engine chamber, and hence cannot be used to generate thrust.
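Both mass-ratio claims are easy to check against the classical Tsiolkovsky rocket equation (relativistic corrections are ignored here, as they are in the videos):

```python
import math

def delta_v(exhaust_velocity, wet_mass, dry_mass):
    """Classical Tsiolkovsky rocket equation: dv = v_e * ln(m0 / mf)."""
    return exhaust_velocity * math.log(wet_mass / dry_mass)

def dry_mass_for(exhaust_velocity, wet_mass, dv):
    """Invert the rocket equation for the final (dry) mass."""
    return wet_mass * math.exp(-dv / exhaust_velocity)

C = 299_792.458  # km/s

# EBTX's figures: 0.693 c exhaust, 1,000,000 t wet / 500,000 t dry.
ebtx_dv = delta_v(0.693 * C, 1_000_000, 500_000) / C   # ~0.48 c, not 0.7 c
# This article's figures: 59,000 km/s exhaust, 100,000 t wet, 0.07 c of delta-v.
dry = dry_mass_for(59_000, 100_000, 0.07 * C)          # ~70,000 t

print(f"EBTX's ship: {ebtx_dv:.2f} c; our dry mass: {dry:.0f} t")
```

A mass ratio of 2 at 0.693 C exhaust velocity buys you 0.693 × ln(2) ≈ 0.48 C, exactly as stated above.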

Friday, 24 October 2014

Americanised Canada

This has been a really lousy week for Canada. In the wake of the shootings on Parliament Hill, this nation has adopted exactly the posture it shouldn't have. Instead of breathing a sigh of relief that only one person was killed *, Canadians are puffing themselves into a panicked frenzy, debating whether or not police should be given the authority to detain people without probable cause. The prime minister wants an escort of RCMP wherever he goes. And some dip$hits are actually surprised that Michael Zehaf-Bibeau managed to get his hands on a rifle, in spite of his prohibition from owning firearms. Christ, how many times do these people need to be told? Firearm laws only apply to law abiding citizens: criminals pay them no heed. A gun ban wouldn't work even if every firearm in existence (including those used by the state!) were tracked down and destroyed, because the knowledge and machinery needed to create them is universal. Small engine mechanics, metal fabricators, and plenty of other careers require an understanding of the same principles upon which firearms are based.
In order to enforce real gun control, we would need to revert to pre-industrial times, which would make us easy prey for more militaristic nations. So please, liberals, shut up about firearms already. There is nothing you can do about it but accept the fact that citizens have the right to self defense, and that people will occasionally die as a result of this. Canada needs to take a gulp of fresh air. We have adopted the American belief that turning towards a police state will make us safer, and that achieving this safety is worth the cost of relinquishing our civil liberties (well, the few liberties that remain after 911). This is a preposterous notion and it needs to be shot full of holes. As for the Parliament Hill shooting being perpetrated on behalf of ISIS, who cares? It was a minor incident on behalf of a minor terrorist organisation on the other side of the planet. Anyone with two brain cells to rub together can see that this is not exactly on our list of national priorities. And yet Stephen Harper (who was calmly drinking wine during the 'crisis') is now attempting to use the shooting as a pretext to expand government power. Like it or not, it seems we're on a dangerous path to Americanisation.
*Chrissy Teigen had it right when she said: ''Active shooting in Canada, or as we call it in America, Wednesday.'' Better amend that to Friday, too.

Friday, 10 October 2014

The convergence: Existential threats to mankind

Three years ago, I released a video on YouTube which told of a monumental threat that civilisation would someday have to confront. This threat is a series of man-made and environmental disasters which will overlap and amplify into something resembling a great filter. I was supposed to have followed up this warning with a full length video, but was unable to do so due to a serious illness. Before I knew it, 2011 rolled into 2012, then 2012 rolled into 2013, etc. The mistake I made was assuming that men more educated than myself would be able to connect the dots, create a hypothesis, and get it out to the public. Suffice to say, that didn't happen. It is now my challenge to try and explicate the complex nature of this crisis, at a time when global warming has totally ensnared the world's attention economy. Please bear with me!
Now, exactly what is the convergence? It is a confluence of more than a dozen existential threats which not only exist, and continue to worsen, but will peak in intensity sometime in the 2030s. Their relative dangers and receptivity to change vary. Some of the threats have quite minor effects, but are hard to fix, and dangerous because they exacerbate and amplify the other threats (like how overpopulation forces us to find more sources of energy and food, or how pollution ruins arable land and makes it harder to create that extra food, etc). Other threats would have devastating effects if left to build up, but can be quite easily fixed before they reach critical mass. Each of these tendrils acts as a stress point in the foundations of our global society: a fissure in one can quickly spread to others, leading to an unpredictable domino sequence.
In this article, I will content myself simply with naming the specific threats, and describing the backgrounds of the more nefarious ones. It would also be useful to have a simple typology through which we can classify their nature. For now, this can take the form of three categories. #1 will rate the disaster's potential for collateral damage. #2 will determine whether or not countermeasures are practical against it. #3 will determine whether or not the threat is an adjuvant (i.e., whether it negatively affects the other disasters). What I will not do is attempt to offer a comprehensive solution to these looming disasters. Because the connections and interplay between these threats are not well understood, I would also caution anyone who thinks they can enact treatments in isolation: That would be like trying to stop a volcanic eruption by plugging up the holes to the surface!
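For readers who like things concrete, the three-part typology can be mocked up as a tiny data structure. This is only an illustrative sketch in Python; the threats listed and the ratings assigned to them are placeholders for the sake of the example, not my actual assessments.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    collateral: str   # category 1: potential for collateral damage (low/medium/high)
    treatable: bool   # category 2: are countermeasures practical?
    adjuvant: bool    # category 3: does it amplify the other threats?

# Placeholder entries only; the ratings are illustrative, not assessments.
threats = [
    Threat("Global warming", "high", True, True),
    Threat("Peak water", "medium", True, True),
    Threat("Economic collapse", "high", False, True),
]

# The worst cases under this scheme: threats that amplify the others
# AND resist practical countermeasures.
hard_cases = [t.name for t in threats if t.adjuvant and not t.treatable]
```

A typology like this makes it easy to query the list mechanically (e.g., `hard_cases` above picks out the adjuvant, untreatable threats), which is the whole point of having categories in the first place.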
Global warming
According to the EPA, average global temperatures are expected to increase by 2 to 11.5 degrees Fahrenheit by the year 2100. Depending on how high the temperature ceiling is, this would aggravate storm systems, raise sea levels, and damage ecosystems. Government agencies allege that this will happen because of the sheer quantity of carbon dioxide being pumped into the atmosphere by gasoline and diesel engines, and that the only way to prevent this is through a carbon tax. While scholars like David M.W. Evans have managed to punch holes in their climate model, and argue that it is inconsistent with data collected by the ARGO system, that only serves to debunk some of the more hysterical claims put forward by the global warming alarmists: This includes the belief that the earth's climate system has already passed an insurmountable tipping point, or that the rate of warming is faster than at any point in earth's history.
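Since the EPA figure is quoted in Fahrenheit, it may help to see the same range in Celsius. Converting a temperature *difference* only needs the 5/9 scale factor (the 32-degree offset applies to absolute temperatures, not to differences); a quick sketch:

```python
def delta_f_to_c(delta_f):
    """Convert a temperature DIFFERENCE from Fahrenheit to Celsius.

    Differences use only the 5/9 scale factor; the 32-degree offset
    is for absolute temperatures and does not apply here.
    """
    return delta_f * 5.0 / 9.0

low = delta_f_to_c(2.0)    # ~1.1 degrees C
high = delta_f_to_c(11.5)  # ~6.4 degrees C
```

So the projected warming range works out to roughly 1.1 to 6.4 degrees Celsius.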
There are other problems with the AGW narrative, at least with regards to how it will be solved. According to Ryan Dawson: 'The carbon tax and the climate controls and security bill, regardless of what you think about global warming, doesn't do a thing to prevent it. All it does is divide up greenhouse gases into allotments which can be traded and sold and even invested in by third parties, just creates a market environment which lets the larger government subsidized agrobusinesses to gobble up allotments from the smaller farms and ranches and create tighter virtual monopolies. It also put control of industry into the hands of a small group of government hacks and threatens property rights all under the guise of being green. The only kind of green here is envy and money.' Carbon taxes are the wrong answer to this problem. We should pay more attention to Bjorn Lomborg and his studies on marine cloud whitening, which could mitigate global warming for a relatively small price tag.
Depletion of arable land
Peak water
The misandry bubble
According to one theory, feminism (which had its roots in the 'free love' movement of the 60s) is a social virus which destroys the marriage system. Nuclear families are formed by beta males and their women, who enter into marriage chaste. It is a very stable unit which caters to the needs of the man, the woman, and their children. All advanced, patriarchal societies depend upon the nuclear family: Since they all work together to support the man, the man can devote his full effort to the job. This high productivity is what has allowed the west to outcompete all other cultures, and the marriage system itself is airtight. The only vulnerable link is the young woman herself, often before she becomes sexually active. Girls are told that it's okay to delay marriage and engage in casual sex, and that this will not have a negative impact on them in the future. Television paints an unrealistic ideal for them to emulate, and sure enough... Women get stuck in the lifestyle of sleeping with men whose only lot in life is seduction.
These men comprise only a small portion of the population, which gives them nearly unlimited options: They can pick any woman they want, and treat them however they want, with scarce few consequences. As a result, women spend years and years chasing men who are out of their league and have no intention of making a commitment, while they pass up relationships with men in their own social groups. But a woman's beauty has a brief shelf life, and by the time of her 30th to 35th birthday, she will find herself kicked off the carousel by the alphas. Even the betas will not desire her as much, since men instinctively distrust women with high n-counts. This phenomenon destroys the pool of marriageable women, and consequently diminishes the incentive of men to work. With no wives or children to care for, they are able to live a more convenient, spendthrift life. Of course, this has the consequence of slowing down economic productivity, which is a death sentence to patriarchal societies.
Ozone pollution
Ocean acidification
The energy crisis
Global supplies of fossil fuels are dwindling at a rapid rate. If consumption patterns continue as they have, oil reserves will only last for 40 to 50 years, gas reserves will be depleted in 70 years, and coal will disappear in about 200 years. Of course, these are just the official figures... One must remember that this crisis is partly artificial, since the US government not only suppresses alternative energy sources, but has also concealed the existence of massive oil fields like those in Prudhoe Bay, Alaska.
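For what it's worth, lifetime figures like '40 to 50 years of oil' come from a simple reserves-to-production (R/P) calculation: proven reserves divided by the annual rate of consumption. Here is a minimal sketch; the reserve and consumption numbers used below are made-up round figures chosen only to produce a ~50 year horizon, not official estimates.

```python
def years_remaining(reserves, annual_consumption, growth_rate=0.0):
    """Naive reserves-to-production lifetime estimate.

    With growth_rate=0 this is just reserves / annual_consumption.
    A positive growth_rate compounds consumption each year, which
    shortens the horizon considerably.
    """
    years = 0
    while reserves > 0:
        reserves -= annual_consumption
        annual_consumption *= 1.0 + growth_rate
        years += 1
    return years

# Hypothetical round numbers: 1650 units of reserves at 33 units/year.
flat_case = years_remaining(1650, 33)          # -> 50 years
growing_case = years_remaining(1650, 33, 0.02) # fewer years if use grows 2%/yr
```

Note how sensitive the answer is to the growth assumption: the same reserves run out noticeably sooner once consumption compounds, which is why "if consumption patterns continue as they have" is doing a lot of work in these projections.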
Military defense death spiral
The singularity
Demographic crisis
This also ties in with feminism to a certain extent. With most children now being born out of wedlock to single mothers (many of them the product of miscegenation), the indigenous populations of North America and Europe are being culturally and genetically weakened. Without the benefit of growing up in a nuclear family, they will be unable to compete at home and abroad with the 3rd world, leaving them vulnerable to a demographic takeover. Within a generation from now, formerly strong western nations will have devolved into balkanised hellholes with low productivity.
Economic collapse
Natural disasters
One thing that should be obvious by now is how most of the threats come from very different backgrounds, and have ambiguous taxonomies (kind of like the seven deadly sins). Take natural disasters, for example. No one would consider everyday occurrences like volcanism, earthquakes, floods, hurricanes, and solar storms to be an existential threat in and of themselves. But if their magnitude could be amplified by things like global warming? And if they were to be assigned under a single category? Then they would definitely make the cut. Now that I have assembled something resembling a hypothesis, it is up to you, the public, to digest and critique it. Comments of all kinds are welcome. If you think there were any threats I marginalised, exaggerated, or forgot to include, now is your time to speak.

Sunday, 7 September 2014

Physiology of a super soldier

What will soldiers in the next few decades look like, when the world has run out of oil and is fighting over coal and gas? For once, let's forget about equipment and weapons, and ponder the men themselves. We know that in many combat units a Pareto principle is in play, whereby a handful of veteran 'aces' score the majority of the kills. It is no small problem that in a violent confrontation, most soldiers behave in a passive manner. On average, a 100-man company may have 80 men who are little more than 'an ammunition porter for the main weapons and a rear/flank security man.' Implementing stricter entry requirements and harsher training can only partially alter this ratio: After all, militaries have a manpower quota that must be met, even in an era that is awash with gory war movies and memoirs.
Another concern is whether or not baseline humans will be able to even participate in late 21st century conflicts. Judging from the mind-numbing power of modern weaponry, this seems dubious. Over the past decades, military machines have become more and more capable, but the same cannot be said for the humans who use them. Those military theorists who have acknowledged the problems posed by a lethal, high tempo battlefield will often propose the adoption of powered exoskeletons. But even if armies could afford to give every infantryman such expensive equipment, it wouldn't provide the most bang for their buck. As we shall see, that distinction belongs to a class of humans bred to fight and die on their nation's behalf: Soldiers who will rely not on articulated suits of armor, but on a physiology geared towards the harsh demands of warfare. This approach is heavily dependent upon gene therapy and eugenics.
Say hello to Homo validus
The concept of a super soldier is certainly not new. Greek mythology was rich with stories about Orion, Achilles, and Hercules: Demi-gods who lived among humans yet possessed incredible strength, speed, and durability. Many cultures that came into contact with the Greeks would subsequently adopt heroes with such attributes, something to serve both as inspiration and instruction for young men going to war. Armed conflict has always been a tough business with more losers than winners [1], but this trend has become even more pronounced since the industrial revolution. So what abilities does the mainstream press have in mind when it comes to super soldiers? Here is one headline: 'Tomorrow's soldiers could be able to run at Olympic speeds and will be able to go for days without food or sleep, if new research into gene manipulation is successful. According to the U.S. Army's plans for the future, their soldiers will be able to carry huge weights, live off their fat stores for extended periods and even regrow limbs blown apart by bombs.' These are all good starting points, but are they really going to be sufficient in a peer vs peer conflict? Normal humans may not be able to keep up with the pace set by their machines, forcing militaries to operate below their machines' capacities. If this turns out to be the case, then we may need to envision a completely different kind of soldier.
For instance, gunshot wounds are an all-encompassing problem that can currently only be addressed through body armor and medical care. No one has devised a biological system which would allow a soldier to take a handgun or rifle round to the torso, and have a very good chance of surviving. Is such a thing achievable, or desirable? Steering committees would be needed to determine what adaptations are suitable for a future soldier, and what kind of procedures could be used to attain them. Gene therapy comes in two different forms, somatic and germline. 'Somatic cells, such as skin or muscle cells, contain 23 chromosomal pairs and do not transmit genetic information to succeeding generations. Germline cells, which are the sperm and ova cells, contain 23 unpaired chromosomes and provide genetic information to offspring, as well as to the future generations descended from those offspring.' Military theorists are under the impression that we can rely on one-off somatic alterations to create super soldiers on demand, with no long-term ramifications. They have not considered the very real possibility that some of the adaptations required may be too extreme for a mature adult's body to cope with. If so, then we're not looking at a genome army, but something more akin to the Saiyans: A warrior race who will need to be socially and reproductively isolated from the native population.
They would be born in a government facility, raised as government operatives, and may pledge allegiance to a nation that abhors their very existence. Upon reaching retirement age, these soldiers will not simply disappear: They will require large living spaces where they can settle down and procreate, so that a new generation (their children) may replace them. And what of the powers and abilities wielded by this sub-species of human? It is known that the ancestors of modern Homo sapiens originally had physical strength on par with other members of the great ape family. How strong is that? Comparative studies on chimpanzees indicate they could lift four times their own body weight overhead. That's equivalent to a 200 lb man military pressing 800 lbs! But at some point, hominids reached an evolutionary crossroads where high intelligence and manual dexterity became increasingly more relevant to their survival. Consequently, muscular strength eroded over hundreds of thousands of years to its current low, as detailed in numerous articles. [2] This fact obviates the need to devise exotic methods of performance enhancement: In practice, an individual will simply be restoring the strength and speed that were native to their most distant forebears. Through the use of germline engineering and in-vitro fertilisation, we may see humans returned to their former physical glory.
A mastiff from MGR
The couples who agree to have their (unborn) children genetically enhanced to serve in the army should ideally be military veterans themselves, to ensure that they have the long-term commitment required for the project to work. Once several thousand couples have been assembled, they would each need to donate sperm and ova samples to a lab, and sign a contract allowing geneticists to insert micro-chromosomes into them. Embryos with the altered genomes would then need to have their gender pre-determined as male or female, to ensure that a breeding population of metahumans can flourish: Too few females will limit the sizes of succeeding generations, but too many will compromise the unit's fighting strength (presumably, since only males will be allowed to serve in front-line roles). The fertilised embryo would then be implanted into the woman, and nine months later she would give birth to a child with unbelievable physical powers.
The next step in this process is even more interesting. During their early years, the children will be raised by their families in a communal estate, owned and paid for by the government. They would grow up in an inclusive atmosphere with other metahumans for company, and an absence of extremists who might discriminate against them. From the age of 5 to 14, they will spend their summer months in a military-themed camp where they become acquainted with many of the skills that will define their adulthood. At the age of 15, the soldiers will be drawn together as a training battalion and shipped out to an actual base, learning to work alongside their peers and co-operate as an actual unit. During this time, they would be under the command of the army reserves. At the age of 18, the soldiers and their battalion will be considered operational, and will go into active duty. This is when they will be transferred to the regular army's command.
Metahuman soldiers will be organised into so-called shock brigades. These are tier three units which are capable of serving many different roles. In high intensity conflicts, they would form the spearhead for corps level attacks and counter-attacks, enduring in conditions that would shatter the morale of ordinary soldiers. They can establish defensive screens that depleted formations or support troops can shelter behind in times of crisis. Their physiology would also make them extremely effective in the close quarters fighting so typical of urban combat. In low intensity conflicts, shock brigades can be stripped of their vehicles and heavy weaponry, and sent behind enemy lines on foot to assist special forces units. This is possible because of the soldiers' great strength and skill at man-packing, enabling them to carry everything they need with only an occasional air drop to replenish ammunition, food, batteries, and spare parts. Fighting as light infantry, they can pacify environments which are inhospitable to tier two and tier one units.
The abdominal cavity
Muscular system: Metahumans have an archaic musculo-skeletal system granting them enormous physical strength, sufficient to deadlift approximately 2000 lbs, and military press well over 800 lbs. With training, they can do even more. (A normal man can press 110 to 130 lbs, while top powerlifters in peak condition can triple this.) Most of this performance increase comes down to complex differences in how the muscle fibers are recruited, and mutations in the MSTN and NCOR1 genes, which regulate muscle mass. They have a vertical jump of 72 inches, and a standing long jump of 288 inches. Hand speed and reflexes are also improved. In addition, metahumans have enlarged adrenal glands which are connected to a set of GVA nerve fibers: With training, they can empty their contents in around 2 seconds, giving the body explosive speed at the cost of fine motor skills [3]. The adrenal glands can also be activated in the normal manner, dumping hormones into the blood stream at a slower pace in response to stress.
Dermal and circulatory system: A metahuman's skin (particularly the dermis and epidermis layers) is much thicker, making them less susceptible to cuts, burns, bruises, or abrasions. Their major arteries are smaller, more numerous, and more dispersed, often hidden deep within the flesh for extra protection. This makes them unlikely to bleed to death from slash or stab wounds, and prevents their circulation from being cut off by ropes, chains, or other restraints.
Skeletal system: Arguably the most extensive alteration made to their physiology, the skeleton is coated in a strong organic material which makes it more resistant to chipping, fractures, and other impacts. Osteocytes in the lacunae of the bones are responsible for the breakdown and buildup of calcium. A mutation in the thyroid and osteoblasts would cause the massive absorption of calcium needed... Metahumans have what is termed a cranial ridge, basically a smooth growth of bone along the outside of the skull. The anterior ridge is part of the frontal and maxillary bones: It has an hourglass shape, extending from the top of the forehead to the base of the nose. The posterior ridge is part of the parietal bone: It has a more rounded profile. Small caliber bullets striking the ridge are stopped outright or deflected, although a hairline fracture will often result. In short, the bone around these regions of the skull is much thicker and devoid of dangerous shot traps.
Their abdominal wall is lined with a carapace composed of yellow fibro-cartilage and red bone marrow. This structure serves two important functions with regards to gunshot wounds. First, because of the dissimilar properties of these tissues (elasticity, viscosity, density, etc.), bullets will experience a phenomenon called impedance mismatch, which prevents the formation of a temporary cavity. Second, even after the bullet penetrates the carapace, it has lost so much energy that it tends to push organs aside instead of crushing them. This arrangement is so effective that most pistol calibers cannot deliver a through-and-through chest wound to a metahuman. In those instances when organs are damaged enough to begin leaking blood, the carapace performs another remarkable feat: Macrophages in the putty-like marrow are able to quickly break down RBCs and recycle their materials for use elsewhere in the body, which limits the severity of internal bleeding [4]. Although the individual can still die from exsanguination if the wound is severe enough, this is not common, since their blood clots more quickly.
Minor alterations: The brain has a thicker layer of cerebrospinal fluid (a mild form of hydrocephalus), which makes them less susceptible to concussions. Their eyes have a widened fovea, giving them an enhanced field of central vision. They also have highly mobile ears, enabling them to determine the exact location of a sound more quickly. Finally, the lengths of their metacarpals have been equalised to create a more solid punching surface.
One major advantage of the metahuman soldier program is how their genomes can be updated every generation to meet emergent battlefield demands, and take advantage of progress in biotechnology: Unlike Homo sapiens, they will never become obsolete. There may be times when steering committees come up with useful adaptations that are too difficult for current technologies to implement. So even though the super soldiers then in service wouldn't receive the enhancements, their children and grandchildren very well could!
They are significantly heavier than a human of comparable build, since they have a lot more bone and skin. Because of their abdominal carapace, metahumans are visually perceived as overweight. Most body fat is not stored on the hips or stomach, as that would lead to excessive waistlines (it goes to the forearms and lower legs instead). The average specimen is 5 feet 8-11 inches tall, weighs 220-250 lbs, and has a mesomorph-endomorph body type.
Metahumans will easily beat humans in short distance sprints, but not anything over 400 meters. Human physiology dictates that a runner's near-top speed cannot be maintained for more than thirty seconds or so, as lactic acid builds up and the leg muscles begin to be deprived of oxygen. The only long distance races they win are events where a weighted pack is mandatory (like the army speed marches).
They would not train in any specific hand-to-hand combat system, mostly because their natural strength and speed enable them to quickly overwhelm a human opponent. In addition, metahumans are virtually impossible to knock unconscious, and cannot suffer broken noses or jaws. While they may be taught to execute basic strikes, even this is overkill: With Homo validus, every punch is a knockout punch.
Metahumans should practise knife survival drills: Even though their abdominal carapace protects them from lethal stab wounds, they are still vulnerable to bio-mechanical cutting which traverses large sections of the body. Many courses which claim to offer such survival skills actually teach something akin to dueling, which is not useful at all. As one source put it: 'Criminal and military history reveals that a real world knife fight is more like football and less like fencing.'


[1] According to an old German adage: 'A great war leaves the country with three armies - an army of cripples, an army of mourners, and an army of thieves.'
[2] This decline was already evident in our ancestors of 200,000 years ago. Compare the Homo sapiens of that time to Homo ergaster: The difference in skeletal robustness is obvious.

[3] Experiments showed that adrenaline increases twitch, but not tetanic force, and not rate of force development (which is necessary for fast and high power movements).
[4] Internal bleeding is serious for two reasons: The excess blood can compress organs and cause their dysfunction (as can occur in hematoma), and when the bleeding does not stop spontaneously, the loss of blood will cause hemorrhagic shock, which can lead to brain damage and death.
*edit made april 8, 2015

Thursday, 7 August 2014

No Glory in War: the only way to commemorate WW1

We must always question our ability to know, especially with regards to how millions of men can be swayed and pressured into fighting a senseless conflict.

The gruesome consequences of war

To what end do we fight? Who benefits from war? Who pays the ultimate price?

Friday, 18 April 2014

Computational incompressibility

Some of the things that go on in this mysterious reality of ours are totally beyond the ability of sapients to understand. Just as it would be impossible for a dog or cat to grasp the theory of relativity, it may well be impossible for a human to discern the true meaning of life. Even if god itself decided to step into our world and explain such things, the words would have no meaning to us. Some information is so densely encoded that it simply cannot be passed on from a higher toposophic to a lower. The superintelligent entities which populate science fiction will often say and do things which simply cannot be explained to baseline humans. One example of this is in the popular game Mass Effect, when Commander Shepard attempts to interrogate an AI named Sovereign, and find out why it desires to exterminate all life in the galaxy. Shepard: “What do you want from us? Slaves, resources?” Sovereign: “My kind transcends your very understanding. We are each a nation, independent, free of all weakness. You cannot even grasp the nature of our existence.”

 A conversation with Sovereign

Why would the game have a villain that can't explain its motives to the heroes? Isn't that just a lazy plot device to enable a giant battle? Well, no. In an example of fridge brilliance, the developers of Mass Effect had hit upon one of the major issues that will complicate relations between man and god: Computational incompressibility. In a recent article [1], philosopher Paul Humphreys describes this as a behavioural facet which is underivable by a process simpler than whatever actualizes that behaviour. One manifestation of this is the inability of great apes and parrots (some of whom have a vocabulary of hundreds of words) to have a real conversation with their caretakers. These creatures may be able to vocalise and signal, but they cannot use language or any other human domain features. This is not surprising: Homo sapiens are the world's only general intelligence, which means we have a quantitative and qualitative superiority over all other animal species in terms of cognition. There may be near-equals in one or two categories, but none compete with us in all eight.
A necessary simplification
It goes without saying, but intelligence isn't isotropic. This becomes very obvious when observing individuals with savantism, who exhibit peak human abilities in the realm of mathematics and aesthetics (particularly music), but are extremely deficient in all other areas of cognition. Many savants are not even able to dress themselves! This alone should be enough to cast doubt upon Charles Spearman's g-factor hypothesis: When you throw in the possibility that there may be entire toposophic realms above the human level, its usefulness as a universal intelligence test becomes null and void. Anyway... Computational incompressibility represents an anthropomorphic problem for us, in that even when we know that we are dealing with an alien mind far smarter than us, we tend to underestimate what its true capabilities might be [2]. This tendency explains why so many people either disregard the dangers posed by a superintelligence, or assume that such creatures could be easily bargained or reasoned with.


Video transcript

[Fighting and containing transapients, part 1. This video was released to YouTube on October 8, 2012. It was eventually removed at the behest of Fox Broadcasting, so the contents will be reposted in text format. An archived copy is available here]

A superintelligent intellect is one that has the capacity to radically outperform the best human brains in practically every field, including problem solving, brute calculation, scientific creativity, general wisdom and social skills. Such entities may function as super-expert systems that work to execute any goal they are given, so long as it falls within the laws of physics and they have access to the requisite resources. Sometimes, a distinction is made between weak and strong superintelligence. Weak superintelligence is what you would get if you could run a human intellect at an accelerated clock speed, such as by uploading it to a fast computer. If the upload's clock rate were a thousand times that of a biological brain, it would perceive reality as being slowed down by a factor of a thousand. It would think a thousand times more thoughts in a given time interval than its biological counterpart. Unfortunately, no matter how much you speed up the brain of a creature like a dog, you're not going to get the equivalent of a human intellect. Analogously, there might be kinds of smartness that wouldn't be accessible to even very fast human brains, given their current capacities. Something else is needed for that. Strong superintelligence refers to an intellect that is not only faster than a human brain but also smarter in a qualitative sense. Something as simple as increasing the size or connectivity of our neuronal networks might give us some of these capacities. Other improvements may require wholesale reorganization of our cognitive architecture, or the addition of new layers of cognition on top of the old ones. When discussion of increasing one's smartness comes up, the question often arises: Does intelligence progress linearly, exponentially, or both?
In other words, is intelligence something that is isotropic? Does it look the same when scaled up or improved? Current evidence suggests not. Because if it did, then problems that realistically require one genius to solve should also be solvable by two or three non-geniuses, and that clearly is not the case. The only benefit that comes from having multiple thinkers on a subject is that each individual usually has a different viewpoint, specialises in different things, and brings more brute force to throw at the problem. That's where the synergistic effect of multiple minds coming together stems from. But clearly, this has a limit. The feats that can be performed by one person with an IQ of 180 cannot, in practice, be replicated by two people with an IQ of just 90. There are interesting examples of this phenomenon. (The right genius in the right place) Dozens of philosophers had pondered the solution to the paradoxes raised by Zeno of Elea. Some of the greatest minds in recorded history tried their hand at cracking them, but the paradoxes did not budge. They managed to withstand two millennia of scrutiny. It was not until recently that a definitive answer was provided by Peter Lynds, in the paper Time and Classical and Quantum Mechanics. This suggests a non-linear intelligence gradient. But why stop at the human level? After all, from a cosmopolitan viewpoint, the difference in smartness between individual humans is tiny compared to the difference between a human and a primate, or a reptile, or an arthropod. Giant disparities in intelligence are what is of interest to us, especially given the task of repelling a hostile force of transapients.

This is because no matter how many primates you assemble, they will still not be able to perform the feats of a human, being unable to understand why three minus two equals one, or to utter René Descartes' famous philosophical statement. Let us posit a brand new theory of intelligence differentials. In a nod to Orion's Arm, it will henceforth be known as sophontology, a discipline which shall concern hypothesising the natures of minds occupying all points on the great toposophic plane. What ought to be the main yardstick of this approach? One idea comes to mind. It will go under the name of domain thresholds. A domain is a landscape that encapsulates minds whose natures follow a certain pattern. This pattern corresponds to the kinds of thought that a being is capable of. For humans, this includes language, self awareness, rationality, abstractness, theory of mind, object permanence, mathematics, aesthetics, and others. Domain features are the reason why it is impossible to say, for example, that humans are x times smarter than an animal: We simply possess cognitive abilities that they do not (which makes numerical comparisons impossible). That begs the question: How can a reasonable comparison be made between something which is present, and something which is not? There's no clear answer to this. Suffice it to say, domain features are the point on the chart where the incremental curves into the exponential. That is something which has important ramifications in an intertoposophic war. After all, it is by definition not possible to compete with a being who exhibits domain features you lack. A reptile cannot compete with a human at arithmetic. It does not even have any concept of numbers.
Domain thresholds: non-sapients occupy the 1st rung, sapients occupy the 2nd rung, while transapients occupy the 3rd rung
There is every reason to believe that there are more domain features which the human archetype has not evolved to exploit. A superintelligence would be able to take advantage of this, and compete with us in behavioural categories which we have no hope of responding to. There are a number of historical precedents for this sort of thing happening. The most recent was the rise of the hominids. With a mere tripling in brain size, the descendants of Australopithecus were able to surpass all others and dominate the world: without object permanence or abstract thought, the animals these ancient pre-humans competed against had no way to fight back strategically. They had no notions of area denial, of scorched earth, of distance weapons or physical traps. Why should we think the situation would be any different for us, going up against a band of transapients? In the Orion's Arm encyclopedia, a wide variety of such conflicts are portrayed in realistic fashion. One area in which the OA setting is unique is that it features multiple distinct stages of superintelligence, six in total, each more powerful and more foreign than the previous. This notion has much in common with the concept of domain thresholds. Of particular interest is the encyclopedia's rejection of the so-called plucky baseline meme, the idea that ordinary unenhanced human beings can still give a good account of themselves in the face of overwhelming posthuman intelligence and firepower. The OA authors have deemed it impossible for any non-superintelligent individual or group to carry out the following actions:
  • to hack into an Angelnet or transapient Operating System (so it is impossible, literally, for a baseline-equivalent to hack into even an SI:1 angelnet, no matter how infinitely lucky e might be)
  • Outwit or fool an Angelnet (e.g. smuggle in weapons, commit a murder, perform an act of sabotage, conceal one's position or motives, whatever)
  • Out think or outperform on its own terms a transapient
  • Correctly operate non-baseline friendly transapient tech (including weapons)
  • In any way comprehend or reverse engineer transapient tech
  • escape unaided from ahuman exploitation
  • have any victory against a transapient "pest extermination team" sent to get rid of you
  • outperform or beat or overwhelm in a military manner and/or by superior force of numbers an individual or group of transapients whose job is to get rid of you
  • in any way harm an archailect
So - for the purposes of this discussion - the only way that a mere sapient can match a transapient is by emself becoming a transapient. A flatworm in a muddy pond cannot appreciate works of art, or understand general relativity. But if it evolves or is provolved to human equivalence, and becomes human, then it can. It would be ridiculous to have a "plucky flatworm" beating up a human, or out-performing one in literary criticism or university calculus, while still remaining a flatworm. But for a flatworm to evolve into a human, that also means it would no longer be the same being, it would be changed, totally, in every way; ascended and transcended beyond its original condition.

Saturday, 29 March 2014

Paradox of the machine gun

Pop history has been responsible for perpetuating the notion that the most effective machine gun employed in World War 2 was the MG 42. Most people know the specs of this fine weapon, but let's run through them again for clarity: it was a .30-caliber machine gun that was usually mounted on a tripod, fed from a disintegrating ammunition belt, and fired at the incredible rate of 1200-1500 rpm. (The Bren gun used by the British was magazine fed, and fired at only 500 rpm by contrast.) The wall of lead that came out of this weapon's barrel left a deep mark on many Allied soldiers, and the MG 42 made prominent appearances in many postwar movies and memoirs. What many people don't seem to have considered is why the German high command selected a machine gun with such an unusually high rate of fire. It went against their strict doctrine of ammo conservation, which demanded that a weapon's cyclic fire rate be kept artificially low (so that even during a mad minute, ammunition quotas would not be exceeded).
Most sources on this matter will assert: ''The high rate of fire resulted from experiments with preceding weapons that concluded that since a soldier only has a short period of time to shoot at an enemy, it was imperative to fire the highest number of bullets possible to increase the likelihood of a hit... The disadvantage of applying this principle was that the weapon consumed exorbitant amounts of ammunition and quickly overheated its barrel, making sustained fire problematic.'' Or this: ''The germans however came to the conclusion that a soldier in combat only fires when he sees the enemy and has but a few seconds to do so, taking this as a medium for all combatants they deemed it was he who fired an increased amount of bullets had an increased chance of a kill.'' But is there any truth to these explanations? An essay penned by our resident military guru, Jim Storr, suggests there is a completely different criterion that is not being accounted for.
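The fleeting-target rationale quoted above is easy to put in concrete terms. As a rough sketch (the 3-second exposure window is my assumed figure for illustration, not one taken from the sources), the cyclic rates given earlier translate into very different burst sizes per target exposure:

```python
def rounds_in_window(cyclic_rpm, window_s):
    # Rounds a gun can put out during one brief target exposure.
    return cyclic_rpm / 60.0 * window_s

# Cyclic rates are from the text; the 3-second window is an assumption.
mg42 = rounds_in_window(1200, 3)   # lower bound of the MG 42's rate
bren = rounds_in_window(500, 3)
print(f"MG 42: {mg42:.0f} rounds per exposure, Bren: {bren:.0f}")
```

On these assumptions the MG 42 puts roughly two and a half times as many rounds into the same fleeting window, which is the whole point of the hit-probability argument.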
Pondering the vast number of rounds per kill expended over the course of an engagement, like the notorious 250,000 bullets per KIA in Iraq and Afghanistan, Storr suggests that something is interfering with a modern army's ability to employ small arms fire. This goes beyond things like stress and return fire degrading our troops' accuracy. Indeed, even if we accept that most of the shots fired in a battle are intended only to suppress the enemy and not to kill him (since soldiers conceal themselves, they are hard to hit), this cannot explain how a quarter of a million rounds are expended just to kill one soldier: the only explanation is that the suppressing fire itself is being delivered in a fundamentally incorrect manner. Back in 1944, a study carried out by the Army Operational Research Group concluded that projectiles must not only pass within a certain proximity of a human combatant, but also arrive in a certain volume, before he will feel threatened enough to take cover.
These are quantifiable data points that allow us to measure how much success a weapon and its user are having in a firefight. Armed with this knowledge, a group of soldiers from the British Army were sent onto the firing range to train on a new piece of equipment, the live fire intelligent target. The results showed that most rounds fired missed by too wide a margin to drive an enemy to the ground, and that ammunition was expended at too high a rate to be sustainable for more than a few minutes. That is consistent with what we have seen of other armies engaged in armed conflict: infantrymen are not trained to excel at the role of suppression, nor do they have the proper weapons. The training problem can be solved by offering courses like those available at the TTECG, and by giving sergeants stricter protocols for fire control. Of course, this still leaves us wondering what kind of small arm is best suited to suppression. If you thought it would be a belt fed, sustained fire weapon (with a quick change barrel) like the MG 42 or Minimi, Storr's response will come as a disappointment:
''The British L86 magazinefed SA 80 Light Support Weapon (LSW), with its bipod, is extremely good at suppressing targets out to 500m or more... That is principally because it is accurate enough for almost every shot fired to contribute to suppression. The L110 (Minimi) Light Machine Gun (LMG) performs far worse in such trials. At best, only the first shot of a short burst passes close enough to suppress. However, subsequent shots in that burst go anything up to 6m wide of the mark at battlefield ranges.'' There is something very important that Mr Storr neglected to consider, however, and that is the combat role that infantrymen have in mind for their machine guns. In WW2, the Germans placed a greater emphasis on engaging area targets, while the British focused on suppressing point targets. Not surprisingly, the two selected very different weapons prior to entering the war. The MG 42's wide cone of fire enabled it to suppress groups, while the Bren gun's narrow cone of fire enabled it to suppress individuals. Both were successful in fulfilling their combat role, but that brings another question to mind: which philosophy (area targets vs point targets) is the correct one?
Before we can answer this dilemma, there is an important reality of the battlefield that must be borne in mind. Ever since the adoption of the rifled musket in the mid 1800s (and then the breech loading rifle in the late 1800s), armies have been forced to abandon close order formations and disperse themselves in order to survive weapons fire. The level of dispersion that occurred in this time frame is surprising. A book by Trevor Dupuy indicates that during the American Civil War, armies were dispersed to the tune of 257 square meters per man. [1] During World War 1, the dispersion increased to 2475 square meters per man. During World War 2, it increased again to 27,500 square meters per man. In other words, the number of men in a given space (force density) decreased by a factor of 107 from the Civil War to WW2. The trend towards increased dispersion has continued to accelerate. With that fact in mind, it's hardly surprising that the number of bullets needed to kill a soldier has climbed to such ridiculous highs.
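Dupuy's figures can be checked with simple arithmetic. Since force density is just the reciprocal of area per man, the factor-of-107 drop falls straight out of the numbers quoted above:

```python
# Dupuy's dispersion figures (square meters per soldier).
area_per_man = {
    "American Civil War": 257,
    "World War 1": 2475,
    "World War 2": 27500,
}

# Force density is the reciprocal of area per man, so the density drop
# from the Civil War to WW2 is simply the ratio of the two areas.
factor = area_per_man["World War 2"] / area_per_man["American Civil War"]
print(f"Force density fell by a factor of {factor:.0f}")  # prints 107
```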

Soldier of Fortune magazine provides some figures on how much ammunition was expended per enemy killed in both world wars and Korea. [2] Roughly 5000 rounds were fired for every enemy KIA in WW1, whereas 25,000 rounds were fired for every enemy KIA in WW2. In the Korean War, the expenditure rose to an incredible 100,000 rounds per KIA. Thanks to this data, we can safely conclude that there is a strong correlation between force density and rounds per KIA: the more dispersed the enemy is, the harder they are to suppress and to hit. It seems that the old German practice of having their machine guns fire at groups (area targets) was the better compromise, and more reflective of the type of battle a soldier was likely to encounter. The Germans got more mileage out of their MG 42s than the British did with their Bren guns, and we should follow suit by focusing on crew served weapons with a wide cone of fire, like the current MG 3. We should also change marksmanship training, so that soldiers are better able to use their rifles to engage individuals (point targets).
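The claimed correlation can be made concrete by pairing the two data sets where they overlap. Only WW1 and WW2 have both a dispersion figure (from Dupuy) and a rounds-per-KIA figure (from Soldier of Fortune) in the text, so this sketch uses just those two points; the Korean-era dispersion is not given and is left out rather than guessed at:

```python
# Dispersion (m^2 per man, Dupuy) and rounds fired per enemy KIA
# (Soldier of Fortune), for the two wars where both figures appear.
dispersion = {"WW1": 2475, "WW2": 27500}
rounds_per_kia = {"WW1": 5000, "WW2": 25000}

disp_growth = dispersion["WW2"] / dispersion["WW1"]
rounds_growth = rounds_per_kia["WW2"] / rounds_per_kia["WW1"]
print(f"WW1 -> WW2: dispersion grew {disp_growth:.1f}x, "
      f"rounds per KIA grew {rounds_growth:.0f}x")
```

Dispersion grew roughly elevenfold while ammunition expenditure per kill grew fivefold; with only two matched points this shows the two quantities moving together, not a precise law.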


[1] Numbers, Predictions and War: Using History to Evaluate Combat Factors and Predict the Outcome of Battles, by Trevor Dupuy.

[2] Soldier of Fortune Magazine Guide to Super Snipers, by Robert K. Brown.

Wednesday, 26 March 2014

An unfortunate development

Recently, a study was released by Oxford scientists on how to enhance the traumatising effects of imprisonment on criminals. These individuals, led by Rebecca Roache, considered various methods by which this goal could be achieved, in order to increase not only the duration of punishment, but also its intensity. This approach is not news to me. After watching Hypercube and Hellraiser back in late 2010, I determined that such exotic methods of torture would inevitably be brought to fruition within a few decades (even though the actual inventor may not be human). The amateurish enthusiasm and naivety of these researchers is quite appalling. Miss Roache and her colleagues apparently have no plans to give this penalty its appropriate status: instead of using it as a monumental deterrent of last resort akin to nuclear tipped ICBMs, they want this torture parcelled out to common criminals, in the foolish hope that it will help eliminate all crime. That anyone (much less tenured scientists!) thinks society can not only engage in such haphazard and profoundly evil practices, but actually benefit from them, is nothing short of amazing.
 This is where the shit storm began

They don't even pause to consider the wisdom of having a punishment that fits the crime, or ask whether such inordinate sentences should be reserved only for the darkest of souls. No, they want to use it on petty criminals like Magdelena Luczak and Mariusz Krezolek, a couple who were found guilty of... killing their son... By Poe's law, is this a joke? You do not apply that kind of horsepower against garden variety criminals. This treatment isn't reserved for mere rapists or killers; it is solely the forte of those who engage in wholesale and wanton destruction of human civilisation at large. Magdelena Luczak and Mariusz Krezolek? Don't make me laugh, they aren't even in the fu*king game. If you want to create hell pits and actually stick human beings into them, you'd better make sure that only top tier psychos like Ben Netanyahu or Dick Cheney are targeted. And even then, confinement should not be authorised unless the courts undergo the trial of the century, and prosecutors can put together an armour-plated case that leaves absolutely no stone unturned.

 One stupendous crime...

 And one stupendous punishment
Hopefully, my brief diatribe can properly convey the seriousness that lies behind this question, and dispel whatever horseshit notions have been peddled by these scientists. The logical fallacies are easy to identify if you spend enough time debating feminists and their absurd belief that we could (or even should) have commitment free sex with no regard for traditional ethics. The prognosis remains the same, because the disease is identical: moral relativism. This repellent ideology is responsible for much of the decay that we see in modern society, and it needs to be dumped in the trash where it belongs. All that aside, I can't help but wonder what will become of Miss Roache and her peers. By releasing such a provocative study under their own names (rather than an alias), these individuals have unknowingly nominated themselves for a Darwin Award. Inventors of torture devices have a long and ugly history of being repaid for their hubris. Two of the most karmic/ironic examples are Perillos of Athens and Joseph Guillotin.
Ideally, this is how a 9/11 war crimes tribunal would proceed:
"Ladies and gentlemen of the jury. Now that you have heard all the evidence from myself and my learned friends you will shortly retire to consider your verdict. The United States Of America must now wash it's hands of these vile, sadistic mass-murderers. America must now move on. The sooner we can forget George W. Bush ever existed the happier the whole wide world will be. I ask you now to find the accused. Bush, Cheney, Rumsfeld, Rice and their ninety-two main conspirators. Guilty on the following counts.

Guilty Of High Treason By Gross Election Fraud And Conducting Two Illegal Governments From 2001 - 2004 & 2004 - 2009.

Guilty Of High Treason By Premeditated Mass Murder For The Purpose Of Starting Illegal Wars For Base Personal Financial Gain.

Guilty Of High Treason By Spending $1 Trillion Of American Taxpayers' Money On Creating Anti-American Terrorists.

Guilty Of Unprecedented High Treason Against We The People By Ignoring Our Founding Fathers' Constitution And Everything Our Founding Fathers Stand For.

There can be no other outcome to this trial than the death sentence for the bestial Bush Family Gang of cold-blooded-mass-murderers who brought shame and disgrace to America in a way none of us had previously thought possible."

Demented Bush apologist Bill O'Reilly, who had found it expedient to move abroad, predicted the trial would last two years and that the Bush cabal would walk. The trial actually lasted five weeks. All ninety-six found guilty of Unprecedented High Treason Against We The People were executed on live TV, so justice could be seen to have returned to The United States of America.