It has been an unusually wet and rainy summer in my corner of the world, and it was a great pleasure to be able to combine a road trip with what has become a rare sunny day.
The Girlfriend and I were headed to see our two godsons and their family for a summer barbecue; and you would think the combination of family, a perfect day, and a smoking grill would be all you would need to guarantee happiness all around.
But not today.
Why?
Because we were going to say goodbye to one of our godsons.
He’s going to the Middle East with his Army unit, and this is his last day “in the world”.
It takes about three hours to drive from one house to the other, and neither The Girlfriend nor I want to think about the purpose of the trip too much...I’m trying to lose myself in music, and she’s on the phone, just to stay distracted.
There are a couple stops to make, but we eventually arrive at the house.
The weekend before I seasoned some beef, let it slowly cook for about six hours, “pulled” it and sauced it, then tossed it back on heat for another hour or so. As a result, all I had to do to get dinner going was to start the oven and reheat the meat.
We picked fresh blueberries, had some veggies and salad cut up, and all there was to do was to wait for everyone else to arrive.
It’s weird how, on the surface, everything seems normal and mundane, but the proverbial elephant is very much in the room.
He’s been in the Army National Guard long enough to have some stripes on his uniform, so we all have some familiarity with his being in the service, but the idea that he will be going tomorrow puts a bit of a damper on everyone’s otherwise good mood.
There was a first part to this story, written three months ago when he was going to Fort Sill, Oklahoma for final training; and as we discussed then, the good news here is that he is unlikely to be serving in a direct combat role-at least based on today’s circumstances.
Of course, I’ve known that since at least 2004 the Army has been so short of combat soldiers that they have been forced to draw on the Navy and Air Force for ground combat troops, and that the problem continues to this day; but you can bet I don’t mention it much to The Girlfriend, and he and I only talk about it a little.
That said, I don’t like the safety glasses section of the hardware store anymore (troops are wearing safety glasses, sent by family members shopping at Home Depot and Lowe’s, to protect them from flying glass), but I am impressed at the stylishness of the new designs.
There are now three members of the family there who have a military background-his Mom was in the Army (and when both kids were young we did tease ‘em by saying “your mother wears Army boots!”), and I have my own military association-so there’s not much in the way of wild-eyed idealism. As a matter of fact, his Mom just came back from a trip to Anatolia (the historic name for what is now Turkey) with her college class, so we’re particularly focused on the history of the neighborhood these days.
Some stupid movie is on, but no one’s really paying attention.
We talk about what might happen a little bit, and we talk about the Sunni and Shi’a and Kurds, and how strange it is that we’re having a more informed discussion than what our President seems capable of.
(The Ottoman Empire is not a piece of period furniture, Mr. President.
I promise it isn’t. Really.)
I know we don’t have enough troops to keep this surge going, and we talk about how long it might take to pull out...what I don’t talk about is my gut feeling that these troops will be there longer than 15 months...or my deep conviction that Mr. Bush will declare that we can never give up, no matter what the cost, because we are finally about to turn the corner and victory is, once again, imminent.
Nor do I choose to expound on my Universal Theory of Soldiering-which says, in a nutshell, that all soldiers, on every side, in every war in history, return home wounded.
Instead I’m trying to soak in as much memory as I can-because I’m deathly afraid the most important thing in the world one day will be to be able to remember him as he was before he left.
By now it’s dark, and fairly late, and his brother has to go, so it’s hugs all around.
We sit and talk for a bit longer, but he’s tired, and eventually falls asleep on the couch.
We still have a three-hour drive ahead of us, and so it’s well past time for us to go...so we say goodnight to the family members still awake. I take one last look at my godson...none of us cry, but I’m crying now.
The next morning, his Mom drove him to the airport, and now he’s gone.
If all goes well, he may be home for next Christmas.
Between now and then, there’s not much to do but hope for the best, make sure that we keep in touch as best we can, and try to stay as upbeat as circumstances allow.
So that’s our story for today: a beautiful summer day, good food, the company of those who matter most...all balanced against the backdrop of an occasion I hope never to repeat.
All in all, it was a hell of a barbecue, if you ask me.
Saturday, August 25, 2007
On Space Radar, Or, Real Life Ain’t Like “24”
Those of you who are regular readers will know that I like to bring you stories that are not part of the conversation you might generally see at this site (or anywhere else, for that matter); and I have a good one for you today.
We will discuss a military “spy satellite” program that has great potential for use by other customers-including law enforcement and intelligence agencies.
A program, frankly, that has a huge “Big Brother” potential.
A program that may end up costing $150 billion of today’s dollars-or more-over the next 25 years.
And with that introduction complete, let’s talk about “Space Radar”.
This will be a fairly long, but not very technical, description of a development and procurement program with twin goals: first, allowing the US military to obtain an image of any location on Earth and employ that information to support a variety of missions; and second, tracking individual vehicles from space so that they might be attacked, if needed.
At the end of the discussion you should have an understanding of what the system can-and can’t-do, and you should be able to do your own thinking regarding a rough cost-benefit analysis of the program.
The television show “24” is but one of many places where you can see the image of the Secret Government Intelligence Agency specialists hunched over computer screens, following the image of some vehicle that is driving in a distant desert in real time, and then-with the appropriate giant explosion-the Evil Terrorist and his Truck are destroyed by a pinpoint strike from a perfectly guided missile.
Another success for Our Side.
But as you might suspect, in real life it doesn’t always work out that way.
For a variety of reasons that we will flesh out as we go along, several parts of that scenario are very difficult to make happen-especially exactly when you need them all to happen.
For example, you can’t just “steer” satellites to where you want them to go as you might a car-the nature of how this type of satellite orbits the Earth determines where it will be at a particular time of day. Therefore just because you located the Evil Terrorist Truck doesn’t mean you can just call up an image, right this instant, from the closest satellite.
It’s also very difficult to maintain contact with the Evil Terrorist Truck for more than a few minutes as it’s driving along, and one factor causing this difficulty is related to the orbital parameters of the satellite. Other factors relate to the design characteristics of the radar itself, in what direction and over what terrain the target vehicle travels, potential confusion caused by any other vehicles located nearby, and the computational and computer processing difficulties inherent in this type of work.
The number of satellites you have, and their angle relative to what they are trying to observe, will also affect the ability to get the image.
If all that wasn’t enough, all the data generated has to be processed into a usable image, downloaded, and analyzed. Unfortunately, space today is essentially wired for DSL, and to make this program work we will also have to install a much bigger “pipe” for getting data down from space. As a result of today’s slower connections, the current reality often means waiting for data from a satellite before it can be acted upon-and that delay can be not just seconds or minutes, but sometimes even hours.
Is there any good news here?
Well, maybe.
Depending on how we design the system, we might be able to use Moore’s Law to leverage today’s investment by upgrading some components later.
There is a manufacturing development on the horizon that might substantially reduce the cost of producing the radar arrays themselves, and electronics do tend to get cheaper every year-but those are not the most expensive part of the satellite’s design. More on this later.
Before we get too far, a quick word about sourcing.
I will link liberally in the course of this discussion, but I owe a giant thanks to the Congressional Budget Office. In January of 2007 they released “Alternatives for Military Space Radar”, and the great majority of the information found here can be found, in greater detail, there.
So now, let’s talk generically about what these satellites do.
As we discussed above, these satellites are intended to perform two basic missions. In the first, they travel around the Earth taking pictures of strips of land as they pass overhead. This is what you might think of as a typical “spy satellite” mission-the comparison of images from some location to images of the same place, taken in a previous time. This is the raw material of how most folks might traditionally imagine the process of photographic analysis works. You can analyze, for example, if construction has occurred (are they building the reactor?) at a particular place, or the movement and composition of military forces (where is the enemy?) on a battlefield. There are other uses for this data as well, including military mapping.
Military mapping has two purposes: the production of maps for use by troops, sailors, or pilots, and the creation of the “maps” that are fed into the navigational systems of certain missiles. Once the map (actually a digital three-dimensional representation of a series of “waypoints”) is loaded, the missile can find its own way to the target.
The second mission is not so traditional: tracking the movements of individual vehicles from space as they move about on the Earth, in “near-real” time, so as to create the “actionable intelligence” we so often hear about. No acknowledged satellite performs this mission today for any country; aircraft such as the Predator, the Global Hawk, and JSTARS have handled it since the 1990s.
The biggest challenge for a designer tasked with making these two things happen…is that the “taking pictures” mission and the “actionable intelligence” missions fundamentally conflict with each other.
Here’s what I mean:
After going to the time, trouble, and expense of launching a satellite and putting the infrastructure in place to both keep it going and to use the data it creates, you need to ensure you collect the most data possible 24 hours a day. A satellite performing this type of mission travels around the Earth in an orbit that creates “strips” of image, one alongside the other as each orbit goes by, until a complete image of the Earth’s surface is created. If a satellite can complete one strip in 105 minutes (the orbit time for a satellite passing 1,000 kilometers above the Earth), 14 strips would cover the entire Earth’s surface in 24 hours.
To get a better idea of the “strip” concept, look at the cardboard roll in the center of your paper towels or toilet paper. That overlapping pattern, if applied to a more spherical shape (the Earth), is an excellent way to visualize what I’m talking about.
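That 105-minute figure falls straight out of Kepler’s third law, and the strips-per-day arithmetic is easy to check. Here is a quick sketch (my own, not the CBO’s), using a mean Earth radius and Earth’s standard gravitational parameter:

```python
import math

MU_EARTH = 398_600.4418  # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0        # mean Earth radius, km

def orbital_period_minutes(altitude_km: float) -> float:
    """Period of a circular orbit at the given altitude (Kepler's third law)."""
    semi_major_axis = R_EARTH + altitude_km
    return 2 * math.pi * math.sqrt(semi_major_axis ** 3 / MU_EARTH) / 60

period = orbital_period_minutes(1_000)   # the 1,000 km orbit from the text
strips_per_day = (24 * 60) / period      # one image "strip" per orbit

print(f"{period:.0f} minutes per orbit, {strips_per_day:.1f} strips per day")
```

Run it and you get almost exactly 105 minutes per orbit, and a bit under 14 strips per day-matching the numbers above.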
On the other hand, the longer you can stay over a target, the longer you can observe a particular object (the Evil Terrorist Truck, for example). The level of detail you can create from that image goes up as well. There are satellites that, if you were able to look up and see them, would always appear to be over the same spot on Earth. These are called “geosynchronous” satellites, and if you have DirecTV you benefit from such a satellite. This would theoretically create the most detail and longest time over target possible.
This is not a good design for a spy satellite, however, because you can only look at one place for that satellite’s entire lifetime.
This, as with much of life, is not 100% accurate-it is possible to move satellites to different orbits to some extent, but doing so will reduce the time they can be maintained in orbit, and a satellite that cannot maintain its orbit will eventually fall back to Earth. Since these satellites will likely be costing us more or less $1.5 billion each, keeping them up as long as possible matters.
One way this problem is resolved is by increasing the number of satellites, but this can still leave gaps in coverage. For example, 14 satellites that take 105 minutes to orbit would mean a satellite would be over any particular spot every 105 minutes…but that also means the Evil Terrorist Truck could have up to a 105 minute head start before we can get a camera on it, if a satellite had just passed by.
There are issues related to the satellite’s distance from the Earth’s surface as well. High altitude orbits (20,000 kilometers or higher) have advantages, especially in the amount of coverage at any given time, but they require exponentially larger amounts of power to operate, because the returning signal is so weak. (The CBO reports that doubling the range a signal travels makes it 16 times weaker.)
Medium Earth Orbit (5,000 to 15,000km) satellites have similar characteristics: large amounts of power and large radar antenna and solar arrays make design and construction technically challenging, but they offer large “footprints” of coverage.
Low Earth Orbit (500 to 1,000km) carries the risk of orbital decay-the satellite being gradually dragged back toward Earth by the thin upper atmosphere. This orbital altitude offers the smallest viewing area, but the strongest signal return potential. It is the likely choice of any future Space Radar system.
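The CBO’s “doubling the range makes the signal 16 times weaker” remark is the two-way radar relationship: the signal spreads out on the way to the target and again on the way back, so received power falls off as the fourth power of range. A quick sketch, using a hypothetical 1,000 km Low Earth Orbit as the reference point:

```python
def relative_return_power(range_km: float, reference_km: float = 1_000.0) -> float:
    """Echo strength relative to a reference range.

    Two-way radar propagation: received power falls off as 1/R^4,
    so doubling the range makes the echo 2^4 = 16 times weaker.
    """
    return (reference_km / range_km) ** 4

print(relative_return_power(2_000))       # doubled range -> 0.0625 (1/16th the power)
print(1 / relative_return_power(10_000))  # a Medium Earth Orbit echo is ~10,000x weaker
```

This is why the higher orbits demand those exponentially larger power supplies and antennas.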
You might expect that power would be the easiest problem to solve for these Low Earth Orbit satellites, but nothing’s ever that simple. And thus we need to take a moment to address the role of earthly eclipses on satellite batteries.
Because of the time spent in the Earth’s shadow every orbit, it is not possible to get enough power from the sun to operate any single satellite’s radar at full power at all times. This requires the satellites to store solar power in onboard batteries for when it’s needed-but the more often you charge and discharge batteries, the faster you wear them out. Changing batteries is not an option, which is why the proposed satellites have a reported lifespan of about 10 years. (A Low Earth Orbit satellite spends about 25% of its time in shadow.)
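That battery-wear argument is easy to quantify. Assuming one charge/discharge cycle per orbit (my assumption, not a figure from the CBO report), the lifetime cycle count looks like this:

```python
# One battery charge/discharge cycle per orbit: each pass through Earth's
# shadow drains the batteries, and each pass through sunlight recharges them.
ORBIT_MINUTES = 105        # the Low Earth Orbit period used in the text
DESIGN_LIFE_YEARS = 10     # the reported satellite lifespan

orbits_per_day = (24 * 60) / ORBIT_MINUTES
lifetime_cycles = orbits_per_day * 365 * DESIGN_LIFE_YEARS

print(f"{lifetime_cycles:,.0f} charge/discharge cycles over a 10-year life")
```

Roughly 50,000 cycles-which helps explain why battery endurance, rather than the electronics, is what caps the satellite’s useful life.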
The satellites we are talking about gather images through the use of radar. The common image of a radar installation is an exotic looking antenna of some sort rotating around at the top of a radar mast. The radar sends signals out from the antenna (the “aperture”), the system receives the signals as they return after bouncing off an object, and the time it takes for that to occur can be used as one input for a math problem (an algorithm) that is processed by a computer to create the “radar image” that the operator sees on a modern radar.
This is not, however, the only way a radar device can operate. The larger the aperture, the more detail the image can have. That’s because more signal sent out allows more signal to return, and that’s where detail and clarity come from. It’s also true that a larger aperture allows you to see more area at any one time.
It’s possible to electronically manipulate radar “transmit/receive modules” laid out on a giant flat non-moving panel (an “array”) to create a giant “synthetic” aperture-and “Synthetic Aperture Radar” (or SAR) will be used on the Space Radar satellites. This electronic manipulation capability allows for fancy tricks never imagined by the “old school” radar designers-for example, part of the radar can scan a large area with lower detail, while part of the radar scans a small area with very fine detail.
Two other handy characteristics of the design are the ability to “re-aim” any part of the array at any other area it’s pointed at instantaneously, and the ability to “re-view” several spots that the array is facing in a repeating pattern over and over (10 seconds on six locations every minute, as an example).
Giant phased-array panels of this sort are used in ground-based missile warning installations, and they are also used on US Navy ships (note the large flat panel just below the mast).
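To give a feel for why the “synthetic” aperture matters, here is a back-of-the-envelope sketch of SAR cross-range resolution. The wavelength and orbital speed below are my illustrative assumptions (the text names no radar band), and the formula is the standard approximation: resolution is roughly wavelength times range divided by twice the synthetic aperture length, where the synthetic aperture is the distance the satellite travels while it keeps the target illuminated (the “dwell”).

```python
WAVELENGTH_M = 0.03        # assumed X-band (~10 GHz); the text names no band
SLANT_RANGE_M = 1_000e3    # assumed ~1,000 km slant range to the target
ORBITAL_SPEED_MPS = 7_350  # typical Low Earth Orbit speed, m/s

def azimuth_resolution_m(dwell_seconds: float) -> float:
    """Cross-range resolution ~= wavelength * range / (2 * synthetic aperture)."""
    synthetic_aperture_m = ORBITAL_SPEED_MPS * dwell_seconds
    return WAVELENGTH_M * SLANT_RANGE_M / (2 * synthetic_aperture_m)

for dwell in (1, 5, 20):
    print(f"{dwell:2d} s dwell -> {azimuth_resolution_m(dwell):.2f} m resolution")
```

Under these assumptions, about twenty seconds of dwell gets you down to roughly a tenth of a meter of detail-which is why “staring” at one spot buys sharpness.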
This brings us to nomenclature.
I promise I’ll be gentle, but there are a few more terms you need to know for all of this to make better sense.
To help simplify what might otherwise seem a bit obtuse, I’m going to ask you to play a mental game with me. Imagine you’re sitting in the driver’s seat of a car.
In this example, the car will represent the satellite, and you, sitting in the driver’s seat, will represent the radar.
Now imagine that you are driving that car on the freeway (or motorway for my UK friends.)
The view out the front window would represent the “Satellite Ground Track”.
This can also be called the “Along-Track” or “Azimuth” direction.
The radar is pointing out the passenger window, and the window represents the aperture that we discussed above. (That direction is known as the “Cross-Track”,”Range”, or “Elevation” direction.)
As you might imagine, the size and shape of the window affect what can be seen. Picture a window two feet tall by four feet wide in size, with you looking out the window down at the ground. Now consider how that view would change if the window was four feet tall by two feet wide.
Here’s what else might affect your view:
--How much does your head have to turn to look out the side window?
That angle is called the “Azimuth Angle”.
--How much do you have to tilt your head down to see the spot you want to see on the ground? That angle is called the “Elevation Angle”.
--The reverse of that (the angle someone on the ground would have to look up to see you) is called the “Grazing Angle”.
--The area of ground that you can see looking out the side window would be the “Range-Swath Width”.
--You can’t look straight down through the car’s floor to see the road-and a satellite can’t either. The point directly below the satellite is called the “Nadir”, and the zone around it cannot be imaged.
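With the terms defined, a short sketch shows how a minimum grazing angle limits how far to either side of its ground track a satellite can usefully look. The 8-degree floor and the 1,000 km altitude come from elsewhere in this piece; the geometry is standard spherical-Earth trigonometry (law of sines in the Earth-center/satellite/target triangle):

```python
import math

R_EARTH_KM = 6_371.0

def usable_reach(altitude_km: float, min_grazing_deg: float = 8.0):
    """How far from its ground track a satellite can image, given a minimum
    grazing angle. Spherical-Earth law of sines gives:
        sin(look_angle) = (R_earth / orbit_radius) * cos(grazing_angle)
    """
    orbit_radius = R_EARTH_KM + altitude_km
    grazing = math.radians(min_grazing_deg)
    look = math.asin((R_EARTH_KM / orbit_radius) * math.cos(grazing))
    central = math.pi / 2 - grazing - look   # Earth-central angle to the target
    ground_km = R_EARTH_KM * central         # surface distance from the nadir point
    return math.degrees(look), ground_km

look_deg, reach_km = usable_reach(1_000)
print(f"max look angle ~{look_deg:.0f} degrees, usable to ~{reach_km:,.0f} km off track")
```

Roughly speaking, the satellite can look nearly 60 degrees off-nadir and still clear the 8-degree floor, reaching targets a couple of thousand kilometers to either side of its track.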
We’ve covered a lot so far, and I think with just a couple exceptions the terms we need to learn are now out of the way, so how about we take a short break?
Go walk away, let your head clear a bit, pour yourself a refreshing beverage, and come on back. We’ll pick up the discussion by looking at the factors that limit what this sort of system can accomplish.
Our break over, let’s continue our discussion of what keeps radar designers up late at night.
For starters, consider the challenges of tracking the Evil Terrorist Truck (or a mobile SCUD transporter erector launcher [TEL], for that matter). A satellite traveling more or less 17,000 miles an hour is trying to find a vehicle traveling maybe 30 miles an hour on the rotating planet hundreds of kilometers below. This vehicle might be on a road surrounded by other vehicles at varying speeds, or it might be in the mountains, where valleys can block your view. Patterns of vegetation add further confusion.
Designers resolve some of these problems by attempting to “teach” the computers that interpret the data how to filter out the “clutter”. Unfortunately, this is an exercise in guessing (if the vehicle is traveling on a road, the computer might attempt to extrapolate the location of the vehicle from the surrounding “clutter” based on information it has already received about the target’s previous activities, for example), and guessing leads to guessing and...
To make a long story short, the CBO estimates current “state of the art” technology could only maintain any single vehicle’s tracking for less than 10 minutes before the clutter overwhelms the system’s ability to correctly guess what’s what. The best results are achieved in a grid environment (a plowed farmer’s field, for example), where the vehicle moves in the Cross-Track direction. The more rapidly a target is traveling, the easier it is to locate. A vehicle moving exactly in the Along-Track direction cannot be detected.
Another means of resolving some of these problems is to employ many satellites. As we mentioned above, a single satellite passes over any given location only so often, so constantly observing a particular spot on Earth can require many satellites. In fact, if you have 14 strips that take 105 minutes to orbit, viewing one location every 9 minutes requires roughly 150 satellites. This would provide the ability to have near real-time images of any location on Earth nearly 90% of the time, as one of the 150 satellites is always somewhere nearby overhead, and with a large range-swath width you could theoretically achieve nearly overlapping coverage. (Because of the nadir zone below every satellite, it is nearly impossible to achieve 100% coverage.)
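The “roughly 150” figure can be reproduced with simple division-satellites per strip, times the number of strips. Rounding partial satellites up, this back-of-envelope sketch lands a bit above 150, in the same ballpark:

```python
import math

def satellites_for_revisit(orbit_minutes: float, revisit_minutes: float, strips: int) -> int:
    """Back-of-envelope count: enough satellites spaced along each strip's
    orbit that one passes every `revisit_minutes`, times the number of strips."""
    per_strip = math.ceil(orbit_minutes / revisit_minutes)
    return per_strip * strips

# The text's example: 105-minute orbits, 14 strips, a look every 9 minutes.
print(satellites_for_revisit(105, 9, 14))  # -> 168
```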
Of course, who can afford 150 satellites?
But there is another way: remember the paper towel roll example we discussed before? Imagine if the “seams” on that roll went in two directions-the seam you see running to the right, and a second seam, crossing over the first, going to the left. That would be an example of satellites on two “Orbital Planes”, and the constellations of satellites that are envisioned for Space Radar operate on one, two, or three orbital planes, depending on the alternative you’re talking about. (Picture two seams, not overlapping, going to the left, and one to the right on our cardboard roll, and you have three orbital planes.) If you picture satellites paralleling or crossing each other’s paths on these orbital planes, you can see new opportunities to cover ground more quickly with fewer “birds” in space.
In the end, however, the limitations of real world budgets will require compromises, and the first of those is to accept that you can’t be everywhere at every second. Instead, the goal of a constellation designer is to create a pattern of orbiting satellites that offers the most:
--Access (what percentage of the time can any particular location be observed)
--Response Time (how soon can you get images from any particular location)
--Coverage (how large an area can you view every hour)
--Mean Track Life (how long, on average, can you track a particular target)
Another challenge in providing coverage is to design a satellite that can view the largest area possible with the greatest detail required. This is a bit like looking through a pair of binoculars: the greater the enlargement, the smaller the area you can see through the lens. To do this a terrestrial SAR uses enormous arrays, but that is not possible in a space-based system because of the weight and size limits imposed by launch vehicles.
As a result, the systems being considered would have arrays covering 40 square meters (more or less 9 feet high by 50 feet long) or 100 square meters (about 12 feet high by 90 feet long). Essentially, you have to decide if you want a smaller number of very large radars, or a larger number of smaller radars.
Each has its tradeoffs: as we said earlier, larger satellites are extremely expensive to design, build, and launch (that giant-and therefore heavy-array has to be folded up for launch, which requires lots of extra engineering; it’s also more likely to flex in space, and thus must be designed with a heavier, more rigid structure, and the greater demands for power require heavier equipment than smaller designs), but a larger number of satellites means more expenses down the road for maintenance, data collection and processing, and required spare satellites (about 10% of satellites experience “catastrophic failure”).
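For rough comparison’s sake, here is what two constellation sizes imply for satellite-only cost, using the “more or less $1.5 billion each” figure from earlier and the ~10% catastrophic-failure rate as a spares allowance. This sketch ignores launch, ground stations, and operations, and it unrealistically assumes the big and small designs cost the same per unit:

```python
import math

UNIT_COST_BILLIONS = 1.5   # "more or less $1.5 billion each," per the text
SPARE_FRACTION = 0.10      # ~10% of satellites suffer catastrophic failure

def satellite_cost_billions(constellation_size: int) -> float:
    """Satellites plus spares, at a flat per-unit price."""
    spares = math.ceil(constellation_size * SPARE_FRACTION)
    return (constellation_size + spares) * UNIT_COST_BILLIONS

for size in (9, 21):  # the two constellation sizes discussed later in this piece
    print(f"{size:2d} satellites + spares -> ${satellite_cost_billions(size):.1f} billion")
```

Even on these generous assumptions, the hardware alone runs to tens of billions before a single image is downlinked.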
Most of the Earth’s interesting “targets”, the CBO reports, are located between 20 and 60 degrees north latitude, and this is where grazing angle comes into play.
Let’s try another mental game: imagine you are a satellite, and you are standing near a model car on the floor. The top-down view you would have of the car is much more informative than the one you would have if you were laying on the floor looking at the car. In reality, it is impossible to “look across the floor” using a satellite (known as a “zero grazing angle”) because of ground clutter (trees, buildings, hills…) creating obstructions and other such issues. (Eight degrees of grazing angle is considered the absolute minimum for any currently proposed design.)
Placing the satellite’s orbits so that targets in the 20 to 60 degree latitude range are well covered, therefore, is of paramount importance.
Now it’s time to more fully address data transfer.
Everyone who has switched from dialup to broadband understands what better connections can mean, and this system generates huge outputs of data.
The amount of data can be reduced by doing some of the computer processing on the satellite, but this means more power and weight, plus the concern that failure of an onboard computer might render an entire satellite useless. Instead, it is likely that raw data will be sent to ground stations for processing. This model also offers the advantage of allowing for easy upgrades of processing hardware and software, since all the equipment performing these tasks is located on Earth.
Of course, communication between a satellite and a ground station requires a “line of sight” view between the two, and that’s not always possible. This creates delays in getting data to those who need it. NASA has the same problem with their satellites, and they created a “backbone network” of linked communications satellites that orbit the earth today.
The idea is that one of the satellites in the backbone network is always connected to a ground station, and when data needs to be downlinked a satellite connects to the network and passes its data. At that point, much like the cell phone network, the backbone satellites pass the data amongst themselves until the ground station connected satellite is reached, at which point the downlink occurs.
Today the NASA system has six channels that can pass 800 Mb/second apiece-not much when you have many satellites trying to pass video and other data all at once. Any future system will require a radically improved “backbone” to support it; my uneducated guess is that this could add another 30-50% to any other cost estimates.
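To see why the “pipe” matters, consider how long a single radar take could sit in the queue. The 125 GB figure below is purely hypothetical-the text doesn’t say how big a raw take is-but the arithmetic shows the shape of the problem:

```python
CHANNEL_MBPS = 800   # per-channel rate of the existing NASA relay network

def downlink_minutes(data_gigabytes: float, channels: int = 1) -> float:
    """Minutes to push a radar take through the relay network."""
    megabits = data_gigabytes * 8_000   # 1 gigabyte = 8,000 megabits
    return megabits / (CHANNEL_MBPS * channels) / 60

# 125 GB is an invented, illustrative size for one raw radar take.
print(f"{downlink_minutes(125):.0f} minutes on one channel")
print(f"{downlink_minutes(125, channels=6):.1f} minutes using all six channels")
```

Twenty-odd minutes for one hypothetical take, on one channel-and that’s before dozens of satellites start competing for the same relay.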
And so, at long last, we come to the heart of the matter: just what should we expect from such a system, and how much should we expect it to cost?
To answer those questions, we need to identify just what sort of a system we are talking about. I will pick out two of the options the CBO discussed and focus on them, as I believe they are the options most likely to be adopted.
System 1 is a constellation of nine satellites on two orbital planes. The radars are the larger 100 square meter aperture design, and they are in Low Earth Orbit.
System 2 has 21 satellites with 40 square meter aperture. The satellites are also in Low Earth Orbit, and are on three orbital planes.
The next thing we need is a scenario. The CBO developed two: the ability of each system to track a single vehicle target in North Korea; and the mission of observing locations on the Korean Peninsula over a period of time.
They also made assumptions about two technologies that are not yet known to actually exist:
It is theoretically possible to send multiple frequencies from a single transmit/receive module simultaneously, and then separate the frequencies again when the return echoes are received. If this is possible, the area that could be imaged in any time period would be multiplied by the number of additional frequencies transmitted (three frequencies, triple the area observed, for example). No system currently is known to have this capability.
It is also hoped that a technique called STAP (space-time adaptive processing) will improve the performance of the proposed radar systems when tracking vehicle targets, through more effective “clutter removing” algorithms. Because the CBO cannot today know how effective this processing will be, they made a conservative and an aggressive assumption, which we will discuss as we go along.
First, let’s discuss the “picture taking” (SAR) mission.
You may recall that the more detail you require, the smaller an area you can image. More detail also lengthens response times, but in the case of our North Korean mission this is not too severe: both Systems 1 and 2 would be able to provide images at 0.1 meter resolution (about 4 inches, sufficient to determine if a crop is growing at an expected pace) in less than 15 minutes; and in the case of System 2, if you could settle for images of 0.7 meter resolution (a bit over two feet, and sufficient to tell the difference between a truck and a tank) you could obtain an image in about seven minutes anywhere in North Korea.
Coverage is the next metric to be examined.
To help give you a bit of perspective, consider these facts:
A Division is a massing of about 10,000 to 15,000 troops, and it typically operates in an area of about 1,000 square kilometers.
During the Persian Gulf War of 1991, the US Air Force created so-called “kill boxes” of about 2,500 square kilometers for the purposes of locating SCUD TELs.
The Korean Demilitarized Zone and an area extending about 80km into North Korea encompass about 11,000 square km; this equals about 10% of the total area of North Korea, which is in turn about half of the Korean Peninsula.
System 1 could survey the DMZ region, about five kill boxes, or the operating areas of 10 to 11 Divisions daily at 0.1 meter resolution, and the entire Korean Peninsula at 0.6 meter resolution. System 1 can survey roughly twice as much land at 0.1 meter resolution as System 2. The amount of land surveyed at 1 meter resolution is about 10 times that which can be imaged at 0.1 meter resolution.
System 2 can only cover about 60% of the area of System 1 at 0.1 meter resolution, but at 1 meter resolution this 21 satellite constellation can cover about 3 times as much as the 9 satellite System 1-over 1,000,000 square km daily. The amount of land surveyed at 1 meter resolution is about 20 times that which can be imaged at 0.1 meter resolution. System 2 could therefore provide five complete images of the entire Korean Peninsula daily at 1 meter resolution, compared to two images daily with System 1.
System 1 could image any target located between 40 and 60 degrees North or South latitude between 15% and 20% of the time at 0.1 meter resolution, and roughly 20% of the time at 1 meter resolution.
System 2 could image any target located between 40 and 60 degrees North or South latitude between 10% and 20% of the time at 0.1 meter resolution, and above 30% of the time at 1 meter resolution.
Keep in mind that grazing angle counts with all of this-an angle approaching zero yields no image (the “laying on the floor” example we discussed above)-which is why the best results are obtained in the latitude range we’ve discussed above.
The next item to assess is the effectiveness of the two Systems in tracking a target vehicle (officially known as Ground Moving Target Indication, or GMTI).
Before we can examine the numbers, a quick word about steering.
We don’t want to make our satellites change their orbits, because doing so burns fuel we will need later to maintain those orbits. However, we might choose to “spin” a satellite (turn it on its yaw axis, for the aerospace engineers still reading) in order to follow a single vehicle, and the CBO, as it did with STAP processing, made both a “fixed” and a “variable” yaw angle assumption.
There are disadvantages to varying the yaw angle: the fuel use, of course, but also the risk of flexing the radar array, which will drastically reduce the radar’s effectiveness.
With that said, here’s some numbers…
First, let’s examine access. It is estimated that one of the nine System 1 satellites would be available to track a vehicle traveling about 20 mph roughly 30% of the time, given a fixed yaw angle and a conservative assumption about the effectiveness of STAP processing. Because of the aperture size, there is no real improvement if we assume STAP processing is more effective. A variable yaw angle makes the system about 30% more effective.
The 21 satellites of System 2 would fare better, and if STAP processing lived up to the aggressive assumption, System 2 would be roughly twice as effective as System 1. The conservative STAP assumption, however, yields only a small improvement over System 1, whether we are comparing fixed or variable yaw designs. The most conservative prediction was 40% access, and the most optimistic suggests access could be maintained almost 70% of the time.
In any case, vehicles moving less than 2 meters per second (about 5 mph) are virtually invisible to any of the radars we are examining. If the aggressive STAP assumption is made, vehicles traveling over 4 meters per second are probably going to be tracked about 40% of the time for System 1, above 60% of the time for System 2.
Under the conservative assumption, neither System can be counted on to track a particular target more than 20% of the time if the target is traveling less than 6 meters per second (about 13 mph), and System 2 can’t hit the 20% number unless the vehicle is traveling 8 meters per second (about 18 mph).
How quickly can our Systems respond once the order is given to track a vehicle?
Assuming either a fixed or variable yaw angle, it would take one of the nine System 1 satellites more or less 15 minutes to respond to a target between 40 and 60 degrees North or South latitude. System 2’s 21 satellites could respond in less than 10 minutes, possibly as quickly as 5 if aggressive STAP assumptions are used.
That response time, however, is not possible for vehicles traveling less than 4 meters per second-System 2 requires up to 60 minutes to locate such a target, although System 1 can do it in about 10 minutes. If you called in a sighting of a high value target driving away, even a 10 minute response time may be too slow.
How long can we maintain tracking on a particular target?
Here’s some bad news. The CBO estimates that System 2 could only maintain a track on a target for a period of 1 to 4 minutes using the conservative STAP assumptions, and only 2 to 8 minutes using the aggressive assumptions. That means even if you were able to respond to the tasking to track a particular high value target, the target would likely be lost before any aircraft or other weapons system could be brought to bear on that target. Even the larger radars of System 1 would only be likely to hold the track, in the most optimistic case, for about 19 minutes, with 5 to 6 minutes being the more conservative estimate.
More bad news: the CBO estimates that if we want a 95% confidence that we can keep response time under 4 minutes for our hypothetical Korean Peninsula targeting, we would require somewhere between 35 and 50 satellites (depending on fixed or variable yaw angle).
So what would all this cost?
To deploy these Systems, we would first have to fund a development process to attempt to design the STAP software, then we would also have to fund certain other development work on the satellites themselves.
At that point we would be ready to purchase the actual satellites, the launch vehicles that put them in orbit, the ground equipment to support them, and we would be ready to train and equip the analysts, engineers and technicians we would need.
Our costs would include maintenance, the second set of satellites we would need to launch after 10 years or so, and the processing of the data sent to Earth by the Systems.
With all this in mind, it is estimated that System 1 might cost between $53.4 and $77.1 billion, and System 2 between $66.2 and $94.4 billion. (50 satellites would likely cost 2.5 times the System 2 estimate, or roughly $150 to $250 billion.)
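The 50 satellite extrapolation is simple scaling-my own arithmetic on the figures above, not an independent estimate:

```python
# Scaling the 21-satellite System 2 estimate up to a 50-satellite constellation.
system2_low, system2_high = 66.2, 94.4   # $ billions, from the estimates above
scale = 2.5                              # ~50/21, rounded up for the larger constellation

fifty_sat_low = system2_low * scale      # ~$166 billion
fifty_sat_high = system2_high * scale    # ~$236 billion
print(f"${fifty_sat_low:.0f}B to ${fifty_sat_high:.0f}B")
```

That lands comfortably inside the "roughly $150 to $250 billion" window quoted above.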
These estimates do not consider the “space network backbone”, which will add a lot more to any costs we are discussing here.
And now, at last, we have come to the end.
By now you should have a better understanding of what Space Radar can and can’t do, and what it’s likely to cost. From where I sit, I suspect we have five choices:
--Do nothing.
--Adopt System 1.
--Adopt System 2.
--Go for the 50 satellite option.
--Deploy for the SAR mission, but leave GMTI to the currently deployed Predator, Global Hawk, and JSTARS.
This was an especially long conversation, and I do appreciate that you would take the time to get to this point. I hope I made it worth your time, and I look forward to hearing some ideas about how we should proceed.
We will discuss a military “spy satellite” program that has great potential for use by other customers-including law enforcement and intelligence agencies.
A program, frankly, that has a huge “Big Brother” potential.
A program that may end up costing $150 billion of today’s dollars-or more-over the next 25 years.
And with that introduction complete, let’s talk about “Space Radar”.
This will be a fairly long, but not very technical, description of a development and procurement program with twin goals: first, to allow the US military to obtain an image of any location on Earth and employ that information to support a variety of missions; and second, to track individual vehicles from space so that they might be attacked, if needed.
At the end of the discussion you should have an understanding of what the system can-and can’t-do, and you should be able to do your own thinking regarding a rough cost-benefit analysis of the program.
“I'm really trying to help keep this revised assault on schedule!”
--CTU Analyst Chloe, on the television show “24”
The television show “24” is but one of many places where you can see the image of the Secret Government Intelligence Agency specialists hunched over computer screens, following the image of some vehicle that is driving in a distant desert in real time, and then-with the appropriate giant explosion-the Evil Terrorist and his Truck are destroyed by a pinpoint strike from a perfectly guided missile.
Another success for Our Side.
But as you might suspect, in real life it doesn’t always work out that way.
For a variety of reasons that we will flesh out as we go along, several parts of that scenario are very difficult to make happen, and especially exactly when you need them all to happen.
For example, you can’t just “steer” satellites to where you want them to go as you might a car-the nature of how this type of satellite orbits the Earth determines where it will be at a particular time of day. Therefore just because you located the Evil Terrorist Truck doesn’t mean you can just call up an image, right this instant, from the closest satellite.
It’s also very difficult to maintain contact with the Evil Terrorist Truck for more than a few minutes as it’s driving along, and one factor causing this difficulty is related to the orbital parameters of the satellite. Other factors relate to the design characteristics of the radar itself, in what direction and over what terrain the target vehicle travels, potential confusion caused by any other vehicles located nearby, and the computational and computer processing difficulties inherent in this type of work.
The number of satellites you have, and their angle relative to what they are trying to observe, will also affect the ability to get the image.
If all that wasn’t enough, all the data generated has to be processed into a usable image, downloaded, and analyzed. Unfortunately, space today is essentially wired for DSL, and to make this program work we will also have to install a much bigger “pipe” for getting data down from space. Because of today’s slower connections, the current reality often means waiting for data from a satellite before it can be acted upon-and that delay can be not just seconds or minutes, but sometimes even hours.
Is there any good news here?
Well, maybe.
Depending on how we design, we might be able to use Moore’s Law to leverage today’s investment by upgrading some components later.
There is a manufacturing development on the horizon that might substantially reduce the cost of producing the radar arrays themselves, and electronics do tend to get cheaper every year-but those are not the most expensive part of the satellite’s design. More on this later.
Before we get too far, a quick word about sourcing.
I will link liberally in the course of this discussion, but I owe a giant thanks to the Congressional Budget Office. In January of 2007 they released “Alternatives for Military Space Radar”, and the great majority of the information found here can be found, in greater detail, there.
So now, let’s talk generically about what these satellites do.
As we discussed above, these satellites are intended to perform two basic missions. In the first, they travel around the Earth taking pictures of strips of land as they pass overhead. This is what you might think of as a typical “spy satellite” mission-comparing images of some location to images of the same place taken at an earlier time. This is the raw material of photographic analysis as most folks traditionally imagine it. You can determine, for example, whether construction has occurred (are they building the reactor?) at a particular place, or the movement and composition of military forces (where is the enemy?) on a battlefield. There are other uses for this data as well, including military mapping.
Military mapping has two purposes: the production of maps for use by troops, sailors, or pilots, and the creation of the “maps” that are fed into the navigational systems of certain missiles. Once the map (actually a digital three-dimensional representation of a series of “waypoints”) is loaded, the missile can find its own way to the target.
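If you're curious what a "map of waypoints" might look like in software, here's a toy sketch-the waypoint values and the flat-earth heading math are purely my own illustrations, not how any actual missile guidance system works:

```python
import math

# Hypothetical 3-D waypoints: (km east, km north, km altitude)
waypoints = [(0.0, 0.0, 0.5), (40.0, 10.0, 0.3), (80.0, 35.0, 0.1)]

def leg_heading_deg(a, b):
    """Compass-style heading from waypoint a to waypoint b (flat-earth approximation)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

# The "missile" simply flies each leg in turn, adjusting heading and altitude:
for a, b in zip(waypoints, waypoints[1:]):
    print(f"fly heading {leg_heading_deg(a, b):.1f} deg, descend to {b[2]} km")
```

A real navigational load is vastly more elaborate, but the idea-a stored route the weapon follows point to point-is the same.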
The second mission is not so traditional: tracking the movements of individual vehicles on Earth from space, in “near-real” time, so as to create the “actionable intelligence” we so often hear about. No acknowledged satellite performs this mission today for any country. Aircraft-based systems such as the Predator, Global Hawk, and JSTARS have handled this mission since the 1990s.
The biggest challenge for a designer tasked with making these two things happen…is that the “taking pictures” mission and the “actionable intelligence” missions fundamentally conflict with each other.
Here’s what I mean:
After going to the time, trouble, and expense of launching a satellite and putting the infrastructure in place to both keep it going and to use the data it creates, you need to ensure you collect the most data possible 24 hours a day. A satellite performing this type of mission travels around the Earth in an orbit that creates “strips” of image, one alongside the other as each orbit goes by, until a complete image of the Earth’s surface is created. If a satellite can complete one strip in 105 minutes (the orbit time for a satellite passing 1,000 kilometers above the Earth), 14 strips would cover the entire Earth’s surface in 24 hours.
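If you want to check my "105 minutes" arithmetic, the standard circular-orbit period formula will do it-this is textbook physics, not anything from the CBO report:

```python
import math

MU_EARTH = 398_600.0   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371.0      # km, mean Earth radius

def orbital_period_minutes(altitude_km):
    """Period of a circular orbit: T = 2*pi*sqrt(a^3 / mu)."""
    a = R_EARTH + altitude_km
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

period = orbital_period_minutes(1_000.0)   # ~105 minutes
strips_per_day = 24 * 60 / period          # ~13.7, i.e. about 14 strips
print(f"{period:.0f} min per orbit, ~{strips_per_day:.1f} orbits per day")
```

The formula spits out just about 105 minutes at 1,000 km, and a bit under 14 orbits per day-matching the numbers above.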
To get a better idea of the “strip” concept, look at the cardboard roll in the center of your paper towels or toilet paper. That overlapping pattern, if applied to a more spherical shape (the Earth), is an excellent way to visualize what I’m talking about.
On the other hand, the longer you can stay over a target, the longer you can observe a particular object (the Evil Terrorist Truck, for example), and the level of detail you can create from that image goes up as well. There are satellites that, if you were able to look up and see them, would always appear to be over the same spot on Earth. These are called “geostationary” satellites, and if you have DirecTV you benefit from one. This arrangement would theoretically give the most detail and longest time over target possible.
This is not a good design for a spy satellite, however, because you can only look at one place for that satellite’s entire lifetime.
This, as with much of life, is not 100% accurate-it is possible to move satellites to different orbits to some extent, but doing so will reduce the time they can be maintained in orbit, and a satellite that cannot maintain its orbit will eventually fall back to Earth. Since these satellites will likely be costing us more or less $1.5 billion each, keeping them up as long as possible matters.
One way this problem is resolved is by increasing the number of satellites, but this can still leave gaps in coverage. For example, with satellites that take 105 minutes to orbit, a particular spot might see only one pass every 105 minutes…which means the Evil Terrorist Truck could have up to a 105 minute head start before we can get a camera on it, if a satellite had just passed by.
There are issues related to the satellite’s distance from the Earth’s surface as well. High altitude orbits (20,000 kilometers or higher) have advantages, especially in the amount of coverage at any given time, but they require far larger amounts of power to operate, because the returning signal is so weak. (The CBO reports that doubling the range a signal travels makes it 16 times weaker.)
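That "16 times weaker" figure is just the radar range equation at work: the signal spreads out on the trip to the target and again on the trip back, so the returning echo falls off as the fourth power of range. A quick check:

```python
# Radar return power scales as 1/R^4 (two-way spreading loss).
def relative_return_weakening(range_ratio):
    """How many times weaker the echo gets when range is multiplied by range_ratio."""
    return range_ratio ** 4

print(relative_return_weakening(2))   # doubling the range: 16 times weaker
```

Triple the range and the echo is 81 times weaker-which is why high orbits demand so much transmitter power.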
Medium Earth Orbit (5,000 to 15,000km) satellites have similar characteristics: large amounts of power and large radar antenna and solar arrays make design and construction technically challenging, but they offer large “footprints” of coverage.
Low Earth Orbit (500 to 1,000 km) carries the risk of orbital decay-atmospheric drag gradually pulling the satellite back down to Earth. This orbital altitude offers the smallest viewing area, but the strongest signal return. It is the likely choice for any future Space Radar system.
You might expect that power would be the easiest problem to solve for these Low Earth Orbit satellites, but nothing’s ever that simple. And thus we need to take a moment to address the role of earthly eclipses on satellite batteries.
Because of the time spent in the Earth’s shadow every orbit, it is not possible to get enough power from the sun to operate any single satellite’s radar at full power at all times. The satellites must therefore store solar power in onboard batteries for when it’s needed-but the more often you charge and discharge batteries, the faster you wear them out. Changing batteries is not an option, which is why the proposed satellites have a reported lifespan of about 10 years. (A Low Earth Orbit satellite spends about 25% of its time in shadow.)
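The charge-and-discharge arithmetic is sobering all by itself. Here's a rough sketch (one battery cycle per orbit is my simplifying assumption, not a published spec):

```python
ORBITS_PER_DAY = 14   # one eclipse -- and one charge/discharge cycle -- per orbit
YEARS = 10

cycles = ORBITS_PER_DAY * 365 * YEARS
print(f"~{cycles:,} battery cycles over a {YEARS}-year life")   # ~51,100 cycles
```

Fifty-odd thousand deep cycles is a brutal demand to place on any battery chemistry, which goes a long way toward explaining the 10-year lifespan.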
The satellites we are talking about gather images through the use of radar. The common image of a radar installation is an exotic looking antenna of some sort rotating around at the top of a radar mast. The radar sends signals out from the antenna (the “aperture”), the system receives the signals as they return after bouncing off an object, and the time it takes for that to occur can be used as one input for a math problem (an algorithm) that is processed by a computer to create the “radar image” that the operator sees on a modern radar.
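The "math problem" at the heart of basic ranging is simple enough to sketch: the echo travels out and back at the speed of light, so range is half the round-trip time multiplied by c:

```python
C_KM_PER_S = 299_792.458   # speed of light, km/s

def range_from_echo_km(round_trip_seconds):
    """Target range computed from a radar echo's round-trip time."""
    return C_KM_PER_S * round_trip_seconds / 2.0

# A satellite about 1,000 km up sees its own echo return in roughly 6.7 milliseconds:
print(f"{range_from_echo_km(0.00667):.0f} km")
```

Real imaging radars do far more than this one equation, but every "radar picture" starts with timing echoes this way.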
This is not, however, the only way a radar can operate. The larger the aperture, the more detail the image can have: more signal sent out means more signal returned, and that’s where detail and clarity come from. A larger aperture also allows you to see more area at any one time.
It’s possible to electronically steer radar “transmit/receive modules” laid out on a giant flat non-moving panel (an “array”), and by combining that electronic steering with the satellite’s own motion along its orbit the system can synthesize the effect of a much larger antenna-this “Synthetic Aperture Radar” (or SAR) will be used on the Space Radar satellites. The electronic steering allows for fancy tricks never imagined by the “old school” radar designers-for example, part of the radar can scan a large area with lower detail, while part of the radar scans a small area with very fine detail.
Two other handy characteristics of the design are the ability to instantaneously “re-aim” any part of the array at any other area within its field of view, and the ability to “re-view” several spots in a repeating pattern over and over (10 seconds on each of six locations every minute, as an example).
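For the curious, the "electronic manipulation" amounts to shifting the signal's phase slightly at each transmit/receive module so the wavefronts from all modules line up in the desired direction. A minimal sketch of the phase math (the element spacing and wavelength are illustrative values of my own choosing, not the actual Space Radar design):

```python
import math

def steering_phase_deg(element_index, spacing_m, wavelength_m, steer_angle_deg):
    """Phase shift applied at each array element to steer the beam off boresight."""
    phase_rad = (2 * math.pi * element_index * spacing_m
                 * math.sin(math.radians(steer_angle_deg)) / wavelength_m)
    return math.degrees(phase_rad) % 360.0

# Steer 20 degrees off boresight at a 3 cm wavelength, half-wavelength element spacing:
for i in range(4):
    print(f"element {i}: {steering_phase_deg(i, 0.015, 0.03, 20.0):.1f} deg")
```

Change the per-element phase and the beam moves-no motors, no rotating mast, and it happens essentially instantly, which is what makes those "fancy tricks" possible.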
Giant phased-array installations of this same flat-panel type are used in ground-based missile warning radars, and they are also used on US Navy ships (note the large flat panel just below the mast).
This brings us to nomenclature.
I promise I’ll be gentle, but there are a few more terms you need to know for all of this to make better sense.
To help simplify what might otherwise seem a bit obtuse, I’m going to ask you to play a mental game with me. Imagine you’re sitting in the driver’s seat of a car.
In this example, the car will represent the satellite, and you, sitting in the driver’s seat, will represent the radar.
Now imagine that you are driving that car on the freeway (or motorway for my UK friends.)
The view out the front window would represent the “Satellite Ground Track”.
This can also be called the “Along-Track” or “Azimuth” direction.
The radar is pointing out the passenger window, and the window represents the aperture that we discussed above. (That direction is known as the “Cross-Track”, “Range”, or “Elevation” direction.)
As you might imagine, the size and shape of the window affects what can be seen. Picture a window two feet tall by four feet wide, with you looking out the window down at the ground. Now consider how that view would change if the window was four feet tall by two feet wide.
Here’s what else might affect your view:
--How much does your head have to turn to look out the side window?
That angle is called the “Azimuth Angle”.
--How much do you have to tilt your head down to see the spot you want to see on the ground? That angle is called the “Elevation Angle”.
--The reverse of that (the angle someone on the ground would have to look up to see you) is called the “Grazing Angle”.
--The area of ground that you can see looking out the side window would be the area of the “Range-Swath Width”.
--You can’t look straight down through the car’s floor to see the road-and a satellite can’t either. This blind zone immediately below the satellite surrounds the “Nadir” (the point directly beneath the satellite).
We’ve covered a lot so far, and I think with just a couple exceptions the terms we need to learn are now out of the way, so how about we take a short break?
Go walk away, let your head clear a bit, pour yourself a refreshing beverage, and come on back. We’ll pick up the discussion by looking at the factors that limit what this sort of system can accomplish.
Our break over, let’s continue our discussion of what keeps radar designers up late at night.
For starters, consider the challenges of tracking the Evil Terrorist Truck (or mobile SCUD transporter erector launcher [TEL], for that matter). A satellite traveling more or less 17,000 miles an hour is trying to find a vehicle traveling maybe 30 miles an hour, hundreds of kilometers below, on a planet that is itself hurtling around the Sun at roughly 67,000 miles an hour. This vehicle might be on a road surrounded by other vehicles at varying speeds, or it might be in the mountains, where valleys can block your view. Patterns of vegetation are also confusing.
Designers resolve some of these problems by attempting to “teach” the computers that interpret the data how to filter out the “clutter”. Unfortunately, this is an exercise in guessing (if the vehicle is traveling on a road, the computer might attempt to extrapolate the location of the vehicle from the surrounding “clutter” based on information it has already received about the target’s previous activities, for example), and guessing leads to guessing and...
To make a long story short, the CBO estimates current “state of the art” technology could only maintain any single vehicle’s tracking for less than 10 minutes before the clutter overwhelms the system’s ability to correctly guess what’s what. The best results are achieved in a grid environment (a plowed farmer’s field, for example), where the vehicle moves in the Cross-Track direction. The more rapidly a target is traveling, the easier it is to locate. A vehicle moving exactly in the Along-Track direction cannot be detected.
Another means of resolving some of these problems is to employ many satellites. As we mentioned above, a single satellite passes over any given location only so often, so constantly observing a particular spot on Earth can require many satellites. In fact, with 14 strips that take 105 minutes to orbit, viewing one location every 9 minutes requires roughly 150 satellites. This would provide near real-time images of any location on Earth nearly 90% of the time, as one of the 150 satellites is always somewhere nearby overhead, and with a large range-swath width you could theoretically achieve nearly overlapping coverage. (Because of the nadir below every satellite, it is nearly impossible to achieve 100% coverage.)
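That "roughly 150" figure is a back-of-envelope product of two numbers-how many ground-track strips it takes to tile the Earth daily, and how many satellites must share each strip to hit the revisit target. My own sketch of the arithmetic:

```python
ORBIT_MINUTES = 105
REVISIT_MINUTES = 9
DAY_MINUTES = 24 * 60

ground_tracks = DAY_MINUTES / ORBIT_MINUTES        # ~13.7 strips to tile the Earth daily
sats_per_track = ORBIT_MINUTES / REVISIT_MINUTES   # ~11.7 satellites sharing each strip

total = ground_tracks * sats_per_track             # note this collapses to DAY_MINUTES / REVISIT_MINUTES
print(round(total))   # 160 -- in the neighborhood of "roughly 150"
```

Notice the orbit period cancels out of the product: the crude estimate is just minutes-per-day divided by the revisit time, which is why the answer barely moves if you tweak the altitude.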
Of course, who can afford 150 satellites?
But there is another way: remember the paper towel roll example we discussed before? Imagine if the “seams” on that roll went in two directions-the seam you see running to the right, and a second seam, crossing over the first, going to the left. That would be an example of satellites on two “Orbital Planes”, and the constellations of satellites that are envisioned for Space Radar operate on one, two, or three orbital planes, depending on the alternative you’re talking about. (Picture two seams, not overlapping, going to the left, and one to the right on our cardboard roll, and you have three orbital planes.) If you picture satellites paralleling or crossing each other’s paths on these orbital planes, you can see new opportunities to cover ground more quickly with fewer “birds” in space.
In the end, however, the limitations of real world budgets will require compromises, and the first of those is to accept that you can’t be everywhere at every second. Instead, the goal of a constellation designer is to create a pattern of orbiting satellites that offers the most:
--Access (what percentage of the time can any particular location be observed)
--Response Time (how soon can you get images from any particular location)
--Coverage (how large an area can you view every hour)
--Mean Track Life (how long, on average, can you track a particular target)
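If you were keeping score on candidate constellations, those four measures would be the columns of your spreadsheet. A trivial bookkeeping sketch (the numbers are placeholders of my own invention, not CBO estimates):

```python
from dataclasses import dataclass

@dataclass
class ConstellationScore:
    access_pct: float        # % of time a given location can be observed
    response_min: float      # minutes until first image of a tasked location
    coverage_km2_hr: float   # area viewable per hour
    mean_track_min: float    # average minutes a moving target can be held

# Placeholder values for two hypothetical designs:
design_a = ConstellationScore(access_pct=30, response_min=15,
                              coverage_km2_hr=50_000, mean_track_min=6)
design_b = ConstellationScore(access_pct=40, response_min=10,
                              coverage_km2_hr=30_000, mean_track_min=3)
print(design_a.access_pct < design_b.access_pct)   # no single design wins every column
```

The point of the sketch: the four metrics pull against each other, and the designer's job is picking which columns matter most.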
Another challenge in providing coverage is to design a satellite that can view the largest area possible at the greatest detail required. This is a bit like looking through a pair of binoculars: the greater the magnification, the smaller the area you can see through the lens. A terrestrial SAR handles this with enormous arrays, but that is not possible in a space-based system because of the weight and size limits imposed by launch vehicles.
As a result, the systems being considered would have arrays covering 40 square meters (more or less 9 feet high by 50 feet long) or 100 square meters (about 75 feet long by 12 feet high). Essentially, you have to decide if you want a smaller number of very large radars, or a larger number of smaller radars.
Each choice has its tradeoffs. As we said earlier, larger satellites are extremely expensive to design, build, and launch: the giant-and therefore heavy-array has to be folded up for launch, which requires lots of extra engineering; it’s also more likely to flex in space, and so must be built with a heavier, more rigid structure; and its greater power demands require heavier equipment than smaller designs. A larger number of smaller satellites, on the other hand, means more expense down the road for maintenance, data collection and processing, and required spare satellites (about 10% of satellites experience “catastrophic failure”).
Most of the Earth’s interesting “targets”, the CBO reports, are located between 20 and 60 degrees north latitude, and this is where grazing angle comes into play.
Let’s try another mental game: imagine you are a satellite, and you are standing near a model car on the floor. The top-down view you would have of the car is much more informative than the one you would have if you were lying on the floor looking at the car. In reality, it is impossible to “look across the floor” using a satellite (known as a “zero grazing angle”) because of ground clutter (trees, buildings, hills…) creating obstructions and other such issues. (Eight degrees of grazing angle is considered the absolute minimum for any currently proposed design.)
Placing the satellite’s orbits so that targets in the 20 to 60 degree latitude range are well covered, therefore, is of paramount importance.
Now it’s time to more fully address data transfer.
Everyone who has switched from dialup to broadband understands what better connections can mean, and this system generates huge volumes of data.
The amount of data can be reduced by doing some of the computer processing on the satellite, but this means more power and weight, plus the concern that failure of an onboard computer might render an entire satellite useless. Instead, it is likely that raw data will be sent to ground stations for processing. This model also offers the advantage of allowing for easy upgrades of processing hardware and software, since all the equipment performing these tasks is located on Earth.
Of course, communication between a satellite and a ground station requires a “line of sight” view between the two, and that’s not always possible. This creates delays in getting data to those who need it. NASA has the same problem with its satellites, and it created a “backbone network” of linked communications satellites that orbit the Earth today.
The idea is that one of the satellites in the backbone network is always connected to a ground station, and when data needs to be downlinked a satellite connects to the network and passes its data. At that point, much like the cell phone network, the backbone satellites pass the data amongst themselves until the ground station connected satellite is reached, at which point the downlink occurs.
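The "pass the data amongst themselves" step is really a routing problem: find the shortest chain of relay hops to whichever backbone satellite currently sees a ground station. A minimal sketch, with a link graph I've invented purely for illustration:

```python
from collections import deque

# Hypothetical backbone: which satellites can currently see each other,
# and which one has a ground station in view.
links = {"SAT1": ["SAT2", "SAT3"], "SAT2": ["SAT1", "SAT4"],
         "SAT3": ["SAT1", "SAT4"], "SAT4": ["SAT2", "SAT3"]}
ground_connected = "SAT4"

def relay_path(start):
    """Breadth-first search for the shortest hop sequence to the downlink satellite."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == ground_connected:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(relay_path("SAT1"))   # ['SAT1', 'SAT2', 'SAT4']
```

The real network has to worry about moving geometry, bandwidth, and queuing, but the hop-by-hop relay idea is exactly this.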
Today the NASA system has six channels that can pass 800 Mb/second-which sounds like a lot, but isn’t when you have many satellites trying to pass video and other data all at once. Any future system will require a radically improved “backbone” to support it; and my uneducated guess is that this could represent another 30-50% added to any other cost estimates.
And so, at long last, we come to the heart of the matter: just what should we expect from such a system, and how much should we expect it to cost?
To answer those questions, we need to identify just what sort of a system we are talking about. I will pick out two of the options the CBO discussed and focus on them, as I believe they are the options most likely to be adopted.
System 1 is a constellation of nine satellites on two orbital planes. The radars are the larger 100 square meter aperture design, and they are in Low Earth Orbit.
System 2 has 21 satellites with 40 square meter aperture. The satellites are also in Low Earth Orbit, and are on three orbital planes.
The next thing we need is a scenario. The CBO developed two: the ability of each system to track a single vehicle target in North Korea; and the mission of observing locations on the Korean Peninsula over a period of time.
They also made assumptions about two technologies that are not yet known to actually exist:
It is theoretically possible to send multiple frequencies from a single transmit/receive module simultaneously, and then separate the frequencies again when the return echoes are received. If this is possible, the area that could be imaged in any time period would be multiplied by the number of additional frequencies transmitted (three frequencies, triple the area observed, for example). No system currently is known to have this capability.
It is also hoped that a process called “STAP processing” will improve the performance of the proposed radar systems when tracking vehicle targets through more effective “clutter removing” algorithms. Because the CBO cannot today know how effective this processing will be, they made a conservative and an aggressive assumption, which we will discuss as we go along.
First, let’s discuss the “picture taking” (SAR) mission.
You may recall that the more detail you require, the smaller an area you can image. More detail also lengthens response times, but in the case of our North Korean mission this is not too severe: both Systems 1 and 2 would be able to provide images at .01 meter (3”, sufficient to determine if a crop is growing at an expected pace) resolution in less than 15 minutes; and in the case of System 2, if you could settle for images of .07 meter resolution (not quite two feet, and sufficient to tell the difference between a truck and a tank) you could obtain an image in about seven minutes anywhere in North Korea.
Coverage is the next metric to be examined.
To help give you a bit of perspective, consider these facts:
A Division is a force of about 10,000 to 15,000 troops, typically operating in an area of about 1,000 square kilometers.
During the Persian Gulf War of 1991, the US Air Force created so-called “kill boxes” of about 2,500 square kilometers for the purposes of locating SCUD TELs.
The Korean Demilitarized Zone and an area extending about 80 km into North Korea encompass about 11,000 square km; this area equals about 10% of the total area of North Korea, which is in turn about half of the Korean Peninsula.
System 1 could survey the DMZ region, about five kill boxes, or the operating areas of 10 to 11 Divisions daily at .01 meter resolution, and the entire Korean Peninsula at .06 meter resolution. System 1 can survey roughly twice as much land at .01 meter resolution as System 2. The amount of land surveyed at 1 meter resolution is about 10 times that which can be imaged at .01 meter resolution.
System 2 can only cover about 60% of the area of System 1 at .01 meter resolution, but at 1 meter resolution this 21 satellite constellation can cover about 3 times as much as the 9 satellite System 1-over 1,000,000 square km daily. The amount of land surveyed at 1 meter resolution is about 20 times that which can be imaged at .01 meter resolution. System 2 could therefore provide five complete images of the entire Korean Peninsula daily at 1 meter resolution, compared to two images daily with System 1.
System 1 could image any target located between 40 and 60 degrees North or South latitude between 15% and 20% of the time at .01 meter image detail, and roughly 20% at 1 meter resolution.
System 2 could image any target located between 40 and 60 degrees North or South latitude between 10% and 20% of the time at .01 meter image detail, and above 30% at 1 meter resolution.
Keep in mind that grazing angle counts with all of this-an angle approaching zero yields no usable image (the “lying on the floor” example we discussed above), which is why the best results are obtained in the latitude range mentioned earlier.
The next item to assess is the effectiveness of the two Systems in tracking a target vehicle (officially known as Ground Moving Target Indication, or GMTI).
Before we can examine the numbers, a quick word about steering.
We don’t want our satellites to change their orbits, because that would use fuel we will need later to maintain those orbits. However, we might choose to “spin” a satellite (turn it on its yaw axis, for the aerospace engineers still reading) in order to follow a single vehicle, and the CBO, as it did with STAP processing, made both a “fixed” and a “variable” yaw-angle assumption.
There are disadvantages to varying the yaw angle: the fuel use, of course, but also the risk of flexing the radar array, which will drastically reduce the radar’s effectiveness.
With that said, here are some numbers…
First, let’s examine access. It is estimated that one of the nine System 1 satellites would be available to track a vehicle traveling about 20 mph 30% of the time, assuming a fixed yaw angle and a conservative view of the effectiveness of STAP processing. Because of the aperture size, there is no real improvement if we assume STAP processing is more effective. A variable yaw angle makes the system about 30% more effective.
The 21 satellites of System 2 would fare better, and if STAP processing lived up to the aggressive assumption, System 2 would be roughly twice as effective as System 1. However, the conservative STAP assumption yields only a small improvement over System 1, whether we compare fixed or variable yaw designs. The most conservative prediction was 40% access, and the most optimistic suggests access could be maintained almost 70% of the time.
In any case, vehicles moving less than 2 meters per second (about 5 mph) are virtually invisible to any of the radars we are examining. If the aggressive STAP assumption is made, vehicles traveling over 4 meters per second will probably be tracked about 40% of the time by System 1, and above 60% of the time by System 2.
Under the conservative assumption, neither System can be counted on to track a particular target more than 20% of the time if the target is traveling less than 6 meters per second (about 15 mph), and System 2 can’t hit the 20% number unless the vehicle is traveling 8 meters per second (about 20 mph).
How quickly can our Systems respond once the order is given to track a vehicle?
Assuming either a fixed or variable yaw angle, it would take one of the nine System 1 satellites more or less 15 minutes to respond to a target between 40 and 60 degrees North or South latitude. System 2’s 21 satellites could respond in less than 10 minutes, possibly as quickly as 5 if aggressive STAP assumptions are used.
That response time, however, is not possible for vehicles traveling less than 4 meters per second: System 2 requires up to 60 minutes to locate such a target, although System 1 can do it in about 10 minutes. If you called in a sighting of a high-value target driving away, even a 10-minute response time may be too slow.
How long can we maintain tracking on a particular target?
Here’s some bad news. The CBO estimates that System 2 could only maintain a track on a target for 1 to 4 minutes using the conservative STAP assumptions, and only 2 to 8 minutes using the aggressive ones. That means even if you were able to respond to the tasking to track a particular high-value target, the target would likely be lost before any aircraft or other weapons system could be brought to bear. Even the larger radars of System 1 would likely hold the track, in the most optimistic case, for only about 19 minutes, with 5 to 6 minutes being the more conservative estimate.
More bad news: the CBO estimates that if we want a 95% confidence that we can keep response time under 4 minutes for our hypothetical Korean Peninsula targeting we would require somewhere between 35 and 50 satellites (depending on fixed or variable yaw angle).
So what would all this cost?
To deploy these Systems, we would first have to fund a development process to attempt to design the STAP software, then we would also have to fund certain other development work on the satellites themselves.
At that point we would be ready to purchase the actual satellites, the launch vehicles that put them in orbit, the ground equipment to support them, and we would be ready to train and equip the analysts, engineers and technicians we would need.
Our costs would include maintenance, the second set of satellites we would need to launch after 10 years or so, and the processing of the data sent to Earth by the Systems.
With all this in mind, it is estimated that System 1 might cost between $53.4 and $77.1 billion. System 2 will likely cost between $66.2 and $94.4 billion. (50 satellites would likely cost 2.5 times the System 2 estimate, or roughly $150 to $250 billion.)
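As a quick arithmetic check on that last parenthetical (a sketch only; the 2.5x factor and the System 2 range are the figures quoted above):

```python
# Scaling the System 2 estimate by 2.5x for the 35-50 satellite option.
system2_low, system2_high = 66.2, 94.4   # System 2 estimate, $ billions
scale = 2.5                              # rough factor quoted in the text

option_low = system2_low * scale
option_high = system2_high * scale
print(f"50-satellite option: ${option_low:.1f}B to ${option_high:.1f}B")
```

That lands at roughly $165 to $236 billion, which squares with the “roughly $150 to $250 billion” figure above once you allow for the rounding.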
These estimates do not consider the “space network backbone”, which will add a lot more to any costs we are discussing here.
And now, at last, we have come to the end.
By now you should have a better understanding of what Space Radar can and can’t do, and what it’s likely to cost. From where I sit, I suspect we have five choices:
--Do nothing.
--Adopt System 1.
--Adopt System 2.
--Go for the 50 satellite option.
--Deploy for the SAR mission, but leave GMTI to the currently deployed Predator, Global Hawk, and JSTARS.
This was an especially long conversation, and I do appreciate that you would take the time to get to this point. I hope I made it worth your time, and I look forward to hearing some ideas about how we should proceed.
Monday, August 13, 2007
On Term Limits, Or, Rove Needs A New Puppet
Seasoned audiences of presidential scandal know that there's only one certainty ahead: the timing of a Karl Rove resignation. As always in this genre, the knight takes the fall at exactly that moment when it's essential to protect the king.
--Frank Rich, via the New York Times, July 17th, 2005
Karl Rove, in a move Sir Lancelot would be proud of, has announced that he will “leave the building” August 31st.
Does this mean Mr. Bush’s recent “colonoscopy” was merely a cover story for a procedure more closely resembling the removal of a hand from a puppet?
Don’t bet on it.
Consider it instead an evolutionary step in Rove’s career-and a chance to shut off some of the controversy created by his use of Republican National Committee email accounts.
Here’s what I mean:
Mr. Bush and Rove have been essentially “joined at the hip” since Texas days, but that’s now over, because of term limits.
Mr. Bush has reached the end of his political career (unless Laura decides to run), but Rove has no reason to retire-after all, why give up the power he worked so hard to get?
So where is Rove to go? He either has to “hitch his wagon” to a new Presidential contender, take over the Republican Congressional political command, or become an independent voice, much as Gingrich is today.
Despite today’s announcements that Rove would like to help get Congressional candidates elected, my suspicion is that he wants another Presidential candidate.
After all, who wants the irritation of trying to control the National Republican Congressional (NRCC) or Senatorial (NRSC) Committees? Those jobs have too much of a “frogs in a wheelbarrow” aspect to them, and why would a “unitary executive” guy take up with legislators?
And then there’s the money. Why would the current “commanders” of the NRCC or NRSC let Rove take over the distribution of all those PAC donations? Those donations are today one of the major levers the Party uses to enforce discipline, and giving control to Rove would severely upset the Congressional apple cart.
On the other hand, a Presidential candidate-especially in this year’s Republican field-is much more easily drawn into the Rovian orbit. The message management and coordination issues are simpler as well-and the infighting is more readily controlled than in a Congressional environment. Not to mention the advantages of having to massage only one ego, rather than 535.
A reasonable person can imagine that Rove will be raising and spending most of the money for such a candidate; and by extension, directing that candidate’s message and image. This would seem much more attractive for a manager than the Congressional environment we just examined, and my guess is Rove feels the same way. Rove’s likely calculus reveals an additional advantage: the President’s political coordinator would be a likely head of the RNC, if Rove chose to accept the gig.
To me the real question is: has Rove selected his new pony-and has that pony yet made it to the starting gate?
It would be possible to lay out any number of scenarios, but here’s a quick five:
Rove rescues McCain.
Rove and the RNC have anointed Romney.
Giuliani has made an offer that has caught Rove’s interest.
Rove is the “missing link” that Thompson has been waiting for.
Gingrich thinks Rove can get him “over the hump”.
I’ll leave all this to the community to evaluate, so that we might take a minute to discuss Rove and his Blackberry.
Although Mr. Bush reports he never uses a computer and does not send emails, Rove Blackberries like crazy. An unknown number of those messages were related to White House business, and some were of a political nature. It is now known that many of those communications were made through an account operated by the Republican National Committee, and as we mentioned above there is considerable controversy as to the applicability of the Presidential Records Act and the Hatch Act over the contents of those accounts.
But much of that controversy disappears if Rove is no longer a White House employee. The legal issues remain, of course, but going forward the damage can be minimized, especially if the RNC servers have “accidentally” or “as a routine maintenance procedure” had the account records removed (“we’re shocked, shocked to discover the servers with those records had a catastrophic failure…we’re so disappointed we can’t prove Rove’s innocence”…). If this occurs, you can expect any investigation will be stonewalled beyond November 2008, with the hope no conclusion is issued in time to hurt the RNC, the Candidate, or Rove.
My impression is that this process is already underway.
After the Scooter Libby “pardon”, Rove can’t be too afraid of criminal sanctions, which suggests, to steal from Shakespeare, that “the stonewall’s the thing”.
Finally, a quick word about that “joined at the hip” thing: there’s no reason why Rove has to end his relationship with Mr. Bush’s Administration-all he has to do is place a “consigliere” in the White House to pass the messages back and forth, and the connection stays in place. And Mr. Bush still doesn’t have to use email.
I do expect an effort to create a new extension of “Executive Privilege” applicable to “non-employee advisors” of the Executive Branch.
All that being said, it’s time to get to the summary:
I suspect Rove feels he can simultaneously run a campaign and an Administration, and my guess is he’ll be trying to do just that-much to Cheney’s disappointment.
I further suspect a lot of today’s news is also based on a desire to contain the controversy over Rove’s emails-both their contents and actual existence-and that can be easily stonewalled from outside the White House.
And finally, I suspect that Rove will continue to be Mr. Bush’s close and trusted advisor-and that his freedom to act will be enhanced based on his new status.
Saturday, August 11, 2007
On Evolution, Or, Political Robots Compared
Some of you may recall my previous exclusive reporting of the crash (and successful restoration) of the Hilary® Mark II Mod. 7 Robotic Candidate Device just before the start of Tuesday’s Democratic debate.
Because of the community interest, we will today expand upon that discussion by walking through some of the differences in design philosophy between the two major categories of political robots: the Robotican™ .1 Series and .3 Series of Devices and the Democrobot Device Program robots.
The reader should also expect technical comments on the execution of those philosophies, and some consideration of the day-to-day funding, operating, and maintenance issues that impact the entire Robotic Political Device establishment.
Let’s start with design philosophy, shall we?
Those familiar with ballistic missile warhead design and targeting tactics will recall that there were two basic operational concepts for ballistic missile attack during the Cold War: the Soviet Union tended to rely on a single, large warhead-for example, an attack on Detroit might feature a single 30-megaton device fused for ground burst in the center of the downtown area. On the other hand, a US target package delivered on St. Petersburg might rely on a MIRVed delivery of perhaps 9 payloads of 500 kiloton size, dispersed over the city and fused for detonation at 1000 feet or so to accomplish a similar task.
As a result of the design decisions that went into the initial weapons, each side has specific operating needs that drive the manner in which it directs and disperses its forces. The same is true for the Parties and their Robotic Programs.
The most important design differentiation-and the one that affects every element of campaign operations and strategic management-was the decision, starting with the Robotican™ v1994.1 and .3 series, to eliminate the autonomous speech and thought capabilities that were embedded in the Robotican™ OS kernel.
The Recent History of Democrobots
Perhaps because of their long-term Congressional successes, the Democrobot design parameters instead called for a fully autonomous Device-an adaptation of the “fire and forget” philosophy that defines many weapon systems.
Of course, what works well in Congressional elections might not work so well when dealing with a “big-tent” constituency in a Presidential race. Candidates such as a BarneyFrank or a BoxerBot were great in District and Statewide races, but they just could not get traction nationally-and this was evident to everyone with the failures of the DukakadroidCSE and the Mondaleotron.
Even the CarterSys Bicentennial Edition and The Al Gore Robotic Candidate Device Company’s Al Gore (so dull his model name was...Al Gore) were unable to achieve re-election. These models represented early experiments in applying “liberalism filters” and a new “political correctness” adapter, and it was clear that more work was required.
Democrobot designers were therefore extremely excited, to say the least, when the first pre-release testing data was returned for the two “BillBot” prototypes.
BillBot 1.0 and BillBot 1.0A both featured the two most important additions to a Democrobot Presidential Candidate Device in at least two decades: a new class of liberalism controls called a “center stabilizer service” that kept the ‘bot from veering too far to the “left”; and an improved “revenue agitator”, designed to improve fundraising performance.
Both designs also included the newly updated WonkWare Enterprise application (now standard on all Democrobot Devices) with the Rhodes SCH snap-in installed.
The biggest difference between the 1.0 and the 1.0A was the “Carvilleator” add-on card used in the 1.0A, and it was this revolutionary device that created such excitement for Democrobot designers. This “everyman regulator” (reverse engineered from the Robotican™ RoboReagan v1980.3 after a nearly ten year effort) caused such behavior as fast food cravings, the ability to engage crowds, and a “Q” score higher than any Democrobot Candidate since KennedyCorp.’s legendary Model 109. When paired with the experimental “binary ‘bot” Hilary® Mark I, the system had extraordinary electoral and fundraising success.
Despite the success of the design, the BillBot 1.0A M3.1002 (the actual Device that was elected) had certain idiosyncrasies which led to the decisions to remove the Carvilleator and ultimately to rework the Al Gore design for 2000. All of this rethinking (and the Al Gore’s vote gathering success) eventually led to the introduction of the Boston Robotic Group’s McKerry in 2004 as an offshoot of the Al Gore architecture.
After 2004 it was evident that, despite the risks, a BillBot-class design would have to be reintroduced, at least in the primaries, so that the BillBot and Al Gore environments could be compared side-by-side in front of live voters.
As a result, the two front-running systems in the Democratic national polling today are the BillBot Group’s ObamaBot 2 (build M1.2245) and the most current incarnation of The Al Gore Robotic Candidate Device Company’s line (and the redesign of the aforementioned 1990’s “binary ‘bot”): the Hilary® Mark II Mod. 7.
That Al Gore lineage, as we well know, has created a vastly improved but highly “moderated” Candidate. After early testing of the Mods. 5 and 6 she was determined to need an update-and the Mod. 7 included personality emulator and Iraq War response software “tweaks”. Recent evaluations have suggested that this new code, while highly successful, may have contributed to the crash we discussed in the first installment of this story.
Meanwhile, ObamaBot engineers also appear to be in a “field development” process, with alterations to the aggression controls causing recent anomalies when trying to run the %root%\foreign_affairs.com program’s command set, among other issues. Because of the efficiency of the improved revenue agitator ’07 software and chipset, however, sufficient funding exists to ensure the program can continue to exist as long as is necessary. Downtuning of the reintegrated Carvilleator may occur in the future as well if the operators determine an increased “gravitas output” is required.
Robotican™ Devices considered
Earlier we had touched upon the decision to remove the autonomous speech/thought system from the Robotican™ operating system, and it is now time to explore this topic further.
It was well known that the speech/thought system had led to many duds and misfires of Robotican™ Congressional Candidate ‘bots (and a Democrobot majority in several Congresses) throughout the previous several election cycles leading up to 1994; and it was hoped that by adopting a “push” software/firmware update system and a new “talking point service” for the series a more consistent “ideological display” output could be achieved.
As a result, all Robotican™ Congressional Candidate Devices (the so-called .1 Series) include the capability to receive “push” updates through a simple SATCOM modem. Interoperability exists with the Robotican™ Murdoch series of Media Robots, meaning all Republican robotic assets can be software updated with the same “talking points” simultaneously.
The GWBmatic3000 v2000.3 Device Series, developed for the 2000 Presidential election cycle, is a variation on the highly reliable and robust Barbara6000 (there were motherboard issues, and a redesign was needed for Presidential Service); and features an additional “remote user” capability that allows for live operation of the Device by a connected operator.
The evolution of that capability is in itself amazing.
Robotican™ engineers were tasked with developing an on-board chipset incorporating the capabilities of Harris Corp.’s AN/VRC-103(V)2 radio system (a unit often carried as a backpack by US Army troops today) to give the remote operator 100% reliable communication with the GWBmatic 3000 in any conceivable threat environment, particularly debates, with absolute system availability guaranteed by radiation-isolated, redundant data busses emanating from redundant neck antennae.
To prevent Democratic political operators from “listening in” on the transmissions, the SINCGARS and Havequick I/II frequency hopping systems were included, along with Fascinator 128kbps support-unique among acknowledged military COMSEC systems. (Extensive encryption key storage, to support a variety of operating modes, is also provided for.) Additionally, should voice communications fail, the GWBmatic3000 can track and locate up to 12 SATCOM networks and download data via SATCOM modem, using the SATCOM Situational Awareness software.
As a final fail-safe, a “remote emergency boot/operate” receiver mode exists which can exercise Device control using a second program that runs above the Robotican™ Operating System (similar to the relationship between DOS and the Windows 9x OS). This allows a remote operator to call and execute programs, services, and functions of the GWBmatic 3000 even if the OS has “frozen” or cannot otherwise be operated. Unfortunately, it is not possible to transfer immediately between program modes, a failure that has been noted under certain emergency conditions.
A last minute design compromise was the external emergency battery pack used on the GWBmatic 3000 v2004.3 Debate Edition, which is not present on the GWBmatic 3000 v2004.3 COMMAND/POTUS Edition. By hot-swapping the battery just before the debate it was guaranteed that all system functions would be maintained even in the event of onboard primary and secondary power system failures.
And the future?
Besides the issues we have just addressed, a major problem has been the Uncanny Valley factor, which is driving research on both sides.
Additionally, as we all know, the massive effort on both sides continues into finding the solution to the Holy Grail of political device execution-a reliable, effective “Truth Enhancer” that will allow for the more effective dissemination of political concepts and positions.
So that’s our story for the weekend-a tale of two groups of researchers, maintenance technicians, and customer-operators working hard to create ‘bots that inspire, motivate...and keep you in a constant state of suspended disbelief until the morning after Election Day.
After which, the cycle begins again.
Labels:
2008 elections,
Democrobot,
Humor,
Robotican
Tuesday, August 7, 2007
A Fake Consultant Exclusive: Hilary® Crash Technical Analysis
In a startling development, members of the Hillary Clinton campaign admitted Hilary® crashed just before the Democratic debate today in Chicago.
The incident occurred in her van on the way to the event.
Fortunately, technicians in the van were able to reboot her, and she was able to complete the debate under her own power, with no one in attendance the wiser.
This reporter was able to speak with a technician on the scene this evening. She reports that the current thinking among the engineering staff that operates Hilary® on a daily basis is that the source of the anomaly must be either a failed memory card or an issue related to newly installed software.
As you may know, the current Hilary® Mark II, Mod. 7 device uses a memory card array that consists of custom RAM modules soldered onto a larger card. Even though the solder is applied by dipping, there’s a concern that impurities in the solder might be damaging the RAM and causing memory dropouts.
The Hilary® Mark II, Mod. 8 is expected to use an all-etched custom memory array that will be more stable, which should prevent some of the slow responses that were noticeable throughout the first few months of the campaign.
The software issue, however, is more vexing.
As many of you know, the Mods. 5 and 6 had serious problems in their personality emulator and Iraq War response software. Rather than trying to fix each problem with its own Mod., the Mod. 7 attempted to integrate both fixes at the same time, and that is the other potential problem that may have occurred today.
“Hilary® tries to look as human as possible,” my engineer source told me this evening, “and her personality emulator has been learning very well over the past two weeks.” She further reported that “bug-eye” incidents were reduced by over 75% during Hilary’s® last 100 operating hours, thanks to a better Iraq response algorithm.
“If we can just nail down the source of the crash, we’ll be 90% ready for the election, and if the Mod. 8 works as expected that will be achieved.”
There were two cautionary notes to the conversation, however, and a warning for the weeks ahead:
“The Mark II Hilarys® have not been stress-tested as thoroughly as the earlier release, and we do not know if she will hold up over the next few months…the only way to know is to field-test her. After the Mod. 8 is released we plan a destructive testing process on the Mod. 7 to better understand the potential operating parameters.” She also told me: “A second goal of stress-testing will be experimental: we are still trying to determine what causes her unnatural ‘cheerleader’ response, and finding the source of that suboptimal reaction is our highest priority.”
Sunday, August 5, 2007
On Exploration, Or, Be An Astronomer In The Privacy Of Your Own Home
I love to explore.
I’m the kind of person who loves to take a new road, or hear a new song, or consider a point of view different than my own.
It leads me to read a lot of foreign blogs, as well, and that’s how we get to today’s story.
I’m a member of the Blogpower community of bloggers, the majority of whom are located in countries of the British Commonwealth, including Onyx Stone, who blogs from the UK; and it was in the course of visiting his blog that I found a story of do-it-yourself space exploration that deserves to be retold.
The Sloan Digital Sky Survey, in their own words, “...is the most ambitious astronomical survey ever undertaken.” The goal is to gather image data which will be used to create a 3-D map of about a quarter of the sky. It’s estimated that when complete the map will include about a million galaxies and quasars.
Sloan reports the first phase (SDSS-I) of the project, completed in 2005, gathered data on over 200 million objects, including 675,000 galaxies and 90,000 quasars. Phase two (SDSS-II, but I bet you saw that coming), underway until 2008, has expanded into a three-part project with 25 institutional partners, including the University of Chicago, the United States Naval Observatory, a Japanese consortium, Oxford University, and the University of Washington.
The primary instrument used to gather the images is a 2.5-meter telescope located in the community of (irony alert!) Sunspot, New Mexico, at the Apache Point Observatory, which is operated by the Astrophysical Research Consortium in cooperation with New Mexico State University.
Here’s the big question that was being pursued:
If the Big Bang theory is correct, you would expect matter to be more or less evenly distributed throughout the known universe, but that’s not the case. Matter is clustered together in groups, with large “void” areas that contain no visible matter. So why didn’t matter distribute itself evenly?
What else might be learned from the survey?
Here’s a stunner:
Our own galaxy, the Milky Way, appears to be colliding with another, smaller galaxy located “near” the constellation Virgo, and that galaxy may be one of several that appear to be in the process of being absorbed into our own.
Here’s another:
The Milky Way may be surrounded by a ring of stars left over from a previous encounter, much as Saturn is surrounded by rings. (Here’s an example of another galaxy surrounded by a ring.)
More?
The vast majority of the mass of the Milky Way is invisible to our current detection tools and is currently inferred to exist as a “halo” of an unknown type of matter that surrounds our galaxy, and perhaps other formations within the galaxy as well. This is the current incarnation of “dark matter” thinking, and getting to the bottom of this mystery is a sort of astrophysical Holy Grail. It is hoped that SDSS-I and SDSS-II will provide more data that can lead to better answers to this most fundamental “who are we and what surrounds us?” question.
Now here’s the good part.
Uncle Astronomer wants you.
It turns out humans are better than computers at identifying types of galaxies based on the existing digital images, and there are so many images to classify that outside help is needed.
A special website, GalaxyZoo.org, has been created to facilitate the process.
How does it work?
I’m glad you asked.
After registering with GalaxyZoo, you are given a tutorial that teaches you to distinguish a spiral from an elliptical galaxy, along with other basic facts. You’re then given a simple test (I promise, no need for test anxiety on this one), and after that, you start evaluating images of actual unclassified galaxies, stars, and other objects.
I’ve done it myself, and it can take anywhere from a few seconds to a few minutes to classify a galaxy; and before you know it you’ve done 10, and then 100.
Anyone reading this a parent?
This is a great project for a kid who has an interest in the sky, and all of the content is extremely “G rated”.
It’s this process, multiplied across all of you, that astronomers hope will allow them to make faster progress through this mountain of data (10 terabytes so far, released in annual increments; the 6th data release, DR6, has its home page here).
So that’s our Sunday “day off from politics” story: advanced astronomy is now a living room endeavor, and you, and the kids, are invited to help make science history.
Did you ever want to be an amateur astronomer, and have fun doing it?
Head over to GalaxyZoo and give it a shot.
Thursday, August 2, 2007
On Smarter Voting, Or, It's Time To Toss Those Old, Cold, Freedom Fries
France.
The United States.
Such a complex relationship we have.
Of course, who doesn’t gratefully recall Lafayette and the Statue of Liberty?
And who doesn’t love a glass of wine?
“How can you govern a country which has 246 varieties of cheese?”
--Charles De Gaulle, in Ernest Mignon's “Les Mots du General”
On the other hand, here we were, all ready to go invade Iraq; and here was France, in the United Nations, trying to stop us from going ahead.
“It felt comfortable to be in a country where it is so simple to make people happy...Everything is on such a clear financial basis in France. It is the simplest country to live in. No one makes things complicated by becoming your friend for any obscure reason. If you want people to like you, you have only to spend a little money.”
--Ernest Hemingway, The Sun Also Rises
In the years before the invasion, Iraq had spent a little money in France, which meant in some quarters, France was the enemy.
And by God, if France was the enemy, America had to act.
“It's not ludicrous. We in the nation's capital need to send a signal. We need to tell our troops we're with you. This is symbolic. Walter knows that. I know it. But I think it's a wonderful gesture and our veterans are very proud. If somebody thinks it's ludicrous, they can go privately eat their French toast. We're going to eat our freedom toast.”
--Congressman Bob Ney (R-OH), on CNN
And with that, to the rescue came Congressman (and, amazingly, a fluent French speaker with a French family lineage) Bob Ney (Inmate 28882-016-OH) and Congressman Walter Jones (R-NC).
The Plan?
To save Cubby’s, and by extension, America, through Freedom Toast and Freedom Fries.
Apparently the thinking was that if we showed that dastardly France how America supports its troops, we would be propelled to victory in Iraq because our troops would know we were so strongly behind them.
Now you might say to yourselves: “selves, there’s just something fundamentally wrong with people who think like that”...and you’d be right.
Of course, Congressman Ney went and got himself caught up in the Abramoff scandal, but what about Walter Jones?
“I, Walter Jones, promise the voters of eastern North Carolina and the 3rd District I will not flip-flop”...
--The Virginian-Pilot newspaper, September 1st, 1994
Ex-Democratic candidate Jones made that promise to his brand-new Republican constituency just two years after losing the Primary election, for the Congressional seat his (Democratic) father had held, to Eva Clayton, who went on to win the General and serve as a Member until 2003.
After making the promise, he won his seat in North Carolina’s 3rd District in 1994 on the back of the Republican Revolution. Mother Jones quoted the Congressional Quarterly’s “Politics in America”, who described him as: "one of the unreconstructed ‘true believers' of the GOP Class of 1994."
His record suggests that he does fit that description (lots of defense, supported “reforming” bankruptcy, all “flag burning” bills, cutting student aid), more or less, and The John Birch Society (click on the “Freedom Index” link in the middle of the page) seems to like him.
Of course, it’s hard not to flip-flop sometimes.
Consider the books in your school libraries.
Now most conservatives would say local school districts should control what books local school districts buy, and that the districts should set their own policies about those purchases. Most conservatives would never support “one-size-fits-all” Federal mandates for our local schools.
And Congressman Jones agrees.
Unless, of course, he doesn’t like the contents of the books.
That’s why he introduced legislation to require all school districts in the US to follow “certain procedures” mandated by the Federal Government before they could purchase books. Basically he wanted every school district to create “parent review councils” whether the districts wanted them or not, or he would see that the district’s Federal funding would be cut off.
So after flip-flopping from Democrat to Republican, and flip-flopping on “States’ Rights” issues, it should be no surprise that Jones has come to see what a fool he was to buy the Administration’s war stories hook, line, and sinker; and to the great consternation of the Rs, he’s flip-flopped again.
But at least it’s in the right direction.
At the moment, his Congressional website seems to report he opposes the surge, but not withdrawal, despite his informing the News & Observer that the US went to war “with no justification” (is this guy a gymnast, or what?).
He has been given credit for his change of mind, but that’s not good enough for me.
I want Members of Congress to be independent thinkers who see this stuff in advance, not cheerleaders who figure out they missed the boat 3500 US lives too late.
I want a Member with the flexibility of mind to know in advance that they will make mistakes and will have to change position...who is, frankly, smart enough to know that they will have to flip-flop from time to time, if only because change inevitably demands it.
If I were Walter Jones, and I had to go to sleep every night thinking of those thousands of dead people I led the charge to kill, and I had to think about my role in their deaths every time I wrote letters to family members on Saturday afternoons; I think my own soul might tell me that the time had come to leave Congress.
That I had done enough damage.
That I, who had been so blind, might now see that I am not qualified to serve in a position that requires vision and good judgment.
That blind obedience to ideology, maintained until the death toll, and the waste, and the lies behind it all shock even me, is no measure of a good public servant.
And if I were voting in North Carolina’s Third District, I would never vote for Walter Jones again.
There’s no reason to support a candidate who broke his flip-flop promise so many times he probably spells it plif-polf; and next time we’ll talk about a better choice: retired Marine, and former Airport Director of Basra International Airport in Iraq, Marshall Adame.
I don’t know about you, but all that writing makes me hungry.
Think I’ll go have a few French Fries.
Author's Note: Although I'm in no way associated with, or compensated by his campaign, it should be fairly evident I'm an Adame supporter.
Labels:
Election '08,
Freedom Fries,
Marshall Adame,
NC-03,
Walter Jones