We will discuss a military “spy satellite” program that has great potential for use by other customers-including law enforcement and intelligence agencies.
A program, frankly, that has a huge “Big Brother” potential.
A program that may end up costing $150 billion of today’s dollars-or more-over the next 25 years.
And with that introduction complete, let’s talk about “Space Radar”.
This will be a fairly long, but not very technical, description of a development and procurement program with twin goals: first, to allow the US military to obtain an image of any location on Earth and then employ that information to support a variety of missions; and second, to track individual vehicles from space so that they might be attacked, if needed.
At the end of the discussion you should have an understanding of what the system can-and can’t-do, and you should be able to do your own thinking regarding a rough cost-benefit analysis of the program.
''I'm really trying to help keep this revised assault on schedule!''
--CTU Analyst Chloe, on the television show “24”
The television show “24” is but one of many places where you can see the image of the Secret Government Intelligence Agency specialists hunched over computer screens, following the image of some vehicle that is driving in a distant desert in real time, and then-with the appropriate giant explosion-the Evil Terrorist and his Truck are destroyed by a pinpoint strike from a perfectly guided missile.
Another success for Our Side.
But as you might suspect, in real life it doesn’t always work out that way.
For a variety of reasons that we will flesh out as we go along, several parts of that scenario are very difficult to make happen, especially at exactly the moment you need them all to happen.
For example, you can’t just “steer” satellites to where you want them to go as you might a car-the nature of how this type of satellite orbits the Earth determines where it will be at a particular time of day. Therefore just because you located the Evil Terrorist Truck doesn’t mean you can just call up an image, right this instant, from the closest satellite.
It’s also very difficult to maintain contact with the Evil Terrorist Truck for more than a few minutes as it’s driving along, and one factor causing this difficulty is related to the orbital parameters of the satellite. Other factors relate to the design characteristics of the radar itself, in what direction and over what terrain the target vehicle travels, potential confusion caused by any other vehicles located nearby, and the computational and computer processing difficulties inherent in this type of work.
The number of satellites you have, and their angle relative to what they are trying to observe will also affect the ability to get the image.
If all that wasn’t enough, all the data generated has to be processed into a useable image, downloaded, and analyzed. Unfortunately, space today is essentially wired for DSL, and to make this program work we will also have to install a much bigger “pipe” for getting data down from space. As a result of today’s slower speed connections, the current reality often means waiting for data from a satellite before it can be acted upon-and that delay can be not just seconds or minutes, but sometimes even hours.
Is there any good news here?
Well, maybe.
Depending on how we design the system, we might be able to use Moore’s Law to leverage today’s investment by upgrading some components later.
There is a manufacturing development on the horizon that might substantially reduce the cost of producing the radar arrays themselves, and electronics do tend to get cheaper every year-but those are not the most expensive part of the satellite’s design. More on this later.
Before we get too far, a quick word about sourcing.
I will link liberally in the course of this discussion, but I owe a giant thanks to the Congressional Budget Office. In January of 2007 they released “Alternatives for Military Space Radar”, and the great majority of the information found here can be found, in greater detail, there.
So now, let’s talk generically about what these satellites do.
As we discussed above, these satellites are intended to perform two basic missions. In the first, they travel around the Earth taking pictures of strips of land as they pass overhead. This is what you might think of as the typical “spy satellite” mission: comparing images of a location against images of the same place taken at an earlier time. This is the raw material of photographic analysis as most folks traditionally imagine it. You can determine, for example, whether construction has occurred (are they building the reactor?) at a particular place, or track the movement and composition of military forces (where is the enemy?) on a battlefield. There are other uses for this data as well, including military mapping.
Military mapping has two purposes: the production of maps for use by troops, sailors, or pilots, and the creation of the “maps” that are fed into the navigational systems of certain missiles. Once the map (actually a digital three-dimensional representation of a series of “waypoints”) is loaded, the missile can find its own way to the target.
The second mission is not so traditional: the goal of tracking the movements of individual vehicles from space as they move about on the Earth, in “near-real” time, so as to create the “actionable intelligence” we so often hear about. No acknowledged satellite performs this mission today for any country; aircraft-based systems such as the Predator, Global Hawk, and JSTARS have handled it since the 1990s.
The biggest challenge for a designer tasked with making these two things happen…is that the “taking pictures” mission and the “actionable intelligence” missions fundamentally conflict with each other.
Here’s what I mean:
After going to the time, trouble, and expense of launching a satellite and putting the infrastructure in place to both keep it going and to use the data it creates, you need to ensure you collect the most data possible 24 hours a day. A satellite performing this type of mission travels around the Earth in an orbit that creates “strips” of image, one alongside the other as each orbit goes by, until a complete image of the Earth’s surface is created. If a satellite can complete one strip in 105 minutes (the orbit time for a satellite passing 1,000 kilometers above the Earth), 14 strips would cover the entire Earth’s surface in 24 hours.
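If you want to check that 105-minute figure yourself, here is a minimal sketch using the standard formula for the period of a circular orbit. This is illustrative arithmetic with textbook Earth values, not mission design:

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_KM = 6371.0

def orbital_period_minutes(altitude_km: float) -> float:
    """Period of a circular orbit: T = 2*pi*sqrt(a^3 / mu)."""
    a_m = (EARTH_RADIUS_KM + altitude_km) * 1000.0  # orbit radius in meters
    return 2 * math.pi * math.sqrt(a_m**3 / MU_EARTH) / 60.0

period = orbital_period_minutes(1000.0)
print(f"Period at 1,000 km altitude: {period:.0f} minutes")   # ~105 minutes
print(f"Orbits ('strips') per day: {24 * 60 / period:.1f}")   # ~14
```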
To get a better idea of the “strip” concept, look at the cardboard roll in the center of your paper towels or toilet paper. That overlapping pattern, if applied to a more spherical shape (the Earth), is an excellent way to visualize what I’m talking about.
On the other hand, the longer you can stay over a target, the longer you can observe a particular object (the Evil Terrorist Truck, for example). The level of detail you can create from that image goes up as well. There are satellites that, if you were able to look up and see them, would always appear to be over the same spot on Earth. These are called “geosynchronous” satellites, and if you have DirecTV you benefit from such a satellite. This would theoretically create the most detail and longest time over target possible.
This is not a good design for a spy satellite, however, because you can only look at one place for that satellite’s entire lifetime.
This, as with much of life, is not 100% accurate-it is possible to move satellites to different orbits to some extent, but doing so will reduce the time they can be maintained in orbit, and a satellite that cannot maintain its orbit will eventually fall back to Earth. Since these satellites will likely be costing us more or less $1.5 billion each, keeping them up as long as possible matters.
One way this problem is resolved is by increasing the number of satellites, but even that can leave gaps in coverage. For example, with 14 satellites (one for each 105-minute strip), a satellite would pass over any particular spot roughly every 105 minutes…but that also means the Evil Terrorist Truck could have up to a 105 minute head start before we can get a camera on it, if a satellite had just passed by.
There are issues related to the satellite’s distance from the Earth’s surface as well. High altitude orbits (20,000 kilometers or higher) have advantages, especially in the amount of coverage available at any given time, but they require dramatically more power to operate, because the returning signal is so much weaker. (The CBO reports that doubling the range a signal travels makes the return 16 times weaker.)
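That “16 times weaker” figure is not arbitrary: radar echoes fall off with the fourth power of the distance, because the signal spreads out on the way to the target and again on the way back. A minimal sketch:

```python
def relative_echo_strength(range_ratio: float) -> float:
    """How much weaker the returning echo gets when the range is multiplied
    by range_ratio (radar returns fall off with the fourth power of range)."""
    return 1.0 / range_ratio ** 4

print(relative_echo_strength(2.0))   # 0.0625 -> double the range, 1/16th the echo
print(relative_echo_strength(10.0))  # 0.0001 -> ten times the range, 1/10,000th the echo
```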
Medium Earth Orbit (5,000 to 15,000 km) satellites have similar characteristics: the large amounts of power and the large radar antennas and solar arrays they require make design and construction technically challenging, but they offer large “footprints” of coverage.
Low Earth Orbit (500 to 1,000 km) carries the risk of orbital decay-the gradual dragging of the satellite back toward Earth, mostly through friction with the thin upper atmosphere. This orbital altitude offers the smallest viewing area, but the strongest signal return. It is the likely choice for any future Space Radar system.
You might expect that providing power would be easiest for these Low Earth Orbit satellites, but nothing’s ever that simple. And thus we need to take a moment to address the effect of eclipses on satellite batteries.
Because of the time spent in the Earth’s shadow on every orbit, it is not possible to get enough power from the sun to operate any single satellite’s radar at full power at all times. (A Low Earth Orbit satellite spends about 25% of its time in shadow.) This requires the satellites to store solar power in onboard batteries for when it’s needed-but the more often you charge and discharge batteries, the faster you wear them out. Changing batteries is not an option, which is one reason the proposed satellites have a reported lifespan of about 10 years.
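To see why battery wear, rather than anything more exotic, sets that lifespan, here is a minimal sketch of the cycle count, assuming one charge/discharge cycle per orbit:

```python
ORBIT_MINUTES = 105                              # the Low Earth Orbit period discussed above
orbits_per_day = 24 * 60 / ORBIT_MINUTES         # ~13.7 orbits, each with an eclipse
cycles_over_life = orbits_per_day * 365 * 10     # one charge/discharge cycle per orbit, 10 years
print(f"Roughly {cycles_over_life:,.0f} charge/discharge cycles over a 10-year life")
# ~50,000 cycles -- far more than most rechargeable batteries ever see on Earth.
```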
The satellites we are talking about gather images through the use of radar. The common image of a radar installation is an exotic looking antenna of some sort rotating around at the top of a radar mast. The radar sends signals out from the antenna (the “aperture”), the system receives the signals as they return after bouncing off an object, and the time it takes for that to occur can be used as one input for a math problem (an algorithm) that is processed by a computer to create the “radar image” that the operator sees on a modern radar.
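The core of that math problem is surprisingly simple: measure how long the echo took to come back, and the range follows from the speed of light. A minimal sketch, for illustration only (real radar processing involves far more than this):

```python
SPEED_OF_LIGHT = 299_792_458.0   # meters per second

def echo_range_km(round_trip_seconds: float) -> float:
    """Distance to whatever bounced the signal back: the signal travels the
    distance twice (out and back), hence the divide-by-two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0 / 1000.0

# An echo arriving about 0.0067 seconds after transmission came from roughly
# 1,000 km away -- about the altitude of the satellites we're discussing.
print(f"{echo_range_km(0.0067):.0f} km")
```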
This is not, however, the only way a radar device can operate. The larger the aperture, the more detail the image can have: a bigger antenna focuses the beam more tightly and collects more of the returning signal, and that’s where detail and clarity come from. It’s also true that a larger aperture allows you to cover more area in any given amount of time.
It’s possible to electronically manipulate radar “transmit/receive modules” laid out on a giant flat, non-moving panel (an “array”) so that the beam can be steered and shaped with no moving parts; and by combining the echoes the radar collects as the satellite moves along its orbit, the system can simulate a “synthetic” aperture far larger than anything you could actually launch. That technique, “Synthetic Aperture Radar” (or SAR), is what the Space Radar satellites will use to form their images. This electronic manipulation capability allows for fancy tricks never imagined by the “old school” radar designers-for example, part of the radar can scan a large area with lower detail, while another part scans a small area with very fine detail.
Two other handy characteristics of the design are the ability to “re-aim” the beam almost instantaneously at any spot within the area the array faces, and the ability to “re-visit” several spots in a repeating pattern, over and over (10 seconds on each of six locations every minute, as an example).
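Here is a minimal sketch of the electronic steering idea: shift the timing (phase) of each module slightly and the combined beam points somewhere new, with no moving parts. The radar frequency and module spacing below are assumptions for illustration, not details of the actual Space Radar design:

```python
import math

FREQUENCY_HZ = 10e9                    # assume an X-band radar (~10 GHz) for illustration
WAVELENGTH_M = 3.0e8 / FREQUENCY_HZ    # ~3 centimeters
MODULE_SPACING_M = WAVELENGTH_M / 2    # a common half-wavelength spacing (also an assumption)

def phase_step_degrees(steer_angle_deg: float) -> float:
    """Phase difference applied between neighboring modules to point the
    combined beam steer_angle_deg away from straight out of the panel."""
    return math.degrees(
        2 * math.pi * MODULE_SPACING_M
        * math.sin(math.radians(steer_angle_deg)) / WAVELENGTH_M
    )

for angle in (0, 15, 30, 45):
    print(f"steer {angle:2d} degrees -> {phase_step_degrees(angle):6.1f} degree step between modules")
```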
Giant electronically steered arrays of this sort are already in use on the ground for missile warning, and they also serve aboard US Navy ships (note the large flat panel just below the mast).
This brings us to nomenclature.
I promise I’ll be gentle, but there are a few more terms you need to know for all of this to make better sense.
To help simplify what might otherwise seem a bit obtuse, I’m going to ask you to play a mental game with me. Imagine you’re sitting in the driver’s seat of a car.
In this example, the car will represent the satellite, and you, sitting in the driver’s seat, will represent the radar.
Now imagine that you are driving that car on the freeway (or motorway for my UK friends.)
The view out the front window would represent the “Satellite Ground Track”.
This can also be called the “Along-Track” or “Azimuth” direction.
The radar is pointing out the passenger window, and the window represents the aperture that we discussed above. (That direction is known as the “Cross-Track”, “Range”, or “Elevation” direction.)
As you might imagine, the size and shape of the window affects what can be seen. Picture a window two feet tall by four feet wide, with you looking out the window down at the ground. Now consider how that view would change if the window were four feet tall by two feet wide.
Here’s what else might affect your view (a small sketch of this geometry follows the list):
--How much does your head have to turn to look out the side window?
That angle is called the “Azimuth Angle”.
--How much do you have to tilt your head down to see the spot you want to see on the ground? That angle is called the “Elevation Angle”.
--The reverse of that (the angle someone on the ground would have to look up to see you) is called the “Grazing Angle”.
--The area of ground that you can see looking out the side window at any moment would be the “Range-Swath Width”.
--You can’t look straight down through the car’s floor to see the road-and the satellite’s radar can’t image straight down, either. That blind zone directly below the satellite is centered on the “Nadir”, the point on the ground directly beneath the satellite.
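To put some rough numbers on that car-window geometry, here is a minimal sketch. It uses a flat-Earth approximation and the 1,000 km altitude from earlier, purely for illustration:

```python
import math

ALTITUDE_KM = 1000.0   # the Low Earth Orbit altitude used earlier

def cross_track_km(elevation_angle_deg: float) -> float:
    """Ground distance from the point directly below the satellite, for a given
    elevation angle (how far 'down' from horizontal the radar is looking)."""
    return ALTITUDE_KM / math.tan(math.radians(elevation_angle_deg))

for elevation in (80, 60, 45, 30):
    grazing = elevation   # on a flat Earth the grazing angle mirrors the elevation angle
    print(f"look down {elevation:2d} degrees -> {cross_track_km(elevation):5.0f} km off the "
          f"ground track, grazing angle about {grazing} degrees")
# Looking nearly straight down is the nadir zone the radar cannot use.
```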
We’ve covered a lot so far, and I think with just a couple exceptions the terms we need to learn are now out of the way, so how about we take a short break?
Go walk away, let your head clear a bit, pour yourself a refreshing beverage, and come on back. We’ll pick up the discussion by looking at the factors that limit what this sort of system can accomplish.
Our break over, let’s continue our discussion of what keeps radar designers up late at night.
For starters, consider the challenges of tracking the Evil Terrorist Truck (or a mobile SCUD transporter erector launcher [TEL], for that matter). A satellite traveling more or less 17,000 miles an hour is trying to find, hundreds of kilometers below, a vehicle traveling maybe 30 miles an hour across a planet that is itself rotating and hurtling around the Sun. That vehicle might be on a road surrounded by other vehicles moving at varying speeds, or it might be in the mountains, where valleys can block the view. Patterns of vegetation can be confusing as well.
Designers resolve some of these problems by attempting to “teach” the computers that interpret the data how to filter out the “clutter”. Unfortunately, this is an exercise in guessing (if the vehicle is traveling on a road, the computer might attempt to extrapolate the location of the vehicle from the surrounding “clutter” based on information it has already received about the target’s previous activities, for example), and guessing leads to guessing and...
To make a long story short, the CBO estimates current “state of the art” technology could only maintain any single vehicle’s tracking for less than 10 minutes before the clutter overwhelms the system’s ability to correctly guess what’s what. The best results are achieved in a grid environment (a plowed farmer’s field, for example), where the vehicle moves in the Cross-Track direction. The more rapidly a target is traveling, the easier it is to locate. A vehicle moving exactly in the Along-Track direction cannot be detected.
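To make the direction-of-travel point a little more concrete, here is a minimal sketch: the radar only “sees” the part of a vehicle’s motion that lies along its line of sight (the Cross-Track direction). The 2 meters-per-second detectability floor comes up again later in this piece; the rest is illustrative geometry, not a model of the actual system:

```python
import math

MIN_DETECTABLE_MPS = 2.0   # below this radial speed, a vehicle blends into the clutter

def radial_speed_mps(vehicle_speed_mps: float, heading_off_cross_track_deg: float) -> float:
    """The part of the vehicle's speed that lies along the radar's line of sight.
    0 degrees = driving straight in the Cross-Track direction,
    90 degrees = driving exactly in the Along-Track direction."""
    return vehicle_speed_mps * math.cos(math.radians(heading_off_cross_track_deg))

speed = 13.0  # meters per second, about 30 mph
for heading in (0, 45, 80, 90):
    radial = radial_speed_mps(speed, heading)
    verdict = "detectable" if radial >= MIN_DETECTABLE_MPS else "lost in the clutter"
    print(f"heading {heading:2d} degrees off Cross-Track -> {radial:5.2f} m/s radial ({verdict})")
```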
Another means of resolving some of these problems is to employ many satellites. As we mentioned above, a single satellite passes over a given location only about once per orbit, so constantly observing any particular spot on Earth takes a lot of satellites. In fact, with 105-minute orbits and 14 strips to cover, viewing any one location every 9 minutes requires roughly 150 satellites (a back-of-the-envelope version of that arithmetic appears below). This would give you near real-time images of any location on Earth nearly 90% of the time, since one of the 150 satellites would always be somewhere nearby overhead, and with a large range-swath width you could theoretically achieve nearly overlapping coverage. (Because of the nadir blind spot below every satellite, it is nearly impossible to achieve 100% coverage.)
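Here is that back-of-the-envelope arithmetic, under the simple strip model we have been using. It is a crude planning estimate, not a real constellation design:

```python
import math

ORBIT_MINUTES = 105
REVISIT_MINUTES = 9
STRIPS_TO_COVER_EARTH = 14   # the strips-per-day figure from earlier

satellites_per_strip = math.ceil(ORBIT_MINUTES / REVISIT_MINUTES)   # 12
total_satellites = satellites_per_strip * STRIPS_TO_COVER_EARTH     # 168
print(f"{satellites_per_strip} satellites per strip x {STRIPS_TO_COVER_EARTH} strips "
      f"= about {total_satellites} satellites")
# Lands in the same neighborhood as the "roughly 150" figure above.
```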
Of course, who can afford 150 satellites?
But there is another way: remember the paper towel roll example we discussed before? Imagine if the “seams” on that roll went in two directions-the seam you see running to the right, and a second seam, crossing over the first, going to the left. That would be an example of satellites on two “Orbital Planes”, and the constellations of satellites that are envisioned for Space Radar operate on one, two, or three orbital planes, depending on the alternative you’re talking about. (Picture two seams, not overlapping, going to the left, and one to the right on our cardboard roll, and you have three orbital planes.) If you picture satellites paralleling or crossing each other’s paths on these orbital planes, you can see new opportunities to cover ground more quickly with fewer “birds” in space.
In the end, however, the limitations of real world budgets will require compromises, and the first of those is to accept that you can’t be everywhere at every second. Instead, the goal of a constellation designer is to create a pattern of orbiting satellites that offers the most:
--Access (what percentage of the time can any particular location be observed)
--Response Time (how soon can you get images from any particular location)
--Coverage (how large an area can you view every hour)
--Mean Track Life (how long, on average, can you track a particular target)
Another challenge in providing coverage is to design a satellite that can view the largest possible area while still delivering the detail required. This is a bit like looking through a pair of binoculars: the greater the magnification, the smaller the area you can see through the lens. On the ground, radar designers can solve this with enormous arrays, but that is not possible in a space-based system because of the weight and size limits imposed by launch vehicles.
As a result, the systems being considered would have arrays covering 40 square meters (more or less 9 feet high by 50 feet long) or 100 square meters (about 75 feet long by 12 feet high). Essentially, you have to decide if you want a smaller number of very large radars, or a larger number of smaller radars.
Each has its tradeoffs. As we said earlier, larger satellites are extremely expensive to design, build, and launch: that giant (and therefore heavy) array has to be folded up for launch, which requires lots of extra engineering; it’s also more likely to flex in space, and so must be built around a heavier, more rigid structure; and its greater demand for power requires heavier equipment than smaller designs carry. On the other hand, a larger number of smaller satellites means more expense down the road for maintenance, data collection and processing, and required spare satellites (about 10% of satellites experience “catastrophic failure”).
Most of the Earth’s interesting “targets”, the CBO reports, are located between 20 and 60 degrees north latitude, and this is where grazing angle comes into play.
Let’s try another mental game: imagine you are a satellite, and you are standing near a model car on the floor. The top-down view you would have of the car is much more informative than the one you would have if you were lying on the floor looking at the car. In reality, it is impossible to “look across the floor” using a satellite (known as a “zero grazing angle”) because ground clutter (trees, buildings, hills…) creates obstructions and other such issues. (Eight degrees of grazing angle is considered the absolute minimum for any currently proposed design.)
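For a feel for what that 8 degree floor means in practice, here is a minimal sketch, assuming the 1,000 km Low Earth Orbit altitude we have been using, of how far along the ground the radar can usefully look before the grazing angle gets too shallow:

```python
import math

EARTH_RADIUS_KM = 6371.0

def max_ground_range_km(altitude_km: float, min_grazing_deg: float) -> float:
    """Ground distance from the point under the satellite to the farthest spot
    that still sees the satellite at least min_grazing_deg above its horizon."""
    grazing = math.radians(min_grazing_deg)
    # Law of sines in the triangle formed by Earth's center, the target, and the satellite.
    angle_at_satellite = math.asin(
        EARTH_RADIUS_KM * math.cos(grazing) / (EARTH_RADIUS_KM + altitude_km)
    )
    central_angle = math.pi / 2 - grazing - angle_at_satellite
    return EARTH_RADIUS_KM * central_angle

print(f"{max_ground_range_km(1000.0, 8.0):,.0f} km")   # roughly 2,500 km of usable reach
```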
Placing the satellite’s orbits so that targets in the 20 to 60 degree latitude range are well covered, therefore, is of paramount importance.
Now it’s time to more fully address data transfer.
Everyone who has switched from dialup to broadband understands what better connections can mean, and this system generates huge outputs of data.
The amount of data can be reduced by doing some of the computer processing on the satellite, but this means more power and weight, plus the concern that failure of an onboard computer might render an entire satellite useless. Instead, it is likely that raw data will be sent to ground stations for processing. This model also offers the advantage of allowing for easy upgrades of processing hardware and software, since all the equipment performing these tasks is located on Earth.
Of course, communication between a satellite and a ground station requires a “line of sight” view between the two, and that’s not always possible. This creates delays in getting data to those who need it. NASA has the same problem with their satellites, and they created a “backbone network” of linked communications satellites that orbit the earth today.
The idea is that one of the satellites in the backbone network is always connected to a ground station, and when data needs to be downlinked a satellite connects to the network and passes its data. At that point, much like the cell phone network, the backbone satellites pass the data amongst themselves until the ground station connected satellite is reached, at which point the downlink occurs.
Today the NASA system has six channels that can pass 800 Mb/second. That sounds like a lot, but when many satellites are trying to pass imagery and other data all at once, it amounts to getting by on the space-based equivalent of a handful of DSL connections. Any future system will require a radically improved “backbone” to support it; and my uneducated guess is that this could represent another 30-50% added to any other cost estimates.
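To get a rough sense of the bottleneck, here is a minimal sketch. The 50-gigabyte collection size is purely a made-up illustration (actual data volumes depend on the collection mode and on any onboard processing), and the 800 Mb/second figure quoted above is treated here as a single channel’s rate:

```python
ASSUMED_COLLECTION_GIGABYTES = 50    # hypothetical raw take from a single pass (made up)
CHANNEL_MEGABITS_PER_SEC = 800       # the rate quoted above for the NASA relay system

megabits = ASSUMED_COLLECTION_GIGABYTES * 8 * 1000
minutes = megabits / CHANNEL_MEGABITS_PER_SEC / 60
print(f"About {minutes:.0f} minutes of one channel's full capacity per 50 GB collection")
```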
And so, at long last, we come to the heart of the matter: just what should we expect from such a system, and how much should we expect it to cost?
To answer those questions, we need to identify just what sort of a system we are talking about. I will pick out two of the options the CBO discussed and focus on them, as I believe they are the options most likely to be adopted.
System 1 is a constellation of nine satellites on two orbital planes. The radars are the larger 100 square meter aperture design, and they are in Low Earth Orbit.
System 2 has 21 satellites with 40 square meter aperture. The satellites are also in Low Earth Orbit, and are on three orbital planes.
The next thing we need is a scenario. The CBO developed two: the ability of each system to track a single vehicle target in North Korea; and the mission of observing locations on the Korean Peninsula over a period of time.
They also made assumptions about two technologies that are not yet known to actually exist:
It is theoretically possible to send multiple frequencies from a single transmit/receive module simultaneously, and then separate the frequencies again when the return echoes are received. If this is possible, the area that could be imaged in any time period would be multiplied by the number of additional frequencies transmitted (three frequencies, triple the area observed, for example). No system currently is known to have this capability.
It is also hoped that a technique called STAP (space-time adaptive processing) will improve the performance of the proposed radars when tracking vehicle targets, through more effective “clutter removing” algorithms. Because the CBO cannot know today how effective this processing will be, they made both a conservative and an aggressive assumption, which we will discuss as we go along.
First, let’s discuss the “picture taking” (SAR) mission.
You may recall that the more detail you require, the smaller an area you can image. More detail also lengthens response times, but in the case of our North Korean mission this is not too severe: both Systems 1 and 2 would be able to provide images at 0.1 meter resolution (about four inches, sufficient to determine if a crop is growing at an expected pace) in less than 15 minutes; and in the case of System 2, if you could settle for images of 0.7 meter resolution (a bit over two feet, and sufficient to tell the difference between a truck and a tank) you could obtain an image in about seven minutes anywhere in North Korea.
Coverage is the next metric to be examined.
To help give you a bit of perspective, consider these facts:
A Division is a force of about 10,000 to 15,000 troops, and it typically operates in an area of about 1,000 square kilometers.
During the Persian Gulf War of 1991, the US Air Force created so-called “kill boxes” of about 2,500 square kilometers for the purposes of locating SCUD TELs.
The Korean Demilitarized Zone and an area extending about 80 km into North Korea together encompass about 11,000 square km; this area equals about 10% of the total area of North Korea, which is in turn about half of the Korean Peninsula.
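Before we look at how the two Systems measure up against those yardsticks, a quick arithmetic check on how the reference areas compare, using only the figures just listed:

```python
DMZ_REGION_KM2 = 11_000      # the DMZ plus roughly 80 km into North Korea
KILL_BOX_KM2 = 2_500
DIVISION_AREA_KM2 = 1_000

print(f"DMZ region = {DMZ_REGION_KM2 / KILL_BOX_KM2:.1f} kill boxes")           # ~4.4
print(f"DMZ region = {DMZ_REGION_KM2 / DIVISION_AREA_KM2:.0f} division areas")  # 11
print(f"North Korea = roughly {DMZ_REGION_KM2 * 10:,} square km (per the 10% figure above)")
```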
System 1 could survey the DMZ region, about five kill boxes, or the operating areas of 10 to 11 Divisions daily at 0.1 meter resolution, and the entire Korean Peninsula at 0.6 meter resolution. System 1 can survey roughly twice as much land at 0.1 meter resolution as System 2, and the amount of land it can survey at 1 meter resolution is about 10 times what it can image at 0.1 meter resolution.
System 2 can only cover about 60% of the area System 1 covers at 0.1 meter resolution, but at 1 meter resolution this 21 satellite constellation can cover about 3 times as much as the 9 satellite System 1-over 1,000,000 square km daily. For System 2, the amount of land surveyed at 1 meter resolution is about 20 times what can be imaged at 0.1 meter resolution. System 2 could therefore provide five complete images of the entire Korean Peninsula daily at 1 meter resolution, compared to two images daily with System 1.
System 1 could image any target located between 40 and 60 degrees North or South latitude between 15% and 20% of the time at 0.1 meter resolution, and roughly 20% of the time at 1 meter resolution.
System 2 could image any target located between 40 and 60 degrees North or South latitude between 10% and 20% of the time at 0.1 meter resolution, and above 30% of the time at 1 meter resolution.
Keep in mind that grazing angle counts with all of this-an angle approaching zero yields no usable image (the “lying on the floor” example we discussed above)-which is why the best results are obtained in the latitude range we’ve discussed.
The next item to assess is the effectiveness of the two Systems in tracking a target vehicle (officially known as Ground Moving Target Indication, or GMTI).
Before we can examine the numbers, a quick word about steering.
We don’t want to change our satellites’ orbits, because doing so burns fuel we will need later to maintain those orbits. However, we might choose to “spin” a satellite (turn it on its yaw axis, for the aerospace engineers still reading) in order to follow a single vehicle, and the CBO, as it did with STAP processing, made both a “fixed” and a “variable” yaw angle assumption.
There are disadvantages to varying the yaw angle: the fuel use, of course, but also the risk of flexing the radar array, which will drastically reduce the radar’s effectiveness.
With that said, here are some numbers…
First, let’s examine access. It is estimated that one of the nine System 1 satellites would be available to track a vehicle traveling about 20 mph roughly 30% of the time, assuming a fixed yaw angle and the conservative assumption about the effectiveness of STAP processing. Because of the aperture size, there is no real improvement even if we assume STAP processing is more effective. Allowing a variable yaw angle makes the system about 30% more effective.
The 21 satellites of System 2 would fare better, and if STAP processing lived up to the aggressive assumption, System 2 would be roughly twice as effective as System 1. The conservative STAP assumption, however, yields only a small improvement over System 1, whether we are comparing fixed or variable yaw designs. The lowest prediction was for 40% access, and the most optimistic suggests access could be maintained almost 70% of the time.
In any case, vehicles moving less than 2 meters per second (about 5 mph) are virtually invisible to any of the radars we are examining. If the aggressive STAP assumption is made, vehicles traveling over 4 meters per second are probably going to be tracked about 40% of the time for System 1, above 60% of the time for System 2.
Under the conservative assumption, neither System can be counted on to track a particular target more than 20% of the time if the target is traveling less than 6 meters per second (about 15 mph), and System 2 can’t hit the 20% number unless the vehicle is traveling 8 meters per second (20 mph).
How quickly can our Systems respond once the order is given to track a vehicle?
Assuming either a fixed or variable yaw angle, it would take one of the nine System 1 satellites more or less 15 minutes to respond to a target between 40 and 60 degrees North or South latitude. System 2’s 21 satellites could respond in less than 10 minutes, possibly as quickly as 5 if aggressive STAP assumptions are used.
That response time, however, is not possible for vehicles traveling less than 4 meters per second-System 2 requires up to 60 minutes to locate such a target, although System 1 can do it in about 10 minutes. If you called in a sighting of a high value target driving away, even a 10 minute response time may be too slow.
How long can we maintain tracking on a particular target?
Here’s some bad news. The CBO estimates that System 2 could only maintain a track on a target for 1 to 4 minutes using the conservative STAP assumptions, and only 2 to 8 minutes using the aggressive assumptions. That means even if you were able to respond to the tasking to track a particular high value target, the target would likely be lost before any aircraft or other weapons system could be brought to bear. Even the larger radars of System 1 would likely hold the track, in the most optimistic case, for only about 19 minutes, with 5 to 6 minutes being the more conservative estimate.
More bad news: the CBO estimates that if we want a 95% confidence that we can keep response time under 4 minutes for our hypothetical Korean Peninsula targeting we would require somewhere between 35 and 50 satellites (depending on fixed or variable yaw angle).
So what would all this cost?
To deploy these Systems, we would first have to fund a development process to attempt to design the STAP software, then we would also have to fund certain other development work on the satellites themselves.
At that point we would be ready to purchase the actual satellites, the launch vehicles that put them in orbit, the ground equipment to support them, and we would be ready to train and equip the analysts, engineers and technicians we would need.
Our costs would include maintenance, the second set of satellites we would need to launch after 10 years or so, and the processing of the data sent to Earth by the Systems.
With all this in mind, it is estimated that System 1 might cost between $53.4 and $77.1 billion. System 2 will likely cost between $66.2 and $94.4 billion. (50 satellites would likely cost 2.5 times the System 2 estimate, or roughly $150 to $250 billion.)
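For what it’s worth, the 50-satellite bracket is just the System 2 estimate scaled by the 2.5 factor mentioned above, rounded generously:

```python
system2_low_billion, system2_high_billion = 66.2, 94.4
scale = 2.5   # the factor quoted above for a roughly 50-satellite constellation
print(f"${system2_low_billion * scale:.0f} billion to ${system2_high_billion * scale:.0f} billion")
# ~$166 billion to ~$236 billion, rounded in the text to "$150 to $250 billion".
```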
These estimates do not consider the “space network backbone”, which will add a lot more to any costs we are discussing here.
And now, at last, we have come to the end.
By now you should have a better understanding of what Space Radar can and can’t do and what it’s likely to cost. From where I sit, I suspect we have five choices:
--Do nothing.
--Adopt System 1.
--Adopt System 2.
--Go for the 50 satellite option.
--Deploy for the SAR mission, but leave GMTI to the currently deployed Predator, Global Hawk, and JSTARS.
This was an especially long conversation, and I do appreciate that you would take the time to get to this point. I hope I made it worth your time, and I look forward to hearing some ideas about how we should proceed.
4 comments:
As you quite rightly say - the system is not dedicated to mobile surveillance of that kind - its purpose seems more for the citizenry.
Well, the technology is there and, as with so many other things, it cannot be "undiscovered". So we must find the most beneficial use for it. I don't know what the answer is but we cannot "do nothing".
in response to sir james:
it seems reasonable to seek a cost sharing arrangement with nasa or other civil customers, and there are two ways that i can see that this could be easily achieved:
--create an upfront partnership with a cost sharing protocol, or
--"sell" the datasets you'll be creating anyway, and set up a customer driven way to task the system to create datasets customers might order for "special purpose" needs.
as to "undiscovery":
as i have said before, these are the kind of decisions that keep perfectly nice political leaders up very, very late at night.
the deployment of earth based systems for the "tracking" mission has advantages in terms of cost, but space-based systems potentially offer faster response-if you want to pay for enough satellites.
but even then, the question becomes...if you had a target, could you get a weapons system to that target within the window of tracking time?