I have never read or heard him say single.
In his article:

The best approach to strategy starts with a future picture, determines the systems and centers of gravity that must change to realize that picture, takes into account the impact of time, and preplans an exit.
And in his popularization, the "Prometheus Process" (from the Prometheus website):

The first (and importantly first) step in strategic thinking is to identify the future that you want to create. In the Prometheus Process, the desired future is called the Future Picture. It is a very hard, very objective, very measurable picture of the future you want to create. In the Prometheus Process, you break the Future Picture into 12 separate elements that ensure a balanced future embraced by all.
Not "identify a set of futures", or "characteristics applicable to desirable futures" (plural) but "a" or "the" (singular) future. If he intends that "the" future picture is actually a set of pictures, one would think he would not cast the discussion in singular terms.

If you want to look at it that way I guess you can, but do you know the Sun will rise tomorrow until it happens...No, but it is a really good bet it is going to.
How much in military operations can you know with the level of certainty that you know the sun will rise? What happens, for instance, AFTER you take out an adversary's leadership? Such knowledge is crucial to a Warden-inspired decapitation strike. If the only thing we are thinking about is separating the C2 head from the military body, how do we know the result will be "paralysis" and not a whole host of insurgents going to ground and taking on a long-term irregular strategy?

In such a case a parallel decapitation strike may create a larger, longer-term problem than if we allow the leader to stay in charge and lose his army in a sequential series of battles. Has the Iraq war been cheaper in the long run than it would have been if we had destroyed Saddam's army in the field rather than convincing the core of it to go underground?

I did not see the word "sequential" in your original post, if it is there I stand corrected. If you do it in parallel it would be pure Warden.
Sequential operations are implicit in the need to "evaluate over time," i.e., you poke the system, see how it responds, poke it again, see how it responds, and eventually you can produce a theory of action for how the system responds. You can't in general have that kind of knowledge a priori (as the muddling-through paper argues), which highlights an important clarification in my argument that you have helped me understand.
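To make that loop concrete, here is a minimal Python sketch of sequential "poke, observe, update" strategy-making - a toy model with hypothetical names and made-up numbers, not anything Warden or the muddling-through paper prescribes. You start with an uncommitted prior about how the system responds, probe it, and let Bayes' rule move your theory of action as results come in:

    # Toy model of sequential "poke, observe, update" strategy-making.
    # All names and numbers below are hypothetical illustrations.

    def bayes_update(prior, p_obs_if_true, p_obs_if_false, observed):
        """Update P(hypothesis) after one observation via Bayes' rule."""
        like_true = p_obs_if_true if observed else 1 - p_obs_if_true
        like_false = p_obs_if_false if observed else 1 - p_obs_if_false
        numerator = like_true * prior
        return numerator / (numerator + like_false * (1 - prior))

    # Hypothesis: "striking node X paralyzes the system" (vs. "it adapts").
    belief = 0.5                                 # no a priori knowledge either way
    probe_results = [False, False, True, False]  # toy outcomes of four probes

    for i, paralyzed in enumerate(probe_results, 1):
        # If the hypothesis is true we expect to see paralysis 80% of the
        # time; if the system adapts instead, only 20% of the time.
        belief = bayes_update(belief, 0.8, 0.2, paralyzed)
        print(f"probe {i}: paralyzed={paralyzed}, P(hypothesis)={belief:.2f}")

The point of the sketch is that the confident posterior only exists after the probes; there is no step at which it could have been written down a priori.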

Where does Warden say you have to have complete knowledge? The whole idea behind parallel attack is to realize that not everything in the system is knowable, but there are certain elements that will have to be there in order for the system to accomplish its purpose. If you are going to attack someone you have to have person(s), weapon(s), location(s), etc. They have to be there or nothing will happen.
You are correct - the issue is not about "completeness" of knowledge; it is about "a priori" knowledge, the knowledge you can have, before you try to affect the system, about how the system will respond. My previous use of "complete" has been from the point of view of how much knowledge you can have about how a system will respond to a stimulus before you apply the stimulus.

The implication of Warden's parallel operations in a short period of time is a tremendous amount of a priori information about the system - information that the alternative strategic methods believe is not discernible until you "experiment" on the system, forcing a sequential engagement, evaluated over time.

I think this is at the crux of much of the criticism - the "correlation equals causality" assumption implicit in:

If we want to change our opponent as a system to conform to our objectives, then the most direct approach entails affecting opponent centers of gravity closely related to the objective.

In the actual usage of EBO in my experience, the "relatedness" between objectives and centers of gravity (and what effect we will have on CoGs) is not based on "real systems analysis" but on the establishment of a Bayesian belief network of what the planners BELIEVE these relationships are. This belief network is then substituted for actual a priori KNOWLEDGE about the system where it is not available.
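As a toy illustration of that substitution (the structure and every probability below are hypothetical - this is my caricature of the planning product, not any real EBO tool):

    # Toy belief network: Strike -> CoG degraded -> Objective met.
    # Every probability here is a planner-assigned BELIEF, not measured data.

    P_COG_DEGRADED_GIVEN_STRIKE = 0.9   # assumed, not known
    P_OBJECTIVE_GIVEN_COG = 0.7         # assumed, not known
    P_OBJECTIVE_GIVEN_NOT_COG = 0.1     # assumed, not known

    # Propagate the belief forward through the chain
    # (law of total probability over the CoG node).
    p_cog = P_COG_DEGRADED_GIVEN_STRIKE
    p_objective = (p_cog * P_OBJECTIVE_GIVEN_COG
                   + (1 - p_cog) * P_OBJECTIVE_GIVEN_NOT_COG)

    print(f"P(objective met | strike) = {p_objective:.2f}")
    # Prints 0.64 - a precise-looking number that is entirely a function
    # of the three assumed inputs above: belief substituted for knowledge.

The arithmetic is perfectly valid Bayesian propagation; the problem is that the output inherits all of its authority from numbers the planners simply asserted.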

I've figured out that much of my criticism comes from seeing this problem first hand enough times to question the sanity, at times, of those operating this way. When questioned, one gets disturbingly close to "if I believe strongly enough that these relationships exist, then they actually must exist".

So the issue becomes perhaps not a problem with Warden's theory, but with the practicality of implementing it, and the problems with how I've seen otherwise really smart people goon it up.

So how would Warden recommend dealing with situations where there is insufficient a priori knowledge to know with high confidence (not merely high 'BELIEF') that a parallel operation will actually achieve the desired end state?

At that point does he shift over to a more "evolutionary" strategic method, as you seem to indicate, or would he argue to canx the operation because of the known unknowns, let alone the unknown unknowns?

How does that decision calculus work in Warden's theory?