Exegesis Volume 4 Issue #27


From: Andre Donnell
Subject: Re: Exegesis Digest V4 #26


Exegesis Digest Thu, 08 Apr 1999


Date: Thu, 08 Apr 1999 05:35:07 +1200
From: Andre Donnell
To: Exegesis
Subject: Re: Exegesis Digest V4 #26
 

Francis wrote:


 > >Until I
 > >read this paper I was of the opinion that all astrologers could only evaluate techniques
 > >subjectively (if they bother to do any evaluation at all). From that perspective we want to
 > >give astrologers lots of examples and context when looking at a new technique. But now
 > >I am starting to consider the possibility that the utility of some techniques CAN be tested,
 > >some MIGHT be testable, and some CANNOT be tested. If that is the case then the first
 > >thing I want to know about a new technique is if it can be tested or demonstrated in a
 > >simple manner and yield consistent results. If not, then I'd like all of that background
 > >and context.

For well over a decade I have strongly felt that (at least some of) our techniques SHOULD be tested, for ethical as well as "validity" reasons (see below). I have, in recent years, decided that they CAN be as well.

I think it is important to understand that whether or not there is any overlap between psychology (as a field I know something about) and astrology, the development of theory and practices in astrology faces much the same problems as does psychology. The main problem is that there is no such thing as a simple, clear cause for any person's behaviour. Instead, psychology has the problem of disentangling one usually *tiny* effect (in evaluating a particular hypothesis in relation to some theory or treatment) from a myriad of other influences. I suggest the same problem applies in testing astrological techniques. I also suggest that astrologers have thus been forced to "evaluate...subjectively", because of ignorance of helpful methods from other fields that share the same problems.


 > >I, for one, have clearly stated a number of times that I think that seeking "validity" is a
 > >big mistake, and that astrology does not have what it takes to be a science (in the
 > >current sense of the word). However, that doesn't mean that we cannot turn a critical
 > >eye on the field nor that we are barred from reasoning about the field.

I think there have been semantic difficulties when science and astrology have been discussed together, particularly around the use of the term "validity". The search for "validity" has a number of agendas. Some are: (1) the wish to obtain respect and approval (from the scientific community; from the community at large); (2) the wish to show that astrology makes scientifically testable claims; (3) the wish to develop methods for critically evaluating astrology (as you have just pointed out).

I consider (1) specious, and make no further comment about it. You, I, and others believe that (3) is either possible or necessary. The question is whether - or to what extent - (2) and (3) overlap. My tentative view is that (2) and (3) may overlap considerably. If so, then I (and others, such as Dale Huckeby) further suggest that at least some techniques used in the social sciences are of considerable relevance to accomplishing (3).

In particular, it appears to me that some of the techniques used as a matter of course in *psychology* can be applied more or less directly, that others require adaptation, and that astrology may demand the development of additional, *novel* methods (note that some tenets of astrology - such as those pertaining to elections and to mundane astrology - imply a possible critique of the use of chance or statistical methods). Of course, the work of the Gauquelins is one example of "scientific" technique applied to astrology. But it does not appear to me that the general possibility that their work opened up has been pursued in any concerted way.

It may be that (2) and (3) do not overlap after all. If that is found to be the case, it would not mean that astrology is "invalid", merely that it is not "a science". This would certainly be in line with Roger's thesis. And it would still leave (3) intact: in much the same way as the Arts have their methods of analysis.


 > >We have other possibilities. There are many recurring events in the mundane world for
 > >which definite outcomes are visible. Of these, there is a class of events that we cannot
 > >greatly influence, but which are mostly visible to us. For instance, legislation is presented
 > >to parliamentary bodies and decided upon and many instances of this process reach a
 > >definite outcome (yea or nay) and occur in a definite place and time. Given that, we have
 > >the makings of a mundane or horary test. I can imagine testing the known horary yes-no
 > >techniques to see which can be stated in an algorithmic fashion and testing those on
 > >large pubic events.

Presumably you meant "public" events! I agree in principle. But this is where we need to be careful in the selection and use of our test methods. Unless astrology produces "gross" (extremely powerful) effects that significantly outweigh all or most others (such as social influence), then we are in the same ballpark as psychology, and need to use the same subtle methods.
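
To make the proposal concrete, here is a minimal sketch, in Python, of how such a test might look. The predictor and the event records below are hypothetical placeholders (a real study would need genuine charts and recorded vote outcomes); only the binomial comparison against a 50% chance rate is standard method.

    import math, random

    def p_at_least(k, n):
        """One-sided P(X >= k) for fair yes/no guessing, X ~ Binomial(n, 0.5)."""
        return sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n

    def horary_predict(chart):
        """Hypothetical stand-in for an algorithmic horary yes/no judgement."""
        return random.choice(["yea", "nay"])

    # Hypothetical records: (chart for the time/place the bill was decided, outcome).
    random.seed(1)
    events = [({"id": i}, random.choice(["yea", "nay"])) for i in range(200)]

    hits = sum(horary_predict(chart) == outcome for chart, outcome in events)
    print(f"{hits}/{len(events)} correct; p = {p_at_least(hits, len(events)):.3f}")

With random placeholders the hit rate hovers around 50%, as it should; the interesting question is whether a real technique can do consistently better across many such samples.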

This is probably a good point to mention Science's emphasis on theory-building. One reason why this is important is that it accelerates the development of the field, compared to just testing every notion that arises or has ever arisen in astrologers' minds. Once one particular theory - that makes non-overlapping claims compared to other theories - has been disproved, then an entire class of hypotheses or expectations deductively associated with that theory can also be discarded and need not be tested (unless some radical new thesis or paradigm shift occurs).

It almost certainly cannot be claimed that astrology is, or has, a theory or theories. I have seen it described as a belief-system. It is probably fair to describe it then as a set of assumptions, with a large degree of variability or fuzziness - considered historically and even contemporaneously - around the application and in some cases the values (e.g. orb) of those assumptions. I believe however that there is no reason why this cannot be rapidly turned into theory - the difference is partly semantic - by simply starting to perform proper testing.

Personally, I would prefer that *fundamental* assumptions are tested first, rather than relatively complex algorithms. If the algorithm fails, are *all* its assumptions invalid, or was it just one link? Or two links? Or have the steps been applied in the wrong order? A 5-step algorithm that failed and that required a simple binary choice at each step would yield 2^5 - 1 (2x2x2x2x2 - 1) possibilities that would need to be checked - in this case 31 tests (one has already been carried out when the algorithm originally failed, hence the '-1' term). If there are three choices at each step, there are 3^5 - 1 (242) possibilities. If the *order* of the steps is questioned then things get rapidly worse! Moreover, if statistical methods are being used, the more tests that are carried out the greater the possibility of getting a positive result "by chance".
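
To put rough numbers on that combinatorial explosion, here is a small Python illustration (the step and choice counts are simply those used above; the permutation count at the end is one hypothetical way of counting re-orderings):

    import math

    steps = 5
    for choices in (2, 3):
        # every combination of choices, minus the one version already tested
        versions = choices**steps - 1
        print(f"{choices} choices per step: {versions} further versions to test")
    # -> 31 and 242, as in the text above

    # If the *order* of the steps is also in question, one (hypothetical)
    # way to count is every permutation of every version:
    print(2**steps * math.factorial(steps) - 1)  # 3839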

If the algorithm succeeds (and does so repeatably), that is certainly a good thing. It supports the case that *all* its steps say something valid. But if it fails, it *does not* mean (as above) that we can safely assume all the parts are wrong and move on to other territory.

Thus, to build theory (which will help us accelerate the development of astrology), it is important that we begin at the most fundamental levels possible.


 > >>Also, having pointed to p < .05 as a
 > >>possible significance level, they did
 > >>not directly discuss those (several)
 > >>results that failed to meet the level.
 > >
 > >Well, yes and no. I believe your above characterization of the analysis is as incomplete as
 > >is the analysis in the paper. They didn't make (what I consider) strong claims for the
 > >data nor did they hide the "failures" in the data. They said they wanted more samples
 > >and that they believed the "medieval model" would hold up over time. I read this as an
 > >explicit invitation for others to examine their work and work with the model.

In general your criticism is fair (indeed, I commented that I was offering my "first impressions", and felt slightly guilty afterwards for offering them at all).

Regarding the matter of "strong claims", I was reacting to an implication in their use of the phrase "These distributions...are impressive, (any results of p < .05". If (IF) they wrote this because some of their p-values were well under .05, this is an error. The psychological literature is riddled with such claims, and nowadays trainee research psychologists are warned well away from making them. I covered this point in my first communication last year, but briefly, it is that one can only conclude from a positive statistical test that either (a) one has found a valid effect; or (b) the finding was the result of chance. However, the *size* (improbability) of (b) has nothing to do with whether IT is true or (a) is true. 'p=.001' simply means that "IF chance was responsible here, you have been the beneficiary (or victim!) of a 1 in a thousand event". But such things happen, and we have absolutely *no* way to know whether it did, or didn't, happen here. It is like winning a million-to-one lottery and saying "well, this was so unlikely - my chances were only p=.000001 - that I cannot believe it was chance. Someone must have DELIBERATELY arranged it so that I would win!". The fallacy of course is that *someone* had to win, and the same fallacy applies in statistical significance testing: someone, sometime, is always going to get the positive finding that is purely chance.
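
The "someone had to win" point is easy to demonstrate by simulation. The sketch below (purely illustrative; it models nothing from the paper under discussion) runs a thousand experiments in which chance is the *only* factor at work, and counts how many still cross the p < .05 line:

    import math, random

    def p_at_least(k, n):
        """One-sided P(X >= k) under a fair coin, X ~ Binomial(n, 0.5)."""
        return sum(math.comb(n, i) for i in range(k, n + 1)) / 2**n

    random.seed(42)
    trials, flips, positives = 1000, 100, 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(flips))
        if p_at_least(heads, flips) < 0.05:
            positives += 1
    print(f"{positives}/{trials} purely-chance experiments reached p < .05")

Close to five percent of the null experiments come out "significant", and nothing in the p-value of any one of them tells us that chance was responsible.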

Nevertheless, I don't wish to belabour this point (again), as I agree that the algorithm seems worth following up.


 > >If there is a problem in their statistics, then it needs to be made
 > >clear. Is there a problem?

I am still thinking this through. Their approach is unusual. If no-one else obliges, I will try to analyse this point with care in a future post - but it cannot be for at least a month, probably (p < .05) longer.

Andre.


-----e-----

End of Exegesis Digest Volume 4 Issue 27


Unless otherwise indicated, articles and submissions above are copyright © 1996-1999 their respective authors.