Wednesday, 25 June 2025

Ragwort Debates and Authoritarian Behaviour

One of the most consistent things I have encountered during the more than two decades I have spent studying ragwort, its ecology, toxicology and related science, is that conversations about this plant very often take on an authoritarian tone.

I don’t mean the sort of people who wear jackboots, although some of the attackers definitely meet the definition of far-right trolls, but rather a certain type of rigid mindset that appears over and over again in online postings, public commentary, and even in important policy documents. Having studied this, I think it is worth setting out some of the reasons why it happens, and, as I will show, it leads to poor information being disseminated.

Psychologists who have studied it define authoritarianism not merely as a political stance but as a cognitive style. It is characterised by:

  • Deference to authority figures

  • Hostility toward dissent or ambiguity

  • A preference for certainty, control, and clear rules

This scientific framework is well documented in the Canadian scientist Dr Bob Altemeyer's work on Right-Wing Authoritarianism, from which I quote at length in a section below.

In ragwort debates, this often manifests as:

  • Appeals to status over evidence ("He’s a vet, you’re not")

  • Harsh reactions to being questioned ("Stop spreading misinformation")

  • Misuse of rules and laws to assert control ("You’re legally required to remove that", a claim that is generally untrue with regard to ragwort anyway)

  • Dismissal of uncertainty or nuance ("It’s poisonous, so it must be dangerous")

The issue here is that ragwort pushes several psychological buttons:

  • Fear: It is widely believed to be a serious poison, especially to horses, largely, as I show, due to false information from equine organisations.

  • Control: It grows in unmanaged spaces, often defying neat control measures.

  • Tradition: It has become a kind of symbolic enemy in some of the narratives that surround land management.

When people perceive a threat, whether it is real or not, and especially when it concerns animal welfare, a subject almost bound to give rise to emotional arguments, it is natural for some people to want simple answers and clear action. That desire, however, often overrides careful reasoning, and that is quite simply a problem for society.

So when someone like me comes along and says, "Yes, it's toxic, but the science doesn't support the hysteria," the reaction isn't just disagreement. I often find moral outrage, which leads to my being abused and attacked, as if questioning the narrative were dangerous in itself.

Time and again I've seen this kind of poor reasoning, accompanied by the most illogical statements:

  • People objecting to Freedom of Information requests and claiming they're a "waste of official time"

  • Individuals insisting that because a respected vet made a claim, it must be true

  • Accusations that questioning public messaging is "dangerous"

  • Dismissals of scientific studies as "too old" when they're still valid and widely cited 

For example, Harper & Wood’s 1957 study is still highly relevant and cited in the scientific literature today. It is clear to anyone with proper reasoning ability that longevity alone does not invalidate scientific findings. Let's remember that we still learn Newton's laws, which date from the late 1600s!

Sometimes it is so irrational that even minor corrections (such as a typo or a small, logically irrelevant wrong number) are seized upon with glee, on the quite incorrect assumption that they undermine my position.

It is worth noting in this context that authoritarian thinking tends to correlate with lower educational attainment, particularly in fields that emphasise critical thinking and openness to complexity. The research demonstrates that individuals with higher education, and in particular those who score highly on the well-established and much-studied personality dimension of openness to experience, tend to tolerate ambiguity and question traditional hierarchies. Studying complex ideas and the concepts around them is often their idea of fun. This is me: I identify strongly with this sort of behaviour and thinking.

Something I have noticed quite often is that many of the people who make the most rigid arguments in ragwort debates also show basic literacy errors and a tendency to rely on memorised claims rather than reasoned evidence. Of course, spelling errors alone don’t indicate authoritarianism, and in my own case spelling errors can and do sometimes creep in, but poor literacy can be a sign of gaps in the kinds of reasoning skills that protect against it.

This is a serious problem, because authoritarian reasoning stifles scientific discourse. It replaces open debate with really poor thinking.

If we care about understanding the issues and the science properly then, naturally, we need to resist this style of thought. It is, of course, entirely possible to take plant toxicity seriously without falling prey to exaggerated fear or rigid, dogmatic thinking. However, this blog exists because so many people do fall into that trap, with all sorts of crazy claims having been made.

Ragwort isn't just an ecologically very important wildflower. It has become a symbol onto which people project all sorts of irrational and illogical thinking. When those projections are challenged, the reaction is all too often highly emotional and not about the facts.

Science, by its very nature, thrives on questions. It is crucial that we are free to ask them without being accused of endangering society every time we point out that the emperor, or the expert, might just not have all the clothes that some poor reasoners think he has.

These people rely far too much on concrete thinking, so much so that abstract ideas and complexity are invisible to them. I often joke to myself that the concrete starts at one ear and ends at the other!

Here is the extract from Dr Altemeyer's work illustrating the all too typical poor thinking that frustrates me when dealing with these people. Research shows that this sort of personality has poor creativity and rarely succeeds in the complex world of scientific endeavour.

I should also explain that RWA in his text means Right-Wing Authoritarian. It has not escaped my attention that when I examined a certain word in the copy of the Oxford English Dictionary, the definitive dictionary of the English language, in my local library, the definition included the phrase "a person of right wing authoritarian views." If you haven't already guessed, that word is "fascist." I would hasten to say, of course, that not all the people whose bad thinking I criticise here would necessarily fall into that camp, but I do know that some of them do.

-------------------------------------------------------------

 Sitting in the jury room of the Port Angeles, Washington court house in 1989,  Mary Wegmann might have felt she had suddenly been transferred to a parallel  universe in some Twilight Zone story. For certain fellow-jury members seemed to  have attended a different trial than the one she had just witnessed. They could not  remember some pieces of evidence, they invented evidence that did not exist, and they  steadily made erroneous inferences from the material that everyone could agree on.

 Encountering my research as she was later developing her Ph.D. dissertation project, she suspected the people who “got it wrong” had been mainly high RWAs. So she  recruited a sample of adults from the Clallam County jury list, and a group of students  from Peninsula College and gave them various memory and inference tests. For  example, they listened to a tape of two lawyers debating a school segregation case on  a McNeil/Lehrer News Hour program. Wegmann found High RWAs indeed had more  trouble remembering details of the material they’d encountered, and they made more  incorrect inferences on a reasoning test than others usually did. Overall, the  authoritarians had lots of trouble simply thinking straight.  Intrigued, I gave the inferences test that Mary Wegmann had used to two large samples of students at my university. In both studies high RWAs went down in flames  more than others did. They particularly had trouble figuring out that an inference or  deduction was wrong. To illustrate, suppose they had gotten the following syllogism:

 All fish live in the sea.

 Sharks live in the sea.

Therefore, sharks are fish.

 The conclusion does not follow, but high RWAs would be more likely to say the reasoning is correct than most people would. If you ask them why it seems right, they  would likely tell you, “Because sharks are fish.” In other words, they thought the  reasoning was sound because they agreed with the last statement. If the conclusion is right, they figure, then the reasoning must have been right. Or to put it another way,  they don’t “get it” that the reasoning matters--especially on a reasoning test. 

 This is not only “Illogical, Captain,” as Mr. Spock would say, it’s quite  dangerous, because it shows that if authoritarian followers like the conclusion, the  logic involved is pretty irrelevant. The reasoning should justify the conclusion, but for  a lot of high RWAs, the conclusion validates the reasoning. Such is the basis of many a prejudice, and many a Big Lie that comes to be accepted. Now one can easily overstate this finding. A lot of people have trouble with syllogistic reasoning, and high RWAs are only slightly more likely to make such mistakes than low RWAs are. But in general high RWAs seem to have more trouble than most people do realizing that a conclusion is false. 

Deductive logic aside, authoritarians also have trouble deciding whether empirical evidence proves, or does not prove, something. They will often think some thoroughly ambiguous fact verifies something they already believe in. So if you tell them that archaeologists have discovered a fallen wall at ancient Jericho, they are more likely than most people to infer that this proves the Biblical story of Joshua and the horns is true--when the wall could have been knocked over by lots of other groups, or an earthquake, and be from an entirely different era (which it is).

---------------------------------------------------
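The fallacy in the quoted syllogism (an undistributed middle) can be made concrete with a toy set-based sketch; the species lists here are of course invented purely for illustration. Substituting another sea dweller for "sharks" keeps both premises true while the conclusion fails, which is exactly what shows the form to be invalid:

```python
# Premise 1: all fish live in the sea   ->  fish is a subset of sea_dwellers
# Premise 2: X lives in the sea         ->  X is a subset of sea_dwellers
# Claimed conclusion: X are fish        ->  X is a subset of fish
# Using "dolphins" as X makes both premises true and the conclusion false,
# so the argument form itself proves nothing.

fish = {"cod", "herring", "shark"}
sea_mammals = {"dolphin", "whale"}
sea_dwellers = fish | sea_mammals   # everything here lives in the sea

x = {"dolphin"}                     # a sea dweller that is not a fish

premise_1 = fish <= sea_dwellers    # True
premise_2 = x <= sea_dwellers       # True
conclusion = x <= fish              # False: dolphins are not fish

print(premise_1, premise_2, conclusion)  # True True False
```

Agreeing with the conclusion ("sharks are fish") says nothing about whether the reasoning that led to it was sound, which is precisely the distinction Altemeyer's high RWAs failed to make.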

This is good stuff, and if you like it the good Dr has made it available to read as a free download. Don't let the fact that it is free make you think it is of less value; he actually warns against that irrational conclusion. I strongly suspect that he has a good wage or pension and is keen for as many people as possible to read his work. It can be found on his website, The Authoritarians.


Ragwort Hysteria latest entries

Thursday, 5 June 2025

What an AI makes of Defra's ragwort Code of Practice. It is scathing!

AI has its issues, and it raises concerns such as whether it is putting people out of work. However, it can prove a powerful tool for evaluating evidence and finding pieces of research that might otherwise be difficult to find. I often think of it as "Alien Intelligence," as it is very different from human intelligence.

What I have done here is ask three questions. The first one contains a real quote from DEFRA's guidance on ragwort control, and it is the absolutely key factor in how they establish and promote the amount and rate of ragwort poisoning. The "he" in the quote is Professor Derek Knottenbelt, who has also been quoted, rather ridiculously, as claiming that ragwort is poisoning the Cinnabar moth, which I hasten to say is NOT the main reason that ragwort is an ecologically valuable wildflower.

Here I have used ChatGPT as an independent evaluator of the policy. At first it points out the strengths and weaknesses of DEFRA's statement. Then, when it is supplied with the actual data obtained from Freedom of Information requests, it points out that it is completely wrong to use the data, and when I tell it that the sentence comes from government advice it becomes really scathing about the bad science. I quote the responses in their entirety, cut and pasted from the output of this clever computer program. I have only labelled the questions for clarity.

Question 1

What can you tell me about the logic and reasoning in the following statement? "He has estimated a figure of 500 horse deaths from ragwort poisoning in 2000. This figure is based on the number of confirmed horse deaths from ragwort poisoning seen by the Philip Leverhulme Large Animal Hospital Teaching Hospital at Liverpool University as a percentage of all the horse cases treated during the year, and grossed up to be representative of the total horse population."

ChatGPT said:

The statement presents an estimate of horse deaths from ragwort poisoning using inductive reasoning based on extrapolation. Here's a breakdown of the logic and reasoning behind it, including both its strengths and potential flaws:


๐Ÿ” Summary of the Logic

  1. Observed Data Source:

    • A specific number of confirmed horse deaths from ragwort poisoning were observed at a single veterinary hospital (Liverpool University's Philip Leverhulme Large Animal Teaching Hospital).

  2. Proportional Estimation:

    • This number is taken as a proportion of total horse cases treated at that hospital during 2000.

  3. Extrapolation:

    • The proportion is then "grossed up" (i.e., scaled) to estimate the number of deaths across the entire horse population in the UK (or another broader context).


✅ Logical Strengths

  • Use of empirical data: The estimate is grounded in actual confirmed cases, not speculation.

  • Methodical scaling: Applying a proportion to a larger population is a common statistical technique when comprehensive data isn't available.


❌ Logical Weaknesses and Concerns

  1. Sampling Bias:

    • The Liverpool hospital's cases may not be representative of the whole UK horse population.

    • Horses seen at a teaching hospital could have more severe or unusual conditions, potentially overestimating rare events like ragwort poisoning.

  2. Confirmation Bias:

    • Only confirmed cases were counted. Ragwort poisoning is difficult to diagnose definitively, so the actual number of cases may be underreported, or misdiagnosed, affecting reliability.

  3. Assumption of Uniformity:

    • It assumes the same rate of ragwort poisoning applies uniformly across all horse populations in different regions, which is unlikely (as environmental exposure varies).

  4. Small Sample Size Risk:

    • If the number of observed cases was small, the resulting estimate could have high variance and large margins of error when extrapolated.


🧠 Type of Reasoning

  • Inductive Reasoning: Drawing general conclusions (nationwide deaths) from specific observations (hospital cases).

  • Statistical Estimation: Involves proportional scaling and population inference.


🧮 Example Analogy

Imagine seeing 2 red cars in a dealership that sells 10 cars and concluding that 20% of all cars in the country are red, then scaling that to millions of cars. That only works if your sample (the dealership) is representative — otherwise, your extrapolation may be way off.


📌 Conclusion

The reasoning shows a structured attempt to estimate a figure from limited data, but it relies heavily on the representativeness of a single data source, which is a major potential flaw. It's a reasonable starting point, but not robust enough to be definitive without additional, more representative data.

Question 2
You said:
What can you tell me if I say that a Freedom of Information request to the hospital actually shows only one case in an eight year period?
ChatGPT said:

If a Freedom of Information (FOI) request to the hospital reveals only one case of ragwort poisoning in an eight-year period, it severely undermines the credibility and validity of the original estimate of 500 horse deaths in a single year. Here's a detailed analysis of what that implies:


๐Ÿ” What This New Information Reveals

  1. Factual Discrepancy:

    • The original estimate was supposedly based on a meaningful number of confirmed deaths at that hospital in 2000.

    • But if only one case occurred over eight years, then the hospital could not possibly have had a significant enough sample in 2000 to justify such extrapolation.

  2. Invalid Assumptions:

    • The method described (taking the hospital’s confirmed ragwort deaths as a proportion and scaling up) relies on false or exaggerated input data.

    • This suggests that either:

      • The original estimator misunderstood or misrepresented the hospital data.

      • Or the methodology was used in bad faith (deliberately misleading).


❌ Logical and Ethical Consequences

  • Flawed Reasoning:

    • The extrapolation logic collapses entirely if the base figure is incorrect or negligible.

  • Inflated Estimate:

    • The claim of 500 deaths is now shown to be not just imprecise but likely unfounded.

  • Potential Misinformation:

    • Such a statement, if made in a policy or advocacy context, could be highly misleading, leading to poor regulation or misplaced public concern.


🧠 Implications for Critical Thinking

  • Always check primary sources: This is a clear example of why it’s essential to verify claims, especially when they're used to support large-scale generalizations.

  • Be skeptical of gross-ups: Scaling from small or non-representative data to national figures must be done with rigorous checks, transparency, and peer review.

  • Use FOI data carefully: FOI responses, being from authoritative sources, are strong evidence — and in this case, they directly contradict the original narrative.


🧾 Summary Judgment

The FOI response strongly suggests the original estimate of 500 deaths due to ragwort poisoning in 2000 is invalid. It exposes either a methodological failure or a misrepresentation of data, making the figure deeply unreliable and potentially misleading.

If you're investigating this further or challenging a claim, this FOI result is a powerful rebuttal.

Question 3
You said:
What can you tell me if I say that the original quote comes directly from government advice?
ChatGPT said:

If the original quote estimating 500 horse deaths from ragwort poisoning in 2000 comes directly from government advice, yet a Freedom of Information request shows only one case at the referenced hospital over eight years, then this raises serious concerns on multiple levels:


⚖️ Implications of a Government-Originated Estimate

  1. Government Credibility at Stake:

    • Government-issued advice is expected to be evidence-based, transparent, and reliable.

    • If it includes a wildly inflated estimate based on non-existent or misused data, it damages public trust in that institution and its policies.

  2. Policy and Regulation Impact:

    • This estimate likely influenced ragwort control legislation, land management obligations, or public guidance to landowners.

    • Policies based on false or unsupported figures may be disproportionate, costly, or even unjustified.

  3. Misuse of Authority:

    • Using an authoritative tone (e.g., citing a university hospital) to support a claim that is not factually accurate borders on misinformation, whether intentional or negligent.


🧠 Analytical Breakdown

Let’s connect the pieces:

  • Claim: 500 deaths based on extrapolated data from Liverpool University's animal hospital.

  • Reality (via FOI): Only one case in eight years—not even one per year.

  • Logical consequence: The core data point used to justify the extrapolation never existed.

This suggests one of the following:

  • The estimation method was fabricated or misunderstood.

  • The government relied on poorly vetted secondary sources.

  • There was a deliberate inflation to justify a policy objective.


🧾 Accountability Questions

This scenario invites tough questions for policymakers:

  • Who created this estimate?

  • Was the source peer-reviewed or independently verified?

  • Why was the estimate not updated or withdrawn when more accurate data became available?

  • How was the FOI data reconciled with the original policy narrative?


💡 Broader Takeaway

This is a textbook example of why transparency, evidence verification, and skepticism are essential in interpreting government guidance — especially when it leads to regulation. It shows how bad data, once institutionalized, can cascade into widespread misinformation and skew public perception for years.


✅ Conclusion

If a government document made this claim, yet the cited data source (the hospital) reports only one case in eight years, then the claim is:

  • Demonstrably false or grossly exaggerated,

  • Potentially misleading, and

  • In need of correction or retraction.

It may justify a formal challenge or review of the guidance that relied on it.
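The "grossing up" method dissected above is simple enough to sketch numerically. In this illustration the one-case-in-eight-years figure comes from the FOI response discussed in the questions; the hospital caseloads and the national horse population are hypothetical round numbers, chosen only to show that the extrapolated total depends entirely on assumptions the published figure never states:

```python
# A sketch of the "gross up" extrapolation method quoted from the guidance.
# The annual hospital caseloads and national horse population below are
# invented round numbers for illustration only; the FOI base figure of one
# confirmed case in eight years is from the post above.

def gross_up(confirmed_deaths_per_year, hospital_cases_per_year,
             national_population):
    """Scale a hospital's observed death rate up to a national total."""
    rate = confirmed_deaths_per_year / hospital_cases_per_year
    return rate * national_population

NATIONAL_POPULATION = 1_000_000   # hypothetical national horse population
foi_rate = 1 / 8                  # FOI: one confirmed case in eight years

for caseload in (500, 1000, 5000):    # hypothetical annual caseloads
    estimate = gross_up(foi_rate, caseload, NATIONAL_POPULATION)
    print(caseload, round(estimate))  # 500 -> 250, 1000 -> 125, 5000 -> 25
```

Whatever caseload is assumed, a numerator this small cannot get near 500 deaths a year, and the answer swings wildly with every unstated assumption, which is exactly the sensitivity problem ChatGPT identifies.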
