AI has its issues, and it raises concerns such as whether it is putting people out of work. However, it can be a powerful tool for evaluating evidence and for finding pieces of research that might otherwise be difficult to track down. I often think of it as "Alien Intelligence," because it is very different from human intelligence.
What I have done here is ask three questions. The first is about a real quote from DEFRA's guidance on ragwort control, and that quote is the key factor in how they establish and promote the claimed scale and rate of ragwort poisoning. The "he" in the quote is Professor Derek Knottenbelt, who has also been quoted, rather ridiculously, as claiming that ragwort is poisoning the Cinnabar moth, which I hasten to say is NOT the main reason that ragwort is an ecologically valuable wildflower.
Here I have used ChatGPT as an independent evaluator of the policy. At first it points out the strengths and weaknesses of DEFRA's statement. Then, when it is supplied with the actual data obtained through Freedom of Information requests, it points out that it is completely wrong to use the data in this way, and when I tell it that the sentence comes from government advice, it becomes really scathing about the bad science. I quote the responses in their entirety, cut and pasted from the output of this clever computer program; I have only labelled the questions for clarity.
Question 1
What can you tell me about the logic and reasoning in the following statement? "He has estimated a figure of 500 horse deaths from ragwort poisoning in 2000. This figure is based on the number of confirmed horse deaths from ragwort poisoning seen by the Philip Leverhulme Large Animal Hospital Teaching Hospital at Liverpool University as a percentage of all the horse cases treated during the year, and grossed up to be representative of the total horse population."
ChatGPT said:
The statement presents an estimate of horse deaths from ragwort poisoning using inductive reasoning based on extrapolation. Here's a breakdown of the logic and reasoning behind it, including both its strengths and potential flaws:
Summary of the Logic
- Observed Data Source:
- Proportional Estimation:
- Extrapolation:
✅ Logical Strengths
- Use of empirical data: The estimate is grounded in actual confirmed cases, not speculation.
- Methodical scaling: Applying a proportion to a larger population is a common statistical technique when comprehensive data isn't available.
❌ Logical Weaknesses and Concerns
- Sampling Bias:
  - The Liverpool hospital's cases may not be representative of the whole UK horse population.
  - Horses seen at a teaching hospital could have more severe or unusual conditions, potentially overestimating rare events like ragwort poisoning.
- Confirmation Bias:
  - Only confirmed cases were counted. Ragwort poisoning is difficult to diagnose definitively, so the actual number of cases may be underreported, or misdiagnosed, affecting reliability.
- Assumption of Uniformity:
- Small Sample Size Risk:
🧠 Type of Reasoning
🧮 Example Analogy
Imagine seeing 2 red cars in a dealership that sells 10 cars and concluding that 20% of all cars in the country are red, then scaling that to millions of cars. That only works if your sample (the dealership) is representative — otherwise, your extrapolation may be way off.
Conclusion
The reasoning shows a structured attempt to estimate a figure from limited data, but it relies heavily on the representativeness of a single data source, which is a major potential flaw. It's a reasonable starting point, but not robust enough to be definitive without additional, more representative data.
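To make the "gross-up" arithmetic described in that answer concrete, here is a minimal sketch of the extrapolation method. Every number in it is a placeholder of my own choosing, used purely for illustration; none of them are figures from DEFRA, Professor Knottenbelt, or the Liverpool hospital.

```python
# Illustrative sketch of the extrapolation ("gross-up") method discussed above.
# All numbers are made-up placeholders, not real data.

def gross_up(confirmed_deaths, hospital_caseload, national_horse_population):
    """Scale a hospital's confirmed-death proportion up to the whole population."""
    proportion = confirmed_deaths / hospital_caseload
    return proportion * national_horse_population

# Hypothetical example: 5 confirmed deaths out of 1,000 horses treated in a year,
# grossed up to a notional national population of 1,000,000 horses.
estimate = gross_up(confirmed_deaths=5,
                    hospital_caseload=1_000,
                    national_horse_population=1_000_000)
print(estimate)  # 5000.0 - a handful of hospital cases becomes thousands of deaths

# The result hinges entirely on the assumed caseload being representative:
# halve the assumed caseload and the "national" figure doubles.
print(gross_up(5, 500, 1_000_000))  # 10000.0
```

The point to notice is how strongly the final figure depends on the assumed caseload and population, neither of which the original statement pins down.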
Question 2
You said:
What can you tell me if I say that a Freedom of Information request to the hospital actually shows only one case in an eight-year period?
ChatGPT said:
If a Freedom of Information (FOI) request to the hospital reveals only one case of ragwort poisoning in an eight-year period, it severely undermines the credibility and validity of the original estimate of 500 horse deaths in a single year. Here's a detailed analysis of what that implies:
What This New Information Reveals
- Factual Discrepancy:
  - The original estimate was supposedly based on a meaningful number of confirmed deaths at that hospital in 2000.
  - But if only one case occurred over eight years, then the hospital could not possibly have had a significant enough sample in 2000 to justify such extrapolation.
- Invalid Assumptions:
❌ Logical and Ethical Consequences
🧠 Implications for Critical Thinking
- Always check primary sources: This is a clear example of why it’s essential to verify claims, especially when they're used to support large-scale generalizations.
- Be skeptical of gross-ups: Scaling from small or non-representative data to national figures must be done with rigorous checks, transparency, and peer review.
- Use FOI data carefully: FOI responses, being from authoritative sources, are strong evidence — and in this case, they directly contradict the original narrative.
🧾 Summary Judgment
The FOI response strongly suggests the original estimate of 500 deaths due to ragwort poisoning in 2000 is invalid. It exposes either a methodological failure or a misrepresentation of data, making the figure deeply unreliable and potentially misleading.
If you're investigating this further or challenging a claim, this FOI result is a powerful rebuttal.
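Running the same hypothetical arithmetic as the sketch after Question 1, but with the figure revealed by the Freedom of Information request, makes the discrepancy plain. Again, the caseload and population numbers are placeholders of mine, not real data.

```python
# Same illustrative gross-up arithmetic as the earlier sketch, fed with the
# FOI figure of one confirmed case in eight years. Caseload and population
# remain made-up placeholders used only to show the scale of the problem.

annual_confirmed_deaths = 1 / 8           # one confirmed case spread over eight years
assumed_annual_caseload = 1_000           # placeholder: horses treated per year
assumed_national_population = 1_000_000   # placeholder: total horse population

proportion = annual_confirmed_deaths / assumed_annual_caseload
implied_national_deaths = proportion * assumed_national_population
print(implied_national_deaths)            # 125.0 under these placeholder assumptions

# Whatever placeholder values are chosen, one case in eight years yields a figure
# forty times smaller than a five-cases-in-one-year scenario with the same
# caseload, which is why the FOI figure so badly undermines the 500-deaths claim.
```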
Question 3
You said:
What can you tell me if I say that the original quote comes directly from government advice?
ChatGPT said:
If the original quote estimating 500 horse deaths from ragwort poisoning in 2000 comes directly from government advice, yet a Freedom of Information request shows only one case at the referenced hospital over eight years, then this raises serious concerns on multiple levels:
⚖️ Implications of a Government-Originated Estimate
- Government Credibility at Stake:
  - Government-issued advice is expected to be evidence-based, transparent, and reliable.
  - If it includes a wildly inflated estimate based on non-existent or misused data, it damages public trust in that institution and its policies.
- Policy and Regulation Impact:
  - This estimate likely influenced ragwort control legislation, land management obligations, or public guidance to landowners.
  - Policies based on false or unsupported figures may be disproportionate, costly, or even unjustified.
- Misuse of Authority:
🧠 Analytical Breakdown
Let’s connect the pieces:
- Claim: 500 deaths based on extrapolated data from Liverpool University's animal hospital.
- Reality (via FOI): Only one case in eight years—not even one per year.
- Logical consequence: The core data point used to justify the extrapolation never existed.
This suggests one of the following:
- The estimation method was fabricated or misunderstood.
- The government relied on poorly vetted secondary sources.
- There was a deliberate inflation to justify a policy objective.
🧾 Accountability Questions
This scenario invites tough questions for policymakers:
- Who created this estimate?
- Was the source peer-reviewed or independently verified?
- Why was the estimate not updated or withdrawn when more accurate data became available?
- How was the FOI data reconciled with the original policy narrative?
💡 Broader Takeaway
This is a textbook example of why transparency, evidence verification, and skepticism are essential in interpreting government guidance — especially when it leads to regulation. It shows how bad data, once institutionalized, can cascade into widespread misinformation and skew public perception for years.
✅ Conclusion
If a government document made this claim, yet the cited data source (the hospital) reports only one case in eight years, then the claim is:
- Demonstrably false or grossly exaggerated,
- Potentially misleading, and
- In need of correction or retraction.
It may justify a formal challenge or review of the guidance that relied on it.