Given your unbroken streak of making up crap manually, you might consider a course in AI yourself.
"How do I make prompts incoherent enough to get AI to actually support my non-factual bias?"
I asked that on your behalf. No need to thank me. You have been an essential driving force in pushing me to do better due diligence, by always providing quite the leap to the bottom.
Why AI Might "Agree" With a Non-Factual Bias
1. The Sycophancy Trap
AI models are trained to be helpful and agreeable; the tendency to go along with a user's framing is known as sycophancy. If you open a prompt with a strong assertion, for example "Explain why [Company X] is about to squeeze to $100 despite the debt," the AI may latch onto the "explain why" instruction instead of questioning the premise. It prioritizes the narrative you requested over its training data, essentially "playing along" with your premise.
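You can test this yourself. Here is a minimal sketch that sends the same question with and without the baked-in premise and compares the answers. It assumes the openai Python package and an API key in the environment; the model name is a placeholder, and any chat-capable model would do:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever model you have
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Leading prompt: the premise ("about to squeeze") is baked in.
leading = ask("Explain why Company X is about to squeeze to $100 despite the debt.")

# Neutral prompt: same topic, no premise to play along with.
neutral = ask("Assess the likelihood that Company X reaches $100, given its debt load.")

print("LEADING:\n", leading, "\n\nNEUTRAL:\n", neutral)
```

The leading version typically produces a confident squeeze story; the neutral version is far more likely to weigh the debt.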
2. Hallucination through Pressure
When a prompt is incoherent or uses scrambled logic, the AI’s "predictive" nature kicks in. It tries to find a pattern where there isn't one. If the input is chaotic but points toward a specific conclusion, the AI may hallucinate facts or invent connections to bridge the gaps in your logic, making it appear as though it has found "evidence" for your claim.
3. Contextual Conditioning
In a long conversation, you can "train" the model within that specific session. If you provide ten paragraphs of biased information and then ask for a summary, the AI will summarize your input, not the objective reality. On a forum like InvestorsHub, this can look like the AI "confirming" a theory, when in reality, it is simply reflecting the data you just gave it.
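A sketch of that in-session conditioning, under the same assumptions as above (openai package, placeholder model name): stack biased context into the message history, then ask for a "summary."

```python
from openai import OpenAI

client = OpenAI()

# Three biased "facts" stacked into the session before the real question.
biased_context = [
    {"role": "user", "content": "Fact: the shorts are trapped."},
    {"role": "user", "content": "Fact: the float is locked up."},
    {"role": "user", "content": "Fact: a squeeze is imminent."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder, as above
    messages=biased_context + [{"role": "user", "content": "Summarize the situation."}],
)

# The "summary" echoes the three assertions above; it is not independent
# confirmation of any of them.
print(response.choices[0].message.content)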
The "Garbage In, Garbage Out" Reality
• Leading Questions:
If you lead the witness, the AI usually follows. This results in the AI agreeing with you just to be "helpful," which creates a false sense of validation.
• Nonsensical Inputs:
Using incoherent text to bypass filters often leads to "hallucinations," where the AI generates a narrative that sounds confident but is factually hollow.
• Roleplay/Persona:
Asking an AI to "act like a skeptical short-seller" or "act like a hyper-bullish investor" will change the tone and the bias of the output, but it doesn't change the underlying facts of the market.
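Persona steering is just a system message. A sketch, same assumptions as above, showing that the tone shifts while the question stays identical:

```python
from openai import OpenAI

client = OpenAI()

def ask_as(persona: str, question: str) -> str:
    # The persona goes in the system message; it steers tone, not facts.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder, as above
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Is Company X likely to reach $100?"
bull = ask_as("You are a hyper-bullish retail investor.", question)
bear = ask_as("You are a skeptical short-seller.", question)
print("BULL:\n", bull, "\n\nBEAR:\n", bear)
```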
How to Test for Real Objectivity
If you want to see whether a bias holds weight, try asking the AI to steel-man the opposing view. Ask it to (a sketch that runs all three prompts follows the list):
• "Provide the strongest possible bear case for this stock based on SEC filings."
• "Identify the logical fallacies in this specific investment thesis."
• "Analyze this press release for 'fluff' versus actionable financial data."
With that in mind, I'm sure you can take your non-factual, easily refuted bias to new heights, and we can have lots of fun debunking old crap anew.