Crafting SEO content in a specialized niche is a litmus test for AI tools like ChatGPT and its numerous assistants. If you're aiming to pen a quality article about betting, you'll find that the myriad advertised tools operating under the "give us a keyword, we'll write a blog post" paradigm fall short. They simply lack the necessary depth of understanding. ChatGPT can't even calculate the odds of an Acca bet, let alone grasp the subtler nuances.
What can we do about it? Here are three tips from Match.Center's practice.
Give More Facts
The groundwork an author must do to utilise AI assistants intelligently is strenuous. At Match.Center, we've established a Notion database that stores information about bookmakers and their bonuses. Dedicated specialists, not writers or editors, handle the research and the updating of this information.
We've also developed a system of nine indicators to evaluate the quality of betting products. You can learn more about it here. We use these evaluations on our rating pages, in our "best bookmaker for [sport]" pages, and so on. The more information you have, the more grounds you have for comparison, for declaring something good or bad, and for backing it up with an expert opinion. And there are badges for achievements!
As you can see, we've gone into this in depth. So how can you create such a system in five minutes if you don't have one? As a preliminary step, use the Perplexity.AI and GPT-4 combo with the WebPilot plugin. Perplexity gathers up-to-date data with source references in a one-paragraph synopsis, and WebPilot lets GPT-4 read the referenced pages in full right from the chat. The data still needs to be validated, but at least you know where it came from.
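Our actual setup lives in Notion, but the underlying "give more facts" idea is simple: keep verified facts separate from the prompt, then assemble them into a brief that forbids invention. Here is a minimal sketch of that assembly step; the bookmaker name, fields, and values are hypothetical, not Match.Center's real data:

```python
# Minimal sketch of fact-grounded prompting: structured facts live
# outside the prompt and get assembled into a grounded brief.
# All bookmaker names, fields and values below are hypothetical.

FACTS = {
    "bookmaker": "ExampleBet",            # hypothetical bookie
    "welcome_bonus": "100% up to $100",
    "min_deposit": "$10",
    "payout_speed_score": 8,              # one of nine quality indicators
    "odds_margin_score": 7,
}

def build_grounded_prompt(topic: str, facts: dict) -> str:
    """Turn a topic plus verified facts into a prompt that forbids invention."""
    fact_lines = "\n".join(f"- {key}: {value}" for key, value in facts.items())
    return (
        f"Write an article about: {topic}\n"
        "Use ONLY the verified facts below. Do not invent figures.\n"
        f"Verified facts:\n{fact_lines}"
    )

print(build_grounded_prompt("ExampleBet welcome bonus review", FACTS))
```

The point of the separation is that researchers can update the fact store without ever touching the prompts, and every claim in the output can be traced back to a line in the brief.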
Verify Your Brief With Your AI Council Plugin
When using GPT-4, you're likely to employ a step-by-step content creation technique: first the topic, then the subheading structure, then a synopsis and the main ideas. You tweak here and there, add facts, and ask for the text to be written... and the result still looks like a 4 out of 5 at best. This happens because GPT doesn't know what it doesn't know when generating an article: it doesn't reflect and can't stop to check itself, hence the unpredictable quality of the text.
If you're aiming for a 5 out of 5, the usual path is to manually add what's needed or ask GPT-4 to make adjustments to a specific paragraph. Both ways are tedious. There's a way to drastically reduce the number of such "final edits":
Here is the prompt for that: "Use the plugin Your AI Council to evaluate the information I gave and link it to the structure of an article. If you do not know something from the facts I provided, do not invent things. Instead, ask me to provide additional information you think you will need to compose a good article and fill in all the gaps in the structure. You can ask no more than 5 questions."
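The gap-spotting idea behind that prompt can be approximated even without any plugin: compare the planned article structure against the facts you actually have, and turn every unmatched section into a question for the author. A crude local sketch, where the section names and the keyword-overlap heuristic are my own illustration, not how Your AI Council works internally:

```python
# Crude approximation of the brief check: flag structure sections
# that have no supporting facts and turn them into questions.
# Section names and the keyword-overlap heuristic are illustrative only.

MAX_QUESTIONS = 5

def find_gaps(structure: list, facts: dict) -> list:
    """Return up to MAX_QUESTIONS questions for sections lacking facts."""
    questions = []
    for section in structure:
        words = section.lower().split()
        # A section counts as "covered" if any fact key contains one of its words.
        covered = any(
            any(word in key.lower() for word in words) for key in facts
        )
        if not covered:
            questions.append(f"What should the '{section}' section say?")
    return questions[:MAX_QUESTIONS]

structure = ["Welcome bonus", "Payout speed", "Mobile app", "Licensing"]
facts = {"welcome_bonus": "100% up to $100", "payout_speed_score": 8}
print(find_gaps(structure, facts))
```

A real AI agent asks far smarter questions than keyword matching ever could, but the workflow is the same: surface the gaps before the draft is written, not after.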
Your AI Council consists of several AI agents that evaluate information from different perspectives: sceptics and bores, all of them. My experience shows that, by comparing facts against structure, this "gang" is very good at spotting problems and asking quite sensible questions.
Is Google Bard Any Better?
Google Bard was released for wide testing after this article was initially published, and I wondered: what if Your AI Council was not really needed? Maybe another AI could solve the problem without any shiny plugins? Bard doesn't support plugins at all. Could they actually be less essential than I thought?
Input prompts and factual information were identical to what GPT-4 received, except I did not ask Bard to use the plugin. This part of the prompt looked like this: "Third step begins when I write ‘???’ instead of a snapshot. Here, evaluate the information I gave in my Snapshots and link it to the structure of an article. If you do not know something from the facts I provide, do not invent things. Instead, ask me to provide additional information you think you will need to compose a good article and fill in all the gaps in the structure. You can ask no more than 5 questions."
Bard asked me the following questions:
As you can see, unlike GPT-4, Bard failed to figure out which bookmaker was involved (it was Smarkets). Worse, my answers to its questions did not help it write a decent article. There are many logical inconsistencies, and the factual data I provided barely makes an appearance:
Based on these head-to-head comparisons, as of 14 July, Bard generates highly specialized texts at a level closer to ChatGPT (the previous generation of OpenAI models) than to GPT-4.
Optimize Your Prompt
Last but not least: working with GPT-4 and other generative models always yields a good result only with a certain probability. The higher this probability, the more time you save. The quickest way to increase the likelihood of success is to optimize your prompt.
All prompt-optimizing software is essentially the same GPT-4, trying to discern the user's intent behind the prompt and add the missing details. So nothing stops you from improving a prompt right in the chat by instructing GPT to play the role of an "AI Prompt Generator and Optimizer". But it's easier to use a ready-made solution. Among those I've tried, the best outputs come from the Prompt Perfect service: https://promptperfect.jina.ai/
Here are its suggestions for optimizing a typical 1-sentence "make text shorter" prompt:
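If you'd rather stay in the chat than use a separate service, you can wrap any terse prompt in the optimizer role yourself. A minimal sketch of building such a meta-prompt; the wording is my own illustration, not Prompt Perfect's actual template:

```python
# Wrap a terse prompt in an "AI Prompt Generator and Optimizer" role.
# The meta-prompt wording is illustrative, not Prompt Perfect's template.

def optimize_prompt(raw_prompt: str) -> str:
    """Build a meta-prompt asking GPT-4 to rewrite a terse prompt."""
    return (
        "You are an AI Prompt Generator and Optimizer.\n"
        "Rewrite the prompt below so the intent is explicit: state the "
        "audience, the desired length, the tone, and the output format. "
        "Return only the improved prompt.\n\n"
        f"Prompt to improve: {raw_prompt}"
    )

print(optimize_prompt("make text shorter"))
```

You then send the result as a new message, get back an expanded prompt, and use that expanded version for the actual generation; the extra round trip usually pays for itself in fewer retries.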
The Match.Center editorial head loves statistics, tables, and rankings. Watching MLB night games as a remedy for insomnia sparked Alex's interest 10 years ago; the numbers were followed by bets, of course.
He managed a large newsroom before transitioning into affiliate content marketing. At Match.Center, he applies his knowledge of process organization and his interest in new technologies, particularly in content production, to a multilingual project.