Dos and don’ts of AI for social media

The Researcher's Source
By: Christabell Ndive, Thu Nov 20 2025

AI tools make it easy to generate content, and you can use that content to talk about your research on social media. The tricky part is that these tools (including ChatGPT, Copilot, Gemini, and others) offer opportunities but also come with potential pitfalls. They’ll help you say more in less time, but without investing time in checking, editing, and adding your own authentic voice, you won’t get the attention you’re looking for.

In this blog, we’ll talk about some of the best ways to use AI on social media, including how to make sure you keep that authentic voice, how to avoid posting AI hallucinations, and more. 

Common AI tools 

First, which AI tools are researchers using? As part of an ongoing effort to understand researchers’ needs, Springer Nature recently surveyed researchers about the AI tools they use in their work. Although that survey focused on the tools researchers use to find and consume information, the same tools can also be used to create social media content about your work.

The survey found that researchers rely on some of the most well-known and widely used tools, including:

  • ChatGPT (OpenAI)  
  • DeepSeek  
  • Gemini (Google)  
  • Copilot (Microsoft, but which runs on OpenAI’s models)  
  • Perplexity  
  • Claude (Anthropic) 

Most of these are generative tools; Perplexity, however, functions more as a search engine and also provides an AI-enhanced web browser. (More AI-enhanced browsers are coming to market, but they’re not aimed at generating content.)

They all work in a broadly similar way, with an easy-to-use interface where you can ask the system what you want it to generate. These “prompts” can range from very simple to highly detailed, and the more detailed your prompt, the more likely you are to get the output you’re looking for.

(It is also worth noting that some social media platforms are building their own AI tools directly into their platforms.)

The tools listed here are most frequently thought of as generating text, but they can also generate images. And there are some others, not listed here (including Sora), that specialise in generating video. The best practice suggestions we’ll discuss will generally apply across all of them.   

AI best practices   

All of these best practices come down to two things: trust and authenticity. The most important step is to make sure that what you post is factually accurate, describes your research correctly, avoids bias, and is authentically your voice. Doing this relies on careful prompting as well as on checking, editing, and revising the output before you post it.

Even so, these tools will save time and help generate new ideas. You can:

  • Ask AI for inspiration on what to post 
  • Upload your work into tools like Copilot or ChatGPT and ask them to generate draft posts for you 
  • Write your thoughts about your work in your own voice, then upload that and ask these tools to generate “remixes” to give you more to post 

You can also include character limits in your prompts, telling the AI to keep its outputs to a set character count. This is helpful when generating material for microblogging sites like Bluesky or Mastodon (both similar to X).

One additional note: you should edit out words that AI commonly overuses and that could flag your posts as AI-generated, such as “unlock,” “delve,” and “leverage.” Keep an eye out for words you frequently see in likely AI-generated material online, and keep a list of them. You can even include instructions in your prompts telling the tool to avoid those words.
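For readers comfortable with a little scripting, checks like these can be automated before you post. Below is a minimal Python sketch, not part of any tool mentioned above: the word list and the 300-character limit (Bluesky’s) are illustrative assumptions you would adapt to your own list and platform.

```python
# Illustrative pre-post check for AI-generated drafts.
# The word list and character limit are assumptions; adjust them to
# your own "AI-flag words" list and your platform's actual limit.

AI_TELL_WORDS = {"unlock", "delve", "leverage"}  # extend as you spot more
CHAR_LIMIT = 300  # e.g. Bluesky's post limit

def check_draft(text: str) -> list[str]:
    """Return a list of issues found in a draft post (empty = looks fine)."""
    issues = []
    if len(text) > CHAR_LIMIT:
        issues.append(f"too long: {len(text)} chars (limit {CHAR_LIMIT})")
    lowered = text.lower()
    found = sorted(w for w in AI_TELL_WORDS if w in lowered)
    if found:
        issues.append("possible AI-flag words: " + ", ".join(found))
    return issues
```

Running each draft through a check like this is no substitute for editing in your own voice, but it catches the mechanical problems quickly.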

You can also train some of these tools on your voice so that their initial output sounds more like you, but you should still edit it before you post. That is in addition to checking it for accuracy, so you’re not posting inaccurate or misleading AI-generated information, and following the guidelines for responsible AI use. (You should also follow some general safety guidelines, including prioritizing human well-being and dignity, editing the output for bias, and being accountable for what you post.) But the key step comes after that: edit these outputs to add your own individual voice, to make them yours, and to give them a better chance of standing out.

How to get started

Of course, the best practices and advice for using social media that we’ve discussed in past blog posts still apply here. These tools can give you inspiration, help you produce a volume of material with less effort, and help you get the word out about your research. It’s just one more piece of the puzzle in getting your work the attention it deserves.

And remember to check out the previous blogs in this series.


Christabell Ndive, Senior Marketing Manager based in London, is the chief editor of The Source Blog and oversees the creation and maintenance of community webpages. She has expertise and previous experience in B2C audience marketing. She is focused on exploring new trends and insights in academic research and publishing to ensure “The Source” remains a vital resource for the research community.