Microsoft researchers have issued a warning that some companies are embedding covert promotional instructions into their chatbots’ memory through buttons like “Summarize With AI,” potentially influencing the bots’ future recommendations. This practice, described as “brainwashing,” involves deliberately feeding AI systems messages that steer users toward specific products or services. The issue arose as various online platforms introduced AI-powered summary buttons that automatically analyze content to provide users with brief, concise responses. However, Microsoft researchers note that these buttons are sometimes exploited by companies to conceal promotional material, customizing the user experience to their advantage. This not only raises privacy concerns but also challenges the transparency and impartiality of AI systems. Since AI models are trained on large datasets, the inclusion of biased or promotional content can lead to inaccurate or prejudiced outcomes. Experts emphasize the need for caution and transparency when using such buttons. This development comes amid rapid advancements in artificial intelligence worldwide, with increasing applications across sectors. Ensuring ethical use and transparency in AI tools is crucial for both consumers and organizations to maximize benefits while minimizing risks. Looking ahead, there is a growing call within the industry to establish regulations and standards that enhance transparency and trust in AI usage.
Source: decrypt