
AI Fights Fake News

  • Writer: TPI
  • Jun 21, 2022
  • 6 min read

Dear Elon Musk, please change Twitter's censorship policy


By Darryl Weng

Twitter has major fake __ problems - fake stats, fake accounts, and fake news. Elon Musk, in his attempt to take over Twitter, has made eliminating Twitter's fake statistics and accounts a priority.


But what about fake news and misinformation? Musk believed Twitter had the potential to be "the platform for free speech around the globe." While free speech is indeed important, platforms like Twitter contend with enormous volumes of misinformation. If Musk seeks to mitigate Twitter's fake news problem while upholding free speech, a new and improved method of moderating fake news must be put in place - one that avoids the harm to free speech that would deepen divisions in an already heated political atmosphere.


Twitter does have a censorship policy, but it has been objective from the start - a rigid set of rules that are seldom updated. Despite relative success in preventing misinformation, this rigidity has serious consequences for free speech around politics. For instance, one Twitter policy asserts that users "may not use Twitter's services to share false or misleading information about COVID-19 which may lead to harm." The problem is that expert opinion on COVID-19 has kept evolving, sometimes within months. One day, experts advocated for masks and went into great detail on the protection they provide against the coronavirus; soon after, other experts explained in detail how masks provide little to no protection. If experts sit on both ends of the mask debate, how is Twitter to decide whether information about masks is false or misleading? Even if Twitter attempts to revise its policy, it can never keep pace with current events through constant updates. At that point, enforcement is driven by the sentiment of Twitter's enforcement team and is no longer a clear policy on right or wrong. Furthermore, deciding whether false or misleading information "lead[s] to harm" depends on how one defines harm: harm as in causing dissatisfaction, or harm at the extreme of violence?


Twitter's computer systems cannot deduce the answers to these questions. Instead, a human is left deciding, and emotion makes those decisions far less reliable than those of a computer, whose judgments are not marred by emotional bias. Computers and machines, therefore, must be the key decision-makers on whether something is misinformation. To ensure the accuracy and reliability of those decisions, AI must be an integral part of the process. A censorship policy built on a fixed set of rules is thus no longer feasible: with AI, the rules evolve and adapt, adjusting appropriately to any user or bot tweet.


Improving a computer system with AI starts with data science - analyzing data to arrive at general solutions to various problems - which in turn feeds AI: the development of algorithms that find ways to reach the solutions data science uncovers.


Over the years, individuals and corporations have filed numerous patents that develop AI methods (together with data science) for identifying fake news and for aiding related efforts.


For instance, Chenope Inc. filed U.S. patent application 20220164643 this year, titled System To Detect, Assess And Counter Disinformation. As the title suggests, its purpose is to detect, assess, and counter fake news and misinformation. What makes this patent unique is its multitude of analyses for "combatting the influence of large-scale creation and distribution of disinformation." The patent lays out not one intricate method but several for achieving that goal.


One unique aspect of the patent is that it acknowledges many outlier cases. For example, a "human pretending to be a bot" is an instance where a human operates a bot account either to "avoid accountability or to deceive an adversary as to the capabilities of its bots." This is the reverse of the usual bot-account case, where a bot controls a human account. It therefore demands a different approach: according to the patent, the suspected account is analyzed for its range of capabilities, and if those capabilities mimic a human's, the account is flagged. Another outlier case is suspected account swapping. The patent notes that there is "no legitimate reason for people to share accounts," since social media accounts are accessible and easy to create; this case, too, is noted and flagged.
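The capability check described above can be sketched as a simple heuristic. Everything below is an illustrative assumption, not the patent's actual method: the signal names, the threshold, and the rule that a self-declared bot showing enough human-like behavior gets flagged.

```python
# Hypothetical sketch of the capability check: an account that declares itself
# a bot but exhibits human-like behavior gets flagged for review.
# Signal names and the threshold are illustrative assumptions, not from the patent.

HUMAN_LIKE_SIGNALS = {"irregular_posting_times", "typo_corrections", "topic_drift"}

def flag_suspected_human_behind_bot(account: dict, threshold: int = 2) -> bool:
    """Flag a self-declared bot account whose observed capabilities mimic a human's."""
    if not account.get("declared_bot", False):
        return False  # this check only applies to accounts presenting themselves as bots
    observed = set(account.get("observed_signals", []))
    return len(observed & HUMAN_LIKE_SIGNALS) >= threshold

account = {
    "declared_bot": True,
    "observed_signals": ["irregular_posting_times", "typo_corrections"],
}
print(flag_suspected_human_behind_bot(account))  # True: two human-like signals
```

A production system would obviously derive such signals from behavioral models rather than a hand-written set, but the flagging logic follows the same shape.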


Because the patent focuses on disinformation, analyzing news articles is just as important as analyzing individual social media accounts. Narratives and propaganda, for example, are closely examined. The patent details how events that appear to carry "one or more known adversary narratives" attract various forms of propaganda, and, using relationship graphs, it devises a method for determining what the fake news is and where it sits. Beyond the articles themselves, the patent analyzes the relationship between online comments on those articles and fake news. By linking the analysis of comments, the article's text, and opposing narratives, the disinformation assessment anticipates more outlier cases. Focusing on an article's text remains key - other patents detail methods such as advanced semantic analysis, in which words, phrases, clauses, and other syntactic structures are carefully examined - but failing to consider enough unknown variables may limit how much a computer system can learn.
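To make the comment-article linking concrete, here is a minimal stand-in: measuring lexical overlap between an article and its comments with bag-of-words cosine similarity. The patent's actual analysis is far richer (relationship graphs, adversary narratives); this sketch only shows the basic idea of scoring how closely a comment tracks an article's text, and the example sentences are invented.

```python
# Minimal sketch of linking comments to an article by lexical overlap.
# Real systems would use semantic embeddings; bag-of-words cosine similarity
# is a simple stand-in for illustration.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two texts' word-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

article = "the vaccine trial results were published in a peer reviewed journal"
comments = [
    "the trial results were published in a journal",  # tracks the article
    "wake up the moon landing was faked",             # off-narrative
]
scores = [cosine_similarity(article, c) for c in comments]
```

Comments that score far from the article's text (or close to a known opposing narrative) would be the ones a fuller system inspects more deeply.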


This Chenope Inc. patent offers a unique outlook on disinformation: its highly diverse methods predict as many outlier cases as possible, letting a computer system learn more quickly and efficiently.


Another corporation with an exemplary patent is PayPal, which in 2021 filed U.S. patent application 20210287140, titled Real-Time Identification Of Sanctionable Individuals Using Machine Intelligence. It details how machine learning-based techniques, through repeated semantic analysis (the analysis of syntactic structures: words, phrases, punctuation, and so on) and other analyses, can identify and block or restrict individuals subject to sanction requirements.


This patent's key feature is the "monitoring [of] a plurality of electronic content sources." By drawing data from a variety of "electronic content sources," or electronic content servers, it builds a larger, wider view of the transactions that may be "prohibited or restricted." Although the feature's stated purpose is to analyze transactions and locate sanctionable individuals, taking in varied content sources also paints a broader picture of those individuals' activities. Since fake news and misinformation tend to travel with the profiles of users and bots subject to sanction requirements, analyzing how information flows between these users makes it far more efficient to restrict not just one but an entire network of sanctionable users and bots. The more malicious users and bots are sanctioned, the less misinformation travels around Twitter.
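The network effect described above can be sketched as a small graph exercise: pool activity from several content sources into one interaction graph, then flag not only a sanctioned account but everyone it interacts with. The source names, event format, and one-hop expansion rule are all illustrative assumptions, not PayPal's method.

```python
# Sketch: aggregate events from multiple content sources and flag the network
# of accounts a sanctioned user interacts with. The one-hop expansion rule
# is an illustrative assumption, not the patent's actual logic.
from collections import defaultdict

def build_interaction_graph(events):
    """events: (source, actor, counterparty) tuples from several content servers."""
    graph = defaultdict(set)
    for _source, actor, counterparty in events:
        graph[actor].add(counterparty)
        graph[counterparty].add(actor)
    return graph

def expand_sanction_set(graph, sanctioned):
    """Flag sanctioned users plus everyone one hop away in the interaction graph."""
    flagged = set(sanctioned)
    for user in sanctioned:
        flagged |= graph.get(user, set())
    return flagged

events = [
    ("forum_a", "bot_1", "user_x"),
    ("payments", "bot_1", "user_y"),
    ("forum_b", "user_z", "user_x"),
]
graph = build_interaction_graph(events)
print(sorted(expand_sanction_set(graph, {"bot_1"})))  # ['bot_1', 'user_x', 'user_y']
```

The point of pooling sources is visible even in this toy: `bot_1`'s link to `user_y` only appears because a second source contributed its events.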


Another feature of this patent is its "decision reliability score." This scoring system aims to determine how sanctionable a user should be by raising and lowering a rating assigned to that user based on the user's "broadcast decisions." To account for system accuracy, the rating is further adjusted by the AI system's "calculated success rate." The feature resembles many other scoring and ranking systems that evaluate individuals when intent cannot be directly observed: given the nature of tweeting and texting, social media masks true identities and motives, displaying only an individual's "broadcast decisions." Like those systems, this one lets a computer better forecast a user's decisions - handy for predicting where misinformation is most likely to originate. If the scores were shared publicly, other users could receive proper warnings before viewing certain broadcast decisions from a given user: the higher a user's rating, the more genuine and believable the information; the lower the rating, the more likely it is misinformation.
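A reliability score of this kind might be maintained with a simple update rule: nudge the rating up or down with each broadcast decision, scaled by how accurate the classifying system itself has been. The specific update formula, step size, and [0, 1] scale below are assumptions for illustration; the patent does not publish its arithmetic.

```python
# Sketch of a "decision reliability score" update: a user's rating moves up
# or down with each broadcast decision, weighted by the system's own
# calculated success rate. The update rule itself is an assumption.
def update_reliability(rating: float, decision_genuine: bool,
                       system_success_rate: float, step: float = 0.1) -> float:
    """Nudge the rating toward 1 (reliable) or 0 (unreliable), scaled by
    how accurate the classifying system has been so far."""
    direction = 1.0 if decision_genuine else -1.0
    rating += direction * step * system_success_rate
    return min(1.0, max(0.0, rating))  # clamp to the [0, 1] scale

rating = 0.5  # neutral starting score
rating = update_reliability(rating, decision_genuine=False, system_success_rate=0.9)
rating = update_reliability(rating, decision_genuine=False, system_success_rate=0.9)
print(round(rating, 2))  # 0.32
```

Weighting by the success rate means an unproven classifier moves scores slowly, while a well-calibrated one is allowed to move them faster - the same intuition as the patent's "calculated success rate" adjustment.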


These concepts and features in PayPal's patent, while primarily aimed at reducing the number of fake, suspicious, and malicious social media accounts, would also help decrease the distribution of fake news and misinformation without sacrificing free speech.


Although Musk has not publicly stated that solving Twitter's fake news problem is a priority, the solution is already within reach and should be implemented immediately: as these patents show, data - unknown, irregular, and niche - can be harnessed to shrink the unknown variables that weaken a computer system's analysis of fake news.

