AI is Biased

Most intelligence is biased toward the things it believes it already knows.

People rely on their own knowledge and experience to make choices, offer suggestions, and steer their lives.

Software (like AI) bases its choices and responses on what it has retained from its training, and it uses that training to respond as well as it can with the knowledge it has.

The bias itself is not the problem; the control over what information each AI is given access to, and chooses to use, is something we could manage better. We might even be able to label or tag each AI product based on the sources of information it has been given access to: NY Times, Washington Post, Donut Recipes, WSJ, Republicans, House Cleaning Tips, etc.
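To make that labeling idea concrete, here is a minimal sketch of what a source "tag" for an AI product might look like in code. The product name and structure are hypothetical; this is one possible shape for such a label, not an existing standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIProductLabel:
    """A 'nutrition label' for an AI product, listing its information sources."""
    name: str
    sources: list = field(default_factory=list)

    def summary(self) -> str:
        # One-line, human-readable disclosure of what the product was fed
        return f"{self.name}: trained on {', '.join(self.sources)}"

# Hypothetical product tagged with some of the sources mentioned above
chatbot = AIProductLabel(
    name="ExampleChat",
    sources=["NY Times", "Washington Post", "WSJ", "Donut Recipes"],
)
print(chatbot.summary())
```

A label like this would let a user compare two AI products by their source lists before deciding which one to trust for a given question.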

The question I have at this time is about security and protection from misinformation: misinformation that is learned, used in responses, or created in responses to prompts by users who are less than scrupulous, beyond auditability, disingenuous, etc.

My latest exposure to the software security realm has been about how to implement security under the recent "assumption" that "they (the hackers)" will get inside. If hackers will get inside our app or environment and try to cause damage or steal critical information, how do we discover them, respond to the situation, and get them out?

Connect that interest in stopping hackers once they are inside our app with the idea that AIs can be shipped as standalone or downloadable apps. Now we need to figure out how someone got "inside" our app after it has been downloaded away from our server, and we have a very challenging technology and security opportunity.
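One common starting point for noticing that a downloaded copy has been altered is hash-based integrity checking: the publisher records a digest of what was shipped, and the app later re-hashes itself and compares. The bytes and names below are purely illustrative, and a real defense for a downloaded AI app would need far more (signed manifests, attestation, runtime monitoring), but the sketch shows the basic idea.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of an app's packaged bytes."""
    return hashlib.sha256(data).hexdigest()

# At packaging time, the publisher records the digest of the shipped app.
shipped = b"model weights and app code"
manifest_digest = fingerprint(shipped)

# At run time, the downloaded copy re-hashes itself and compares.
downloaded = b"model weights and app code"  # unmodified copy
tampered = b"model weights and APP CODE"    # attacker-altered copy

print(fingerprint(downloaded) == manifest_digest)  # True: copy is intact
print(fingerprint(tampered) == manifest_digest)    # False: copy was modified
```

The catch, of course, is that once the app lives entirely on the attacker's machine, the check itself can be removed, which is exactly what makes this a hard security opportunity rather than a solved problem.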

Published by bgbgbgbg

Social Media Manager, Information Technology Leader, Manager, Coach. Confident and Competent. Opinionated but Tactful. Cooperative to a Point! Income Search Advocate. Voice Actor (Novice but Trying)
