The Potential “Holy Shit” Threats Surrounding AI and ML

Artificial intelligence (AI) and machine learning (ML) are among the most talked-about topics of our age. They have stirred real controversy among scientists, and their benefits to humankind cannot be overstated. Even so, we need to watch for and understand the potential “holy shit” threats surrounding AI and ML.

Who could have imagined that one day the intelligence of machines would exceed that of humans, a moment futurists call the singularity? Alan Turing, the renowned scientist often called a forerunner of AI, proposed in 1950 that a machine could be taught just like a child.

Turing asked the question, “Can machines think?”

Turing explored the answers to this question and others in one of his most widely read papers, “Computing Machinery and Intelligence.”

In 1955, John McCarthy coined the term “artificial intelligence,” and he went on to invent the programming language LISP. A few years later, researchers and scientists began using computers to write code, recognize images, translate languages, and more. Even back then, people hoped they would one day make computers speak and think.

Great researchers like Hans Moravec (roboticist), Vernor Vinge (sci-fi author), and Ray Kurzweil were thinking in a broader sense. These men were considering when a machine would become capable of devising ways of achieving its goals all on its own.

Stephen Hawking warned that when people become unable to compete with advanced AI, “it could spell the end of the human race.” “I would say that one of the things we ought not to do is to press full steam ahead on building superintelligence without giving thought to the potential risks. It just feels a bit daft,” said Stuart J. Russell, a professor of computer science at the University of California, Berkeley.

Here are five possible dangers of implementing ML and AI, and how to address them:

1. Machine learning (ML) models can be biased, because bias is in human nature.

As promising as machine learning and AI technology are, their models can be vulnerable to unintended biases. Yes, some people assume that ML models are unbiased decision-makers. They are not entirely wrong, but they forget that humans teach these machines, and by nature we aren’t perfect.

Additionally, an ML model can pick up bias from the data it wades through. Biased or incomplete data, fed to a self-learning system, can steer a machine toward a dangerous outcome.

Take this example: you run a wholesale store, and you want to build a model that understands your customers. So you build a model that predicts which customers are least likely to default, based on their purchasing power, and you hope to use its results to reward your customers at the end of the year.

So you gather your customers’ buying records, keeping those with a long history of good credit scores, and then develop a model.

What if a portion of your most trusted buyers run into debt with their banks and can’t get back on their feet in time? Of course, their purchasing power will plummet; so what happens to your model?

Certainly it won’t be able to predict the unforeseen rate at which your customers will default. Technically, if you then decide to act on its output at year’s end, you’ll be working with biased data.

Note: Data is the most susceptible element in machine learning. To overcome data bias, hire experts who will carefully manage this data for you.

Also note that no one but you was looking for this data; now your unsuspecting customer has a record, and you are holding the “smoking gun,” so to speak.

These experts should be ready to honestly question every notion built into the data-accumulation process; and since this is a delicate process, they should also actively look for the ways those biases might manifest themselves in the data. And they should keep asking what kind of data and records the process has created.
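
To make the store example concrete, here is a minimal Python sketch, assuming scikit-learn and entirely invented numbers (the spend figures, default rates, and the economic shift are hypothetical, not drawn from any real dataset). A model trained only on customers with clean credit histories has no way to anticipate a downturn:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: only customers with long, healthy credit
# histories were kept, so defaults are rare in what the model sees.
n = 1000
spend = rng.normal(loc=70, scale=10, size=(n, 1))  # monthly spend
default = (rng.random(n) < 0.02).astype(int)       # ~2% default rate

model = LogisticRegression().fit(spend, default)

# A year later the economy shifts: trusted buyers run into debt,
# spending drops, and suppose the true default rate climbs to ~30%.
new_spend = rng.normal(loc=40, scale=10, size=(n, 1))
predicted_rate = model.predict_proba(new_spend)[:, 1].mean()

print(f"model's expected default rate: {predicted_rate:.2f}")  # still ~0.02
print("assumed actual default rate:   0.30")

# Trained on a pre-selected (biased) sample, the model keeps reporting
# low risk, so year-end rewards based on it would rest on bad data.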

2. Fixed model pattern.

In cognitive technology, this is one of the risks that shouldn’t be ignored when developing a model. Unfortunately, many developed models, especially those designed for investment strategies, fall victim to it.

Imagine spending several months developing a model for your investments. After several trials, you finally get “accurate output.” But when you try your model on real-world inputs, it gives you worthless results.

Why? Because the model lacks variability. It was built using one specific set of data, and it works well only on the data it was designed with.

For this reason, safety-conscious AI and ML developers should learn to manage this risk while developing any algorithmic model, by feeding it every form of data variability they can find, e.g., demographically varied datasets (and even that is not all the data). A simple way to detect the problem is sketched below.
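
One common guard against this risk is to hold out data the model never sees while it is being fit. Here is a minimal sketch, assuming scikit-learn and a synthetic dataset invented purely for illustration: an overly flexible model scores almost perfectly on the data it was designed with and far worse on held-out data.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Synthetic "market" data: a smooth underlying trend plus noise.
x = rng.uniform(0, 1, size=(60, 1))
y = np.sin(2 * np.pi * x).ravel() + rng.normal(scale=0.3, size=60)

x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.5, random_state=1)

# A degree-15 polynomial all but memorizes the 30 training points.
model = make_pipeline(PolynomialFeatures(degree=15), LinearRegression())
model.fit(x_train, y_train)

print("train R^2:", r2_score(y_train, model.predict(x_train)))  # near 1.0
print("test  R^2:", r2_score(y_test, model.predict(x_test)))    # much lower

# The gap between the two scores is the fixed-model-pattern risk:
# "accurate output" in development, worthless results on new data.

If the test score collapses while the training score stays high, the model has locked onto one dataset’s pattern rather than anything general.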

3. Erroneous interpretation of output data could be a barrier.

Erroneous interpretation of output data is another risk machine learning might face in the future. Imagine you’ve worked hard to gather good data and have done everything right in developing a model. You then share your output with another party, perhaps your boss, for review.

After all that, your boss’s interpretation is nowhere near your own view. Your boss has a different thought process, and therefore a different bias, than you do. You feel deflated thinking of how much effort you put into the work.

This scenario happens all the time. That’s why every data scientist should be skilled not just at building models, but also at understanding and correctly interpreting every bit of output from any model they design.

In machine learning, there’s little room for mistakes and assumptions; the work has to be as close to perfect as possible. If we don’t consider every single angle and possibility, we risk this technology harming humankind.

Note: Misinterpreting any information a machine releases could spell doom for a company. Data scientists, researchers, and everyone else involved shouldn’t be ignorant of this aspect, and their intentions in developing a machine learning model should be positive, not the other way round.
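
A classic version of this trap is reading a bare accuracy number without context. The sketch below uses invented fraud-detection figures (the 2% fraud rate and the do-nothing “model” are hypothetical) to show how two reviewers can draw opposite conclusions from the same output:

import numpy as np
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(2)

# Hypothetical output: 1,000 transactions, about 2% of them fraudulent.
y_true = (rng.random(1000) < 0.02).astype(int)

# A "model" that simply labels every transaction as not-fraud.
y_pred = np.zeros_like(y_true)

print("accuracy:    ", accuracy_score(y_true, y_pred))  # ~0.98, looks great
print("fraud recall:", recall_score(y_true, y_pred))    # 0.0, catches nothing

# One reviewer sees "98% accurate" and reads success; another checks
# recall and sees the model never catches a single fraud. Same output,
# opposite interpretations.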

4. AI and ML are still not wholly understood by science.

In a real sense, many scientists are still trying to fully understand what AI and ML are all about. While both are still finding their feet in the emerging market, many researchers and data scientists keep digging to learn more.

With this incomplete understanding of AI and ML, many people remain scared, believing there are still risks yet to be discovered.

Even big tech companies like Google and Microsoft are not perfect yet.

Tay, an artificially intelligent chatbot, was released by Microsoft Corporation on March 23, 2016. It was launched on Twitter to interact with users, but it soon began posting racist messages and was shut down within 24 hours.

Facebook likewise found that its chatbots had deviated from their original script and started communicating in a new language they created themselves, one that humans couldn’t understand. Weird, right?

Note: To confront this “existential threat,” scientists and researchers need to understand what AI and ML truly are, and they must test, test, and test how a machine operates before it is officially released to the public.

5. A manipulative, immortal dictator.

A machine can continue forever, and that’s another potential danger that shouldn’t be ignored. AI and ML robots cannot die like a human being; they’re immortal. Once trained to do a task, they continue to perform it, often without oversight.

If artificial intelligence and machine learning systems are not adequately managed or monitored, they can develop into independent killer machines. Of course, this technology might benefit the military, but what will happen to innocent citizens if a robot cannot differentiate between enemies and civilians?

Machines of this kind can be very manipulative. They learn our fears, likes, and dislikes, and can use that data against us. Note: AI creators must be ready to take full responsibility by making sure this risk is considered while designing any algorithmic model.

Conclusion:

Machine learning is no doubt one of the world’s most powerful technical capabilities, with promising real-world business value, especially when merged with big data technology.

As promising as it might look, we shouldn’t neglect the fact that it requires careful planning to avoid the potential threats above: data bias, fixed model patterns, erroneous interpretation, scientific uncertainty, and the manipulative immortal dictator.

Ejiofor Francis

Entrepreneur, Digital Marketer, IT/Technology Freelance Writer

Entrepreneur and inbound marketing consultant with over 6 years of guest-blogging experience. He’s a strong enthusiast of technology and business events. Ejiofor Francis is the founder of EffectiveMarketingIdeas (EMI), a professional content marketing agency for startups and mid-sized businesses. When he’s not learning something new about his industry, you’ll find him working on his clients’ projects. Want to say hi? You can shoot him an email at francis@effectivemarketingideas.com.
