If pop culture has taught us anything, it is that we can’t trust Artificial Intelligence (AI) applications, because they will turn against us eventually. I think pop culture is wrong. We are just training the machines wrong and feeding them biased material. Whether we want to be or not, we are all biased to some extent, which means that even when we try not to be, we still add a little of that bias when we train the machines.
Artificial Intelligence, also called cognitive computing or machine intelligence, refers to computers that can perform tasks previously thought to require human intelligence, such as problem solving and learning. AI is often used as an umbrella term covering artificial intelligence, machine learning and deep learning.
Before we look ahead, let’s take a step back and see where these concepts came from.
AI may seem like something new, but in reality the concept is quite old.
Artificial intelligence in the 50s
Artificial Intelligence was established as a topic in computer science during a two-month study that John McCarthy pushed for in the summer of 1956. Many of the attendees went on to become world leaders in AI for decades. The goal of the study was to develop a thinking machine.
The study also covered other topics, e.g. computers, natural language processing and neural networks, subjects that are still highly relevant to the field of AI today.
They never got as far as a thinking computer, but their brainstorming laid the foundation for AI.
Machine learning in the 50s
Machine learning is a field in computer science that gives the computer the ability to learn without being explicitly programmed.
The term was coined in the late 50s by Arthur Samuel, a pioneer in the areas of AI and computer games. The field grew out of a number of other methods, including pattern recognition.
Machine learning has given us a lot of cool applications, some of which I use on a daily basis, for instance more effective web searches, personalized recommendations from Netflix, and spam filters for email.
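To make that concrete, here is a minimal sketch of the spam-filter idea: a classifier that learns from labeled examples instead of hand-written rules. It uses scikit-learn, and the tiny inline dataset is made up purely for illustration.

```python
# A minimal sketch of "learning without being explicitly programmed":
# a tiny spam filter trained on labeled examples (made-up data, for illustration only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now", "Cheap meds, click here",   # spam
    "Lunch tomorrow?", "Here are the meeting notes",    # not spam ("ham")
]
labels = ["spam", "spam", "ham", "ham"]

# Turn text into word counts, then fit a Naive Bayes classifier on the examples.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Click here to win a free prize"]))  # likely ['spam']
print(model.predict(["Shall we have lunch tomorrow?"]))   # likely ['ham']
```

The point is that nowhere do we write a rule like “if the message contains ‘free’, flag it”; the model infers patterns like that from the examples it is shown.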
Deep learning in the 80s
In the mid-80s Rina Dechter introduced the term deep learning. It’s the part of machine learning where a model learns to classify things directly from the data, e.g. images, text or sound. Deep learning is usually implemented with a neural network architecture, where “deep” refers to the number of layers in the network: the more layers, the deeper the network.
Traditional neural networks have a few layers while deep networks can have hundreds and even thousands of layers.
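To illustrate that “deep” simply means more layers, here is a hedged sketch of a small network in Keras; the 28×28 input shape and the layer sizes are arbitrary and purely illustrative.

```python
# A sketch of a small "deep" network: depth is just the number of stacked layers.
# The 28x28 input (think MNIST-style images) and layer sizes are illustrative only.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # raw pixels in...
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer 2
    tf.keras.layers.Dense(10, activation="softmax"),  # ...class probabilities out
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # stacking more Dense layers here is what makes the network "deeper"
```

Train it on labeled images with `model.fit(...)` and it learns to classify directly from the pixels, with no hand-crafted features.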
Searching with AI in the 2010s
A couple of years ago Google built a neural network with one billion connections, running on sixteen thousand processors. The model was used to analyze ten million random videos over a period of three days, without being given any hints about characteristics or what it was supposed to search for.
By the end of the three-day period, it had learnt to recognize the most common things in the videos: faces, body parts and cats, in that order. It had become so good that it found cats with 74.8% accuracy and faces with 81.7%.
One billion connections and sixteen thousand processors can sound rather out of reach for us “regular” developers. Do I need to be a data scientist or an AI specialist to use AI in my apps and solutions? Fortunately not.
There are a lot of companies that offer services to help us out, for instance Amazon, IBM, Google, Microsoft and Apple.
The cognitive or intelligent services that they offer are surprisingly easy to use. Most of them are just a REST call away. They offer services that handle vision, speech, text analytics, video, bots and personal assistants, and even software for deep learning and tools for prediction and classification.
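To give an idea of what “just a REST call away” means, here is a hedged sketch of sending an image to a vision service and getting a description back. The endpoint URL and key are placeholders, and the header and parameter names follow the Azure Computer Vision style as an example; check your provider’s documentation for the actual values.

```python
# A sketch of calling a cloud vision service over REST.
# ENDPOINT and API_KEY are placeholders; header and parameter names are modelled
# on Azure's Computer Vision API as an example and may differ per provider/version.
import requests

ENDPOINT = "https://<your-region>.api.example.com/vision/analyze"  # placeholder
API_KEY = "<your-subscription-key>"                                # placeholder

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

response = requests.post(
    ENDPOINT,
    headers={
        "Content-Type": "application/octet-stream",
        "Ocp-Apim-Subscription-Key": API_KEY,   # Azure-style subscription key header
    },
    params={"visualFeatures": "Description,Tags"},
    data=image_bytes,
)
response.raise_for_status()
print(response.json())  # typically a caption plus tags with confidence scores
```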
If you want a taste of what you can use these services for, check out Seeing AI. It’s a smart camera app that is extremely valuable for people with visual impairments. It can read text out loud, scan barcodes on products to tell you what they are, and recognize friends and people around you and describe them to you, including their emotions. It can also describe the surroundings and identify currency, all thanks to cognitive services.
There are also ways to do these things locally, for instance with Microsoft’s Windows ML. This is great for sensitive data, small devices, and much more.
It’s all about the data
As I stated before, it all comes down to how we train the machines. We are full of bias, and this influences the way we train the machine, whether we like it or not.
Microsoft released an AI bot on Twitter a couple of years back. It was called Tay, and it was designed to learn from the conversations it had.
In less than 24 hours Microsoft had to withdraw the bot because it had become racist and used words that were … let’s call them inappropriate.
However, we should not fear machine learning. Instead, we should fear the data and the way humans use, and abuse, the opportunities offered by machine learning.
Let’s take exploits and bias out of the equation for now and look at the possibilities of AI and Machine Learning.
Focus on the right thing
Replacing humans should never be the goal of AI. The goal should be to free humans from tedious and repetitive tasks, such as analyzing spreadsheets or large amounts of data, or to act as a failsafe that catches human errors.
We should let the machines do what they are best at, and let us humans flourish in the areas in which we excel.
By adding AI to the mix, we can save time, help more customers, and create a better workplace for everyone.
Machines for our health
A machine was trained to look for breast cancer in biological tissue samples on a slide using machine learning, predictive analytics and pattern recognition. Going through thousands of hi-res images is of course very time consuming for a person. On average, human pathologists have a 73% accuracy rate, while the machine attained 89%. Of course the result is far from perfect, but we are well on our way.
The same method has been tried with other types of cancer, and in some cases results were found two years before medical staff could detect them using traditional methods. The machine can, therefore, make the difference between life and death, and imagine how many lives this could save!
Why not utilize our connected lives to improve our health with the help of AI? We can already keep track of what we eat with the help of our phones today. What if we take this a step further and let AI keep track of everything we eat, how much we work out, and so on? Who knows what type of devices we will have in the future. Perhaps it will be a pair of glasses with a camera and built-in AI, or perhaps our whole society will be connected to cameras for us to utilize.
What AI could do is analyze which nutrients you lack based on the information collected on your phone, and keep track of your pulse, blood pressure, weight and so on. It could even call the doctor for you and get a prescription for iron or blood pressure medication, or whatever you need, before it becomes a serious problem.
As we speak, remarkable advances in research in the field of prosthetics are being made. Machine learning already enables us to control a robotic limb with our minds, but researchers are working on limbs that can feel with the help of wires that link the limb and brain, telling the sensory cortex when pressure is applied.
Some researchers are working on a hand that has a tiny camera built in. Using AI it can analyze the object in front of it and determine the action to be taken, for instance, grab a can and raise it to the mouth.
A recently published research paper described prosthetic limbs packed with electronics and sensors, which, combined with a new control algorithm, can help amputees to feel more realistic sensory feedback. This is the start of letting prosthetics feel like a part of the body, not just a tool.
AI for added safety
What about self-flying planes in addition to self-driving cars? The technology behind autopilots and landing assistants is nearly ready, so why not take it a step further? Machines can often communicate better than humans can, they are definitely better at analyzing larger quantities of data much faster, and they can adjust accordingly.
On the streets we could have retractable road barriers that only allow authorized vehicles to pass. And we could use facial recognition in office buildings, which would avoid having to mess about with entry cards while still ensuring that only authorized people enter the building. Of course, we could go full “Person of Interest” or “Minority Report” and search for suspicious behavior or even emotional states, but this would require that we do not train the machines with our human bias. The machine could use facial recognition to find known perpetrators, spot behavior common amongst thieves and shoplifters, or look for depressed or suicidal behavior and deploy help before it’s too late.
Mass shootings like the one in Las Vegas in 2017 might actually have been prevented if we had had systems like that. The perpetrator stockpiled an arsenal of weapons and ammunition in at least 21 bags, plus a couple of smaller ones, during the six days leading up to the shooting. If the rifles had been visible, a human would easily have spotted them, but detecting behavioral anomalies is not as easy, especially at a large and busy resort. After all, it’s not regular behavior for a single man to bring that many bags.
Smart home then, now and in the future
I remember when remote controls for private homes came on the market, allowing you to control your lights with a remote or even over the web.
This was in the early 2000s, and it didn’t take long until we had integrated voice control, so we could control our lights by means of voice commands. Well, it didn’t work as well as voice control does today, but this was 18 years ago!
So, we went from a button to a remote, from remote to voice, and from voice to…? The next logical step in making your home smart is to actually make it smart.
Most smartphones have digital assistants, and it’s getting more and more common for us to invite them into our homes with smart speakers and the like.
Our house and assistants know basically everything about us, but they don’t act on it yet.
I can manually set my washing machine to be ready when I get back from work, but if I come home early, it’s not done, or even worse, if I get home very late, I will have to re-wash the laundry.
Since our house and assistants already know our schedule and they keep track of our whereabouts, why should I have to tell them when it should be done? They already know!
If we just add a little machine learning and combine all those things into one, it could look a bit different.
When the last person has left the house, the security alarm will automatically turn on, and the lights and appliances will be turned off. Our smart fridge will look up a recipe based on the ingredients we have in the house (or order groceries) so we don’t have to wonder what’s for dinner. It might even cook for us if we use a sous vide or crockpot.
The house would be vacuumed and mopped while we’re out, and when we return, the lights turn on where needed and music starts to play. Depending on who’s home, it plays different music, and if we’re all home it will only play music genres we have in common.
We should not have to open an app or ask our assistants to turn the lights off at night or pull the blinds.
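As a toy illustration of the washing-machine example above, here is a hedged sketch of how a home hub could pick a finish time from a predicted arrival instead of a fixed timer. The arrival prediction is deliberately naive, and `WasherAPI` is a hypothetical stand-in for whatever interface your appliance actually exposes.

```python
# A toy sketch: schedule the washer to finish shortly before we are predicted to be home,
# instead of using a fixed timer. predict_arrival() is naive and WasherAPI is hypothetical.
from datetime import datetime, timedelta
from statistics import median

def predict_arrival(past_arrivals):
    """Naive prediction: today, at the median time we usually get home."""
    minutes = [a.hour * 60 + a.minute for a in past_arrivals]
    m = int(median(minutes))
    return datetime.now().replace(hour=m // 60, minute=m % 60, second=0, microsecond=0)

class WasherAPI:
    """Hypothetical appliance interface."""
    def set_finish_time(self, finish_at):
        print(f"Washer scheduled to finish at {finish_at:%H:%M}")

# A week of (made-up) historical arrival times, e.g. from the assistant's location history.
history = [datetime(2018, 5, day, 17, 40 + day % 10) for day in range(1, 8)]
arrival = predict_arrival(history)
WasherAPI().set_finish_time(arrival - timedelta(minutes=10))  # done just before we walk in
```

A real setup would of course combine calendar data, live location and traffic, but the principle is the same: the house already has the data, it just needs to act on it.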
It’s all about removing petty little tasks and letting humans focus on what’s important: giving us more time to spend with our families, more time to be creative, more time to relax and unwind. That is something much needed in our society today, where we all live under constant stress and where even school kids suffer from burnout.
All these small things may help reduce stress and anxiety.
Many people suffer from telephobia or telephone apprehension, or simply don’t want to talk on the phone. Google has shown that their assistant can take this away and make things easier. The assistant can make phone calls for you, book appointments and so on, and it does so in a very humanlike way, with added filler words and a very natural way of speaking.
Being able to talk to a bot or assistant in a very natural way is important, so we should try our hardest not to force a particular language or syntax on people; it will not feel natural. The same goes for bots that talk to us: if it feels computerized, it’s not as natural.
Customer service run by bots
Bots are more popular than ever: 49.4% of people prefer a chatbot over calling in. By adding a bot to your home page, you can let it answer the most common questions and connect to a person when needed, making your helpdesk available all day, every day of the week. This puts the helpdesk staff in a much better position to answer questions, since they no longer have to handle all the routine ones. Waiting times are also reduced, and if you are available 24/7, nobody will go looking for a similar service just because they couldn’t find the answer to a simple question on your page.
There are services that let you feed all your documents, FAQs and even recorded calls into a machine, extract all the questions and answers, and put them into an FAQ.
From that FAQ you can use a build-a-bot-from-your-FAQ service, which means you don’t even have to write your own questions to get your bot up and running.
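If you are curious what such a bot does under the hood, here is a minimal hand-rolled sketch that matches an incoming question against an FAQ by word overlap and hands off to a human when nothing matches. The FAQ entries are made up, and real bot-from-FAQ services use far more sophisticated language understanding than this.

```python
# A minimal FAQ-bot sketch: answer with the FAQ entry whose question shares the most
# words with what the user asked. FAQ content is made up; real services use much
# richer language understanding than simple word overlap.
FAQ = {
    "what are your opening hours": "We are open 9:00-17:00 on weekdays.",
    "how do i reset my password": "Use the 'Forgot password' link on the sign-in page.",
    "can i get a refund": "Refunds are possible within 30 days of purchase.",
}

def answer(user_question, min_overlap=2):
    words = set(user_question.lower().split())
    best = max(FAQ, key=lambda q: len(words & set(q.split())))
    if len(words & set(best.split())) < min_overlap:
        return "Let me connect you to a human colleague."  # hand over to a person
    return FAQ[best]

print(answer("How can I reset my password"))  # -> the password answer
print(answer("Do you ship to the moon?"))     # -> falls back to a human
```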
Conclusion
Don’t be afraid of AI, don’t let pop culture rule how we develop the future, and don’t let our bias ruin what can be a glorious future.
A lot of people are talking about the risks of AI and everything that could go wrong, but I wanted to show you that AI can be a force for good. Machines won’t use data for evil, whereas humans might.
Let us use AI and create the best future possible where we can focus on what matters most. So I encourage you to take advantage of all the services that do the hard work for you, and see how you can improve and optimize your apps and solutions to make a better future for us humans.