nz365guy

Artificial Intelligence in Today's World

Delving into the topic Artificial Intelligence has raised some interesting questions. In my recent posts, I’ve explored the concepts of Practical AI and Mixed Reality, and the exciting possibilities opened up by evolving AI technology.

Alongside this, though, questions are raised about the ethical impact of AI, and what it is going to mean for us in the future. From my conversations with people in the community, I know I’m not the only one asking these questions. While I don’t have all the answers, maybe at this point asking the questions is a good place to start.

When I think of ethics and AI, one of the key things to try to understand is the societal impact of AI.

There is definitely a need for ongoing work and discovery in this space. It is pretty clear that AI is going to impact the way we work, how we work, and even the opportunity we have to work. One of the risks is that the biases of the people or organizations who build AI systems unknowingly codify those biases into their algorithms.

Amazon’s attempt to build a recruitment system based on a data set of their existing employee base is a good example of this. It seemed like an ideal use case for AI: take a huge data set of ideal candidates who you’ve already hired, understand patterns in their experiences and attributes, and automatically score applicants based on how similar they are to this existing standard.

What Amazon didn’t realise was that the algorithm couldn’t provide a gender-neutral assessment, because the data it was being fed was overwhelmingly male-dominated. Understandably, Amazon had to scrap this approach. As the power of AI is harnessed to make the decision-making process more efficient, we need to ensure that it doesn’t reinforce stereotypes through built-in biases.
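To make the mechanism concrete, here is a deliberately contrived sketch of how "score applicants by similarity to past hires" codifies bias. All of the data and attribute names (`degree`, `attended_club_x`) are invented for illustration; this is not Amazon's actual model, just the general shape of the problem.

```python
# Contrived sketch: score an applicant by average feature overlap with
# past hires. If the historical data is skewed on some attribute that
# is irrelevant to the job, the score penalises anyone who differs.

def similarity_score(applicant, past_hires):
    """Average fraction of the applicant's features shared with each past hire."""
    def overlap(a, b):
        shared = sum(1 for key in a if a[key] == b.get(key))
        return shared / len(a)
    return sum(overlap(applicant, hire) for hire in past_hires) / len(past_hires)

# Historical hires: "attended_club_x" is irrelevant to the job,
# but every past hire happens to share it.
past_hires = [
    {"degree": "cs", "attended_club_x": True},
    {"degree": "cs", "attended_club_x": True},
    {"degree": "cs", "attended_club_x": True},
]

a = similarity_score({"degree": "cs", "attended_club_x": True}, past_hires)
b = similarity_score({"degree": "cs", "attended_club_x": False}, past_hires)
print(a, b)  # identical qualifications, yet b scores lower
```

The model has learned nothing about job performance; it has learned what its training set looked like, which is exactly the failure mode described above.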

Let’s talk about the ethics of AI when it comes to autonomous weapons. What happens when an AI-equipped weapon is more efficient at doing its job than a human could ever be?

Army of None: Autonomous Weapons and the Future of War is an excellent book to read if the whole area of warfare, and where it is going with AI, is of interest to you. The ability to fight wars using extremely efficient AI raises a mess of ethical dilemmas.

The book gives an example. Say 50 drones are competing against another 50 drones. They are let loose with a clear objective: destroy or be destroyed. Systematically, these drones calculate the best way to destroy the enemy drones, with a speed and accuracy that humans could not keep up with.

From a testing point of view, this is where the technology already is. What happens when it is deployed in real-life warfare? If a country maintains an ethical stance that it will not use AI-equipped weapons, but its enemy does not share that stance, it would be fighting from a disadvantaged position.

I believe that we are going to see a pivot away from the traditional jobs that we have known for the last 100 to 200 years, into new job roles that do not yet exist.  Some positions that currently exist will become irrelevant. While as humans we are programmed to resist change, I do not think there is a need for us to protect these roles as if they are living, breathing creatures. They are things we do, ways of working, and we need to move from trading time for money to working smarter.

Will AI actually replace job roles? In my previous post on practical AI, I looked at how in some cases there will be a detrimental effect on low-skilled roles. Additionally, we may see an impact on very high-skilled roles that traditionally take people several years of university study and thousands of practical hours to master.

The way I see it, where we are moving with AI will see some roles entirely replaced, especially where the work is repeatable.

Machine Learning is a subset of Artificial Intelligence where an algorithm is trained to identify patterns and make predictions based on data sets.
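The definition above can be illustrated with a minimal sketch: "train" on a labelled data set, then predict a label for a new point by finding the most similar training example (a nearest-neighbour approach). The toy data set here is invented for illustration and is far smaller than anything you would train on in practice.

```python
# Minimal sketch of "identify patterns and make predictions from a data set":
# predict a new point's label from its nearest labelled neighbour.

def nearest_neighbour_predict(training_data, point):
    """Return the label of the training example closest to `point`."""
    def squared_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(training_data, key=lambda example: squared_distance(example[0], point))
    return closest[1]

# Toy "data set": (feature vector, label) pairs.
training_data = [
    ((1.0, 1.0), "small"),
    ((1.2, 0.8), "small"),
    ((8.0, 9.0), "large"),
    ((9.5, 8.5), "large"),
]

print(nearest_neighbour_predict(training_data, (1.1, 0.9)))  # small
print(nearest_neighbour_predict(training_data, (9.0, 9.0)))  # large
```

Real systems use far more sophisticated algorithms, but the principle is the same: the quality of the predictions is bounded by the quality of the data set.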

The results can be hugely helpful – take Google as an example. When you start typing a search query, Google uses machine learning to make increasingly accurate suggestions to help you find what you are looking for faster. But data quality is a key consideration when relying on outputs from machine learning. We need to be asking:

  • Does the set contain sufficient data to uncover the repeatable patterns?

  • Are there any known quality problems? How are any errors handled?

  • What about missing data? How is that factored in?

  • Does it make sense to use more than one data set? If so, how do the sets correlate?
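The questions above can be partly automated as a pre-training data-quality report. This is a hedged sketch: the field names, thresholds, and record shape are all invented for illustration, and a real pipeline would check much more (types, ranges, duplicates, correlations across sets).

```python
# Sketch of a pre-training data-quality report covering two of the
# questions above: is there enough data, and where is data missing?

def data_quality_report(rows, required_fields, min_rows=100):
    """Summarise data-set size and missing values per required field."""
    report = {
        "row_count": len(rows),
        "enough_data": len(rows) >= min_rows,
        "missing": {field: 0 for field in required_fields},
    }
    for row in rows:
        for field in required_fields:
            if row.get(field) in (None, ""):
                report["missing"][field] += 1
    return report

# Invented example records with gaps in them.
rows = [
    {"age": 34, "salary": 70000},
    {"age": None, "salary": 65000},
    {"age": 29, "salary": ""},
]

report = data_quality_report(rows, ["age", "salary"], min_rows=2)
print(report)
```

How the missing values are then handled (dropped, imputed, flagged) is a modelling decision in its own right, which is exactly why these questions need to be asked before training, not after.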

Before an organization is in a position to ask the above questions, I would hope it had already spent time exploring higher-level questions like:

  • How will the output influence our business?

  • How will it enhance the way we do business?

  • Will it evolve the products and the services that we offer?

  • Will it transform the way we do business?

Microsoft’s current commercially available AI offerings are Cognitive Services and Azure ML, which allow you to train machine learning models. Increasingly, Microsoft is also building machine learning into the out-of-the-box functionality of Dynamics 365 and the Power Platform, including Sales, Customer Service, and Field Service.

Beyond out-of-the-box though, there are some incredible solutions being enabled by AI. Recently I started working with Maptaskr, an Independent Software Vendor (ISV) based in Western Australia who use AI with geospatial data. One example of their work is a sub-regional view of GDP data.

The risk of hacking algorithms will naturally be one of the issues that we have to look out for in the future.

One of the reassuring things about working with Microsoft is their focus on security, particularly around the Internet of Things (IoT): making sure that all practical avenues have been taken to secure the AI infrastructure, and preventing anyone with malicious intent from turning these devices into an attack tool in a cyber-security scenario.

Application of AI is going to continue to increase in the years ahead as we see platforms scale and the ability to train algorithms based on more data than before. We are going to see many new scenarios that maybe haven’t been thought of yet. It is going to transform the way we do business.

The challenge for those of us working in this space is to be continually asking ourselves:

  • How do I need to evolve in this changing world?

  • How can I start adjusting now?

  • What do I need to learn?

Often it is our resistance to evolution that holds us back from embracing what is next.

What do you think?  Leave a comment below if you want to join the conversation.