The Road To AI – Speedy And Full Of Blind Spots

It is fair to say that Artificial Intelligence (AI) is everywhere. Newspapers and magazines are littered with articles about the latest advancements and the new projects being built on AI and machine learning (ML) technology. In the last few years, all of the necessary ingredients – powerful, affordable computing, advanced algorithms, and the huge amounts of data required – have come together. We are even reaching broad acceptance of the technology among consumers, businesses, and regulators alike. It has been speculated that over the next few decades, AI could be the biggest commercial driver for companies and even entire nations.

However, as with any new technology, adoption must be thoughtful, both in how the technology is designed and in how it is used. Organizations also need to make sure they have the people to manage it, which is often an afterthought in the rush to achieve the promised benefits. Before jumping on the bandwagon, it is worth taking a step back, looking more closely at where AI blind spots might develop, and considering what can be done to counteract them. Only then will organizations be able to truly take advantage of the benefits that new technology, especially AI, can bring them in the long run, and understand how it can change the future of their business.

The lack of due diligence in ML development


As the pace of AI and ML development intensifies alongside heightened awareness of cybercrime, organizations must make sure they account for any potential liabilities.

Despite this, survey data shows that security, privacy, and ethics are low-priority issues for developers when building their machine learning solutions.


According to O’Reilly’s recent AI Adoption in the Enterprise survey, security is the most concerning blind spot within organizations. In fact, nearly 73 percent of senior business leaders admit that they don’t check for security vulnerabilities during model building. Additionally, more than half of organizations also don’t consider fairness, bias, or ethical issues during machine learning development. Privacy is similarly neglected, with only 35 percent keeping this top of mind during model building and deployment.


In contrast to this lack of attention to security and privacy, the majority of resources in machine learning development are focused on making sure AI projects are accurate and successful. For example, 55 percent of developers mitigate unexpected outcomes or predictions, which still leaves a large share who do not. Furthermore, 16 percent of respondents don't check for any risks at all during development.
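To make this concrete, the sketch below shows one minimal form such a check could take: comparing a model's positive-prediction rates across demographic groups before deployment. It is an illustration only; the column names, toy data, and review threshold are hypothetical and not drawn from the survey.

```python
# Illustrative sketch only: a minimal fairness check of the kind the survey
# suggests is often skipped. Column names ("group", "approved") and the
# 0.1 threshold are hypothetical placeholders.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Return the largest gap in positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Toy predictions from a hypothetical loan-approval model.
preds = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

gap = demographic_parity_gap(preds, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # review threshold chosen arbitrarily for illustration
    print("Flag model for fairness review before deployment.")
```

A check like this costs a few lines of code and a validation set, which is part of why its widespread omission is so striking.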


This lack of due diligence is likely due to numerous internal challenges and factors, but surprisingly, a big part of the problem is finding the skills and resources to complete these critical parts of the development process. In fact, the most chronic skills shortages in technology are centered around ML modeling and data science. To make progress on security, privacy, and ethics, organizations urgently need to address this.


What can be done?

AI maturity and usage have grown exponentially in the last year. However, considerable hurdles remain that keep the technology from reaching critical mass. To ensure that AI and ML are representative of the people they serve and are used responsibly, organizations need to adopt certain best practices.

One of these is making sure that the technologists who build AI models reflect the broader population. This can be difficult from both a data-set and a developer perspective, especially in the technology's infancy. It is therefore vital that developers are aware of the issues relevant to the diverse set of users expected to interact with these systems. If we want to create AI technologies that work for everyone, they need to be representative of all races and genders.
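On the data-set side, a simple first step is auditing how the demographic mix of the training data compares with the population the system will serve. The sketch below is one hypothetical way to do that; the group labels and benchmark shares are placeholders, not figures from the article.

```python
# Illustrative sketch only: comparing the demographic mix of a training set
# against an assumed reference population. Labels and benchmarks are
# hypothetical placeholders.
from collections import Counter

training_groups = ["A", "A", "A", "A", "B", "C", "A", "B"]  # toy group labels
reference_share = {"A": 0.50, "B": 0.30, "C": 0.20}          # assumed benchmark

counts = Counter(training_groups)
total = len(training_groups)

for group, target in reference_share.items():
    actual = counts.get(group, 0) / total
    gap = actual - target
    status = "over" if gap > 0 else "under"
    print(f"Group {group}: {actual:.0%} of data vs {target:.0%} benchmark "
          f"({status}-represented by {abs(gap):.0%})")
```

An audit like this will not fix a skewed data set on its own, but it makes the skew visible early enough to do something about it.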


As machine learning inevitably becomes more widespread, it will become even more important for companies to adapt to and excel in this technology. The rise of machine learning, AI, and data-driven decision-making means that data risks extend well beyond data breaches and now include deletion and alteration. For certain applications, data integrity may end up eclipsing data confidentiality.
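As a small illustration of what guarding integrity rather than confidentiality might look like, the sketch below checks a training data file for alteration or deletion by comparing checksums before a training run. The file path and the idea of recording a reference digest at approval time are assumptions for the example, not a prescription from the article.

```python
# Illustrative sketch only: detect alteration or deletion of a training data
# file by comparing SHA-256 checksums. The path and reference digest are
# hypothetical placeholders.
import hashlib
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical usage: record the digest when the dataset is approved,
# then re-check it before each training run.
dataset = Path("training_data.csv")  # placeholder path
if dataset.exists():
    current = file_sha256(dataset)
    expected = "..."  # digest recorded at approval time
    if current != expected:
        print("Dataset has changed since approval: halt training and investigate.")
else:
    print("Dataset is missing: possible deletion, investigate before training.")
```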


We will need to learn to live with the premise that there will not always be a perfect answer when using AI. These systems learn from the data that is fed into them, so their behavior depends on each use case, and there may never be a point at which every result is right. With no precise checklist for these situations, we must learn to adapt and build new training and education platforms. These platforms will be vital in allowing AI to become representative of all races and genders over the next few years. The talent pool is only set to grow, yet the challenge remains to ensure it becomes even more diverse.


As AI and ML become increasingly automated, organizations must invest the necessary time and resources to get security and ethics right. To do this, enterprises need the right talent and the best data. Closing the skills gap and taking another look at data quality should be their top priorities in the coming year.
