Social Impact of AI Technology

Big tech companies are pushing the boundaries in search of cutting-edge technology and are becoming digital sovereigns with footprints across geographies, creating new rules of the game

The AI (Artificial Intelligence) race is getting increasingly interesting, with the two main protagonists, Alphabet (Google’s parent company) and Microsoft, duelling for pole position. On Tuesday, 14 March 2023, Google announced tools for Google Docs that can draft blog posts, build training calendars and generate text. It also announced an upgrade for Google Workspace that can summarise Gmail threads, create presentations and take meeting notes. "This next phase is where we're bringing human beings to be supported with an AI collaborator, who is working in real time," Thomas Kurian, Chief Executive of Google Cloud, said at a press briefing.

Microsoft, on Thursday, 16 March 2023, announced its new AI tool, Microsoft 365 Copilot, which will combine the power of LLMs (Large Language Models) with business data and the Microsoft 365 apps. “We believe this next generation of AI will unlock a new wave of productivity growth,” said CEO Satya Nadella. All this is in addition to the ongoing chatbot battle between Microsoft-funded OpenAI’s ChatGPT and Google’s Bard.

As these companies and many others invest billions in research and development of tools that they say will help businesses and their employees improve productivity, the social impact of this technology is coming under scrutiny. While it is accepted that AI will have a deep influence on our society, it is equally true that not all of it will be positive.

AI can significantly improve efficiency and support human beings by augmenting their work and taking over dangerous jobs, making the workplace safer. Even so, it will have economic, legal and regulatory implications that we need to be ready for, and we will have to build frameworks to ensure that it does not cross legal and ethical boundaries.

The naysayers predict large-scale unemployment, with millions of jobs lost, creating social unrest. They also fear bias in algorithms leading to unfair profiling of people. Another challenge that will affect day-to-day life is the technology's ability to generate fake news, disinformation and misleading or inappropriate content. The problem is that people tend to believe a machine, thinking it infallible. The use of deepfakes is not a technology problem in isolation; it reflects the cultural and behavioural patterns on display on social media these days.

*Question of IP

There is also the question of who owns the IP for AI innovations. Can it be patented? The United States and the European Union have guidelines on what can and cannot be considered a patentable invention, but the debate continues over what constitutes an original creation. Can new artifacts generated from old ones be treated as inventions? There is no consensus, and authorities in different countries have delivered diametrically opposite judgements. A case in point: the patents filed by Stephen Thaler for his system DABUS (Device for the Autonomous Bootstrapping of Unified Sentience) were rejected in the UK, the EU and the USA but granted in Australia and South Africa. One thing is clear: given the complexities involved in AI, the IP protection that currently governs software is going to be insufficient, and new frameworks will have to be developed and evolved in the near future.

*Impact on Environment

The infrastructure that powers AI systems consumes very large amounts of energy. By one estimate, training a single LLM produces 300,000 kilograms of CO2 emissions. This raises doubts about the technology's sustainability and prompts the question: what is the environmental footprint of AI?

Alexandre Lacoste, a Research Scientist at ServiceNow Research, and his colleagues developed an emissions calculator to estimate the carbon footprint of training machine learning models.
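
The details of their tool aside, the arithmetic behind such an estimate is straightforward: the power drawn by the hardware, multiplied by how long it runs, a data-centre overhead factor and the carbon intensity of the local grid. The Python sketch below illustrates this general approach; the function and constants are illustrative assumptions, not code from the actual calculator.

def training_emissions_kg(gpu_power_kw, num_gpus, hours,
                          pue=1.5, grid_kg_co2_per_kwh=0.4):
    # pue: power usage effectiveness, the data-centre overhead (assumed value)
    # grid_kg_co2_per_kwh: carbon intensity of the grid (assumed average)
    energy_kwh = gpu_power_kw * num_gpus * hours * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 8 GPUs drawing 0.3 kW each, trained for 30 days
print(round(training_emissions_kg(0.3, 8, 24 * 30)))  # about 1,037 kg of CO2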

As language models use larger datasets and grow more complex in pursuit of greater accuracy, they consume ever more electricity and computing power. Such systems have been called Red AI: they chase accuracy at the cost of efficiency and ignore the cost to the environment. At the other end of the spectrum is Green AI, which aims to reduce the energy consumption and carbon emissions of these algorithms. The move towards Green AI has significant cost implications, however, and will need the support of the big tech companies to succeed.
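
One concrete Green AI practice is to report efficiency alongside accuracy, so that a small gain in accuracy can be weighed against a large increase in energy use. A minimal sketch of such a metric, with hypothetical numbers:

def accuracy_per_kwh(accuracy, energy_kwh):
    # A simple "green" score: accuracy delivered per kWh consumed
    return accuracy / energy_kwh

# Hypothetical comparison of a large model with a smaller, distilled one
print(accuracy_per_kwh(0.92, 2500.0))  # large model: 0.000368
print(accuracy_per_kwh(0.90, 120.0))   # small model: 0.0075, roughly 20x greener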

*Ethics of AI

The ubiquity of AI systems will also have ethical fallout. According to American political philosopher Michael Sandel, “AI presents three major areas of ethical concern for society: privacy and surveillance, bias and discrimination and perhaps the deepest, most difficult philosophical question of the era, the role of human judgment”.

As of now, there is no regulatory mechanism governing big tech companies. Business leaders “can’t have it both ways, refusing responsibility for AI’s harmful consequences while also fighting government oversight,” says Sandel, adding that “we can’t assume that market forces by themselves will sort it out”.

There is talk of regulatory mechanisms to contain the fallout, but there is no consensus on how to go about it. The European Union has taken a stab at it by formulating the AI Act. The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.
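
In code terms, the Act’s structure resembles a tiered lookup. The sketch below is purely illustrative: the example applications are drawn from this article (the spam filter is an assumed low-risk example), and the tier descriptions are paraphrased rather than quoted from the legal text.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "subject to specific legal requirements"
    MINIMAL = "largely unregulated"

# Illustrative mapping only; the Act defines tiers through detailed
# criteria, not a fixed list of applications.
EXAMPLES = {
    "government-run social scoring": RiskTier.UNACCEPTABLE,
    "CV-scanning recruitment tool": RiskTier.HIGH,
    "exam-grading system": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,  # assumed low-risk example
}

for application, tier in EXAMPLES.items():
    print(f"{application}: {tier.value}")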

The Act proposes checks on AI applications that have the potential to harm people, such as systems for grading exams, screening job applicants or assisting judges in decision-making. It also seeks to restrict the use of AI for computing reputation-based trustworthiness of people and the use of facial recognition in public spaces by law enforcement authorities. The Act is a good beginning, but it will face obstacles before the draft becomes a final document and further challenges before it is enacted into law. Tech companies are already wary of it and worried that it will create issues for them. Even so, it has generated interest in many countries, with the UK’s AI strategy including ethical AI development and the USA considering whether to regulate AI and real-time facial recognition at the federal level.

Big tech companies are pushing the boundaries in search of cutting-edge technology and are becoming digital sovereigns with footprints across geographies, creating new rules of the game. While governments will do what they must, the companies can do their bit by adopting a code of ethics for AI development and hiring ethicists who can help them think through, develop and update that code from time to time. The ethicists can also act as watchdogs, ensuring the code is taken seriously and calling out deviations from it.

Different countries' responses to AI regulation will be shaped by their own social and cultural concerns. In such a scenario, the suggestion by Poppy Gustafsson, CEO of the AI cybersecurity company Darktrace, to form a “tech NATO” to combat and contain growing cybersecurity dangers seems like the way forward.

Jayesh Shah