Artificial intelligence (AI) was founded as an academic discipline in 1956, but stakeholder interest in it subsequently waned. Although interest in AI was revived from time to time, its full-fledged acceptance and implementation never materialised. After AlphaGo defeated a professional Go player in 2015, AI once again attracted widespread global attention.

Throughout its journey, AI has been divided into sub-fields that lack coordination and mutual support. Each field is separate, independent and unrelated to the others. As a result, we have no single AI system that can perform multiple tasks and functions simultaneously and in a holistic manner.

These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”), the use of particular tools (“logic” or “artificial neural networks”), or deep philosophical differences. Sub-fields have also been based on social factors (particular institutions or the work of particular researchers).

AI can be categorised as weak AI or strong AI (also called Artificial General Intelligence (AGI)). Weak AI, also known as narrow AI, is artificial intelligence that implements a limited part of the mind and is focused on one narrow task. It contrasts with strong AI, defined as a machine with the ability to apply intelligence to any problem rather than just one specific problem, and sometimes considered to require consciousness, sentience and mind.

It is weak AI, the form of AI where programs are developed to perform specific tasks, that is being utilized for a wide range of activities including medical diagnosis, electronic trading platforms, robot control, and remote sensing. AI has been used to develop and advance numerous fields and industries, including finance, healthcare, education, transportation, and more.

Some high-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines, online assistants, image recognition in photographs, spam filtering, predicting flight delays, prediction of judicial decisions, targeting online advertisements, and energy storage.

With social media sites overtaking TV as a news source for young people, and news organisations increasingly reliant on social media platforms for distribution, major publishers now use AI technology to post stories more effectively and generate higher volumes of traffic. AI can also produce deepfakes, a content-altering technology. Election years, in particular, open public discourse to the threat of falsified videos of politicians.

AI, like any other technology, presents many challenges to manage. These include legal and regulatory norms, the handling of AI prejudice and bias, and civil liberties and cyber security issues. However, what is most feared is the prospect of a technological singularity.

The singularity is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the intelligence explosion hypothesis, an upgradeable intelligent agent will eventually enter a “runaway reaction” of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an “explosion” in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

In short, we are talking about Artificial General Intelligence (AGI) that is capable of surpassing human intelligence and effort from within a single source. Such a scenario cannot be taken lightly and dismissed as paranoia, says Praveen Dalal.

Legal and judicial systems and stakeholders have been using technology in one form or another, though its use and adoption have been slow. Now the focus is upon LegalTech, and this may expedite the adoption and use of technology by the legal fraternity and other stakeholders.

Some people have also suggested that AI can be used for Online Dispute Resolution (ODR) purposes. In a general, basic and limited sense that is possible, but in a technical and intended-use sense it may still be a few years away. We may succeed in using weak AI for ODR, but strong AI or Artificial General Intelligence (AGI) is still miles and years away. So those fearing a situation of robot lawyers and robot judges can relax, as that scenario is not only many years away but may actually never happen. We must not forget the crucial difference between automation and human substitution vis-à-vis ODR; while the former may be years away, the latter may never happen on a large scale, says Praveen Dalal.

We are not saying this as a general opinion but as an institution and organisation (Perry4Law Organisation (P4LO)) that has been engaged in ODR since 2004. We manage the exclusive Techno Legal Centre Of Excellence For Online Dispute Resolution In India (TLCEODRI), a unique Centre of Excellence (CoE) for ODR established in 2012, along with two other ODR projects named Resolve Without Litigation (RWL) and ODR India. We have tested many open-source software tools for ODR purposes from time to time and have resolved many disputes using ODR in India. TLCEODRI is helping various stakeholders to launch and use ODR projects for various purposes. With this techno-legal background and experience, we wish to make it clear that automation and human substitution are two totally different objectives, and ODR may never intend to, or achieve, the latter objective.

We are currently testing the use of automation and AI for ODR purposes, and we will share details about our work with national and international stakeholders soon. We will also post links about all developments in this regard on this page and at other online resources of P4LO.