By Aurangzeb Soharwardi
Artificial General Intelligence
Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans and animals, which involves consciousness and emotion. Artificial intelligence was founded as an academic discipline in 1956. The distinction between the two categories is often revealed by the acronym chosen: ‘strong’ AI is usually labelled AGI (Artificial General Intelligence), while attempts to emulate ‘natural’ intelligence have been called ABI (Artificial Biological Intelligence).
Description of Artificial Intelligence
Leading AI textbooks define the field as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term “artificial intelligence” is often used to describe machines (or computers) that mimic “cognitive” functions that humans associate with the human mind, such as “learning” and “problem solving”. The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.
AI Particular Tools
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence. AI sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”), the use of particular tools (“logic” or artificial neural networks), or deep philosophical differences.
The field of AI research was born at a workshop at Dartmouth College in 1956, where the term “Artificial Intelligence” was coined by John McCarthy to distinguish the field from cybernetics and escape the influence of the cyberneticist Norbert Wiener.
Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research. According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence: the number of software projects using AI within Google increased from “sporadic usage” in 2012 to more than 2,700 projects. By 2020, natural language processing systems such as the enormous GPT-3 (then by far the largest artificial neural network) were matching human performance on pre-existing benchmarks, and some systems claim a 99% accuracy rate. AI draws on many tools, among them search and optimization.
Many problems in AI can in theory be solved by intelligently searching through many possible solutions: reasoning, for example, can be reduced to performing a search. Evolutionary computation uses a form of optimization search.
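As a minimal sketch of this idea, searching for a solution can be framed as exploring a graph of states. The breadth-first search below runs over a hypothetical toy state graph; the graph and its state names are invented purely for illustration:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: explore candidate solutions level by level."""
    frontier = deque([[start]])  # each entry is a path from the start state
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None  # no path exists

# Hypothetical toy state graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs(graph, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

More sophisticated search methods (heuristics, pruning, optimization) refine this same skeleton rather than replace it.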
Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the SATPLAN algorithm uses logic for planning, and inductive logic programming is a method for learning.
Probabilistic methods are used for uncertain reasoning. Many problems in AI (in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.
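The core of many of these tools is Bayes’ rule, which lets an agent update a belief from uncertain evidence. In the sketch below, the sensor and its probabilities are invented for illustration:

```python
def bayes_update(prior, likelihood, evidence_prob):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical sensor: a fault occurs 1% of the time; the alarm fires
# for 90% of faults and (falsely) for 5% of normal operation.
p_fault = 0.01
p_alarm = 0.9 * p_fault + 0.05 * (1 - p_fault)  # total probability of an alarm
posterior = bayes_update(p_fault, 0.9, p_alarm)
print(round(posterior, 3))  # → 0.154
```

Note the result: even after an alarm, a fault is only about 15% likely, because false alarms on the common “normal” case outnumber true ones. Reasoning of this kind is exactly what agents with incomplete information need.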
Classifiers and statistical learning methods form another family of tools. The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems.
Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. Among them are artificial neural networks, which were inspired by the architecture of neurons in the human brain.
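A minimal sketch of such a pattern-matching classifier is nearest-neighbour matching against stored examples: “tuning by examples” here is simply adding labelled cases. The features and labels below are invented for illustration:

```python
def nearest_neighbor(examples, query):
    """Classify a query by returning the label of the closest stored example."""
    def dist(a, b):
        # squared Euclidean distance between feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(examples, key=lambda ex: dist(ex[0], query))
    return label

# Hypothetical labelled examples: (shininess, hardness) -> label
examples = [((0.9, 0.9), "diamond"), ((0.8, 0.2), "foil"), ((0.1, 0.5), "rock")]
print(nearest_neighbor(examples, (0.85, 0.7)))  # → diamond
```

Real statistical classifiers generalize this idea with learned decision boundaries rather than raw memorized examples, but the “closest match” intuition carries over.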
A simple “neuron” N accepts input from other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. The study of non-learning artificial neural networks began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch.
Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Today, neural networks are often trained by the backpropagation algorithm, which has been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa, and was introduced to neural networks by Paul Werbos.
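Rosenblatt’s single-layer idea can be sketched in a few lines: each input casts a weighted vote, a threshold decides the output, and the weights are nudged toward the correct answer after each mistake. This toy version learns the logical OR function (a linearly separable task); the learning rate and epoch count are arbitrary illustrative choices, not the historical settings:

```python
def train_perceptron(data, epochs=10, lr=0.1):
    """Single-layer perceptron: a weighted vote followed by a threshold."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out            # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x[0]       # nudge weights toward the target
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Logical OR: output 1 if either input is 1
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])  # → [0, 1, 1, 1]
```

A single layer can only separate classes with a straight line, which is why multi-layer networks trained by backpropagation became necessary for harder problems.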
Deep learning is the use of artificial neural networks that have several layers of neurons between the network’s inputs and outputs. Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.
Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs), which are theoretically Turing complete and can run arbitrary programs to process arbitrary sequences of inputs.
AI is being utilized for a wide range of activities including medical diagnosis, electronic trading platforms, robot control, and remote sensing.
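The recurrent idea mentioned above, a hidden state fed back into the network at each step, can be sketched as a single scalar cell. The weights here are arbitrary values chosen for illustration:

```python
import math

def rnn_forward(inputs, w_in=0.5, w_rec=0.9):
    """Minimal recurrent cell: the hidden state h carries information
    forward across time steps, so sequences of any length can be processed."""
    h = 0.0
    for x in inputs:
        h = math.tanh(w_in * x + w_rec * h)  # new state depends on the old state
    return h

# The final state summarizes the whole input sequence
state = rnn_forward([1.0, 0.0, 1.0])
```

Because the same update is applied at every step, the cell’s size does not grow with the sequence, which is what makes recurrent networks suitable for arbitrarily long inputs.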
AI has been used to develop and advance numerous fields and industries, including finance, healthcare, education, transportation, and more. Weak artificial intelligence, also known as narrow AI, is artificial intelligence that implements a limited part of the mind and is focused on one narrow task. In John Searle’s terms it “would be useful for testing hypotheses about minds, but would not actually be minds”.
It is contrasted with strong AI, which is defined as a machine with the ability to apply intelligence to any problem, rather than just one specific problem, sometimes considered to require consciousness, sentience and mind.
In his 2021 Oxford Business Review article, “Enterprise Adoption and Management of Artificial Intelligence”, Thomas H. Davenport of Babson College gives a comprehensive account of the successful application of AI. He writes that artificial intelligence is the most important new technology of the age, but it comes in many varieties, and businesses face a range of challenges in effectively deploying it throughout their organizations.
Davenport takes a pragmatic but positive approach to AI’s long-term potential, describing effective approaches to creating and implementing a strategy for this transformative technology. Artificial intelligence is often defined as technology that performs tasks which previously could only be done by the human brain. Results from various surveys, most of them conducted by consulting firms, suggest that between 20 and 37 percent of large companies globally are either adopting or experimenting with AI.
Market researchers have found that a larger percentage—perhaps as many as 60 percent of large firms—now employ robotic process automation, the easiest form of AI to assimilate. In many of these firms, AI, particularly in the form of machine learning, is used to extend business analytics. However, some forms of AI have different capabilities.
AI has many applications through its different tools, such as Image and Speech Recognition, Intelligent Agents and Chatbots, Prediction and Classification Systems, Planning and Scheduling, Intelligent Robots and Cobots, and Robotic Process Automation (RPA). He suggests that if a company wants to use AI to create a competitive advantage, it must adopt the technology broadly and aggressively.
And if the AI requires changes to an existing process or new employee skills, that is another barrier, since the company must devise a plan to manage those changes. The most important factor is AI’s interaction with humans in the company: teaching workers new tasks and skills can be time-consuming and expensive, and surveyed workers often feel they are not being effectively trained to work with AI.
Executives expect that AI will have a transformative impact on their businesses and industries. There are four current trends which are beginning to reshape the use of AI in large companies: embedding AI into transactional systems, democratization through automation, creation of AI centers of excellence and other management structures, and sparse data technologies.
In a 2018 global survey, Deloitte found that 57 percent of executives believed that AI technology would substantially transform their companies within three years, and 38 percent believed that their industries would also be transformed. While these numbers are lower (and perhaps more realistic) than those in Deloitte’s 2017 survey, they still suggest high expectations.
He further adds that one of the most common concerns about AI is that it will eliminate jobs, yet so far almost none of the organizations in which he has conducted interviews have reported significant job cuts. Almost all say they are “freeing up human workers to do more creative or complex tasks” or something similar. Two technologies, though, have contributed to exceptions: industrial robots and robotic process automation.
Two economists studied the impact of industrial robots on jobs. They found that, per thousand US workers, each robot replaced six humans and decreased wages by less than one percent. One report of unofficial conversations at the 2019 World Economic Forum in Davos suggests that executives privately hope and plan to cut jobs on a large scale.
Similarly, a 2018 Deloitte survey of US executives familiar with their companies’ AI initiatives found that 63 percent agreed that, “to cut costs, my company wants to automate as many jobs as possible with AI.” Several vendors of AI technology have told him that, while they don’t talk about it publicly, their customers are intent upon using AI to eliminate jobs.
It seems likely that any economic recession would lead to more substantial job and cost reductions from AI in its various forms. In her 2018 Harvard Business Review article, “Can We Keep Our Biases from Creeping into AI?”, Kriti Sharma writes that eminent industry leaders worry that the biggest risk tied to artificial intelligence is the militaristic downfall of humanity. The more immediate risks, however, are AI created with harmful biases built into its core, and AI that does not reflect the diversity of the users it serves. Yet AI is also an opportunity to build technology with less human bias and built-in inequality than has been the case in previous innovations.
But that will only happen if we expand AI talent pools and explicitly test AI-driven technologies for bias. The longer-term approach requires expanding the talent pool of people working on the next generation of AI technologies in order to reduce human bias.
AI teams should consider hiring creatives, writers, linguists, sociologists, and passionate people from nontraditional professions. AI-driven technologies will continue to integrate into the everyday lives of people around the world in meaningful ways.
It will become commonplace at the office and at home. AI-driven enterprise technologies will improve commercial productivity, close workforce skill gaps, and bolster customer experience across industries.
That’s why now is the right time to implement methods that eliminate harmful biases and take gender out of the equation, expand the population of people working on technologies, and address trust issues with AI.
The most important way to integrate AI with human or natural intelligence is to train the workforce and embed AI technologies and tools where they fit human processes.
The creativity of human workers coupled with AI can perform wonders for companies. However, AI used in isolation, or disconnected from natural intelligence, can create issues and disruptions. Organizations have to be more training-intensive and endeavor to create a more intelligence-based culture.