Every day we hear the supposed benefits and promises of what artificial intelligence systems can do, but with little or no critical examination of their social, economic and political impacts.
There is already much concern about the commercial and political use of personal data, the increase in discrimination and racism, the displacement of jobs, and the development of weapons and killer robots, among other applications of artificial intelligence. To this we can now add that these systems also have an enormous environmental and climate impact, due to their high energy demand and the greenhouse gas emissions this entails.
A study by Emma Strubell, Ananya Ganesh, and Andrew McCallum of the University of Massachusetts Amherst (June 2019) estimated the energy use and carbon emissions of some of these systems. They found that, for systems that emulate neural networks, training a single artificial intelligence model can generate up to five times the carbon emissions of the average U.S. car over its entire lifespan, including manufacturing and fuel use. (https://arxiv.org/abs/1906.02243)
They focused on four deep learning models for natural language processing (NLP), which are among the most widely used: Transformer, ELMo, BERT, and GPT-2. All have dramatically increased their capabilities in the last two years. OpenAI's GPT-2, funded by businessman Elon Musk, generated controversy for its ability to invent and complete sentences, making it possible to mass-produce credible fake news. Musk announced that the system would not be released as open source, supposedly to prevent its indiscriminate use, and in the process maintain its monopoly.
The impact calculation used in the study is based on the energy consumed by the processing equipment, electricity, and associated tools used to train artificial intelligence systems. Strubell explained to the magazine New Scientist that assimilating something as complex as language requires processing an immense amount of data. A common approach is for the system to read billions of texts to learn the meaning of words and how sentences are constructed. This requires enormous processing, storage and energy capacity. It does not mean the system understands what it reads, but it can eventually imitate our use of language.
The study compares these figures with other sources of carbon emissions. A car emits an average of 57 tons of CO2 over its useful life. Training an artificial intelligence model that can decipher and handle language could emit up to 284 tons of carbon, about five times as much. That is roughly 315 times the emissions of a coast-to-coast flight in the United States, and 56 times the average energy consumption of a human being over an entire lifetime.
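As a rough sanity check, the ratios quoted above can be reproduced from the headline figures. The per-car and per-training totals come from the article itself; the round-trip flight baseline below is an approximation converted to metric tons from the study's comparison table, not an exact value:

```python
# Approximate emissions figures, in metric tons of CO2(e).
TRAINING_T = 284.0   # training one large NLP model (from the study)
CAR_LIFE_T = 57.0    # average U.S. car over its lifetime, incl. manufacturing
FLIGHT_T = 0.9       # one passenger, coast-to-coast round trip (approximation)

# Ratios cited in the article: ~5x a car's lifetime, ~315x a flight.
print(f"vs. car lifetime: {TRAINING_T / CAR_LIFE_T:.1f}x")
print(f"vs. round-trip flight: {TRAINING_T / FLIGHT_T:.0f}x")
```

The first ratio lands at roughly 5, matching the article; the flight ratio comes out near 315 under the assumed per-flight baseline.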
Large digital platform companies such as Amazon, Microsoft and Google aim to source part of their energy from renewables, but this is not remotely enough given the exponential growth in demand they generate.
This is just one example of the monstrous energy demand of the digital age, which comes on top of other impacts not usually associated with it: the dispossession and hoarding of scarce materials and resources, the environmental pollution caused by production and waste, the worsening of climate change, and harm to health, both direct, from the electromagnetic radiation of telephone and Internet networks, and derived from the industry's other forms of pollution.
The use of artificial intelligence is also tremendously problematic on other levels, because its algorithms, shaped by the commercial goals of their developers and their economic and cultural context, reproduce discriminatory and racist patterns. For example, artificial intelligence systems are being used by banking institutions to evaluate credit, loans and investments, and by judicial institutions to manage sentencing, places of confinement, and so on. In both cases, the systems have been shown to be discriminatory and racist: if the person evaluated is Black or Latino in the United States, the system automatically rates them as less trustworthy and more dangerous, supposedly based on the historical percentage of people detained and/or convicted. Since that basis is already racist and discriminatory, artificial intelligence confirms and amplifies it.
As with the large digital platforms, independent regulation and oversight is either nonexistent or heavily biased in favor of the powerful companies that should be controlled. Much more debate and social action is needed on the implications of these technologies, which affect us all. In this sense, we welcome the recent creation of two publications produced collaboratively by various social organizations and activists: the Latin American digital magazine Citizen Internet and the portal Bot Populi on digital justice, which for now is mostly in English (https://botpopuli.net/).
By Silvia Ribeiro
Source: La Jornada