The Evolution of Artificial Intelligence: From Assistance to Super Mind of Artificial General Intelligence? Article 2. Artificial Intelligence: Terra Incognita or Controlled Force?
Authors: Grinin, Leonid E.; Grinin, Anton L.; Grinin, Igor L.
Journal: Social Evolution & History. Volume 23, Number 2 / September 2024


DOI: https://doi.org/10.30884/seh/2024.02.07

Leonid E. Grinin, HSE University; Institute of Oriental Studies, Russian Academy of Sciences, Moscow, Russia

Anton L. Grinin, Lomonosov Moscow State University, Russia

Igor L. Grinin, Volgograd State Technical University, Russia

ABSTRACT

The article is devoted to the history of the development of ICT and AI, their current and expected future achievements, and the problems (which have already arisen but will become even more acute in the future) associated with the development of these technologies and their widespread application in society. It shows the close connection between the development of AI and cognitive science, the penetration of ICT and AI into various spheres, particularly health care, and even the very intimate areas related to the creation of digital copies of the deceased and posthumous contact with them. A significant part of the article is devoted to the analysis of the concept of ‘artificial intelligence’, including the definition of generative AI. The authors analyse recent achievements in the field of artificial intelligence. Descriptions are given of the basic models, in particular Large Language Models (LLMs), along with forecasts of the development of AI and of the dangers that await us in the coming decades. The authors identify the forces behind the aspiration to create AI that increasingly approaches the capabilities of so-called general/universal AI, and also suggest desirable measures to limit and channel the development of artificial intelligence. It is emphasized that the threats and dangers of the development of ICT and AI are particularly aggravated by the monopolization of their development by the state, intelligence services, major corporations and those often referred to as globalists. The article provides forecasts of the development of computers, ICT and AI in the coming decades, and also shows the changes in society that will be associated with them.

The study consists of two articles. The first, published in the previous issue of the journal, has provided a brief historical overview and characterized the current situation in the field of ICT and AI. It has also analyzed the concepts of artificial intelligence, including generative AI, and the changes in the understanding of AI related to the emergence of the so-called large language models and the related new types of AI programs (ChatGPT and similar models). The article has discussed the serious problems and dangers associated with the rapid and uncontrolled development of artificial intelligence.

This second article describes and comments on current assessments of breakthroughs in the field of AI, analyzes various predictions, and provides the authors' own assessments and forecasts of future developments. Particular attention is paid to the problems and dangers associated with the rapid and uncontrolled development of AI: advances in this field are becoming a powerful means of controlling the population, imposing ideologies, priorities and lifestyles, and influencing election results, as well as a tool for undermining security and for geopolitical struggle.

Keywords: information and communication technologies, ICT, artificial intelligence, AI, large language models, LLM, cognitive science, self-regulating systems, the Cybernetic Revolution, inforg, technological progress.

1. SYMBIOSIS OF COGNITIVE DISCIPLINES AND AI

In many cases, the development of ICT and AI clearly demonstrates the multiple links and interdependencies that result in the closest symbiosis of the advanced MANBRIC technologies (e.g., Grinin L., Grinin A. 2015a; 2015b, 2016; Grinin et al. 2017a; 2017b; 2020; 2021). 1 In particular, there is an evident combination of neuro-technologies on the one hand, and ICT and AI on the other, as developers of artificial intelligence seek to use the achievements of cognitive science. In fact, it is the achievements of cognitive science that have become one of the major drivers of AI, at least over the last three decades. We mean here the rapidly developing technology of neural networks (more precisely, they are now called neuromorphic networks, and the field of study is neuromorphic computing). 2 In order to develop AI, it was necessary to understand how the human brain functions (see, e.g., Hawkins and Blakeslee 2004). The lack of knowledge about the structure and functioning of the brain, and about the mechanisms of memory, decision-making, foresight and other intellectual functions of the brain at deep levels, became an obstacle to progress in machine learning and related fields. As a result, a symbiosis between cognitive sciences and technologies on the one hand, and programming on the other, began to form, with the aim of studying the brain in order to use it for machine learning technologies, as well as expanding the opportunities for influencing human consciousness with the help of AI. This approach promised great prospects, so it began to be actively promoted and guided by governmental structures. The key to further success is believed to lie in the collection, storage, processing, analysis and use of brain data. The United States has launched large-scale programmes to collect and analyse brain data, as have European countries, China and others. The ‘Apollo Project of the Brain’, launched in 2016 with $100 million allocated by the US government, is very indicative in this regard. The aim of the project is to find algorithms that will enable a computer to think like a human being. The Intelligence Advanced Research Projects Activity (IARPA), created as an analogue of the famous Defense Advanced Research Projects Agency (DARPA), has also allocated $100 million to a similarly grandiose project, Machine Intelligence from Cortical Networks (MICrONS), to reverse-engineer a one-cubic-millimeter brain sample, study the mechanisms by which the brain performs computations, and use the data obtained to improve the performance of machine learning and artificial intelligence algorithms (Cepelewicz 2016). To illustrate the novelty and scale of this task, let us cite data from the Allen Institute (2020), whose role was to do something that had never been done before: to slice a one-cubic-millimeter section into ~25,000 ultrathin slices, then take ~125 million photographs of these slices, and assemble them into a three-dimensional volume containing ~100,000 cells, 2.5 miles of wiring, and 1 billion synaptic connections.

In short, the development of AI in many of its directions is based on attempts to imitate various biological mechanisms. 3 Of course, it is very difficult for artificial intelligence to achieve the capabilities of the brain. The most advanced neural networks today have dozens of layers, while the networks of the human brain are incomparably deeper and more densely interconnected. This naturally limits many opportunities, including so-called deep learning (Schmidhuber 2014; Gavrilov, Kangler 2016). Nevertheless, the successes of AI, even in its early stages, are very impressive and at the same time alarming, because they are being used primarily by structures whose intentions and technologies are secret and unaccountable. ‘It’s a substantial investment because we think it’s a critical challenge, and [it’ll have a] transformative impact for the intelligence community as well as the world more broadly,’ says Jacob Vogelstein at IARPA (in Cepelewicz 2016).

2. A STEP TOWARDS GENERATIVE AI AND POSSIBLE THREATS 4

Modern advanced AI is often referred to as generative. We discussed it in detail in the first article. However, it is worth recalling that generative AI is a type of artificial intelligence capable of generating new content: formulating ideas, conducting dialogues, creating works, stories, images, videos and music, as well as editing images, videos, etc. It is based on modern foundation (base) models called Large Language Models (LLMs). They are specifically designed to perform language tasks, including the creation (generation) of texts, information, blogs and dialogues, and information extraction. Powerful generative AIs are ‘trained’ on extremely expensive supercomputers by teams of highly skilled programmers and other specialists, including psychologists. The AIs are trained on enormous amounts of information from a wide range of domains.
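
To make the mechanism concrete, the sketch below (ours, not drawn from any source cited in this article) shows what a next-token text-generation call looks like in practice. It assumes the Hugging Face transformers library and the small open GPT-2 model; the prompt and all parameter values are purely illustrative.

```python
# A minimal sketch of generative-AI text completion, assuming the Hugging Face
# `transformers` library and the small open GPT-2 model. Large commercial LLMs
# work on the same principle at a vastly larger scale: the model repeatedly
# predicts the next token given all the text before it.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence is becoming",  # illustrative prompt
    max_new_tokens=40,   # how much new text to generate
    do_sample=True,      # sample instead of always taking the likeliest token
    temperature=0.8,     # higher values give more varied, less predictable text
)
print(result[0]["generated_text"])
```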

Artificial intelligence is becoming a topic of public debate. The dangers of AI and its misuse have been apparent for a long time, and not just to specialists. 5 We feel its influence (often negative and very annoying) almost every day (see also examples in the first article). Much has also been written about the ethics associated with the development and use of AI (see, e.g., Razin 2019). But developing such an ethics requires the accumulated and concentrated efforts of the whole society, as well as very clear imperatives and rules. One example is the well-known postulates (the Three Laws of Robotics) formulated in Isaac Asimov's science fiction collection ‘I, Robot’ and his other books about robots. However, the problem of the impact of the development of AI on society, on humanity and in particular on the future of the human race, that is, the threat to its development, had never been the focus of attention. In 2023, everything changed, but largely because the forces behind AI became interested in creating hype and sensationalism around the topic in order to boost tech stocks (we will talk about this in more detail below, but the situation is very similar to what happened with the COVID-19 pandemic and vaccines). In any case, it has become one of the most important issues on the public agenda – and we think this is a very good thing for public awareness.

ChatGPT: Achievements. One of the most recent examples of highly complex AI is the GPT-3 text neural network (an AI of the large-language-model type), created in 2020. The successor to GPT-2 after many years of development, it was staggering in scale, with about 175 billion internal parameters, training costs of about $5 million, and 570 GB of training text (its predecessor GPT-2 was trained on ‘only’ 40 GB of data) (Kasparyants 2022). The developer, OpenAI, initially licensed its use only to a few major platforms, but in late 2022 it launched ChatGPT, a chatbot built on the capabilities of this neural network. Its capabilities are striking. Obviously, in the relatively near future (within 5–15 years), ChatGPT will begin to actively replace humans in intellectual work, especially in text writing in many fields (journalism, advertising, even medical and scientific texts, etc.; cf. Biswas 2023; Lund, Ting Wang 2023). We will also discuss this below.6 For example, there are reports that Amazon has already become a market for books created by artificial intelligence and presented as written by humans, with travel books being a popular category of fake works. We believe that, in time, such ‘authors’ will be required to state that their work is the product of an artificial intelligence.

OpenAI, the creator of ChatGPT, has also demonstrated a neural network called DALL-E 2, which is capable of transforming text descriptions into photo-realistic and artistic images. It has illustrated Brodsky's poems and completed paintings by the classics. It can even create images from meaningless (but grammatically correct) sentences, such as Noam Chomsky's famous example, ‘Colorless green ideas sleep furiously’. At the same time, experiments by Harvard psychologists have shown that DALL-E 2 in no way understands the results of its own activity, which many people mistakenly perceive as AI creativity. Nevertheless, such capabilities will have very serious implications for people in creative occupations and for society as a whole.

If such chatbots become highly convenient executive and intellectual assistants, ‘advice-givers’ and ‘counsellors’, mentors in certain operations and in problem solving, etc., this could significantly facilitate preparatory work in many areas of intellectual activity. But there is also a serious danger that people will stop double-checking the data and will rely on AI answers, which will reduce depth and reliability, not to mention the generalization of personal experience, and lead to inevitable serious errors for which users will have only themselves to blame. In general, this process has been going on for a long time (Wikipedia is an example), but it is likely to accelerate. People will inevitably become lazy, and their style will begin to follow the style of the chatbot, into which sponsors and owners can insert whatever they think necessary. In other words, AI will begin to impose a style of thinking, presentation and imagery. Let us consider the prospects of modern artificial intelligence through an example. 7 Recently, in May 2024, OpenAI demonstrated a new and more advanced modification of ChatGPT-4 – GPT-4 Omni, or simply GPT-4o.

AI as an assistant and a threat. It is obvious that the use of large language models will make the work of various specialists much more productive in many respects, allowing for research, calculations and studies that are simply unthinkable under today's conditions. Good evidence of such a breakthrough in a scientist's productivity is presented by Patrick Mineault, who studies ways to overcome schismogenesis as part of his NeuroAI research.8 He also tries to understand which research direction is more promising:

– Using the achievements in neurobiology to improve AI (neuro → AI);

– or vice versa (AI → neuro).

For this task, it was necessary to identify studies on the interrelationships and mutual influence between neuroscience and AI, and large language models were applied. The procedure was as follows: a) the artificial intelligence first analysed 40,000 scientific articles on neurobiology and AI published over the past 40 years; b) it then identified the 15,000 articles among these 40,000 that deal with the interrelations and mutual influence between neurobiology and AI; c) in the third stage, based on the results of this analysis, the AI placed the identified 15,000 articles on the thematic landscape devised by Mineault. Of course, it would be impossible for a department or even an entire institute to carry out such an analysis manually, let alone a single person.
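
Since Mineault's actual code is not reproduced here, the sketch below is only our illustrative reconstruction of such a pipeline: a keyword filter stands in for the LLM relevance classifier of stage (b), and a two-dimensional projection of text embeddings stands in for the ‘landscape’ of stage (c). The abstracts list, the model name and the keywords are all assumptions made for the example.

```python
# An illustrative reconstruction of the three-stage procedure, not Mineault's
# actual pipeline. Assumes `sentence-transformers` and `scikit-learn`.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

# Stage (a): the corpus to analyse (two toy articles here; ~40,000 in the study).
abstracts = [
    ("Spiking networks for vision", "We adapt cortical coding ideas to CNNs."),
    ("Decoding fMRI with transformers", "Language models predict brain activity."),
]

# Stage (b): keep only articles on the neuro <-> AI interrelation.
# (The study used an LLM as the classifier; a keyword filter stands in for it.)
keywords = ("neur", "brain", "cortic")
relevant = [(title, text) for title, text in abstracts
            if any(k in (title + " " + text).lower() for k in keywords)]

# Stage (c): embed each article and project the embeddings onto a 2-D
# "landscape" in which nearby points are thematically similar papers.
model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode([title + ". " + text for title, text in relevant])
landscape = PCA(n_components=2).fit_transform(vectors)

for (title, _), (x, y) in zip(relevant, landscape):
    print(f"{title}: ({x:.2f}, {y:.2f})")
```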

But in this respect, large language models are just another, albeit significant, step forward in a sequence that began with the first computers, which made it possible to perform in a matter of days calculations that would previously have taken many years. Or take Excel, for example. It allows one to perform calculations and create mathematical models in seconds, increasing productivity a thousandfold. And even a non-mathematician can create such models, which would have been out of the question before.

We are witnessing a process that we have described many times in our research – the transition from the narrow specialisation that has developed over many centuries to the universalisation of skills and abilities through self-regulating systems and artificial intelligence (see, e.g., Grinin L., Grinin A. 2015b).

At the same time, it is very likely that the new, even more advanced AI will have a dramatic (and perhaps even crucial) impact on programmers: with the help of such chatbots, programming will become much easier, and accordingly both the demand for ever more programmers and the income levels of most of them may decrease. Of course, this is a process that will take decades, but we are afraid that the old story may repeat itself, in which an invention eliminates its creators. The main thing, of course, is that AI does not become a Frankenstein's monster.

One way or another, we can agree that in a decade or two, personal LLM assistants will outperform today's intelligent assistants – Alice (Yandex), Siri (Apple) and Alexa (Amazon) – many times over and become ubiquitous (see Chat... 2023). These assistants will acquire almost fabulous capabilities, at least in intellectual assistance and information. It turns out that many or even all of us will be able to have ‘one assistant for all occasions’, like ‘the beast’ from the Russian fairy tale ‘The Scarlet Flower’ who grants every wish (see above about the abilities of assistants). This is not surprising, as humans have made many fairy tales come true. However, it is extremely important that assistants do not become a tool that constantly and increasingly drives us into a virtual concentration camp.

Dangers. We are thus entering a new stage in the implementation of new AI capabilities in our intellectual life. This will require a serious transformation of education, as well as art, science (which will, of course, partly degrade under such pressure) and other fields. As a result, the monstrous quantity of artificially created intellectual products will prevail over their quality.

An even greater danger is that such chatbots will immediately begin to be used as an indirect but powerful ideological and propaganda machine (the more texts with the required bias are loaded into their memory, the greater the propaganda effect), and they can also serve as a powerful censorship mechanism. It is likely that, in time, ideological chatbots will begin to fight each other, just as websites fight each other now. The most serious danger of AI development (as with other technologies) is that we will only become aware of these threats when it turns out to be too difficult to change anything. As Bill Gates noted, ‘whatever limitations it (AI) has today, they will be gone before we know it’ (Gates 2023). Therefore, regulations need to be put in place well in advance.

Regarding concerns about the latest AI (such as ChatGPT or DALL-E 2), it should be noted that it is still very difficult to assess this phenomenon, since there is no balanced and systematised view, let alone a clear theory. Instead, there is a lot of hysteria, admiration, nonsense and hype. There was immediate talk of a ‘ChatGPT revolution’, although it is not at all clear whether this is a revolution or just another leap in information technology, of which we have already seen a dozen. In many ways (if not mainly) this excitement is related to attempts to stimulate the market, as has happened before with breakthrough technologies, notably nanotechnology (for more details see Grinin L., Grinin A. 2015a). We have witnessed a powerful wave of information, the general message of which is that the new AI will not allow the economy to go into recession, 9 as a result of which the shares of leading technology companies, and therefore the stock markets in general, rose quite significantly10 in 2023, and the rally has continued in 2024 without any fundamental basis. In this way, the managers of this media campaign have succeeded in raising interest in the shares of digital companies.

It is expected that the increasing use of AI will lead to the adaptation and transformation of technologies in traditional economic sectors along the entire value chain, leading to the algorithmisation of almost all functions, from logistics to corporate management, from the choice of pricing policy to market analysis. This is generally true. But in the increasingly financialised Western economy, the main way for companies to make money is through hype, media campaigns and stock market bubbles. The message has got through. According to Goldman Sachs, generative AI could increase global GDP by 7% (or almost $7 trillion) and boost productivity growth by 1.5 percentage points over ten years (Amazon 2023). Of course, these numbers are too big and are less a forecast than an opportunity to capture the imagination of investors and encourage them to invest in technology stocks (see above on the growth of Tesla shares in the first article). It would be optimistic to think that generative AI could add even 1% to global GDP in ten years, and even that is a very big number.

Some argue that, like electricity, generative AI will be ubiquitous (Fraley 2023). Of course, the range of applications for generative AI will expand, but the comparison with electricity seems an exaggeration.

3. FEARS AND DANGERS: REAL AND IMAGINED

3.1. The Most Discussed Fears

Numerous fears about AI can be divided into several directions:

1. AI will subjugate humans.

2. AI will take our jobs.

3. The pace at which AI changes will begin to increase by an order of magnitude.

4. AI will replace teachers in about ten years.

Now let us look at them in more detail.

AI will subjugate humans. Here is an example of such views: ‘AI just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches’ (quoted from an interview with the American computer scientist Douglas Hofstadter [2023]). It is worth citing his other statements:

For me, this [progress of AI] is quite frightening because it means that everything I believed in before is being overturned. I thought it would be hundreds of years before anything even remotely resembling human intelligence appeared.

… I never imagined that computers would rival, let alone surpass, human intelligence. … But it seemed to me like it was a goal that was so far away, I wasn't worried about it… And then this started happening at an accelerating pace, where unreachable goals and things that computers shouldn't be able to do started toppling.

…I think it's [progress of AI] terrifying. I hate it. I think about it practically all the time every single day.

…it feels as if the entire human race is going to be eclipsed and left in the dust soon.

…But the progress, the accelerating progress, has been so unexpected, so completely caught me off guard, not only myself but many, many people, that there is a certain kind of terror of an oncoming tsunami that is going to catch all humanity off guard.

This looks like technological hysteria. We do not deny possible attempts to enslave humanity by a group of powerful people, whom we call globalists, as well as by national governments and large corporations, with the help of targeted and accelerated development of AI and supercomputer power. This process has been underway for decades and may well accelerate. However, technologies in themselves do not enslave or subjugate anyone. At worst, they force society and people to restructure somewhat, to change their lifestyles, skills, etc., but only if they really give a powerful boost to labor productivity, resource savings (or the involvement of new resources into circulation) and wealth growth. The key point is that any technology serves those who control it. Therefore, the danger comes from those groups that can use AI to gain excessive power and uncontrollable capabilities. These groups must be exposed and constrained. And the hysteria about AI enslaving humanity and ‘turning it to dust’ only obscures the real hazards of the process. Moreover, experience shows that dangerous technologies are somehow counterbalanced by countermeasures (which are extremely necessary).11 Therefore, it is important to identify both the underlying reasons for the artificially stimulated development of Artificial Intelligence (pardon the pun) and the interests and forces that stand behind it.

AI will take our jobs. For example, according to analytical reports by Nelson, McKinsey and PricewaterhouseCoopers, AI will replace from 30 to 50% of today's jobs by 2030. In this form, these are sensational (in the most negative sense of the word) ‘horror stories’. However, behind them stand the great interests of those political and financial actors, both national and global, that seek to use the new AI to further subjugate the intellectual activity of billions of people.

Nevertheless, it would be wrong to ignore the danger of undermining the well-being of certain professional groups, including highly intellectual and, most importantly, creative ones. Conflicts are already arising, the most famous being the protests and strikes by Hollywood screenwriters, who demand a ban on AI violating their copyrights. 12 In Japan, it has already been stated that generative AI must not infringe copyright. In other words, there may be a ban on using other people's works to train AI (which seems generally justified).

Let us consider in a little more detail the threats to other occupations, the danger described above, and the problems associated with the need to change qualifications. Of course, there will not be, and cannot be, a 30% or even 50% reduction in occupations by 2030. Over the last 15 years we have often heard that robots will soon replace humans everywhere. But for now, the opposite is true: there is an acute shortage of workers in the labor market. And at the first stage, AI will most likely act as an assistant rather than a competitor – just as satellite navigation made taxi drivers' jobs easier but did not replace them. Nevertheless, representatives of a number of occupations need to consider that their skills will be less in demand in the future and that, over time, they will have to make room, since a number of their functions will be taken over by AI. This is especially important for those who are choosing a future career for themselves or their children. Not so long ago, it seemed that knowledge of a foreign language would always be useful, but now translators are under threat. In the future, AI will also be able to replace proofreaders, bank employees (their number has been declining rapidly for some time now), tour operators, dispatchers, announcers, designers and illustrators, copywriters and journalists, even writers and screenwriters (if the use of copyrighted texts cannot be prohibited, see above), and to some extent lawyers, teachers, doctors, accountants, analysts and even, as mentioned above, programmers (though this is a more distant prospect). Shops without sales assistants have long existed. Robots with developed AI genuinely threaten taxi drivers and other drivers (but this, too, is a matter of the not-too-near future, within 30 years [see Grinin L., Grinin A. 2015b; 2023]), as well as couriers, cleaners, nurses and hotel staff, cooks, waiters, etc.

Displacement of occupations: it is important that the destruction be creative. It is important to understand that the reduction in creative occupations and programmers will primarily affect relatively low-skilled specialists. AI can replace them relatively easily, but it will be very difficult for it to replace highly skilled professionals. Thus, today the number of photographers has considerably declined and there are few photo studios; however, good photographers are still needed. It is therefore necessary to think about improving one's skills, about specialization and about niches that will be difficult for AI to occupy (there will be no point in developing it for such narrow areas; see also Section 3.2). At the same time, making it easier to acquire certain skills will allow a large number of non-professionals to enter some fields, becoming professionals with the help of AI.

In addition, we should expect many new occupations and specializations to emerge. The technology giant Dell predicts that 85% of the jobs that will be relevant in the next 15 years do not even exist yet. Of course, this figure is meant to shock the public, but there is some truth in it: some jobs are simply not visible yet. Here, as with any technological process, the principle of creative destruction formulated by the economist Joseph Schumpeter is at work. However, it is extremely important that the destructive phase not be too severe, harsh, painful or rapid. That is why we need to think and regulate today.

In its report ‘Augmented Work for an Automated, AI-Driven World’ (IBM 2023), IBM presented some important ideas (along with some shocking and, we think, incorrect predictions about timing).

According to IBM, a new era in the division of labor between humans and machines is coming. This means the need for retraining, the ability to integrate into the new division of labor between humans and machines, and the acquisition of ‘augmented work’ skills, where the partnership between humans and machines increases labor productivity many times over and ensures exponential growth in business returns. It is an interesting approach, but the implications are very worrying (and clearly aimed at boosting digital stock prices, see above). In the next three years, 40% of the workforce will supposedly need to be re-skilled because of AI; this means that 1.4 billion of the world's 3.4 billion workers will need to reskill. The estimated share of jobs that will transition to ‘augmented work’ is about three-quarters in marketing (73%) and customer service (77%), and more than 90% in procurement (97%), risk and compliance (93%), and finance (93%).

To repeat, IBM's findings, with their mind-boggling figures for reskilling and augmented work – that is, work much more closely tied to AI skills – are designed precisely to astonish corporate clients so that they immediately want to invest their money in artificial intelligence and in the shares of companies that develop AI. This is cynical and very much in the spirit of such companies. Of course, if such a transition does take place (and it is very likely to start and to spread gradually to new areas), it will take much more time. In any case, however, a very important question arises as to what this augmented work will look like: will AI become an assistant to the human specialist or, on the contrary, will the specialist become a supplement to AI (as in the period of early industrialization and the introduction of assembly-line work, when the worker became an appendage of the machine, in the definition of Karl Marx and Friedrich Engels (Marx, Engels 1955: 430))?

It is not AI that will replace humans, but humans using AI who will replace those who do not know how, do not want to, or cannot use it. This is a very interesting idea, found among the mix of reasonable ideas and clearly provocative predictions made by IBM (2023). It is important to understand that AI is not an independent entity, but only a tool (albeit an increasingly intelligent one) in the hands of the largest financiers, digital specialists and employers of various kinds. Of course, AI, like machines before it, and to some extent robots (e.g., in the automotive industry), will replace humans in many jobs. But because this phenomenon is obvious and attracts attention, a second important aspect is overlooked: namely, that people who have mastered AI are in an advantageous position compared to those who have failed to master it. Consequently, mastering technological progress is a priority. There is even a special name for the most zealous devotees of the computer-digital and artificial-intellectual environment: inforgs – people who spend more time in digital reality than they do asleep. However, the idea that it is inforgs who will replace people with a poor command of AI does not seem indisputable. To take the right position in the emerging world of AI, a balance is needed between mastering the technology and retaining one's own analytical skills, a healthy and critical view of things, and independent thinking. It is also extremely dangerous to hand creativity over to AI completely, since the result will be an ersatz, low-level creativity.

On the pace of AI development. It is argued that the AI revolution has reached its turning point and that the rate of change will increase rapidly, by an order of magnitude (IBM 2023). We believe that an order-of-magnitude increase (i.e., at least a tenfold one) is a significant exaggeration (as is the description of the speed of generative AI development and its impact as a combination of a storm and a tsunami, as stated by the author of the recent ‘Generative AI Bible’ [Fraley 2023]). Nevertheless, we can expect the speed of AI development to increase significantly (given the involvement of billion-dollar supercomputers). This should only provide additional incentives for reasonable restrictions on development and for requirements for transparent algorithms. Indeed, it is quite possible that in 10–15–20 years we will see ‘LLM applications’ (i.e., large language models) advanced to a degree that we cannot imagine today (Chat… 2023). This is often the case with the development of innovative technologies. Another, highly improbable, prediction, made by one of the pioneers in the field of AI, Terry Sejnowski, is that LLMs will become the final information gadget, replacing all the others. Social networks considerably challenge websites, but for the time being the two coexist.

Artificial intelligence and education. According to the same Terry Sejnowski, in about ten years education will be based on AI teachers (Chat… 2023). However, we do not believe that such a substitution will take place within ten years. Today, distance learning, which greatly reduces quality, poses a greater threat. The replacement of teachers and lecturers by AI will certainly happen, but we believe this is a much longer process (at least several decades), and such a replacement is unlikely to be complete. However, in some narrow areas, primarily where people are willing to learn independently (e.g., foreign languages), it is quite possible, since dialogue with AI can already be conducted at a high level, data search in a large language model is fast, and translation from one language to another, as well as phonetics in general, is well developed, etc. Therefore, it would be foolish to ignore such a transformation completely; it is necessary to find ways to make AI an assistant in the educational process, rather than replacing humans with quasi-partners.

Returning to what we have already said above, we emphasize that we cannot allow those who control AI, as well as the aforementioned inforgs, who enthusiastically embrace every innovation, to displace independent and creative people, humanists. This will require not only restrictions on the development of large language models and a narrowing of the field in which they are trained, but also governmental policy, especially in education, so that the necessary technical skills are taught in schools and universities of every speciality as firmly as the skills of writing, reading and arithmetic. Then it will not be difficult for a humanities student to fit into the environment of artificial intelligence, and it will be a fruitful symbiosis.

3.2. Dangers which are Undeservedly Under-reported

AI and its impact on sociological research. New large language models are actively used for the content analysis of social groups. This is important for influencing voters, buyers, sympathizers, opponents, etc. The idea is to create models of the thinking of certain groups based on the analysis of their texts and other content, and then to constantly monitor changes in the quantitative and qualitative composition of these groups in order to influence them more effectively. Given the closed nature of the algorithms of these language models, such deep penetration into human moods, attachments, beliefs, etc. becomes increasingly dangerous and uncontrollable. And as the tools of penetration and influence become more convenient for manipulators, a very serious threat to privacy, political freedom and free expression of will arises. In fact, the ability to manipulate our choices is increasing greatly.
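
To show how little machinery such group profiling requires – which is precisely why it is dangerous – here is a deliberately crude sketch of ours; real systems are far more elaborate. It assumes the sentence-transformers library, and the sample texts and the notion of a ‘group profile’ as an average embedding are illustrative assumptions only.

```python
# A crude sketch of group "content analysis": embed a group's texts, average
# them into a profile, and track how the profile drifts over time.
# Assumes `sentence-transformers` and NumPy; all texts are invented.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def group_profile(texts):
    """Average embedding of a group's texts: a crude 'model of its thinking'."""
    return model.encode(texts).mean(axis=0)

profile_january = group_profile(
    ["We must protect local jobs", "Taxes are far too high"])
profile_june = group_profile(
    ["The new candidate speaks for us", "Taxes are far too high"])

# Cosine similarity near 1.0 means the group's discourse is stable; a drop
# signals a shift in mood that a campaign could then target.
drift = np.dot(profile_january, profile_june) / (
    np.linalg.norm(profile_january) * np.linalg.norm(profile_june))
print(f"profile similarity, January -> June: {drift:.3f}")
```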

Even projects that outwardly pursue a noble goal, such as the ‘Democracy by Design’ project by an international group of researchers from Switzerland, Austria and the UK, which aims to develop so-called ‘computational diplomacy’ and the prospects for modernizing society with the help of digital technologies based on public participation, include suspicious options. In particular, the main instrumental directions of ‘designed democracy’ include:

• shaping people's views;

• correcting false information and disinformation;

• widespread involvement of collective intelligence in decision-making;

• building a voting system (since simple electronic voting works poorly and carries additional risks of manipulation) (Helbing et al. 2023).

And we know very well in which direction left-wing politicians and political entrepreneurs push democracy, trying to ‘brainwash’ voters. But whereas the media used to be the main ‘brainwashing’ tool, now AI, and especially large language models, will play an increasingly important role. And taking the above into account, new models of electronic voting may become more sophisticated in terms of influencing decision-making.

Thought-reading and thought-crime. Large language models are being actively trained to read other people's thoughts without any implanted electrodes. In particular, the ‘Meaning Decoder’ (a generative model similar to ChatGPT) works in the following way (Airhart 2023). First, for up to 15 hours, the decoder learns to match the brain-activity patterns of a person lying in an fMRI scanner and listening to podcasts with the texts of those podcasts. Having learnt to match patterns and text, the decoder can then generate texts from any pattern of that person's brain activity (random thoughts, imagined stories, etc.). The result is not a verbatim transcript of thoughts, but a text that roughly corresponds in meaning to what the person heard (during decoder training) or thought of (during decoder operation).
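
The published decoder is far more sophisticated (it generates candidate word sequences with a language model and ranks them by their predicted brain response); the toy sketch below, built entirely on synthetic data, illustrates only the core idea of mapping activity patterns to meanings. It assumes scikit-learn and NumPy, and every number and candidate sentence in it is an invented placeholder.

```python
# A toy illustration of the decoder's core idea, not the published system.
# Synthetic data throughout; assumes `scikit-learn` and NumPy.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_voxels, embed_dim = 500, 16

# Training phase (the "15 hours in the scanner"): paired recordings of brain
# activity and meaning vectors (embeddings) of the text the subject heard.
sentence_embeddings = rng.normal(size=(200, embed_dim))
true_map = rng.normal(size=(embed_dim, n_voxels))
brain_patterns = (sentence_embeddings @ true_map
                  + 0.1 * rng.normal(size=(200, n_voxels)))

decoder = Ridge(alpha=1.0).fit(brain_patterns, sentence_embeddings)

# Decoding phase: predict a meaning vector for a new brain pattern, then pick
# the candidate sentence whose meaning best matches it (a real system
# generates the candidates with a language model and ranks them the same way).
candidates = {"the storm is coming": sentence_embeddings[0],
              "she opened the door": sentence_embeddings[1]}
predicted = decoder.predict(brain_patterns[0:1])[0]

best = max(candidates, key=lambda s: np.dot(candidates[s], predicted))
print("decoded (approximate) meaning:", best)
```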

The hypothesis of the authors of the study ‘Thought Cloning: Learning to Think while Acting by Imitating Human Thinking’ (University of British Columbia, Vector Institute, and Canada CIFAR AI Chair) is that if you teach a model actions and the corresponding thoughts, it will learn the correct associations between behavior and goals. Of course, it is not that simple: the process of thinking is much more complex than the theory, so it is not at all clear whether it is possible to teach a model to read a person's mind. But, in our opinion, the movement in this direction poses a serious threat, and even small successes here will be extremely dangerous. It is perfectly clear who will use such technology and for what purposes, and privacy would have to be forgotten entirely. That is why this kind of research must be stopped.

Of course, it is difficult at this stage to judge how well-founded particular predictions are. It is obvious that most of what is written about will not happen at all, or not to the extent predicted. However, what is written about less, or not mentioned at all, may develop and spread unexpectedly and rapidly. In any case, the development of generative AI is a way to strengthen the symbiosis between governments and the biggest global players, a way to finally trample on freedoms and rights, even those that no one has yet violated (take, e.g., mind reading, which is not far from a concretization of the idea of thought-crime). 13 This can strengthen the globalists; and the control and infiltration of ideas needed by the customers will definitely increase. Of course, there will appear natural difficulties, obstacles, barriers and mechanisms that will reduce the power of generative AI (see above). But with such a rapid pace of development, and with the prospects and dangers so uncertain, it should become imperative to limit the development of AI and its influence in advance.14 On the other hand, ideological censorship should not be excessive or become an end in itself. For example, China is preparing a law that will make it illegal to use any generative AI trained on a dataset containing more than 5% ‘illegal and harmful’ content. ‘Harmful content’ in China is any information blocked by the ‘Great Firewall’ (i.e., Chinese Internet censorship). Political objectives are clearly at the forefront here. So the problem looks very complex: how to limit the development of generative AI without turning that limitation into a tool for strengthening government power?

But while experiments in the field of mind-reading remain experiments, the practice of eavesdropping on people's conversations and using them for commercial and other purposes is already flourishing.

One way or another, it is obvious that all the sensational successes of Large Language Model (LLM) AIs have been achieved thanks to the most powerful analysis of the fruits of human activity (i.e., of human actions) in the form of digital data: texts of various subjects and sizes created by people, from the scientific to the poetic; images, photographs and videos; melodies; and various actions, including conversations. This huge database is used to train the LLMs. At the same time, their developers treat this database as if it belonged to no one and were their own, just because they have enormous advantages in analysing and dissecting it. This is not the case. And the time has come (in fact, it came the day before yesterday, so today there is no time to hesitate) to limit and regulate the use of this database for the purposes of AI training.

4. CONCLUDING REMARKS

It is not the technology that should be controlled, but its customers. In conclusion, we would like to express a very simple, often repeated and clear idea that has been lost in the hysteria about the future of AI. No non-human intelligence is (and, we believe, ever will be) an independent actor. Whatever miracles AI shows us today and tomorrow in solving very complex intellectual and creative problems, these models and programs will always remain just a tool created and controlled by humans. Just as with military or biological technologies (remember COVID-19), the danger lies not in the power of the new generative AI, but in the power of those elites (today global ones) who remain beyond society's control. This lack of control – protected by all forms of bribery, secrecy, monopoly, collusion, the power of the state machine and secret services, manipulation, instilled ideology, corruption, alleged state or political expediency, mutual cover-ups and other ways of governing without being accountable for one's actions – is the greatest threat to our privacy, our constitutional and other rights, society and humanity as a whole.

And with such omnipotence, total deception and impunity, the uncontrolled development of AI becomes very dangerous indeed. Therefore, reasonable restrictions on the development of AI and its training, and on the uncontrolled and free use of copyrighted materials, together with requirements for the transparency of algorithms, demonopolization of the market, etc., become a very important and extremely urgent task. With proper legislation and strict control, the development of AI will proceed quite safely and will undoubtedly bring many benefits.

Every technology is controlled by those who manage it – at least in its primary intended use. Of course, no one calculates the various side effects, as has always been the case with technological progress, the introduction of computer games, etc. (on the dangers of future technologies, see Grinin L., Grinin A. 2023). However, given the rapid development of AI, and since we cannot calculate the more distant consequences of its progress, it is better to limit the capabilities of artificial intelligence and make the framework for the development of new capabilities more rigid. It will be easier to relax the restrictions later; if they are not introduced now, it will be much more difficult to force AI into a rigid framework afterwards.

In any case, rules for the safe use of AI must be developed, as has long been done for machinery, chemicals, pharmaceuticals, etc.15 The process called alignment (roughly speaking, adaptation – balancing goals and resolving contradictions) aims to ensure that the behavior of LLMs benefits humans and does not harm them. This is usually achieved by configuring the model so that desired behaviour is strengthened and undesired behaviour is weakened. The analogous process in humans is called parenting: this is how people raise their children.
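
As a purely numerical illustration of ‘strengthening desired and weakening undesired behaviour’ – not of how any production system is actually tuned – the toy sketch below nudges a three-option ‘policy’ with reward signals in the spirit of policy-gradient preference tuning. The responses, rewards and learning rate are all invented for the example.

```python
# A toy sketch of the alignment idea: behaviour that receives positive reward
# is strengthened, behaviour that receives negative reward is weakened.
# Real alignment fine-tunes a model's weights from human preference data
# (e.g., RLHF); here the "policy" is just a softmax over three canned replies.
import numpy as np

responses = ["helpful answer", "evasive answer", "harmful answer"]
logits = np.zeros(3)                    # initially no preference
rewards = np.array([1.0, 0.0, -1.0])    # human feedback: good / neutral / bad
learning_rate = 0.5

for _ in range(20):
    probs = np.exp(logits) / np.exp(logits).sum()        # softmax
    baseline = probs @ rewards                           # expected reward
    # Expected policy-gradient update: raise the score of above-baseline
    # behaviour, lower the score of below-baseline behaviour.
    logits += learning_rate * probs * (rewards - baseline)

probs = np.exp(logits) / np.exp(logits).sum()
for response, p in zip(responses, probs):
    print(f"{response}: {p:.2f}")
```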

Of course, one can agree that there are fundamental limitations to such ‘training’ of large language models (Wolf et al. 2023), and that deviations and incidents are always possible. But the same is true of technology in general, which sometimes fails people seriously. However, rules of operation and safety make such human-made disasters rare and largely tolerable. The same will happen with AI if rules for ‘safe generative artificial intelligence’ are created. And these rules need to be developed and implemented everywhere.

How far will the ‘human game with AI’ go? Daniel Bell, the author of the theory of post-industrial society, identifies ‘the changing nature of labor’ as one of its dimensions. Human life in pre-industrial society was, in his terminology, a ‘game against nature’, that is, the interaction between humans and natural forces and resources. In industrial society, nature is replaced by an artificial environment (machines): there is a ‘man–machine relationship’. In post-industrial society, in which the service sector is dominant, the leading role, according to Bell, is played by the ‘game between people’ (Bell 1973). Bell emphasizes that ‘people must learn to live with each other’ (Bell 1984: 20–21). Unfortunately or fortunately, the scope of the ‘game of man with man’ has now been greatly reduced, while a new kind of game – ‘the game of human with artificial intelligence’ – is rapidly gaining momentum: personal communication between people is increasingly being replaced by communication via the Internet, and that in turn is rapidly being replaced by communication with AI in the form of Alice, Alexa or other assistants. This is largely inevitable. However, to paraphrase Daniel Bell, ‘people will have to learn to live with artificial intelligence.’ But to do this, we need to curb the ambitions of those forces that want to use AI to enslave society, to ‘re-flash’ (reprogram) our psychology, and to replace our freedom of choice. To learn to live in harmony with AI, we must develop clear and strict rules for it and force its owners to obey them.

The English philosopher Francis Bacon (1561–1626) said, ‘Money is a very good servant, but a bad master.’ This idea applies perfectly to the situation with artificial intelligence.

* * *

We are moving forward faster and faster, but always along an uncharted path, groping our way, with very little idea of the consequences of using our innovations. And that is cause for concern. ‘Perhaps we should worry more about the fact that we are rewriting the code of life on Earth at a terrifying pace, usually without even considering that this is what we are doing’ (Field 2015). But it is high time for us to become aware of the consequences of every new step. And although we have no choice but to move forward, a maximum of caution, wisdom, prudence and even some humility before the greatness of the Universe and the world, a deep respect for the legacy left to us by billions of years of biological evolution are absolutely necessary on this path. And then our persistence, knowledge and (albeit still weak) foresight will allow us to safely reach new heights of human power and leave descendants capable of preserving it.

FUNDING

The research was supported by the Russian Science Foundation (project No. 23-11-00160 ‘Modeling and forecasting the development of the BRICS countries in the 21st century in the context of global dynamics’).

NOTES

1 MANBRIC is an acronym for seven areas that we believe will play the leading role in the future: medicine, additive technologies, nanotechnology, biotechnology, robotics, ICT and AI, and cognitive technologies.

2 When modelling neural networks, developers strive to make them as similar as possible to biological neural networks in terms of information processing. In such networks, information is encoded in the form of the intervals between pulses generated by a neuron in response to locally integrated excitation, in space and time, from the pulse signals arriving at its inputs. The technology focuses on a fully hardware implementation (Benderskaya, Tolstov 2013; Gavrilov, Kangler 2016).

3 A feature of neuromorphic networks is the ability to model fragments of the biological nervous system and the intellectual properties of the brain; evolutionary computation models describe natural evolution and formalize the basic laws of genetics; swarm intelligence models the social behavior of organisms living in a colony (swarm, flock, etc.); artificial immune systems model the basic principles of biological immune systems; fuzzy systems are based on studies of the interaction between organisms and the environment (Skobtsov 2008).

4 Let us recall that in the philosophy of artificial intelligence, the general (strong, universal) AI is a hypothetical AI capable of performing any task that humans can perform (we discussed this in detail in the first article). It is still a hypothesis, and it is still unclear whether this is possible in principle, but progress in this direction is much faster than previously thought, and its pace may increase significantly in the future. That is why there is a danger of the uncontrolled use of increasingly powerful and sophisticated AI.

5 It has been pointed out that the rapid development of artificial intelligence technology is the main challenge for humanity (Eden et al. 2012). Since this was first discussed, communication technologies, data analysis and surveillance technologies have advanced very significantly, even radically. As a result, the problem has become even more urgent. A number of works have already been devoted to the analysis of various aspects of this problem in the present and future (see, e.g., Westin 1966; Ashman et al. 2014; Cecere et al. 2015; Moustaka et al. 2019; Schwartz 1999; Solove 2008; Brammer et al. 2020; Alharbi 2020; Grinin L., Grinin A. 2023).

6 There is a lot of hype around this chatbot, from apocalyptic predictions to nihilism (see below for more details). For example, some recent statistics suggest that ChatGPT is ‘getting stupider’ under the large volume of requests, with the percentage of correct answers decreasing. Whether this will affect future development, or whether this shortcoming can be eliminated, is not yet entirely clear.

7 When preparing this article, we have widely used information from the ‘Theworldisnoteasy’ telegram channel (https://t.me/theworldisnoteasy).

8 Schismogenesis is a change in individual behaviour as a result of the accumulation of experience in interactions between individuals.

9 For example, it is claimed – although this is quite doubtful – that the market for generative AI alone (i.e., ChatGPT and other chatbots) could grow 30-fold, to $1.3 trillion, by 2032 (see, e.g., Rudnitsky 2023).

10 As a result, companies that mentioned AI in their Q2 earnings reports had better stock-price performance than those that did not, according to FactSet calculations. This AI-fuelled hype, which resembles a scam, has been going on for more than half a year. To take just one example: on September 12, 2023, Tesla shares soared after Morgan Stanley analysts strongly linked the company to artificial intelligence in their report. The company plans to spend more than $1 billion on a supercomputer called Dojo to train AI for self-driving cars. The analysts believe that this will open up big new markets for the company and add $600 billion to its capitalization. That is a long way off, but Tesla added $80 billion on the same day: its share price rose by more than 10%, the most since January of that year. In this sense, the AI hype recalls another famous mania, the dot-com bubble of the 1990s. At that time, any company with something like ‘*.com’ in its name could expect a multiple, if not an order-of-magnitude, increase in its share price (Baranov 2023).

11 This has happened even with such a terrible threat as nuclear weapons. The mutual danger faced by the opposing sides reduced the risk of their using nuclear weapons, although today, unfortunately, there is again much irresponsible talk of nuclear war.

12 A group of American writers have filed a lawsuit against OpenAI in federal court in Manhattan. In the lawsuit, they accuse the company of using their copyrighted texts to train the ChatGPT artificial intelligence software (Ott 2023). Actors are likewise protesting the use of their faces to train neural networks, which could lead to their partial replacement and save filmmakers a lot of money. The protest also includes voice actors and members of the Union of Russian Announcers, who want the law amended to protect them from illegal voice synthesis.

13 Already today, in many countries, including Russia, the public reproduction of certain words or expressions becomes grounds for public ostracism, persecution or even criminal prosecution. And doubting the credibility of some historical events like the Holocaust (or its scale) can also lead to criminal charges. But if you can be imprisoned for a word (with freedom of speech), why not go further and be imprisoned for a thought?

14 In some areas, we may simply have no time. Indeed, the only atomic bombing in history took place because no one yet suspected the consequences of such weapons and a war was going on. Had the bombs been created three or four months later, the bombing would simply not have happened. The same is true of COVID-19: the speed of its spread and the course of action imposed did not allow society to control the use of vaccines, and for many years to come we will be learning about new and more serious side effects of vaccination. The speed of AI development may likewise lead to some extremely undesirable consequences. Meanwhile, globalists and those in power are trying to take advantage of society's inexperience and disorganization to implement certain innovations that can cause serious harm.

15 The first steps are already being taken in this direction. For example, Anthropic (an active participant in the race for ever smarter AI) has published its ‘Responsible Scaling Policy’ for AI (https://www.anthropic.com/index/anthropics-responsible-scaling-policy). But this will require very serious legislative changes.

REFERENCES

Airhart, M. 2023. Brain Activity Decoder Can Reveal Stories in People's Minds. College of Natural Sciences, May 1. URL: https://cns.utexas.edu/news/podcast/brain-activity-decoder-can-reveal-stories-peoples-minds.

Alharbi, F. S. 2020. Dealing with Data Breaches amidst Changes in Technology. International Journal of Computer Science and Security (IJCSS) 14 (3): 108–115.

Allen Institute. 2020. An Automated Pipeline for Understanding How the Brain is Wired. Allen Institute, February 10. URL: https://alleninstitute.org/news/an-automated-pipeline-for-understanding-how-the-brain-is-wired/.

Amazon. 2023. What is Generative Artificial Intelligence? Amazon. URL: https://aws.amazon.com/ru/what-is/generative-ai/. Original in Russian (Что такое генеративный искусственный интеллект?).

Ashman, H., Brailsford, T., Cristea, A. I., Sheng, Q. Z., Stewart, C., Toms, E. G., and Wade, V. 2014. The Ethical and Social Implications of Personalization Technologies for E-learning. Information & Management 51 (6): 819–832. DOI: 10.1016/j.im.2014.04.003.

Baranov, G. 2023. Nasdaq: Growth on Rumours – Decline on Facts. Expert.ru, September 12. URL: https://expert.ru/2023/09/12/aktsii-ssha/. Original in Russian (Баранов Г. Nasdaq: рост на слухах – падение на фактах. Expert.ru. 12 сентября). Accessed August 22, 2023.

Bell, D. 1973. The Coming of Post-Industrial Society: A Venture in Social Forecasting. New York: Basic Books, Inc.

Bell, D. 1984. Post-Industrial Society. In Shakhnazarov G. Kh. (ed.), ‘American Model’: With the Future in Conflict (pp. 16–24). Moscow: Progress. Original in Russian (Белл Д. Постиндустриальное общество // «Американская модель»: с будущим в конфликте / под общ. ред. Г. Х. Шахназарова. М.: Прогресс, С. 16–24).

Benderskaya, E. N., Tolstov, A. A. 2013. Trends in the Development of Hardware Support for Neurocomputing. Nauchno-tekhnicheskiye vedomosti SPbGU 3 (174): 9–18. Original in Russian (Бендерская Е. Н., Толстов А. А. Тенденции развития средств аппаратной поддержки нейровычислений. Научно-технические ведомости СПбГУ. № 3(174). С. 9–18).

Biswas, S. 2023. ChatGPT and the Future of Medical Writing. Radiology, February 2: 223312. DOI: 10.1148/radiol.223312.

Brammer, S., Branicki, L., Linnenluecke, M. 2020. COVID-19, Societalization and the Future of Business in Society. Science of the Total Environment 34 (4): 2–7. DOI: 10.5465/amp.2019.0053.

Cecere, G., Le Guel, F., Soulié, N. 2015. Perceived Internet Privacy Concerns on Social Networks in Europe. Technological Forecasting and Social Change 9: 277–287. DOI: 10.1016/j.techfore.2015.01.021.

Cepelewicz, J. 2016. The U.S. Government Launches a $100-million 'Apollo Project of the Brain'. Scientific American, March 8. URL: https://www.scientificamerican.com/article/the-u-s-government-launches-a-100-million-apollo-project....

Chat GPT. 2023. Chat GPT and the Talking Dog with Dr. Terry Sejnowski. URL: https://www.youtube.com/watch?v=dZOEXNIrZLI. Accessed August 30, 2023.

Eden, A. H., Moor, J. H., Søraker, J. H., Steinhart, E. (eds.) 2012. Singularity Hypotheses: A Scientific and Philosophical Assessment. Berlin: Springer. DOI: 10.1007/978-3-642-32560-1.

Field, D. 2015. Perfect Genetic Knowledge. MADAN, October 6. URL: https://madan.org.il/ru/node/126996. Accessed September 16, 2024.

Fraley, A. 2023. AI Bible: [5 in 1] The Most Updated and Complete Guide | From Understanding the Basics to Delving into GANs, NLP, Prompts, Deep Learning, and Ethics of AI. Kerala: AlgoRay Publishing.

Gates, B. 2023. The Age of AI has Begun: Artificial Intelligence is as Revolutionary as Mobile Phones and the Internet. URL: https://www.gatesnotes.com/The-Age-of-AI-Has-Begun. Accessed August 28, 2023.

Gavrilov, A. V., Kangler, V. M. 2016. Neuromorphic Technologies: Status and Development Prospects. In Uglev, V. A. (ed.), Proceedings of the VII Russian Scientific and Technical Conference 'Robotics and Artificial Intelligence' (Zheleznogorsk, December 11, 2015) (pp. 148–154). Krasnoyarsk: SFU. Original in Russian (Гаврилов А. В., Канглер В. М. Нейроморфные технологии: состояние и перспективы развития. Робототехника и искусственный интеллект (г. Железногорск, 11 декабря 2015 г.) / Ред. В. А. Углев. – Красноярск: СФУ. С. 148–154).

Grinin, L. E., Grinin, A. L. 2015a. Cybernetic Revolution and the Sixth Technological Mode. Istoricheskaya psikhologiya i sotsiologiya istorii 8 (1): 172–197. Original in Russian (Гринин Л. Е., Гринин А. Л. Кибернетическая революция и шестой технологический уклад. Историческая психология и социология истории, 8 (1). С. 172–197).

Grinin, L. E., Grinin, A. L. 2015b. From Choppers to Nanorobots. The World is on the Way to the Epoch of Self-Regulating Systems. Moscow: Moscow branch of Uchitel Publishers. Original in Russian (Гринин Л. Е., Гринин А. Л. От рубил до нанороботов. М.: Моск. ред. изд-ва «Учитель»).

Grinin, L., Grinin, A. 2016. The Cybernetic Revolution and the Forthcoming Epoch of Self-Regulating Systems. Moscow: Uchitel.

Grinin, L. E., Grinin, A. L. 2023. Opportunities and Dangers of Future Technologies. Istoriya i sovremennost 1: 63–87. Original in Russian (Гринин Л. Е., Гринин А. Л. Возможности и опасности технологий будущего. История и современность. № 1. С. 63–87).

Grinin, L., Grinin, A., Korotayev, A. 2017a. Forthcoming Kondratieff Wave, Cybernetic Revolution, and Global Ageing. Technological Forecasting and Social Change 115: 52–68. DOI: 10.1016/j.techfore.2016.09.017.

Grinin, L., Grinin, A., Korotayev, A. 2017b. The MANBRIC-Technologies in the Forthcoming Technological Revolution. In Devezas, T., Leitão, J., Sarygulov, A. (eds.), Industry 4.0 – Entrepreneurship and Structural Change in the New Digital Landscape: What is Coming on Along with the Fourth Industrial Revolution (pp. 243–261). N. p.: Springer. DOI: 10.1007/978-3-319-49604-7_13.

Grinin, L., Grinin, A., Korotayev, A. 2020. A Quantitative Analysis of Worldwide Long-Term Technology Growth: From 40,000 BCE to the Early 22nd Century. Technological Forecasting and Social Change 155: 1–15.

Grinin, L., Grinin, A., Korotayev, A. 2021. Does COVID-19 Accelerate the Cybernetic Revolution and Transition from E-government to E-state? In Grinin, L. E., Korotayev, A. V. (eds.), Kondratieff Waves: Processes, Cycles, Triggers, and Technological Paradigms (pp. 95–125). Volgograd: Uchitel.

Hawkins, J., Blakeslee, S. 2004. On Intelligence. New York: Owl Books.

Helbing, D., Mahajan, S., Hänggli Fricker, R., Musso, A., Hausladen, C. I., Carissimo, C., Carpentras, D., Stockinger, E., Argota Sanchez-Vaquerizo, J., Yang, J. C., Ballandies, M. C., Korecki, M., Dubey, R. K., Pournaras, E. 2023. Democracy by Design: Perspectives for Digitally Assisted, Participatory Upgrades of Society. Journal of Computational Science 71. URL: https://www.sciencedirect.com/science/article/pii/S1877750323001217?via%3Dihub.

Hofstadter, D. 2023. Douglas Hofstadter is ‘Terrified and Depressed’ when thinking about the risks of AI. Reddit. URL: https://www.reddit.com/r/slatestarcodex/comments/14pqxb8/douglas_hofstadter_is_terrified_and_depress....

IBM Institute for Business Value. N.d. Augmented Work for an Automated, AI-driven World. URL: https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/augmented-workforce.

Kasparyants, D. 2022. On the Development of Basic AI Models. URL: https://rdc.grfc.ru/2022/07/ai_foundation_models_development/. Original in Russian (Каспарьянц Д. О развитии базовых моделей ИИ).

Lund, B. D., Wang, T. 2023. Chatting about ChatGPT: How May AI and GPT Impact Academia and Libraries? Library Hi Tech News 40. DOI: 10.1108/LHTN-01-2023-0009.

Marx, K., Engels, F. 1955. Manifesto of the Communist Party. In Marx, K., Engels, F., Collected Works. 2nd ed. Vol. 4 (pp. 419–459). Moscow: State Publishing House of Political Literature. Original in Russian (Маркс К., Энгельс Ф. Манифест коммунистической партии / К. Маркс, Ф. Энгельс // Соч. 2-е изд. Т. 4. М.: Гос. изд-во полит. лит-ры, С. 419–459).

Moustaka, V., Theodosiou, Z., Vakali, A., Kounoudes, A., Anthopoulos, L. G. 2019. Enhancing Social Networking in Smart Cities: Privacy and Security Borderlines. Technological Forecasting and Social Change 142: 285–300. DOI: 10.1016/j.techfore.2018.10.026.

Ott, T. 2023. OpenAI Lawsuit: US Authors Allege ChatGPT Copyright Theft. URL: https://www.dw.com/en/openai-lawsuit-us-authors-allege-chatgpt-copyright-theft/a-66895907?maca=en-r....

Razin, A. V. 2019. Ethics of Artificial Intelligence. Filosofiya i obshchestvo 1: 57–74. DOI: 10.30884/jfio/2019.01.04. Original in Russian (Разин А. В. Этика искусственного интеллекта. Философия и общество, 1. С. 57–74.).

Rudnitsky, J. 2023. ChatGPT to Fuel $1.3 Trillion AI Market by 2032, New Report Says. Bloomberg, June 1. URL: https://www.bloomberg.com/news/articles/2023-06-01/chatgpt-to-fuel-1-3-trillion-ai-market-by-2032-bi.... Accessed August 30, 2023.

Schmidhuber, J. 2014. Deep Learning in Neural Networks: An Overview. Technical Report IDSIA-03-14. URL: http://arxiv.org/pdf/1404.7828.pdf. Accessed August 29, 2023.

Schwartz, P. M. 1999. Internet Privacy and the State. Connecticut Law Review 32: 815–829.

Skobtsov, Yu. A. 2008. Fundamentals of Evolutionary Computation. Donetsk: DonNTU. Original in Russian (Скобцов Ю. А. Основы эволюционных вычислений. Донецк: ДонНТУ).

Solove, D. J. 2008. Understanding Privacy. Cambridge, MA: Harvard University Press.

The world is not easy. N.d. Little-Known Interesting Things (Sergey Karelov's author's channel). URL: https://t.me/theworldisnoteasy. Original in Russian (Малоизвестное интересное (авторский канал Сергея Карелова)).

Westin, A. F. 1966. Science, Privacy, and Freedom: Issues and Proposals for the 1970s. Part I. The Current Impact of Surveillance on Privacy. Columbia Law Review 66 (6): 1003–1050.

Wolf, Y., Wies, N., Avnery, O., Levine, Y., Shashua, A. 2023. Fundamental Limitations of Alignment in Large Language Models. URL: https://arxiv.org/abs/2304.11082.