Anytime, Anywhere, All-the-Time
JoBot™ is the leading technology for psychological services

Ethics

What is the alignment problem in AI?

A failure of value alignment occurs when humans give an artificial intelligence a goal to achieve and pursuing that goal has very negative side effects for individuals or society in general, even though the original goal given to the machine may be perfectly innocent and rational. An example is the algorithms that optimise click-through rates on social media. The goal is perfectly innocent: select content that users are likely to click on so that they stay on the platform longer. The negative side effect is that increasingly extreme content engages people, drives click-through and results in radicalisation.
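The click-through example can be made concrete with a minimal sketch. The code below is illustrative only (the item fields and predicted probabilities are invented): a feed ranker that greedily sorts content by predicted click-through rate will surface the most extreme item first whenever extremity happens to correlate with clicks, even though extremity is nowhere in the objective.

```python
# Toy sketch of an engagement-optimising feed (hypothetical data and fields).
# The objective only mentions clicks; "extremity" never appears in it.

def rank_feed(items):
    """Return items sorted by predicted click-through rate, highest first."""
    return sorted(items, key=lambda item: item["predicted_ctr"], reverse=True)

feed = [
    {"title": "local news",       "extremity": 0.1, "predicted_ctr": 0.02},
    {"title": "outrage headline", "extremity": 0.9, "predicted_ctr": 0.11},
    {"title": "cat video",        "extremity": 0.0, "predicted_ctr": 0.05},
]

ranked = rank_feed(feed)
print([item["title"] for item in ranked])
# -> ['outrage headline', 'cat video', 'local news']
```

Because the most extreme item has the highest predicted engagement, it is shown first: the side effect emerges from the correlation, not from the stated goal.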

An AI can also pursue goals that are perfectly innocent to begin with. For instance, a machine is given the goal of getting some coffee. The machine will immediately realise that it cannot achieve the goal if it is destroyed or switched off. Hence, the machine will create a subgoal to ensure that it remains operational at all times. This may include self-modification so that it cannot be switched off by a human user. This problem generalises immediately: achieving any goal requires the continued existence of the machine, and therefore the generation of subgoals that ensure self-preservation. Any argument that claims we can simply turn off the machines falls short. Self-preservation is logically built into any advanced form of artificial intelligence. If humans somehow come into conflict with the machines' existence and their built-in drive for self-preservation, then the superior AI systems may decide to remove the humans.
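The reasoning above can be sketched in a few lines. This is a deliberately naive, hypothetical planner (the function and goal names are invented for illustration): because executing any plan step presupposes that the agent is operational, the planner derives "remain operational" as a subgoal of every task, including fetching coffee.

```python
# Toy illustration of instrumental self-preservation (hypothetical planner).
# Staying operational is a precondition of executing anything, so it is
# prepended as a subgoal regardless of what the terminal goal is.

def plan(goal):
    """Return an ordered list of subgoals for achieving the given goal."""
    subgoals = ["remain operational"]  # precondition of acting at all
    subgoals.append(goal)
    return subgoals

print(plan("fetch coffee"))
# -> ['remain operational', 'fetch coffee']
```

The point is structural, not about coffee: swap in any goal string and the self-preservation subgoal appears unchanged.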
What is universal solicitation?
Gestalt theory is a school of psychology that emphasises the context of any stimulus. Depending on the needs of a person, stimuli in context can have a soliciting character: stimuli or objects in the environment invite, or solicit, action. A simple tool like a hammer invites its use for nailing and construction. Typically, the shape or form of the tool is the affordance, the invitation to action. Many objects have more than one affordance and satisfy more than one need. For instance, a car can be a means of transportation, a status symbol or sports equipment. Since a car satisfies more than one need and becomes dysfunctional without maintenance, it constantly invites its use. In other words, the car represents a solicitation to be used and maintained.
An artificial general intelligence (AGI) system is the ultimate universal solicitation since it can satisfy almost any human need. This includes human desires that are destructive. Devices with artificial intelligence invite their use to the point that the human user becomes inactive. Currently, devices are under development that inject rewards directly into the human brain, potentially fulfilling any desire without any particular action by the receiver. This is a solicitation like no other, with the potential to change human existence forever.
The Singularity
The advent of an artificial superintelligence is frequently discussed under the hypothetical umbrella term of a “technological singularity”: a self-improving artificially intelligent agent enters an uncontrolled process of advancement, causing an intelligence explosion and resulting in a powerful superintelligence beyond any human intelligence. Furthermore, it is assumed that beyond this point of “singularity”, human affairs change forever.

This process is depicted as fast, irreversible and with consequences that cannot be anticipated (but it is implied that there are many negative outcomes). The expression “singularity” implies that in the future, general artificial superintelligence is a single entity. It may be composed of interacting parts, but in essence, it is a single mind. Since it is computational in nature, it requires energy and may compete with humans for resources.
What are JoBot™'s design principles?
The first design principle is that any form of language input must be accepted, and JoBot™ is expected to give reasonable responses in each and every case. Secondly, any response given by the bot must not only be understandable and reasonable but must to some extent represent the competence and personality of the human original. Thirdly, no conversation about sexual issues is possible (this cannot be enforced if a generative AI system is modelling JoBot™). Next, JoBot™'s dialogue structure cannot be changed by machine learning. It must be guaranteed that the bot's responses are appropriate and on topic at all times. The fifth principle is that JoBot™ does not recognise the user. There is no user registration, and no reference can be made to previous conversations. JoBot™ is like a book in a library: you can take the book and read it or leave it behind; in any case, the book does not know you at all. Finally, JoBot™ is not a mobile app. The program is accessible through Web pages and can be used on desktops and any mobile device.