Overview

Through video and panel exhibits, this project takes on the challenge of science communication, inviting visitors to think and talk from multiple perspectives about the impact of generative AI on people and society.
You will watch videos of three professionals: poet TANIKAWA Shuntaro, OZAWA Kazuhiro of the comedy duo Speedwagon, and OBA Misuzu, who has written a book based on her child-rearing experience. Each tries to reproduce their own tacit skills using generative AI (ChatGPT), a technology attracting growing social interest.
In addition, panel exhibits take up the question, "What is human intelligence, intellect, and creativity?", which is being debated anew with the spread of generative AI.

Objectives

Before entering, we strongly recommend looking at the explanatory panels on the left and right sides of the entrance gate, which present the purpose and concept of this project. The right-hand panel organizes into a chart the purposes for which the three professionals (TANIKAWA Shuntaro, OZAWA Kazuhiro, and OBA Misuzu) used generative AI (ChatGPT).
In this project, rather than being replaced by generative AI, the experts engaged in dialogue with it, examining and refining their instructions (prompts) so that the AI would mirror their skills and techniques.

Curator's Note

Click the image below to open our exhibition panel (PDF file).
Looking back at the history of efforts to mirror human skills with artificial intelligence (AI), classical AI had to be instructed step by step: people specified how things should be done in each situation and how the AI should respond when certain conditions were met. As a result, handling complex judgments and exceptions was very challenging.
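As a rough illustration (not part of the exhibition), a hand-coded, rule-based approach might look like the hypothetical sketch below; every situation and exception has to be written out by a person, and each newly discovered exception means yet another rule.

```python
# Hypothetical "classical AI" for a toy task: choosing how to greet a visitor.
# Every condition and exception must be spelled out by hand.
def rule_based_greeting(hour, language, is_returning_visitor):
    if language == "ja":
        return "おはようございます" if hour < 12 else "こんにちは"
    if language == "en":
        if is_returning_visitor:
            return "Welcome back!"
        return "Good morning." if hour < 12 else "Hello."
    # Any case the designers did not anticipate falls through to a default.
    return "Hello."

print(rule_based_greeting(hour=9, language="en", is_returning_visitor=False))
```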

As research advanced and deep learning emerged, humans no longer needed to specify the characteristics of a skill directly. AI itself became capable of finding features in large amounts of data, including features a person may not even be aware of, leading to a significant improvement in the fidelity of skill replication.

However, humans cannot comprehend the features that AI discovers in data through deep learning. Even the people who designed the AI, despite its ability to replicate convincing text or images, often struggle to explain why it works well. This is known as the "black box" problem.

Recently, there has been a lot of talk about generative AI such as ChatGPT, which is based on a type of deep learning model called a large language model (LLM). One of its most significant features is the ability to receive instructions in natural language. Compared with conventional AI, it understands instructions better and can respond more flexibly. This makes it possible to use generative AI to mirror human skills by repeatedly giving and refining instructions.
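For readers curious about what "repeatedly giving and refining instructions" looks like in practice, here is a minimal sketch, assuming the openai Python package (v1 interface) and a configured API key; the model name, prompts, and refinement step are illustrative assumptions, not the procedure used in the exhibition.

```python
# Minimal sketch of iterative prompting; model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

def ask(prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A first instruction, given in plain natural language.
draft = ask("Write a four-line poem about rain.")

# After reading the draft, the instruction is refined to better mirror a personal style.
revised = ask(
    "Rewrite the following poem so it avoids abstract words and uses only "
    "sounds a child might say aloud:\n\n" + draft
)
print(revised)
```

In the exhibition, this kind of back-and-forth is what the three professionals did in conversation: reading each response and adjusting their next instruction.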

Please take a look at what these professionals of language think and feel when they see the AI's responses to their own instructions.

Curator's Note

Click the image below to open our exhibition panel (PDF file).
As you pass through the gate, you will see a panel in front of the venue with a series of yes-or-no questions. Whether or not you have used ChatGPT, or even heard of generative AI, we encourage you to answer the questions intuitively. Your answers will guide you to either a clockwise or counterclockwise viewing route.
By talking with people who are viewing the exhibition in the opposite direction, or by revisiting the questions when you come back to the exhibition, you can deepen your relationship with generative AI.

Curator's Note

Click the image below to open our exhibition panel (PDF file).

Video Exhibition

Some voices suggest that the work of creators such as illustrators, composers, and scriptwriters may be replaced by generative AI. The three participants in these videos contemplated both their concerns about how generative AI might affect their sense of purpose and their work, and the potential that could emerge from engaging with this new technology. With a mix of caution and curiosity, they interacted with generative AI (ChatGPT) and shared their own skills.

Exhibition Description
The poet TANIKAWA Shuntaro continues to take on unconventional challenges in the world of poetry. Highly interested in science and technology, he has contributed poems to Miraikan's planetarium programs and held dialogues with AI researchers.
When we approached him about this project, he made a proposal that assumed the AI would be trained on his own work, saying, "Why don't you let the AI learn all of my past works?"
When we explained that we wanted him to experience ChatGPT, a large language model that generates natural sentences by learning from a large amount of text on the web, not limited to TANIKAWA's own works, he asked, "Does it learn sounds in addition to text? Does it understand onomatopoeic words?", "How does it use kanji (Chinese characters) and hiragana differently? How about English and Japanese?", and even, "Does it understand expressions that are not based on meaning, such as in nonsense poetry?" The questions were filled with the kind of persistence only a poet would have, and we could feel his curiosity and his pride as a professional.
He also said that the exhibition, in which his poems were reworked by the exhibitors into new forms, was like "a part of a series of poems."
In this project, we were guided by his words and by the way he earnestly played with ChatGPT. In editing the live session into a video of a few minutes, our goal was to make it feel like "a part of a series of poems" both to him and to the audience.

Curator's Note

Click the image below to open our exhibition panel (PDF file).
Exhibition Description
In his book, Mr. Ozawa writes, "I can understand myself when I consult with others." We approached him because we thought he would be the perfect person for the purpose of this project: deepening one's understanding of one's own tacit knowledge through dialogue with ChatGPT.
In the process of trying to get ChatGPT to generate "manzai" and the "sweet words" for which he is famous, many of his unique sensibilities were revealed. When we told him that ChatGPT connects words (tokens) based on contextual probabilities, he told us that he himself sometimes creates manzai in a similar way, by associating keywords with a theme.
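To picture what "connecting words (tokens) based on contextual probabilities" means, here is a toy sketch with a made-up probability table; a real large language model computes such distributions over tens of thousands of tokens with a neural network, not from a hand-written table.

```python
import random

# Made-up next-token probabilities for a couple of contexts (illustration only).
next_token_probs = {
    "the audience": {"laughs": 0.5, "waits": 0.3, "leaves": 0.2},
    "audience laughs": {"loudly": 0.6, "politely": 0.3, "nervously": 0.1},
}

def sample_next(context):
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

print(sample_next("the audience"))  # usually "laughs", occasionally something else
```

Always picking the single most probable token yields predictable text; sampling occasionally produces an unexpected continuation.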
On the other hand, he also said that "combinations of words with a high probability" cannot betray the audience's expectations and are not funny on their own. Going further, he described two ways of making manzai: "laughter that betrays the audience's imagination by shifting a pattern shared by everyone," and "laughter that plays with emotion through the groove and the performer's unique sensibilities, rather than the logic of shifting patterns." He said he aims for the latter kind of laughter.
On the day of filming, Miraikan staff told him that ChatGPT is controlled by RLHF (Reinforcement Learning from Human Feedback) so that it does not produce dangerous or harmful output, to which he asked, "How can we decide what is 'good' or 'right'?" ChatGPT, which is expected to be a harmless machine, and Mr. Ozawa, who believes that life becomes easier for individuals and for society when we avoid making others feel bad and instead look to ourselves (even when someone does something bad to us): are they on the same page?

Curator's Note

Click the image below to open our exhibition panel (PDF file).
Exhibition Description

As an expert on "her own child," Ms. Oba actively uses new technologies and shares her know-how on how to talk to children. At the same time, she knows that dealing with children takes more than know-how; she also values the difficulty and depth of facing children directly. We therefore asked Ms. Oba about how to interact appropriately with ChatGPT, and about the significance and limitations of verbalizing her tacit skills using it.
During the interview, we could feel her attitude toward ChatGPT: she called it "dear ChatGPT (ChatGPT-san)" and engaged with it as if she were talking with a human. As she interacted with ChatGPT, she noticed its weaknesses, but she never dismissed them. Rather, she accepted these "weak points" as part of its character and reframed them as strengths. She also talked about the potential for ChatGPT and humans to help each other in child-rearing. In the video, you can see her careful choice of words and her pauses, characteristic of her deep insight into children. This shows that she treated ChatGPT as an equal partner.

*To achieve "human-like understanding of the meaning of words," an area where AI performs poorly, some studies attempt to teach AI pain and emotion through "experience," or to give AI "a body" using robotics technology.



Curator's Note

Click the image below to open our exhibition panel (PDF file).

Panel Exhibition

Generative AI using large-scale language models like ChatGPT has made it possible to engage in natural conversations with humans, a task traditionally considered difficult. Some individuals who have experienced or worked on this technology feel that it possesses a form of intelligence that is either similar to or fundamentally different from human intelligence. How will generative AI, which doesn't fit neatly into the category of machines used by humans, impact human creativity and ethics?

In November 2022, ChatGPT, a generative AI, was introduced and rapidly gained popularity among the general public. Even the developers themselves did not anticipate that this AI service would have such a significant impact. (*1)

In this panel, the impact of generative AI on both the general public and its developers is metaphorically represented as the part of an iceberg visible above the sea. The panel explains the factors beneath the surface (underwater) that contribute to these impacts. Reflecting on these factors may provide valuable insight for considering the future impacts generative AI could bring.

Additionally, this panel is positioned midway between the counterclockwise route (indicating proactive use of generative AI) and the clockwise route (indicating cautious use). We believe that viewers' perceptions of the content will vary with their attitudes toward AI. Viewers with a positive impression may find that reflecting on how inhibiting factors were overcome one by one further raises their expectations for the future development of AI. On the other hand, those with concerns and anxieties might see the accidental discoveries as evidence that understanding and control of AI have not kept pace with the speed of technological advancement.

*1 The inside story of how ChatGPT was built from the people who made it

Curator's Note

Click the image below to open our exhibition panel (PDF file).
We have a tendency to anthropomorphize objects, machines, and even moving figures, and to identify their personalities and states of mind. It is no wonder, then, that when we observe the natural and flexible responses of ChatGPT, we might perceive it as having intelligence similar to humans. However, natural responses are possible without intelligence, as long as we know the rules (patterns) that successfully reproduce human responses. Does ChatGPT have human-like intelligence, or does it only appear to have intelligence?
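As a concrete (and deliberately simple) illustration of natural-seeming responses produced purely by rules, the sketch below follows the spirit of classic pattern-matching chatbots such as ELIZA; the patterns themselves are invented for this example.

```python
import re

# A few hand-written response patterns, in the spirit of 1960s chatbots like ELIZA.
rules = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i want (.*)", "What would it mean to you to get {0}?"),
    (r".*\?$", "That is an interesting question. What do you think?"),
]

def reply(utterance):
    text = utterance.lower().strip()
    for pattern, template in rules:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."

print(reply("I feel uneasy about AI"))  # -> "Why do you feel uneasy about ai?"
```

A system like this can feel conversational for a moment, yet few would call it intelligent; the question the panel raises is where, if anywhere, ChatGPT sits beyond such pattern-following.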
Research exploring the "intelligence" of generative AI based on large language models such as ChatGPT, which handles diverse tasks flexibly, includes investigations into the AI's inner mechanisms as well as studies that give the AI various tests and examine its results (responses). These tests include not only ones designed for AI, but also ones intended to measure human intelligence.
This panel presents three examples of AI "intelligence": one that makes you think "maybe it's the same as humans," one that makes you think "maybe it's different from humans," and one that makes you feel both. With these examples, you can consider whether human and AI intelligence can be compared, and if so, what methods would be appropriate. If they can accomplish the same thing in the end, can we say they have the same intelligence, even if their mechanisms and processes differ?
How can we know how they work in the first place? Do you think intelligence is inherent in individuals (a person or an AI system on its own) and can be demonstrated at any time? Or do you think what can be done depends on the environment at the time and the people around you?
Depending on how you consider the "intelligence" of AI and humans, your relationship with AI may change in terms of what you entrust to AI and what kind of words (prompts) you give to it.

Curator's Note

Click the image below to open our exhibition panel (PDF file).
ChatGPT, capable of generating natural-sounding sentences, is a generative AI based on a large language model (LLM). The remarkable aspect of an LLM is its ability to craft natural, coherent sentences without humans explicitly supplying linguistic knowledge. The training process of an LLM, a form of machine learning, uses vast amounts of text data from the web. However, rather than merely copying this data, the LLM can "generate" new sentences by combining an enormous array of words; the name "generative AI" comes from this feature.
These two features, the ability to generate natural-sounding sentences and the ability to produce content that is not merely copied from the training data, create a challenge: it becomes difficult for many people to recognize misinformation when it is generated.
Let's look at this panel from the perspective of someone trying to create fake information. The callout at the top of the panel lists three characteristics of generative AI: ① it can generate a large amount of information quickly, ② its output cannot be identified as text generated by AI, and ③ any bias contained in the generated text spreads widely.
The strategy of "throw enough mud at the wall and some of it will stick" becomes easy to carry out with generative AI, which can quickly produce a large amount of information. If text generated by AI is indistinguishable from human-written text, it becomes possible not only to "throw enough mud" but also to spread the deception more widely. As for "bias," some may expect AI to generate objective content because it is based on data, but if the data itself contains bias, the AI's output may inherit that bias.
The bottom half of the panel points to broader impacts that go beyond any individual's intentions, whether or not there is intent to harm. For instance, the work of verifying the accuracy and sources of information, commonly known as fact-checking, could become enormous. This could make obtaining trustworthy information even more difficult than it is now. There is also the possibility of deepening conflicts and divisions between differing ideologies, as opposing sides point out each other's biases and develop generative AI aligned with their own assertions, contributing to further polarization.

Curator's Note

Click the image below to open our exhibition panel (PDF file).

Credit

View more about Credit

Related links