Overview

Through video and panel exhibits, this project takes on the challenge of science communication: creating opportunities to think and talk about the impact of generative AI on people and society from multiple perspectives.
You will watch videos of three professionals, the poet TANIKAWA Shuntaro, OZAWA Kazuhiro of the comedy duo Speedwagon, and OBA Misuzu, who has written a book based on her child-rearing experience, as each tries to reproduce their own tacit skills using generative AI (ChatGPT), a technology attracting growing social interest.
In addition, you can look at panel exhibits that take up the question, "What is human intelligence, intellect, and creativity?", which is being debated anew amid the spread of generative AI.

Objectives

Before entering, we strongly recommend looking at the explanatory panels on the left and right sides of the entrance gate, which present the purpose and concept of this project. The right-hand panel organizes into a chart the purposes for which the three professionals (TANIKAWA Shuntaro, OZAWA Kazuhiro, and OBA Misuzu) used generative AI (ChatGPT).
In this project, rather than being replaced by generative AI, the experts engaged in dialogue with it, refining the wording and context of their instructions (prompts) so that the AI would mirror their skills and techniques.

Curator's Note

Click the image below to open our exhibition panel (PDF file).
Looking back at the history of efforts to mirror human skills with artificial intelligence (AI): classical AI had to be instructed step by step, specifying what should be done in each situation and how the AI should respond when certain conditions were met. As a result, handling complex judgments and exceptions was very challenging.

As research advanced and deep learning emerged, humans no longer needed to specify the characteristics of a skill directly. AI itself became capable of finding features in large amounts of data, features that a person may not even be aware of, leading to a significant improvement in the fidelity of skill replication.

However, humans cannot comprehend the features that AI discovers in data through deep learning. Even the individuals who designed the AI often struggle to explain why it can replicate convincing text or images so well. This is known as the "black box problem".

Recently, there has been a lot of talk about generative AI such as ChatGPT, which is based on a type of deep learning model known as a large language model (LLM). One of its most significant features is the ability to receive instructions in natural language. Compared with conventional AI, it understands instructions far better and can respond flexibly. This makes it possible to use generative AI to mirror human skills by repeatedly refining the instructions given to it.

Please take a look at what professionals in language think and feel when they see AI's responses to their own instructions.

Curator's Note

Click the image below to open our exhibition panel (PDF file).
As you pass through the gate, you will see a panel at the front of the venue with a series of yes-or-no questions. Whether or not you have used ChatGPT, and whether or not you have even heard of generative AI, we encourage you to answer the questions intuitively. Your answers will guide you to either a clockwise or counterclockwise viewing route.
By talking with people viewing the exhibition in the opposite direction, or by returning to face the questions again afterward, you can deepen your own relationship with generative AI.

Curator's Note

Click the image below to open our exhibition panel (PDF file).

Video Exhibition

Some voices suggest that the work of creators such as illustrators, composers, and scriptwriters may be replaced by generative AI. The three participants in these videos contemplated both the concern that generative AI might affect their sense of purpose and their work, and the potential that could emerge from engaging with this new technology. With a mix of caution and curiosity, they interacted with generative AI (ChatGPT) and shared their own skills.

Exhibition Description
The poet TANIKAWA Shuntaro continues to take on unconventional challenges in the world of poetry and is highly interested in science and technology. He has contributed poems to Miraikan's planetarium programs and has held dialogues with AI researchers.
When we approached him about this project, he made a proposal premised on a purpose-built AI, saying, "Why don't you have the AI learn all of my past works?"
When we explained that we wanted him to experience ChatGPT, a large language model that generates natural sentences by learning from a large amount of text on the Web, not limited to TANIKAWA's own works, he asked: "Does it learn sounds in addition to text? Does it understand onomatopoeia?", "How does it use kanji (Chinese characters) and hiragana differently? What about English and Japanese?", and even, "Does it understand expressions not based on meaning, as in nonsense poetry?" The questions carried the persistence only a poet would have, and we could feel a poet's curiosity and professional pride.
He also said that the exhibition, in which his poems were reworked by the exhibitors, was like "part of a series of poems".
In this project, we were guided by his words and by the way he earnestly played with ChatGPT. In editing the live session into a video of a few minutes, our goal was that it would feel like "part of a series of poems" both to him and to the audience.

Curator's Note

Click the image below to open our exhibition panel (PDF file).
Exhibition Description
In his book, Mr. Ozawa says, "I can understand myself when I consult with others". We approached him because we thought he would be the perfect person for the purpose of this project: deepening understanding of one's own tacit knowledge through dialogue with ChatGPT.
In the process of trying to get ChatGPT to generate manzai (comic dialogue) and the "sweet words" he is famous for, many of his unique sensibilities were revealed. When we told him that ChatGPT connects words (tokens) based on contextual probabilities, he said that he himself sometimes creates manzai in the same way, by associating keywords with a theme.
On the other hand, he also noted that "combinations of words with a high probability" cannot betray the audience's expectations and are not funny as they are. Going further, he described two kinds of manzai: "laughter that betrays the audience's imagination by shifting a pattern shared by everyone", and "laughter that plays with emotion through groove and the performer's unique sensibilities, rather than the logic of shifting patterns". He said he aims for the latter.
On the day of filming, Miraikan staff told him that "ChatGPT is controlled through RLHF (Reinforcement Learning from Human Feedback) so that it does not produce dangerous or harmful output", to which he asked, "How can we decide what is 'good' or 'right'?". ChatGPT is, in a sense, hoped to be a harmless machine; Mr. Ozawa believes that life becomes easier, for individuals and for society, when we avoid making others feel bad and refrain from dwelling on it even when someone wrongs us. Are they on the same page?

Curator's Note

Click the image below to open our exhibition panel (PDF file).
Exhibition Description

As an expert on "her own child," Ms. Oba actively uses new technologies and shares her know-how on how to talk to children. At the same time, she knows that dealing with children takes more than know-how; she also values the difficulty and depth of facing children directly. We therefore asked Ms. Oba how to interact appropriately with ChatGPT, and about the significance and limitations of using ChatGPT to verbalize her implicit skills.
During the interview, I could feel her attitude toward ChatGPT: she called it "dear ChatGPT (ChatGPT-san)" and engaged with it as if she were talking with a human. As she interacted with ChatGPT, she noticed its weaknesses, but she never dismissed them. Rather, she accepted these weak points as part of its character and reconsidered them as strengths. She also talked about the potential for ChatGPT and humans to help each other in child-rearing. In the video, you can see her careful choice of words and her pauses, characteristic of her deep insight into children, which show that she treated ChatGPT as an equal partner.

*To achieve the "human-like understanding of the meaning of words" at which AI performs poorly, some studies attempt to teach AI pain and emotion through "experience", or to give AI "a body" using robotics technology.



Curator's Note

Click the image below to open our exhibition panel (PDF file).

Panel Exhibition

Generative AI built on large language models, such as ChatGPT, has made it possible to hold natural conversations with humans, a task traditionally considered difficult. Some people who have experienced or worked on this technology feel that it possesses a form of intelligence, one either similar to or fundamentally different from human intelligence. How will generative AI, which does not fit neatly into the category of machines used by humans, affect human creativity and ethics?

In November 2022, ChatGPT, a generative AI, was released and rapidly spread among the general public. Even its developers did not anticipate that the service would have such a significant impact. (*1)

In this panel, the impact of generative AI on both the general public and its developers is metaphorically represented as a part of an iceberg visible above the sea. The panel explains the factors beneath the surface (underwater) that contribute to these impacts. Reflecting on these factors might provide valuable insights for considering the future impacts generative AI could bring.

Additionally, this panel is positioned midway between the counterclockwise route (associated with proactive use of generative AI) and the clockwise route (associated with cautious use). We believe that viewers' perceptions of the content will vary with their attitudes toward AI. Those with a positive impression may, on seeing how inhibiting factors were overcome one by one, feel even greater anticipation for AI's future development. Those with concerns and anxieties may instead perceive the accidental discoveries as evidence that understanding and control of AI have not kept pace with the speed of technological advancement.

*1 The inside story of how ChatGPT was built from the people who made it

Curator's Note

Click the image below to open our exhibition panel (PDF file).
We have a tendency to anthropomorphize objects, machines, and even moving figures, attributing personalities and states of mind to them. It is no wonder, then, that when we observe the natural and flexible responses of ChatGPT, we might perceive it as having human-like intelligence. However, natural responses are possible without intelligence, as long as the rules (patterns) that successfully reproduce human responses are known. Does ChatGPT have human-like intelligence, or does it only appear to?
Research exploring the "intelligence" of generative AI based on large language models such as ChatGPT, which flexibly handles diverse tasks, includes investigations into the AI's mechanisms as well as studies that give the AI various tests and examine its results (responses). These include not only tests designed for AI, but also tests intended to measure human intelligence.
This panel presents three examples of AI "intelligence": one that makes you think "maybe it's the same as humans", one that makes you think "maybe it's different from humans", and one that evokes both. Through these examples, you can consider whether human and AI intelligence can be compared, and if so, by what methods. If they can accomplish the same thing in the end, can we say they have the same intelligence, even if the mechanisms and processes differ?
How can we know how they work in the first place? Do you think intelligence is inherent in individuals (a person or an AI system alone) and can be demonstrated at any time? Or does what can be done depend on the environment and the people around you at the time?
Depending on how you consider the "intelligence" of AI and humans, your relationship with AI may change in terms of what you entrust to AI and what kind of words (prompts) you give to it.
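The point that natural-seeming responses can be produced from patterns alone, without anything we would call intelligence, can be illustrated with a minimal ELIZA-style sketch. The rules below are invented for illustration and are not how ChatGPT works:

```python
import re

# A few hand-written rules in the spirit of 1960s chatbots such as ELIZA.
# Each pattern maps part of the user's words into a canned reply template.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i want (.*)", "What would it mean to you to have {0}?"),
    (r".*", "Please tell me more."),  # fallback when nothing matches
]

def reply(utterance):
    """Return a natural-sounding reply using only pattern matching."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(reply("I feel lonely"))  # pattern-matched, yet no understanding involved
```

Early chatbots worked on exactly this principle, and users still attributed understanding to them, which is the anthropomorphizing tendency described above.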

Curator's Note

Click the image below to open our exhibition panel (PDF file).
The boundary between human creation and mechanical generation is being blurred by the rise of generative AI, capable of producing text that resembles poetry, even with basic prompts like, "Please craft a poem."
This panel exhibition raises the question of how specific and detailed a prompt must be for a work produced through generative AI to count as a "human creation".

In guidance issued on March 16, 2023 (*1), the U.S. Copyright Office indicated that it would not register works generated by AI from human prompts as human creations.
In a public letter (*2) explaining why it refused the copyright application of an individual who had created a work using Midjourney, an image-generating AI, the Office stated that the AI "interpreted" the prompts according to rules different from those of humans, and that the prompts merely influenced the AI rather than directly instructing it. On this basis, a distinction is drawn between works created with conventional tools, such as paints or image-editing software, and those created with generative AI.

In the past, giving indirect instructions to staff in anticipation of their discretion, or developing creative ideas through the interpretations of people from different cultures (who do not share a common understanding), would not have undermined an individual's creativity (or copyright). If we view generative AI as akin to such others from different cultures, or as an autonomous entity, we can entertain interpretations different from those outlined above.

In this manner, conventional values and common sense are re-evaluated in the process of integrating the new technology of generative AI into society, bringing about renewed awareness regarding the nature of creative endeavors and copyright management.

*1 Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence

*2 Re: Second Request for Reconsideration for Refusal to Register Théâtre D’opéra Spatial

Curator's Note

Click the image below to open our exhibition panel (PDF file).
ChatGPT, capable of generating natural-sounding sentences, is a generative AI based on a large language model (LLM). What is remarkable about the LLM is its ability to craft natural, coherent sentences without knowledge explicitly provided by humans. Its training process, a form of machine learning, uses vast amounts of text data from the web. However, rather than merely copying this data, the LLM can "generate" new sentences by combining an extensive array of words. The name "generative AI" comes from this feature.
These two features, the ability to generate natural-sounding sentences and the ability to produce content that is not merely copied from the training data, make it difficult for many people to discern misinformation when it is generated.
Let us look at this panel from the perspective of someone trying to create fake information. The callout at the top of the panel lists three characteristics of generative AI: ① it can generate a large amount of information quickly; ② its output cannot be identified as text generated by AI; ③ biases contained in the generated text spread widely.
The strategy of "throw enough mud at the wall and some of it will stick" becomes easy with generative AI, which can quickly produce large amounts of information. And if text generated by AI is indistinguishable from human-written text, it can not only "throw enough mud" but also spread deception more widely. As for "bias": while some may expect AI to generate objective content based on data, if the data itself contains bias, the AI's output may inherit that bias.
Looking at the bottom half of the panel, it suggests significant influences that go beyond any individual's intent to harm. For instance, the work of verifying the accuracy and sources of information, commonly known as fact-checking, could become enormous, making trustworthy information even harder to obtain than it is now. There is also the possibility of exacerbating conflicts and divisions between ideologies: opposing sides could point out each other's biases, or develop generative AI aligned with their own assertions, contributing to further polarization.
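The word-by-word generation described above, choosing each next word (token) from learned probabilities of what tends to follow, can be sketched with a toy model. The vocabulary and probabilities here are invented for illustration; a real LLM learns billions of parameters from web-scale text and conditions on the whole context, not just the previous word:

```python
import random

# Toy next-word probabilities: for each word, what may follow and how likely.
# These numbers are invented for illustration only.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "rain": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.2, "ran": 0.8},
    "rain": {"fell": 1.0},
    "sat": {"quietly": 1.0},
    "ran": {"away": 1.0},
    "fell": {"softly": 1.0},
}

def generate(start, max_words):
    """Generate a sentence by repeatedly sampling the next word."""
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:  # no learned continuation: stop
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the", 3))  # e.g. "the dog ran away": recombined, not copied
```

Because such a model recombines words probabilistically rather than retrieving stored sentences, fluency and factual accuracy are independent, which is why generated misinformation can read as naturally as the truth.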

Curator's Note

Click the image below to open our exhibition panel (PDF file).

Post-Exhibition Note

Miraikan NOW 5th, "Professional skills beyond words: Can Generative AI mirror it?" (NOW5), focuses on human skills that cannot be verbalized, commonly known as tacit knowledge. ChatGPT, which debuted at the end of November 2022 and spread rapidly, surprised many people by flexibly adapting its responses to users' instructions. The project began with the thought that it would be interesting to use ChatGPT as a conversational partner to prompt the verbalization of one's own tacit knowledge.

Comparing traditional deep learning with the generative AI used here to mirror tacit knowledge, what we noticed once again is that with generative AI the degree of reproduction depends not only on the training data and the AI model, but also, to a significant extent, on the user's prompts. Text-generation AI such as ChatGPT has markedly improved its understanding of everyday language instructions and can adapt its behavior on the spot according to them, a capability known as "in-context learning". We believed this feature might open new possibilities, even in the reproduction of tacit knowledge by AI.
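In-context learning means the model picks up a task from examples placed in the prompt itself, with no retraining. A minimal sketch of how such a few-shot prompt might be assembled (the format, function name, and sample task are our own illustration, not an official API):

```python
def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: the model is expected to infer the task
    from the in-prompt examples alone, with no update to its weights."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")  # the model completes from here
    return "\n\n".join(blocks)

# Two demonstrations of a hypothetical task ("uppercase the word");
# a capable model would be expected to continue with "CURIOUS".
prompt = build_few_shot_prompt(
    [("happy", "HAPPY"), ("sad", "SAD")],
    "curious",
)
print(prompt)
```

Arguably, this is also what let the three professionals steer ChatGPT toward their own styles: each refined instruction or example in the conversation acts as in-context conditioning.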

Engaging in conversation with generative AI sometimes produced a sense of discomfort with how one's tacit knowledge was articulated and reproduced by the AI, which in turn led to new discoveries and insights into one's own unverbalizable skills.


In this project, by showcasing how three professionals utilize generative AI, visitors were encouraged to see generative AI as a mirror reflecting their own skills and underlying values. We believed that by engaging with different perspectives and opinions from others, through dialogues, deeper conversations could be fostered.

To facilitate dialogue, visitors entering the exhibition space were presented with several questions (posted on this special webpage as "Which order would you choose, clockwise or counterclockwise?") and divided into two viewing routes. Based on their answers, visitors' impressions of generative AI were labeled positive or critical, and each route was designed to move gradually from perspectives aligned with their current views toward more distant viewpoints. This design aimed to bring new awareness to visitors viewing the exhibition in what was, for them, the reverse order.

As Science Communicators, we engaged in conversations with visitors from around the world throughout the venue. Through these interactions, we gained useful insights into visitors' values. For example, one visitor, after choosing the route we called the critical route, began viewing the panel exhibits. The first panel addressed the impact of generative AI on trust, since it sometimes produces "misinformation". The visitor calmly commented, "I'm concerned about misinformation. My child isn't going to school, so I'm considering using AI to create problems and teach, but it would be troublesome if incorrect questions or answers were generated."

Later, when I spoke with her again as she reached the video exhibits in the latter part of the route, she seemed surprised, saying, "Actually, today I asked AI, 'I feel lonely, what should I do?' and it told me to do my nails, so I did." She then showed me her beautifully manicured nails.

If the exhibition had only discussed the importance of accurate information and technical innovations, we would never have heard that this visitor was using generative AI in such a way. When I further mentioned, "Some people suggest using AI as a conversational partner for children", and asked, "What do you think about using AI as a conversation partner for your child, as you are doing (regardless of the accuracy of the information)?", she thought for a moment and replied, "Well, I might think it's okay for myself, but for children... it might feel scary, as if they were being manipulated."

This dialogue showed how one's thinking can gradually transform when inspired by perspectives different from one's own.


During the planning stage, we curated questions while imagining scenes where diverse voices of visitors would be expressed. When we actually engaged in dialogue at the venue, we found that the perspectives were even more diverse than expected. Similarly, with new technologies, things beyond what can be anticipated in the research and development stages will become apparent when they are actually used.

In the field of healthcare, for disabilities and hardships that cannot be approached within the framework of diagnosis and treatment, care aiming at relief and adaptation becomes important. In such settings, the "Cynefin framework" is sometimes used to organize issues in inter-professional work (IPW): issues are categorized as Simple, Complicated, or Complex, and the emphasis is on using a different approach for each. We believe this can serve as a hint when applying science and technology to societal challenges. In the use of digital transformation (DX) and AI, it seems common to aim for automation by machines; this requires either sorting out the Simple tasks or simplifying the Complicated ones. In Complex tasks, by contrast, various people with different perspectives are intricately involved. Because of this complexity, they cannot be controlled as planned, and no one knows "the correct answer". This project focused on the potential of using generative AI for Complex tasks. Some visitors suggested that tasks unsuitable for automation may not be appropriate for generative AI in the first place.

If the purpose is to perform the same task efficiently, it may indeed be unsuitable. However, given ChatGPT's ability to flexibly capture context from prompts, we believed it could be used to verbalize one's own "skills" and values, the implicit knowledge inherent in Complex tasks.

Dialogue with others whose skills and values one could not imagine from one's own perspective shakes up the values and assumptions one took for granted. The aim of this project was to use that shaking to update the perspectives from which we capture issues.

The difficulty of Complex tasks lies in the intricate interaction of various perspectives, which makes it hard to foresee "the correct answer". Addressing this difficulty requires dialogue with others holding various positions and viewpoints, and time spent trying out different approaches. One must resist the temptation to seek a straightforward "correct answer" and instead tolerate ambiguity. Rather than organizing tasks according to one set of values, is it possible to create a space where people influence one another and change their thinking as they interact? This was the biggest challenge of the project.


After this exhibition concluded on November 13th, and as the planning team was still basking in its aftermath, shocking news emerged on November 17th (Pacific Time) that the CEO of OpenAI, the company behind the development of ChatGPT, had been dismissed.

Then, on November 21st (Pacific Time), the dismissed CEO made a dramatic return. Behind this series of events seems to lie a clash between advocates pushing to accelerate productivity and economic growth, and cautious voices concerned about unforeseen impacts as the pace of technological advancement outstrips the capacity of society and individuals to adapt.

Through conversations with visitors, we came to realize the varied impacts of generative AI and the diverse perspectives people bring to evaluating it. As we wove together seemingly conflicting voices from the daily influx of visitors and engaged in deeper dialogue, we increasingly encountered the complexity and diversity of situations, values, cultures, and historical contexts. What emerged was not a binary conflict between proponents and skeptics, but the question of how to govern new changes adaptively within complex individuals and societies.

The renowned poet TANIKAWA, who collaborated on this project, commented on the AI-generated poems: "They've written something fine-sounding", "A bit too serious", and "There seems to be a kind of logic hidden within, or perhaps dwelling there". While appreciating and being intrigued by the AI's responses, which differed from his own concept of poetry, he engaged in a dialogue that stimulated him. Watching the video exhibition capturing this interaction, we, along with the visitors, were given the luxury of contemplating the question "What is poetry?"

This project endeavored to use generative AI as a mirror that reflects one's values, preferences, and unconscious assumptions, turning it into a tool for dialogue with oneself and others. On the other hand, there is also a dystopian scenario: by producing vast amounts of text, images, audio, graphs, and formulas indistinguishable from human-created ones, generative AI could drive the cost of the "trust" human society requires ever higher, potentially disrupting dialogue and communication.

We would be delighted if this project managed to offer a space where the complex relationships among science, technology, people, and society could be viewed from a broader perspective. We hope it leads to dialogue with others who may not easily agree, and to discussions that are not black and white, in which possibilities, significance, and solutions can be found.

Credit

View more about Credit

Related links