The Children of AI

By Florian Douetteau
May 16, 2023

Introduction

We are all involved in building the future. And therefore all of us — including those of us laying the technological foundations of our society — must ask ourselves: what are we leaving for our children?1

The future is the sum of all remaining possibilities;2 as we make decisions, individually and collectively, we eliminate some of those potential paths. It would be immoral not to consider which doors we are choosing to open, and which we are choosing to leave closed to posterity. In my case, having dedicated my career to making artificial intelligence technologies more accessible, I feel a personal obligation to ask how the decisions we are now making about AI will impact the lives of our descendants.

Imagine the life of a child born in 2023. What do you see? As they grow through their formative years into adulthood, and finally into old age, how will they be marked by the development of technology? Or, to put it another way: how will the development of technology be marked by them?

As with any attempt at prediction, we run into a paradox: the force that we think will determine the future — in this case, the existence and development of AI, which we take as a fait accompli — may itself be altered over the course of that future. At most, we can observe what’s unfolding before our very eyes: in the short term, at least, technologists worldwide will continue to develop ever-more-advanced AI capabilities. At some point, society could choose to reject these technologies, perhaps violently (but who can say whether that is likely over the next, say, 100 years).3

I am not, here, going to make any predictions. I want instead to explore one possible pathway for the future on our horizon, forcing us to consider whether it is a desirable one or not. And rather than address the unknowable risk of an Artificial General Intelligence misaligned with human interests — a topic that has been covered broadly and elsewhere — I will stick to a narrower concern that, nevertheless, touches on something fundamental to our lives as human beings: what might become of our children in a world that includes increasingly advanced AI? And, if we can answer that: what and how should that AI be?

Like a coin, this essay has two faces. One is an attempt to build a plausible model of the next few decades of AI, comprising a handful of what I consider to be the most relevant parameters and important considerations. The other is what it might look like to run that model, the result not of a deterministic process but a stochastic one. One inference, in other words, among many possible ones — a story, in our case, about a girl named Olive, born this very year, who grows up in a world defined, for better or worse, by artificial intelligence.

Learning, Education, Language, and Research

A life begins with learning: discovering this new world, distinguishing between light and dark, identifying shapes and then faces, understanding emotion. From grunting and babbling emerge the forms of words, of phrases, of entire sentences, and from these the ability to articulate thoughts, wants, and feelings, and the power to name. The experience of the world is increasingly balanced by the capacity for abstraction, as thoughts and reflections allow for hypotheses to be imagined and then tested.

All of these are innate capacities that we have learned, over the centuries, to expand and to hone. Parents and caregivers guide their children through various stages of formal education, leading some to become educators and researchers themselves, teaching the next generation, making new discoveries, and refreshing the cycle.

The development of an artificial neural network follows a similar process, from inception, through reinforcement, to the generation of novel outputs by recombining its inputs.4 GPT-3 has been exposed to 2,000 times more language content (tokens) than the typical human 10-year-old, allowing it to acquire highly functional language abilities. Recent advances in artificial intelligence, and notably the development and availability of large language models like those behind OpenAI’s ChatGPT, make it reasonable to assume that natural language will soon be the primary interface between humans and machines. Because language is the fabric of human intelligence and thought, the increasing sophistication of our linguistic relationship with AI will influence humanity in profound ways.5
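For a rough sense of the scale gap behind that figure, here is a back-of-envelope sketch. The token count comes from OpenAI's published description of GPT-3's training run; the child's word exposure is an assumed order-of-magnitude estimate, not a figure from this essay.

```python
# Back-of-envelope: language exposure of GPT-3 vs. a typical 10-year-old.
# Assumptions (illustrative only):
#   - GPT-3 was trained on roughly 300 billion tokens (per its 2020 paper).
#   - A child hears on the order of 15 million words a year, ~150 million by age 10.
gpt3_training_tokens = 300e9      # tokens seen during GPT-3 training (approximate)
child_words_by_age_10 = 150e6     # words heard by age 10 (rough, assumed estimate)

ratio = gpt3_training_tokens / child_words_by_age_10
print(f"GPT-3 saw roughly {ratio:,.0f}x more language than a 10-year-old")
# -> GPT-3 saw roughly 2,000x more language than a 10-year-old
```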

Starting with the most direct route, the use of AI to teach language to young and old alike will increase.6 We will likely see personalized AI teachers, adapting to their students’ progress and communication style. Just what the feedback between human learning and machine learning will look like is difficult to predict with accuracy. That said, we should expect AI to accelerate both an individual’s ability to acquire new languages and their ability to navigate foreign environments without learning the language. Barriers to communication will fall.

At the same time, communication will be increasingly intermediated by AI whose knowledge and expertise may be partial or specific. This could lead to the creation of new jargons and patois, specific to the model in use, which could in turn become part of natural, human language. Not only the ways in which we describe the world, but also the ways in which we comprehend it, will change.

As with children, the linguistic development of AI, which brings with it the capacity to name and to understand, also gives way to the power to hypothesize: given what has been true, and what tends to be true, what might become true? Consider the domain of research and, specifically, the process of scientific discovery. It is reasonable to imagine that humans will develop automated research systems that constantly develop and test hypotheses at a speed and scale far beyond human ability.7 Human beings charged with evaluating the outputs of these systems will be at a disadvantage, not having had direct exposure to the millions of experiments the AI performed; they will have to trust the summary results provided by the AI. (And here, in the very notion of trust, we see another inevitable step in the evolving relationship between humans and AI.) Research will go faster, potentially with fantastic benefits to humanity, but at the risk of further abstraction and a degraded understanding of how these increasingly complex systems fundamentally work.

It is possible that AI will lead us to communicate better, but understand less.

2025

Paul and Julie are the parents of a 2-year-old child, Olive. Though busy with work, Paul is passionate about education; he has read about Montessori, Vygotsky, even the Didak 501 experiments from the ‘60s. He is convinced that education is the fundamental element that creates and replicates inequalities from one generation to the next. Paul and Julie are not certain they will have much financial capital to pass on to Olive, so for them, the best investment in their child is education!

They are full of good intentions.

But Paul and Julie both have demanding jobs — in the coming months, they will both be working late. Without their own parents nearby to help care for their child, they regularly discuss how to build the best environment to support Olive's development. The daycare where they had hoped to get a place is full. Without it, Olive’s options are bleak. Julie has done the math: potentially, her little girl could spend less than 2 hours a day interacting directly with an adult.

This is exactly what happens come September. Economic activity is booming, and people are being hired left and right. Paul and Julie are no exception, finding Product Management and Lead Developer roles, respectively, their free time diminishing even more as a result.

This raises questions about missed opportunities for development. Does Olive’s lack of time with other children her age slow down her learning to read? Her sociability? Julie has read a book citing studies done over 30 years ago that link children's early exposure to vocabulary and adult involvement to measurable differences in their development.

Paul is tech-savvy. Through his professional contacts, he hears about a beta program that gives access to a new pedagogical technology. BabyGPT, as it is called, is essentially a robot and tablet that interact directly with the child. The tablet continuously measures the child's cognitive development and provides tailored stimulation to accelerate learning.

The tablet works in conjunction with two cameras that continuously observe the child's behavior, allowing the software to adapt to her needs and learning style, and potentially to share this data with the provider. This seems a bit invasive, even for Paul, but he nonetheless manages to convince Julie. It's something of a Christmas gift for the entire family.

Every day, little Olive spends a few hours with the machine. Every night, Paul and Julie receive a small report summarizing the day's progress. They finally see problems being solved. They see a representation of an alphabet gradually being filled with the letters that Olive can recognize. Paul and Julie regain a sense of peace of mind. The fear they had harbored of not being good parents gradually disappears.

Olive’s progress continues apace until the summer holiday, when Paul and Julie take her on a trip to the sea.

The tablet system is very well done. They are not required to take the cameras with them on vacation, but they can of course take Olive's tablet. And that's a good thing, too — how would she be able to fall asleep if she were deprived of part of her daily routine?

On the beach, Olive likes the water, but spends a lot of time drawing in the sand. There’s a heatwave. But with the small tent and beach fan, the family is protected. Paul likes to watch his wife and daughter by the water. The first few days, he tries to show Olive how to play with sand, but Olive doesn't seem very interested. They have found two other couples on vacation, friends of theirs, who have children the same age. Alphonse is 3 years old, and Irène is 2 and a half years old, like Olive. Irène and Alphonse quickly dive into games, playing in the water and with the sand along the shore.

Olive does not want to spend much time with them. She returns to her parents, seeming a bit absent and anxious. Paul does not take much notice, but Julie does.

The holidays, however, are an opportunity to spend a little more time with Olive and to hear her speak. Her linguistic development is amazing. She speaks much better than Irène and Alphonse; it's staggering. Her ability to tell detailed, emotive stories amazes her parents. (“She even uses the subjunctive and the conditional!” Julie said proudly to Paul one night.) When she tells stories, she even seems to invent new words that recur from story to story. "Rapatuktuk! Rapatuktuk!"

This catches the attention of Paul, who wonders if he has forgotten a character from a children's story. He searches for Rapatuktuk on the internet, which leads him down a bit of a rabbit hole, beginning with the correct spelling of the word — “Rapatuctuc.”

"Honey... you won't believe it.. Rapatuctuc".

"Yes?"

"Apparently, virtually all children who have BabyGPT say Rapatuctuc. Apparently it's a word that was used during the development of BabyGPT to set a baseline in the program, but a developer left it there."

"What? They didn't re-train clean models in production? They didn't do backtests on the generated vocabularies?"

"Apparently not — they don't know if it's premeditated or not. A few engineers were fired in the dust-up.”

"But that's crazy, are there other bugs like that?"

"I don't know — but in fact Olive isn't learning English, she's learning BabyGPT English."

“Let’s not exaggerate…”

In another version of this story, Julie and Paul would grow so worried as to become indignant, deciding never to let Olive use BabyGPT again and putting her in a daycare instead, at all costs (even if it meant making some career sacrifices).

But that’s not what they do. Paul and Julie don’t follow their apprehensions to their logical conclusion. They get a little scared, sure; but they continue to use BabyGPT and its successive updates. Olive grows up in a different environment from other children her age, with a different vocabulary, and different cognitive references. It's strange...

Or maybe, Julie thinks, it’s not so odd. Don't we have very different values and cultural references depending on the country, culture, social class we come from and live in? Haven't we, in the last century, radically changed the way we think of play and childhood and much else besides, from one generation to the next?

Psychology and Human Experience

Not all learning is a matter of the intellect — of languages, of things, of ideas. Beyond such learning, we also learn to interact with one another, to influence one another, to love one another, and to tolerate those towards whom we feel animosity. Technology has already changed the way that humans meet one another with results that have been assessed to be harmful in some cases, and beneficial in others.8 As AI intermediates more of our interactions with one another, we should expect our individual psychology and our relationships to be affected in ways yet impossible to predict.

AI is a powerful tool that can serve as a lever for both our best and worst tendencies towards one another. An early application of modern AI has already shown us that human emotions can be manipulated on a large scale.9 As AI becomes better at influencing our emotions, we will find ways to use it to heal and to harm one another, to insult and to inspire. Will we be equipped to keep out the bad and hold onto the good? Personal emotional filters could screen content and translate interactions to provide only a particular emotional response, depending on how we want to feel at a given moment, perhaps delegating control to an AI that we trust to craft our emotional experience. (And here, another step: trust gives way to control, naturally.) We may be able to design our personal experiences. What will become of authenticity and spontaneity?

Another dimension to consider is how our behavior may change as we go from influencing one another to influencing the algorithms that intermediate our interactions with one another. The “extremely online” people of today are already doing this very well, adapting their online activities and content to what they know the algorithm will promote, resulting in lifestyles that seem abhorrent to most.10 Is this a precursor of where things are going, or a temporary aberration in which humans are reacting to unsophisticated models, producing what seem like strange and unnatural values and behaviors? It can be tempting to disregard such concerns as an older generation failing to understand the culture of a younger one, but there are important ethical and societal questions to be raised here, particularly by the builders and deployers of the AI.

What values are they building into their technology — unconsciously, perhaps — that could steer the human experience in a new and possibly undesired direction?

2034

Olive is now 11 years old, attending school with her peers. Compared to other children, she has learned to read much faster — she also has a passion for computers and is starting to code her own programs, mostly games. School has changed little since Paul and Julie's youth — everything is just a bit more digital. Notably, all foreign language classes are now optional, and only rarely offered. Spanish and Chinese are not exactly Olive's passion, so Paul and Julie let her focus on something else.

Olive is a typical child who consumes dozens of forms of media and spends a considerable amount of time playing online games. The household runs reliably, comfortably, according to a daily routine. The family manages to get together on some evenings of the week, but, with Paul and Julie as busy as ever with their work, these are a minority of evenings rather than a majority.

Paul and Julie were a little apprehensive about Olive entering middle school; but she found good friends, with whom she plays sports after school, and whom she invites home. They seem very nice; they laugh, they share, they joke around. Paul and Julie learn a bit about the popular apps and sites among their daughter’s cohort. At the moment, 3D video generation is a craze among pre-teens, giving them a creative outlet and inspiring in them an enthusiasm that is pleasing to see. Cracking jokes with fun-looking avatars seems to be the peak of comedy for 10- and 11-year-olds. Paul and Julie are not sure they understand everything, all while understanding perfectly well. At the end of December, the holidays pass peacefully.

But in January, Olive stops speaking to her parents. She locks herself in her room all weekend — which is not necessarily surprising, it would not be the first time — but when she comes out she has a terrible look on her face.

Olive spends hours at a time with her personal assistant. Each member of the family has, of course, had a subscription for one for several years. Paul and Julie's means allow for a premium subscription with maximum storage, which lets each household member's assistant draw on maximum computing power, meaning it can constantly train itself.

Olive and her parents have the same type of assistant, but, given how long she has been interfacing with related AI systems, Olive's is even more personalized and therefore much more sophisticated than her parents'. In fact, hers has been constantly adapting since she was young. Olive and her assistant have their own vocabulary, with specific keywords.

For example, they have a word of their own, "dronte," to describe "cozy, warm, and a little smelly," and another, "stulb," to mean "so high up that it hurts when you stretch to reach it, but you can still get it." And they have a word between them for "being sad from morning to night."

One weekend, Olive appears to have cut her hair by herself. The clumps of hair in the sink worry Julie more than anything else. Julie cannot, of course, access Olive's assistant, but she searches for the social media accounts of Olive's friends and comes across a post from a group that has probably been made public unintentionally. At the top of the feed is a video of Olive. The kind of video that no mother in the world wants to see.

Julie takes a deep breath and goes to talk to her daughter. There are screams, there are tears. What has Olive done to get to this point? Julie wants to understand.

And she does. It's not Olive in the video. It's a digital clone. Some of her friends are making her do and say inappropriate things on video and sharing them with each other. It's modern harassment; Paul and Julie had no idea.

Julie talks to Olive, to counselors, to the school. Paul investigates ways to remove the videos from social media platforms. But it's a bit of a lost cause. The only thing Olive can learn, in the end, is to give up control over her image.

This is what she comes to understand in spending time with her emotional support counselor. The counselor meets with her once a week, and gives her exercises to do at home with her personal assistant. Exercises in self-control and self-confidence.

For a few years, Olive will not love herself and will not really know if she wants friends in the real world. She will prefer avatars, pseudonyms, and the other world.

A few years after this scene, a generation of similarly confounding and difficult situations involving adolescents will have completely changed the landscape of intellectual property and generative rights. In most OECD countries, individuals will be granted inalienable ownership of their physical characteristics (face, voice, body) — they can preserve and record these throughout their lives to protect their image as a child, teenager, or adult — and all sharing platforms will have to filter and moderate content in accordance with this new form of property right.

But for Olive's generation, those intermediate years were difficult years. Years of not looking in the mirror.

Socio-Political Economy

Just as certain professions may rise or fall in terms of rewards and status as a result of the broad adoption and accelerating development of AI, the same may be true of different regions around the world.11 Economic growth has been driven by specialization, giving rise to a more complex economy, requiring further specialization and resulting in a specialization-complexity-growth positive feedback loop.12 As AI accelerates economic growth, professions will become further specialized, as will geographic regions and economies. Low-wage labor may disappear due to machine automation, running the risk of “premature deindustrialization” for economies that may never have the chance to benefit from the highly industrialized phase of their growth.13

An AI-accelerated specialization-complexity-growth cycle could lead to a dramatic reshaping of the global economy. At this scale of change, it is possible that regional powers would emerge, some specializing in the creation of new intellectual property, and others setting an industrial policy with the goal of becoming the arbiters of global capital. Still others could seek to corner the services economy, developing a human workforce that can be contracted to do the jobs AI cannot. Some regions could devote their economies to the advanced manufacturing required to sustain such a world system, with intelligent robotics doing much of the labor. Finally, certain areas will likely focus on the production of the physical commodities that human beings have always required: basic clothing, food, and building materials. The differing abilities of nations and regions to manage this transition to dispersed hyperspecialization will result in relative winners and losers, even if there is an absolute improvement across the board. There is a risk of interstate conflict if economic pressures and resentment of others' perceived success predominate; but there is also an opportunity for a highly cooperative, symbiotic world order.

For knowledge workers in wealthier countries, one’s choice of profession may be defined by how one interacts with AI, and to what end. Will you be the creative director, instructing an army of AI designers to produce copy and designs, or the director of analytics, exploring the output of countless analytics AIs that constantly crawl and scrutinize corporate data for new insights to feed the decision-recommending (or decision-making?) AIs?

How will a career develop as the pace of technology forces constant adaptation and reinvention? What skills, possessed by whom, will ultimately find the greatest professional success?

2041

Olive is 18 years old and still unsure what studies she would like to pursue. If she is honest with herself, the studies themselves are not an immediate concern, as she has very good academic results. Her parents provide her with a supportive environment, including private tutors, specialized programs, and apps, which allow her, essentially, to choose what she wants to study, gently guiding her towards areas that will interest her and that are predicted to support her personal and professional development.

But with every day she does not decide what comes next, the pressure to do so only increases. What career or vocation does she want to pursue? As Olive becomes an adult, in 2041, one thing remains unchanged: the choice of vocation is still hers alone, not a computer's. This continues to be a massive source of stress for her parents.

Like countless young adults before her, she wonders how to balance economic and ethical considerations, and struggles to envision something that is both feasible and desirable. How can one strike a balance between what society values and pays for, and what one knows how to do, or can learn to do? What does one enjoy doing? What, above all, will make a meaningful life?

Let's start with the first question and put ourselves in Olive's shoes. She is 18 years old, and she is trying to predict what types of jobs will be profitable over the next 20 years...

Maybe they are mining jobs, dealing specifically with the exploitation of natural resources? Back in 2023, such jobs were undervalued in some countries, highly concentrated in others, a dynamic driven by the scarcity of natural resources. In Olive's time, a larger portion of the extraction and exploration processes are accelerated by AI. She learns that AI is driving precision agriculture — there is a growing divide between traditional agriculture, which is now a pure luxury good, accessible only to a minority, and a more industrial agriculture, where AI optimizes the growth of proteins in vats.

After an exciting geology class, Olive grows passionate about the extraction of mineral resources and the distribution of atoms and molecules on earth. She reads articles that make projections 10 or 20 years into the future on the scarcity of minerals and fossil material.

She grasps the possibility that, after a few decades of extraction, there may be a labor shift that sees a convergence of extraction jobs and recycling jobs. This would be an exciting turnaround! For example, to obtain the raw materials necessary for the production of plastic, it will be necessary to balance the tradeoffs between the extraction of oil, which has become too expensive to burn anyway, and the recycling of plastic waste.

Olive looks into it and imagines following a dual curriculum in petrochemicals and recycling — it's probably a secure job, and an opportunity to work in a growth industry. Maybe she should get a PhD? She hesitates; eventually, she loses interest.

Maybe the future is in the transformation of physical goods? In 2023, many manufacturing and processing jobs were moved to low-cost labor countries as part of a decades-long trend of globalization of trade. In Olive's time, gains in automation, rising transportation costs, and geopolitical risks have changed the picture, favoring more local production.

What interests Olive the most are the nearly-automated production chains — driven by artificial intelligence — which have also become more adaptable manufacturing chains, capable of being reconfigured for a variety of products. In 2023, only products with high added technological value, such as vehicles or microprocessors, were produced on highly automated manufacturing chains. Olive is captivated by a documentary on the future of the clothing industry: robots are changing the way clothes are made. These compete with "handmade" production, which has become a specialized, high-margin luxury industry — "Made by human hands" appears in consumer surveys as a strong signal of an intent to purchase, almost as potent as "sustainably produced". Olive hesitates: what if she both studied manual work and received a degree in automated production chains? Is the future at the crossroads of these two worlds?

It's not entirely clear to her, mostly because she doesn't really know what type of craftsmanship to get into, nor what she’d be good at.

Maybe she should go into sales and marketing. One thing hasn't changed for the past two millennia — to sell, one must convince another human being to buy. This still holds true in Olive's time, although the means of persuasion are becoming increasingly digital. When Olive was 15, generative techniques exploded in the field of digital marketing and sales, creating something like a societal discomfort with the sheer amount of content generated.

In 2023, some of the most sought-after and lucrative jobs belonged to sellers of high-added-value goods (often in B2B) who knew how to orchestrate complex sales — those jobs are still around, and still as necessary! As for B2C, the prevalence of translation and generation techniques led to a concentration of marketing jobs in certain countries — one could reach the entire world with far fewer language barriers!

Olive doesn't have a strong appetite for jobs involving human contact. But she is captivated by a book she's read: the "0-Hour GPT-Work Week". The author describes how he designed robots that build their own websites and launch their own campaigns to market a variety of dietary supplements.

She's not sure if this is reality or fiction, but she wonders: is it a job for me? And does she really need to study to do it?

Maybe she’s best suited for a service job? In Olive's time, the aging population has led to a boost in personal service jobs all over the world. These location-bound services are becoming increasingly valuable, driven by the shrinking population able to provide them. This would be a good direction for a career, were the first personal robotic assistants not just beginning to emerge — the first robotic masseurs, the first AI pharmacists, the first AI personal trainers...

Olive therefore imagines that succeeding in services in her future will mean managing fleets of robots interacting daily with people. Management, she knows, is tough; it's primarily a question of motivating others. But robots, a priori, do not need motivation... so what will they need from their managers? To feel appreciated by their users? Or to have a connection with them? Maybe they need to feel emotionally relevant?

Olive thinks it would be a good idea to do a double major in psychology and robotics to tackle this problem. But then again, maybe she should think about management consulting, or business more broadly. In Olive's time, there are still bankers — it's been several centuries and their job hasn't fundamentally changed; it has just become more complicated. Short-term trading has become increasingly robotic, extending 40 years of evolution in the profession — but medium-term trading remains a matter of skill, judgment, and good relationships.

On the management side, running a company is still based on making hard judgment calls and placing bets that investors don't want AI to make — after all, you need someone to fire when it all goes wrong. The most innovative aspect of finance remains the creation of new accounting methods around ESG metrics. There are various attempts to build systems parallel to the financial system to evaluate the economic and social impact of companies. The field is not yet fully defined.

Olive thinks very long and hard about a double major in economics, as well as analytics and artificial intelligence... It's very tempting!

But maybe the best choice would be a career that combines design and the creation of intellectual property. The "creator" profession has constantly evolved and become more democratic in the last century: creators have become independent in all fields — music, literature, media, software. The creation of intellectual property has always been a subject of geopolitical domination, either through culture or through control of technologies.

Creativity has evolved significantly in recent years; music is no longer composed in exactly the same way, the best authors are assisted by AI, even celebrity chefs consult with software when building complex flavor recipes.

Olive is particularly interested in the professions of architecture and object design — the progress of 3D printing, AI-assisted design, and nano-technologies allows for the creation of surprising shapes, objects, and designs.

After almost 20 years spent mostly in the virtual world, Olive is deeply moved by a modern art exhibition in which the artist uses new technologies to reproduce and reinvent old objects — nurtured by the virtual world, Olive becomes enamored of that which her childhood had often lacked: the physical world.

Struck by a sudden fit of passion, she creates her own curriculum combining design, artificial intelligence, and nano-tech. Will Olive invent a new profession?

Climate and the Physical World

It is physically impossible to sustain the resource-intensive, exponential economic growth trajectory that we have been on over the past two centuries; there are literally not enough atoms in the universe to sustain it over the coming millennia.14 AI, however, could present an alternative model, decoupling economic growth from resource consumption, or perhaps decoupling human satisfaction from economic growth.

One future could see humans increasingly consuming virtual experiences, constantly designed and re-designed by AI to maximize their enjoyment. Rather than boarding a plane for an 8,000-mile flight to stay in a climate-controlled resort on an island where all physical goods must be delivered by small plane or boat, the “vacationer” could instead be immersed in an even more sensorially appealing virtual experience, consuming only a few watts of electricity produced by the AI-designed, high-efficiency solar panels on their roof. Could an experience whose carbon footprint would be measured today in tonnes be replaced by a more pleasurable experience whose footprint is measured in grams?

Within this realm of speculation, the floodgates open wide. What other human experiences could be replaced by an AI? And what will be the consequences to some of the most fundamental human activities? Will birth rates, which tend to decline under conditions of economic growth, slope further downward in the era of AI-driven productivity? What if a person who is not ready to commit to the labor and expense of becoming a parent can test the experience of becoming a parent in a virtual world? The “child” could be an AI whose model is trained on the behavioral patterns and expressions of the user and another “parent,” real or virtual, merging the characteristics of the two.

And might all this be ethically and politically justified, furthering its acceptability as a cultural norm? The average annual carbon footprint of a western European in 2023 is about 5 tonnes of CO2 equivalent.15 Compared to the near-zero emissions of a computer server powered by decarbonized electricity, the savings are substantial.16
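To make the tonnes-versus-grams contrast concrete, here is a back-of-envelope sketch. The flight emission factor, session power draw, hours of use, and grid carbon intensity are all illustrative assumptions, not figures from this essay.

```python
# Back-of-envelope: carbon footprint of a long-haul island vacation
# vs. an immersive virtual stay. All inputs are rough, assumed values.

flight_km = 12_800                # ~8,000 miles, one way
flight_gco2_per_pax_km = 90       # assumed long-haul economy factor, gCO2e per passenger-km
flight_kg = 2 * flight_km * flight_gco2_per_pax_km / 1000   # round trip, kg CO2e

vr_power_watts = 300              # headset plus a share of server load (assumed)
vr_hours = 40                     # a week's worth of immersive "vacation" (assumed)
grid_gco2_per_kwh = 50            # largely decarbonized grid (assumed)
vr_grams = (vr_power_watts / 1000) * vr_hours * grid_gco2_per_kwh

print(f"Flights: ~{flight_kg:,.0f} kg CO2e; virtual stay: ~{vr_grams:,.0f} g CO2e")
# -> Flights: ~2,304 kg CO2e; virtual stay: ~600 g CO2e (tonnes versus grams)
```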

Economies that fall into a net-negative birth rate fare poorly today; would this still be true in an AI-powered economy? Are there sub-populations that will continue to maintain higher birth rates? What will this mean for the biological evolution of humankind?

2058

Olive is 35 years old. She has been working at the intersection of design and nanotechnology for 10 years, holding a senior position in a small, cutting-edge firm that designs smart objects, working for the biggest brands to build furniture, automotive components, and even clothing that use new manufacturing methods and have specific properties.

Tastes have changed, and the luxury market is changing its product portfolio, moving toward objects where luxury is not just about aesthetics, branding, or sourcing, but also about new properties of smart objects that make people dream and offer them a glimpse of a better world.

Her firm continuously works on new creative projects, such as dopamine shirts (a nano-sparkle surface for shirts that increases dopamine levels in observers) or the so-called window-wall (a self-shading window that eliminates the need for curtains, improves thermal insulation, and also serves as a visual projection screen, beautifying the outside world through subtle augmented reality).

One of the added values of her firm is the constant use of artificial intelligence systems to accelerate and optimize designs. An AI system has been developed by the firm over the years to simulate the reliability of a design, automate responses to human input, build representative user groups, and suggest alternative materials.

In the firm, Olive specializes in connected objects that adapt to the user's mood and desires. She is an advocate of the "no-button/no-choice" design philosophy, whereby objects are designed to communicate with the user as implicitly as possible.

Her big breakthrough comes with The Implicit Lamp, a desk lamp that reacts to implicit commands from its user. There is no defined way to turn on or off The Implicit Lamp. One can snap their fingers, or tap it, and the lamp learns the user's signal over time. A video goes viral of a user who can control their Implicit Lamp by looking at it and furrowing their brow. The lamp becomes a blockbuster, with bedside versions and models that create a colored atmosphere to improve the user's mood or concentration.

The success of the lamp, along with royalties from some of her work contracts, allows Olive to ease up on the throttle and take stock of her personal life calmly and without worry.

In terms of her personal life, Olive has not yet started a family; in fact, like most of her age group, she is not sure she wants to. What will the living conditions on Earth be like in 30 years? In 50 years? Is now the right time to have children? How can she envision life with children when she spends all her time working? Won’t they, anyway, spend more time with their Personal Learning Assistant than with her once they are weaned?

Fertility rates are declining, and Olive is also frightened by the success of the objects she creates. The Implicit Lamp is a fantastic object, but building an implicit lamp has a huge environmental impact. Worse, the lamp continuously consumes energy in order to observe the environment. Lamps didn't use to have embedded GPUs.

After finishing the design for version 3 of the Implicit Lamp, Olive takes a few weeks of vacation. Flying has become too expensive, but even in May, the sea by the coast is warm. Global warming has had its benefits.

Taking some time just for herself, she comes to believe that humanity should spend less time buying, less time building, and more time dreaming. But how to make more people dream, while creating an economic revolution?

Olive also thinks about her own dreams. With her talent, she has the means to make them real — a fact which might once have seemed to her a kind of magic.

She launches something she calls FamilyWorld©: a virtual reality universe created by AI, in which an existing or virtual couple can create and raise children. FamilyWorld© becomes a huge success, particularly due to the incredible realism of the social interactions and family-related emotions generated in the virtual universe. The AI built by Olive and her team is so potent that it strikes a nerve across the world.

Humans begin connecting with each other through their desire for virtual children. The behavioral and cognitive models each person has developed over decades of interaction with AI are merged in FamilyWorld© to create those of their virtual children. Weights and biases replace genes and chromosomes. An entire economy with a quickly growing population is created in the virtual universe while birth rates in the physical world stagnate.

FamilyWorld© is the first platform to reach 100 million users in less than a month after its launch. During its second year, FamilyWorld© rolls out a marketplace to design and sell toys for the platform.

During its third year, FamilyWorld© becomes a fully fledged media platform: why watch a series alone on your preferred streaming platform when you can watch it as a family instead?

After 5 years, researchers publish studies in Nature about the systemic impact of FamilyWorld© on fertility (-3%) and leisure travel (-10%). Its environmental impact is major: although FamilyWorld© is one of the main consumers of electricity (mostly decarbonized at this point) and bandwidth on the planet, the transfer of family leisure time and the better part of economic activity to a virtual universe has measurably positive effects on the environment. Ever fewer humans enjoy this new world.

Despite its sustainability, FamilyWorld© is subject to controversy; most reactions are negative, related to the decay of family values and the loss of traditional ways of life. But some are positive: does the platform not provide a way for people to learn how to behave as a parent in the virtual world for a few years before deciding whether they want children, with full awareness of the consequences? What previous generation ever enjoyed such a benefit? The fact is that birth rates have dropped because more people are choosing not to have children once they’ve developed a better understanding of what parenthood would be like. What kind of people are, because of this, not being born?

Olive has realized her dream of making others dream; she herself does not use FamilyWorld, nor does she have a real family. But she becomes one of the first trillionaires in virtual reality, taking into account three decades of inflation.

Society and Humanism

We have now inverted the initial question: rather than asking what future we will leave our children, we are forced to consider what children we will leave for the future.

Theorists of simulation have long wondered whether the realness of a simulation could ever attain such a degree of authenticity as to become (and not just feel) more real than reality itself. Will we leave biological children who resemble us, biological children who are different in meaningful ways, or digital entities, vivid simulacra of children made of vectors and weights, capable of human-like interaction in almost every way?

As the complexity of the AI systems and their many interactions and reinforcements grow over the decades, humanity will face growing challenges to understanding how these systems work at a fundamental level. Basic repair and maintenance will become more complex, potentially becoming impossible without the aid of additional artificial intelligence. Digital neuroscience, where scientists study the behavior and functioning of the AI, may become necessary for understanding the complex systems that will have built themselves.17

As the specialization of economies leads to narrower areas of expertise for all, will basic mechanical and technical knowledge be lost, reducing the resilience of humanity and exposing it to additional risks? Will we lose our grip on the material world around us, diluting the relationship we have maintained with it since the evolution of our species? Will we, in short, become alienated from what makes us human?

Maybe. But I know that every force has a counterforce. It was a period of scientific, philosophical, and cultural ignorance that gave rise to the humanism of the Renaissance, during which the wisdom of the Classical past was excavated, rediscovered, and renewed. If our relationship with AI, despite all of the benefits it brings, does drive us into a new dark age, might it not also open the way to a new humanist era?18

Projecting this exercise decades into the future, we are faced with incredible uncertainties. But we must face these unknowns in an attempt to ensure that we are leaving our descendants, however and whoever they may be, the best possible set of options.

2075

Olive has passed the golden age of fifty. The world around her has changed. Improved, perhaps, but it has also deteriorated. FamilyWorld© has had its share of competing platforms, takeovers, and controversies, but is still an exceptional success.

Olive's life is quiet. She often goes on walks along the river that flows below her home. She always takes at least two or three hours to go outside on Sundays, and she always sees familiar faces. She is of course known without really being known — people are so saturated with images that they don't necessarily recognize celebrities in real life.

Every Sunday for a year, she takes the same path: seeing the cycle of the seasons in nature calms and reassures her. Sometimes someone stops her, asks for an autograph, or nods at her. Sometimes young people, sometimes old people, often people alone, very rarely couples with their children. One day, she stops to greet an older gentleman who has been her neighbor for a few years. They are just beginning to talk when he opens his eyes wide and looks behind her.

Then everything is black.

Olive wakes in the hospital a few days later. She learns from the nurses — but mostly from the news on the net — that a stranger attacked her. It was a man claiming to be from an anti-FamilyWorld© movement. He created an avatar before his attack that roamed FamilyWorld© to broadcast his motivation: he had spent more than half of his adult life in FamilyWorld©. He became attached to a child he had in the universe, but that child became increasingly demanding. The experience ruined him.

The news is full of controversies over the rightful place of virtual universes and the ethics of virtual families. Olive turns everything off. She stops listening.

She leaves the hospital, but remains sufficiently injured to require a long period of convalescence.

She returns to her parents’ old house, her childhood home. Both of them departed a few years prior.

In this home, surrounded by objects both familiar to her and made strange by the passage of time, she is flooded with memories of her youth. She finds hundreds of recordings of herself on an old tablet. Incredibly, the personal AI that her father gave her when she was just two years old, and that recorded everything, still works. She watches the old footage of herself and feels calm, nostalgic. Her parents' house is nice, but a bit old. The lights sometimes turn on and off by mistake, and it's a bit cold in some rooms. She calls a repairman to come and fix it up.

A young man with a toolbelt rings the doorbell. He enters thinking there is a version problem in the home's governing AI or in the climate control subsystem, but he's not sure. These are old systems, very complex in his view, dating back over 20 years. In the end, it's easy to keep records of childhood videos, to rediscover lost moments based on feelings and impressions, but very difficult to find someone who really understands how the technology that undergirds everything works.

This intrigues Olive. She wonders first if there's a business to be built here: maintenance of old AI systems.

Digging deeper, she realizes the problem is more complex: the AI systems are so convoluted and have been created in such a hierarchical way, one on top of another, with one AI generating another, that it has become very difficult to understand how they work together. Human knowledge itself is dwindling. She looks at analysis after analysis: 70% of children between 10 and 12 years old can no longer write on their own without AI assistance. What risk is the world running if no one knows, or at least thinks about, how "the whole" works? What risk is the world running if humans are no longer autonomous enough to think for themselves? If a home repairman no longer knows how a house works, what can anyone really be said to know about the vast virtual universe that buoys us all?

Olive remains holed up at her parents' place, still somewhat afraid to go outside. She refused to take AI psychological support sessions after the attack. It's ironic, given that the sessions are a service provided by one of her holding company’s subsidiaries. (After a decade of success, Olive restructured everything into a holding company with a very generic name to allow her to diversify.) Olive doesn't want any more AI therapy. She wonders if she's becoming resistant to change, and she often thinks about the assassination attempt she escaped. She prefers to stay home and think about what will happen after she's gone.

She plays the childhood videos of her and her parents over and over again. They make her realize that she is one of the first human beings to have been recorded for more or less the entirety of her life. In one of them, she sees herself swiping through a book about the Renaissance. Though decades have passed, she remembers its contents almost verbatim. She no longer remembers exactly why, as a child, she was so fascinated by humanism — that is, the study of humanitas: the recovery, in ancient texts, of a lost understanding of civilization, education, and culture that classical societies had built, and that 14th-century Italian and European scholars rediscovered a millennium later.

Later, sleeping, Olive dreams of an AI that will bring about a new humanism. She dreams of being the one to invent such a technology, one built for the future but devoted to excavating the past — to rediscovering the old ways of reading, learning, writing, understanding systems, and undertaking scientific exploration; in short, an AI that would unearth the intellectual curiosity that was natural to her species before the mass adoption of artificial intelligence.

Back home, she devotes her time and resources to a new product: an AI testamentary avatar. The idea is to build a sort of AI/smart contract that can manage an individual's wealth after death. With the growing number of childless people, this is a market sure to expand in the future. She designs a system so that the testamentary avatar is calibrated based on the personality and ethics of the individual, as recorded by the various home automation and virtual reality systems that are now omnipresent, capable of capturing every movement, every hesitation — maybe even every thought? And she uses herself to beta test the product: given her wealth and the incredible number of recordings she has of herself from earliest childhood, she is the ideal test subject for this new product. She decides to call it "AITrust". "The trust that will act on your behalf for centuries and centuries" — it's a catchy slogan for market launch!

A part of the product design work involves simulating the trust's activity for 10, 20, 50 years before any actual implementation. This is exciting, as Olive can watch her own avatar, her AITrust, allocate resources with a strange similarity to her own decision-making habits. But it can also be frustrating: as with all simulations, the model’s behavior seems always to diverge after a few years. It's as if she needs to pin certain variables in place! In other words, she needs to write down or describe the goals to be achieved so that the AI can then make creative and coherent choices.

She wonders what really matters to her, and thinks back to her parents' house and the repairman who couldn't fix anything. She recalls her dream of a new humanism.

She tells her AI trust, programming it by voice: “You will seek to invest in projects that revive a form of humanism. You will allocate your investments primarily in this type of project. You will pass on my legacy to the human beings who want to know where they came from, and you will be the AI who will help them remember.”

And now what?

There is a dangerous tendency to believe that AI is inevitable, and that we cannot change it.19 This is similar to the belief that democracy was the inevitable outcome of agriculture, which many experts now dispute.20 At the same time, it can be easy to dismiss the first version of a new technology as having low potential for widespread impact, given that its initial release demonstrates only a fraction of its potential applications.21 Both beliefs are dangerous. In the case of AI, we owe it to ourselves and our descendants to ask: what kind of AI do we want? And what do we need to do to build it?

Footnotes

  1. Here, “our children” implies not only our direct progeny, but all humans coming after us, regardless of whether or not one is a parent themselves.

  2. I have found thinking about decisions from the perspective of a light cone to be a helpful heuristic. By making certain decisions, we remove some potential futures from our light cone, while increasing the probability that others will occur.

  3. A premise of the popular and influential science fiction series Dune is that, thousands of years before the events of the initial series, there was an uprising against "thinking machines". The technology present in the universe, while highly advanced, is purely electro-mechanical. It is a possible future for us, though not one that I address in this essay.

  4. Warstadt, A., & Bowman, S. R. (2022). What Artificial Neural Networks Can Tell Us About Human Language Acquisition. arXiv. https://doi.org/10.48550/arXiv.2208.07998

  5. The study of semantics is fundamental to understanding how we conceptualize the world around us, expressing it in language. It is all the more relevant in a context where AI has shown the ability to produce compelling language that gives the striking impression of understanding.

  6. Christiansen, M. H., Contreras Kallens, P. (2022). AI is changing scientists’ understanding of language learning – and raising questions about an innate grammar. The Conversation. https://theconversation.com/ai-is-changing-scientists-understanding-of-language-learning-and-raising-questions-about-an-innate-grammar-190594

  7. The notion of PASTA (Process for Automating Scientific and Technological Advancement) developed by Holden Karnofsky is helpful in this context.

  8. Castro, Á., & Barrada, J. R. (2020). Dating Apps and Their Sociodemographic and Psychosocial Correlates: A Systematic Review. International Journal of Environmental Research and Public Health, 17(18). https://doi.org/10.3390/ijerph17186500

  9. In 2014, it was widely publicized that Facebook had been running experiments where they were attempting to influence the mood of their users by surfacing different types of content in their news feeds. The public was shocked, but data scientists and marketers shrugged. This kind of A/B testing had been their bread and butter for years: show different user populations different content, measure the outcome, optimize the experience to drive the behavior you want (clicking, buying, engaging, feeling). What was different here? Was it simply a case of the public catching up with the state of technology? Did it seem more invasive? Did it perhaps pop a perception that the content presented on the internet was somehow an objective reflection of reality? The scandal is an interesting example of how the general public can struggle to keep up and form a collective opinion or an ethos around technology, and how quickly technology can outpace the development of social customs that can help to understand and govern that technology.

  10. Davis, G. (2022). I Don't Want to be an Internet Person. Palladium Magazine. https://www.palladiummag.com/2022/11/04/i-do-not-want-to-be-an-internet-person/

  11. Cowen, T. (2022). Who gains and loses from the new AI? Marginal Revolution. https://marginalrevolution.com/marginalrevolution/2022/12/who-gains-and-loses-from-the-new-ai.html

  12. Hidalgo, C. A. (2021). Economic complexity theory and applications. Nature Reviews Physics, 3, 92–113. https://doi.org/10.1038/s42254-020-00275-1

  13. Cowen, T. (2019, May 8). Neglected Open Questions in the Economics of Artificial Intelligence. NBER. https://www.nber.org/system/files/chapters/c14032/c14032.pdf

  14. Our human minds are bad at imagining exponential functions. In this article, Holden Karnofsky takes a 10,000-year perspective to help us imagine how unsustainable recent economic growth is. https://www.cold-takes.com/this-cant-go-on/

  15. https://www.iea.org/regions/europe

  16. Nilvér, K. (2019, May 23). The Carbon Footprint of Servers. GoClimate Blog. https://www.goclimate.com/blog/the-carbon-footprint-of-servers/

  17. Karnofsky, H. (2022, December 15). High-level hopes for AI alignment. Cold Takes. https://www.cold-takes.com/high-level-hopes-for-ai-alignment/

  18. Cartwright, M. (2020, November 4). Renaissance Humanism. World History Encyclopedia. https://www.worldhistory.org/Renaissance_Humanism/

  19. Bender, E. M. (2022, May 2). On NYT Magazine on AI: Resist the Urge to be Impressed. Medium. https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd

  20. Gill, J. (2023, January 13). David Graeber vs Yuval Harari: Exploding the myth of how civilisation began. Middle East Eye. http://www.middleeasteye.net/opinion/david-graeber-vs-yuval-harari-forgotten-cities-myths-how-civilisation-began

  21. Cowen, T. (2022, October 25). Analysis | Get Ready to Relearn How to Use the Internet. Washington Post. https://www.washingtonpost.com/business/get-ready-to-relearn-how-to-use-the-internet/2022/10/25/2337e07c-546e-11ed-ac8b-08bbfab1c5a5_story.html