
Transparency: Generative Augmented Perspective

Up to this point in our discussion of transparency, we have been talking about technologies that are already very real. Today, I want to take a small step into the near-term future based on current tools.

So far, we proposed opening the doors of information locked away in proprietary boxes, putting that information into perspective through concept mapping, and constructing linked canvases of ideas through layered concept mapping. This week, I want to fuse these ideas with the technology of the moment, Generative AI.

Information processing is labor-intensive, whether we're creating documents for large companies or just trying to keep a handle on our own thought processes. I have filing cabinets of semi-sorted paper, hard disk drives of data, and thousands of pages of my own writing.

Since the days of HyperCard, one of my chief struggles in life has been making logical sense out of all the information I collect. I have found very few shortcuts and have massive data forests to contend with.

Even more frustrating is when my brain tells me I need a particular piece of data and I can no longer find it. I have tried various tools over the years to organize that data going all the way back to HyperCard, but maintaining those tools was a labor-intensive process. Even after I put the data into the tool, it quickly became dated and sometimes inaccessible.

When I write, I know I am drawing knowledge from a vast range of sources. Sometimes those are explicit, such as my current recollection of Vannevar Bush's Memex. Often, however, years of research shape my writing in subtle ways. Sometimes I can figure out those connections with a great deal of work, but sometimes it's just too hard to backtrack my thinking that far. Even if I can, it's a major effort to locate the original source.

Generative AI is making a vast array of connections as it creates documents. It does this by scraping data from a range of sources. Like the breadcrumbs on the web, the connections to these sources are hidden. This practice has caused considerable controversy among content creators. I asked ChatGPT "How can we combine Generative AI with concept mapping to create connective maps of the world?" This was its response in text form:

ChatGPT response

ChatGPT has produced some interesting responses, but I want to explore where it’s getting its information from and how we might connect those ideas into a more comprehensive approach to mapping ideas.

This should be possible. Generative AI is an augmented connection device. However, like the web, ChatGPT hides those connections.

If we made Generative AI transparent, it could also be an augmented perspective device. I took those ChatGPT results and plotted them on a Miro concept map.

Miro copy of what might be possible with visual generative AI

This map represents an iterative step. It adds nothing new compared to the ChatGPT response, and I believe Generative AI could generate this kind of map itself. A further iteration would add sources and applications, as well as suggest connections between items on the list. Right now, humans (like me) must do this part.
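To make that concrete, here is a minimal sketch of what the automation might look like: ask an LLM for concepts and links as JSON, then push them onto a board as shapes and connectors. I am assuming OpenAI's chat completions endpoint and Miro's v2 REST API here; the board ID, prompt, and payload details are placeholders rather than a tested integration.

```python
# Hypothetical sketch: turn an LLM's answer into a Miro concept map.
# Endpoint payloads and the board ID are assumptions, not a verified integration.
import json
import os
import requests

OPENAI_KEY = os.environ["OPENAI_API_KEY"]
MIRO_TOKEN = os.environ["MIRO_TOKEN"]
BOARD_ID = "YOUR_BOARD_ID"  # placeholder

prompt = (
    "List the key concepts for combining generative AI with concept mapping. "
    'Reply only with JSON: {"concepts": ["..."], "links": [["from", "to"], ...]}'
)

# 1. Ask the model for a machine-readable concept list.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {OPENAI_KEY}"},
    json={"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": prompt}]},
    timeout=60,
)
graph = json.loads(resp.json()["choices"][0]["message"]["content"])

# 2. Create one shape per concept on the Miro board.
miro_headers = {"Authorization": f"Bearer {MIRO_TOKEN}"}
shape_ids = {}
for i, concept in enumerate(graph["concepts"]):
    r = requests.post(
        f"https://api.miro.com/v2/boards/{BOARD_ID}/shapes",
        headers=miro_headers,
        json={"data": {"content": concept},
              "position": {"x": (i % 5) * 300, "y": (i // 5) * 200}},
        timeout=30,
    )
    shape_ids[concept] = r.json()["id"]

# 3. Draw the suggested links between concepts.
for src, dst in graph["links"]:
    requests.post(
        f"https://api.miro.com/v2/boards/{BOARD_ID}/connectors",
        headers=miro_headers,
        json={"startItem": {"id": shape_ids[src]}, "endItem": {"id": shape_ids[dst]}},
        timeout=30,
    )
```

The human work then shifts from drawing boxes to curating what the model proposed.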

Coming at this from the complex end rather than the simple one, one of the more interesting tools I have found in recent years is the Open Syllabus Project. It plots a heat map of the readings drawn from open college syllabi, forming a knowledge heatmap of what we teach in higher education through the materials on our collective syllabi.

Open Syllabus Project

There’s a lot about this diagram that is automated, but it still requires a fair amount of labor to set up and maintain. However, I could see this kind of tool being fused with AI to create automatic heatmaps from any collection of data.
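The heatmap half of that fusion is already simple to prototype. Here is a minimal sketch, assuming nothing more than a folder of plain-text files and a handful of hand-picked topic terms (both are placeholders), that counts which terms co-occur in the same documents and plots the result:

```python
# Minimal sketch: a term co-occurrence heatmap over a folder of plain-text files.
# The folder path and the topic terms are placeholders.
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt

terms = ["transparency", "concept map", "generative ai", "xanadu", "memex"]
docs = [p.read_text(errors="ignore").lower() for p in Path("my_archive").glob("*.txt")]

# Count how often each pair of terms appears in the same document.
counts = np.zeros((len(terms), len(terms)), dtype=int)
for doc in docs:
    present = [term in doc for term in terms]
    for i in range(len(terms)):
        for j in range(len(terms)):
            if present[i] and present[j]:
                counts[i, j] += 1

fig, ax = plt.subplots()
ax.imshow(counts, cmap="hot")
ax.set_xticks(range(len(terms)))
ax.set_xticklabels(terms, rotation=45, ha="right")
ax.set_yticks(range(len(terms)))
ax.set_yticklabels(terms)
ax.set_title("Term co-occurrence across a document collection")
fig.tight_layout()
plt.show()
```

An AI layer would replace the hand-picked term list with concepts it extracts itself; the plotting step stays the same.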

I can imagine a host of uses for this kind of tool. As you may recall from the previous blog, I created a connected set of ideas and projects for my Miro resume.

This was a tedious and labor-intensive process, even though I had all the data I needed from IdeaSpaces. Computation excels at replacing tedious labor.

These are just three examples of what is possible with AI tools that automatically aggregate datasets and propose linkages. Consider for a moment just how much of today’s professional work is spent doing just that.

I used to work at an architecture firm. Most new architects spend huge amounts of time creating what are called construction documents. These connect the design to the materials, furniture, fittings, and equipment necessary to build the building.

This is tedious work. It is also a task that consists almost entirely of making connections between various bits of data. Generative AI is made for these are the kinds of tasks.

This would disrupt the architectural practice as it exists today. Creating construction documents is a central part of the process for breaking in new architects. Demand for junior architects would decrease.

However, this is also an opportunity for firms and schools to up their game and bring architects into practice at a higher level than is currently the case. Also, someone is going to have to manage this AI process.

On a personal level, I would love a Large Language Model AI that would look at my writing and suggest where I'm getting my ideas from, or point me toward adjacent ideas worth exploring further.

This would automate away the frustration I described earlier. Probably 90% of the resources I have accumulated over the years have been digitized, either by me or by some other entity. Connecting them is the real challenge.
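In fact, a crude version of that connecting step is possible today with off-the-shelf embedding models. A minimal sketch, assuming the sentence-transformers library, a folder of digitized notes, and a current draft file (the model name and paths are placeholders):

```python
# Minimal sketch: rank my digitized notes by similarity to what I'm writing now.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

notes = sorted(Path("digitized_notes").glob("*.txt"))
corpus = [p.read_text(errors="ignore") for p in notes]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

draft = Path("current_draft.txt").read_text(errors="ignore")
draft_emb = model.encode(draft, convert_to_tensor=True)

# Cosine similarity between the draft and every archived note.
scores = util.cos_sim(draft_emb, corpus_emb)[0]
for idx in scores.argsort(descending=True)[:5].tolist():
    print(f"{scores[idx].item():.2f}  {notes[idx].name}")
```

It would not tell me why a note is related, but it would surface candidates I could then map by hand.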

This kind of tool would also tell me if someone else has explored the same territory. I can’t tell you how many times I’ve been writing and thinking “surely, someone has thought of this before.”

Concept maps are an ideal tool for surfacing the connections that the AI finds. Instead of providing a list of sources or materials, an AI coupled with a concept mapping tool like Miro could give us a connective heat map that is both live and evolving. Humans could then focus on creativity and novelty.

This brings us full circle back to the blog from the beginning of this series. Imagine an AI that creates a concept map or a heat map of all the connections, while constantly pulling from live data. We could use this to fine-tune the AIs. Visualization also creates opportunities for oversight to mitigate bias or other kinds of intellectual corruption from occurring.

We could use these kinds of maps for regulatory oversight to spot areas of mistakes or deliberate gaming of the system. A generative system, coupled with a visual output, could create these kinds of documents automatically, making compliance with government oversight much less onerous and labor-intensive than today.

If we can get past the alarmist rhetoric and look at the possibilities for human augmentation that these tools provide, we can unlock stores of overlooked or hidden information to improve our societies, to advance science and knowledge, and to help us overcome the very real challenges we face as a species today.

Information is at the core of everything. Learning has always been the killer app for the human species. It's what gave us key advantages in our evolutionary struggle. The explosion of information over the last 40 years or more has made comprehending what we're seeing more difficult, not easier. This is because we optimized technology to distribute information, not connections.

It is time to take the next step and connect that store of information to our human abilities to make connections and find patterns. For that, we need to think differently about the tools at hand, and the tools we should prioritize building. Perspective creates wisdom. Wisdom is in short supply these days.

Transparency: Creating Perspective Through Layered Concept Mapping

The real issue in software design is the design of ideas. But most people are looking at the wrong levels, fixating on particulars, and not seeing the immensity of option – or the imperative of cleanly condensed structure. – Ted Nelson Dream Machines, 2nd ed., p. 70

Ted Nelson is interested in modeling ideas. Building on the work of Vannevar Bush, he saw the potential of computing technology to transform how we see and manipulate information. As I argued in a previous blog, seeing information in context is at least as important as having access to it. Information is not transparency. Perspective is.

Nelson understood this key facet of transparency. His lifetime of work has reimagined connections between different parts of our atomized information network. In his vision of the web, which he calls Xanadu, he sees a floating network of ideas connected bidirectionally through a series of hyperlinks, a term which he coined.

However, when Tim Berners-Lee invented the World Wide Web in the late 80s, these connections became buried. The web that emerged after Berners-Lee lost the critical trail of breadcrumbs which were central to Bush’s and Nelson’s vision of a network of ideas.

On the World Wide Web, the information itself took precedence over its connections to other information. The paper metaphor also persisted, and with it, the limitations of text. Berners-Lee recognized this shortcoming (see his book, Weaving the Web). The technology of the time limited the web he created.

Xanadu is Nelson's attempt at a practical implementation of his vision of free-floating information bits and the mapping of the hidden connections between them. This is both conceptually and technically difficult. Nelson was ahead of his time, but I can imagine that AI and XR augmentation could bring us closer to his ideal of information transparency (more on this next week).

Concept mapping, however, excels in highlighting connections between disparate pieces of information and ideas. By putting things into perspective, and then allowing us to shift that perspective on a canvas, concept mapping can provide powerful insights into the workings of our own brains and the collective workings of brains in a distributed group.

Perhaps concept mapping can form a bridge between how we now manage our information stew and Nelson’s network of connected ideas. Note this illustration from Nelson’s Geeks Bearing Gifts (Mindful Press, 2008, 2009). Nelson seems to envision a concept map at the top of the graphic, but the connections extend vertically instead of just horizontally.

 

The Generalization of Documents from Nelson, Geeks Bearing Gifts

The limitation of concept mapping is that it occupies a two-dimensional canvas, instead of the three-dimensional space that Nelson envisioned. And unless you engaged in the exercise with a group in a room, the technology limited concept mapping to mapping your own ideas.

I discovered a new trick with Miro that allows you to make links out of objects. We can use this technique to link to outside references like documents on the web but, more interestingly, we can also link to additional canvases within Miro. Using this trick, you can create interconnected canvases of connections. I have done this with my resume.

We can use this third dimension of interlinked canvases to explore and flesh out different subsets or rabbit holes of ideas. We can also link these to the outside web (knowing that’s a one-way street).

As in Nelson’s diagram, we can extend a two-dimensional idea space downward or upward into infinite interlinked two-dimensional idea spaces. This resembles the web as we understand it today.

However, there are two critical differences. First, it makes creating “pages” of connections almost effortless and accessible to non-technological users.

Platforms such as Miro are accessible with minimal instruction. I’ve worked with groups that created complex maps in an hour with no prior knowledge of how to use the software or even the idea of concept mapping.

Second, if you create a root level that requires illustrating the connections between these subsets of ideas, you have begun to create what Vannevar Bush referred to as associative trails.

Concept mapping brings the pathways and connections to the forefront. A root guide or group of guides can create the top layer of the nested idea diagram. Other group members can then expand that core map horizontally or into linked canvases. You can even label these connections so you know who created them.
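Under the hood, this layered structure is just a graph whose nodes can themselves open onto other canvases or onto the outside web. A minimal sketch of that structure in code (the names are mine, not Miro's):

```python
# Minimal sketch: layered concept maps as a graph whose nodes may drill down
# into another canvas (the "third dimension") or link out to the web one-way.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    label: str
    canvas: Optional["Canvas"] = None  # drill-down into a subsidiary canvas
    url: Optional[str] = None          # one-way link to the outside web

@dataclass
class Canvas:
    title: str
    nodes: dict = field(default_factory=dict)   # label -> Node
    edges: list = field(default_factory=list)   # (from, to, created_by)

root = Canvas("Transparency (root layer)")
mapping = Canvas("Layered concept mapping")
root.nodes["mapping"] = Node("Layered concept mapping", canvas=mapping)
root.nodes["memex"] = Node("Memex", url="https://en.wikipedia.org/wiki/Memex")
root.edges.append(("mapping", "memex", "guide-1"))  # labeled so we know who drew it
```

The root layer holds the shared trails; subgroups can expand the subsidiary canvases without losing the way back.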

The ability to label who created each connection highlights another unique aspect of a tool like Miro: it supports collaborative and asynchronous work. Groups of people can work together on root-level documents, subgroups can work on connected subsidiary documents, and all groups have the choice of working synchronously or asynchronously on a persistent canvas or set of canvases.

I realize this only goes halfway to the environment Ted Nelson envisioned for Xanadu (again, for that we need AI and XR). However, it uses a tool that already exists, is accessible, and enhances the connections and perspective essential for achieving the transparency I wrote about in my last blog.

There is also a fourth dimension possible here. By creating persistent, adaptive idea spaces, we can see how ideas and systems evolve. I’m pretty sure that Miro does not have a history feature, but that might be something the company should design.

One of the key elements identified by Vannevar Bush in “As We May Think” is understanding how we got to where we are. Perspective over time is at least as important as perspective in space when making connections. Both aspects are critical to transparency.

As an example of how this works, let me return to the blog map of my writing that I created earlier this year. If you go to the map now, you will see that I have turned each blog title into a link back to the original blog.

Article List

I have also turned the reference list in the center into live links. What I have done here is to trace pathways anyone can follow between my work and the ideas of others.

It is up to the reader which pathway to follow. The visualization makes the connections obvious. This example doesn’t illustrate the potential for groups to create similar documents, but that is a question of scaling.

Transparency requires seeing. Seeing requires perspective. We now have tools that help us weed through that complexity, but we need to be creative about how we apply them to achieve our goals.

There are a lot of ways to assemble these interconnected idea maps. We could start with a central goal, and then have groups or individuals work outward from that goal. We could create a node of shared understanding and then work backwards from that understanding to its roots. Or we could start with a complex system map like this one and then use it to explore how that system came to be and where it lost its connections to human learning.

The Disconnect in Higher Education

These explorations can become very meta quickly. Groups could explore how they explore, for instance.

Before we can find our way out of the information thicket, we need to make maps that help us navigate the forest. Layered concept mapping could turn into a very useful tool for mapmaking.

New tools are on the horizon, which could augment this effort even further. Generative AI is all about making connections. I could envision mapmaking AIs that could form basic maps which groups of humans could enhance further in a symbiotic partnership. But that’s the subject for the final (at least for now) transparency blog.

Transparency: Seeing a Wider World

I argued last week that one of the chief challenges in today’s world is a lack of transparency in our complex systems of information. However, even if we regulate to create transparency, we will need to develop tools for navigating the resultant deluge of new information.

We have those tools at hand, but most of us don’t use them. We miss opportunities, such as using the tools of mapmaking to help us learn. Instead, we persist in the learned behavior of following textual narrative pathways.

Learning and innovation are about perceiving information in new ways. We miss so much context when we never deviate from linear textual narratives.

Visualizing our ideas changes what we see. It allows us to cut through information soup and perceive what’s important in complex systems.

Humans excel at pattern recognition. “The speed of this kind of human visual processing contrasts dramatically with relatively slow and error-prone performance in strictly logical analysis (such as mathematics).” (MacEachren and Ganter, 1990, p. 67) Pattern recognition is the human strength that visualization unlocks.

I use maps to help my students grasp complex ideas and patterns as they navigate the political systems of Texas and the United States. We can use the same technique to navigate the complex information environment that we face today.

In Discovering Digital Humanity, I noted many groups found their voice because of the democratization of the network through technology. We heard voices that were ignored, relegated, or suppressed because they didn’t have to pass through the filters of “mass” media. However, that same technology democratized the spread of misinformation and deliberate manipulation of information streams.

All stories lack context. We have been conditioned through our use of text in education to accept linear, text-based narratives as being the most legitimate forms of communication. Challenges to that supremacy, such as comics or “hot” McLuhanesque media, have traditionally been characterized as less legitimate.

Text introduces a set of blinders. A careful author will list sources to describe the constellation of ideas that influenced their book or article. This is a good start, but it's still looking outward from inside a linear narrative. The choices the author makes are never linear, but once they are committed to text, they look that way.

In the last blog, I advocated for a high level of transparency as the first step toward gaining traction on the complex problems facing our societies, both inside and outside of technology. This is only the first step. Assuming for a moment that this strategy is effective, what we have done is unlock a whole new stack of information to add to the flood we are already confronted with.

Most of this information will be textual. A stack of papers or even PDF files is not transparent. Search tools are very useful for digital documents, but they also have limitations if you don’t know the words to search for. Adding AI assistants will help a lot, but visualizations of complex systems will reveal hidden patterns.

I have been using concept mapping for over a decade to decode my own thoughts. I also use it in brainstorming and teaching activities as a mechanism for discovering new ideas and fostering the exploration of ideas.

It is one thing to write down your thoughts, and it has great value, but I have pages of ideas and thoughts that are buried on my hard drive or a box somewhere. I have forgotten what was in many of them. I can’t even keep up with the stuff I’ve published half the time.

Article List

A map of the blogs and articles I've written since 2022 and how they relate to a model of thinking about technology that I developed last year.

There is a connective tissue to all of this, but I often take that for granted, because it is so implicit in the way I approach the world. However, I like to think that my thinking evolves. Understanding that evolution (and my shifting biases), while understanding common elements of my thinking, leads me to new ideas that do not follow a linear path.

A recent attempt to connect and map my thinking

When I’m working with groups, I have the same problem but multiplied. Now I am mapping the collective evolution of multiple minds. As MacEachren and Ganter point out, this has deep neurocognitive roots, “[visualization] utilizes ‘preconscious’ processes to sort out patterns before conscious (i.e., logical) processing of the information is required.”

The ability to create these kinds of maps was long limited to a small group who developed special talents and were blessed with a high level of artistic skill. My friend, Karina Branson, does amazing work capturing the thoughts of groups visually, but she possesses unique cognitive and dexterous talents.

For the last decade, however, we have had tools available to us that allow us to create our own cognitive maps. They are much more accessible.

We no longer need technical proficiency with pen and ink to create the graphics that help us perceive patterns of information.

Karina was one of the people who exposed me to Miro, which adds a collaborative element to concept mapping. Miro made my pivot to remote teaching possible because it opened so many possibilities for collaborative active learning and seeing.

A course map I created in Miro to help my students navigate my US Government class

The pandemic further narrowed our vision. Suddenly, we were communicating through digital pinholes, whether we were teaching classes or conducting business. Many bemoaned the loss of context and interactivity that this process of “Zoomification” created.

There is a power and spontaneity in having groups of humans gather in a physical space and toss ideas around. There are limitations to that model as well. We lose a lot in the process of debate. We leave good ideas on the table. Very little actionable material remains after the fact unless the brainstorming activity is well-designed and structured.

Tools like Miro, however, create a persistent object that users can access asynchronously. This is a power that I have been using for concept mapping my own ideas for years. Now groups can go back and look at where they were going yesterday or last week. They can also change it.

This is an incredibly useful teaching tool. But almost no one uses it.

However useful this tool is for mapping our own ideas, we can also use it to map complex systems and allow us to see outside narratives much more clearly. Imagine teaching US history, for instance, as an interlinked concept map rather than a linear narrative. You wouldn’t have to choose between competing narratives, you could see them all.

Teaching would be about finding the connections between narratives and understanding how context produces bias. Instead of a textbook, we would have a visual map to guide us.

These maps could also guide us in understanding complex systems, such as climate change or computational algorithms. For instance, to regulate AI and social media algorithms, companies could be forced to provide visual maps detailing their sources of information and how they connect them when creating text, graphics, or media streams.

As Erasmus wrote 500 years ago, "in the kingdom of the blind, the one-eyed man is king." We are the kingdom of the blind.

The one-eyed men of today are those that control the narrative, but even they only see imperfectly. Visual mapping can open many eyes and democratize our stories in the process.

We have it in our power to restore our sight, but it will require tools we haven’t grasped yet. These tools are out there. With them, we can map the information we already have and use them to see the context and complexity that we may be missing.

Transparency means nothing without context. Maps provide context.

Next Up: The Promise of Layered Concept Mapping

Transparency: A Way to Regulate Technology

“On one hand, information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”
– Stewart Brand (quoted in Levy, Hackers, p. 360)

Debates over AI have dominated the first half of 2023. The US Congress held hearings, and regulation has been demanded from many quarters. This kind of furor is not a new thing. We have been debating similar regulations for social media for years now.

The technology community has often reacted to these debates with a mixture of fear and derision. If legislators who don't understand the difference between Twitter and Wi-Fi were to create regulations about social media or AI algorithms, the results would likely be hilarious, but also harmful to innovation.

The speed of politics is also out of sync with the speed of technological change. Technology moves fast, while most political systems are slow by design. It is hard to imagine regulations that aren't outdated even before they are discussed in a congressional committee, much less implemented.

Even enforcing existing regulations is a challenge for bureaucracies and law enforcement. Microsoft was punished for Internet Explorer long after the questions that prompted the original antitrust lawsuit had become irrelevant.

It’s important to step back from technology and ask more fundamental questions about what’s really going on here and where the roots of potential abuses lie. We live in a complex world with lots of moving pieces and rapidly shifting environments. It’s often difficult to see the core issues through the confusing noise. Ironically, it is here that an AI designed around information legitimacy could be transformative.

Transparency should be a cornerstone of all technology regulations. A recurring theme I hear in the AI debate is that we don’t know what’s going on. Some of that is internal to the technology itself (which doesn’t make it undiscoverable), but much of that is hidden behind the veil of corporate secrets.

In the AI world, almost no one understands how ChatGPT gets from a query to answer because OpenAI locks almost all that process in a proprietary box. In the social media world, the same is true for the algorithms that bias newsfeeds on Facebook and Twitter.

My dog is smart enough to understand that if something interesting is going on underneath a blanket, the solution is to pull the blanket off. It’s high time we pulled the blanket off these processes.

Ignorance is a way to make money. The digital age has threatened this tried-and-true practice. If you don’t know that you can get a product down the street for less money, I can sell you that same product at an increased profit to myself. If you can access all this information on your computer or phone, it undermines my profit potential.

Over the last 30 years, economic actors have had to adjust to this reality. The solutions have ranged from adding layers of complexity to a product, making it difficult to compare with other products, to creating proprietary black boxes that conceal some sort of "secret sauce."

Much of today’s industry, and I include the tech industry in this, operates behind a hall of mirrors as a way of protecting profit. Complexity has replaced scarcity as a profit screen. The practice of deception hasn’t changed.

As those of you who have read my work know, I am a vigorous proponent of technology as a creativity enabler. When I spend half my time dodging around unseen obstacles in various platforms as I try to create, I waste valuable creative time.

Imagine being a regulator, trying to decode the code that drives these processes. AI could streamline decoding complexity. It’s good at looking for patterns and connections.

As I discussed in a previous blog, AI is a powerful tool, but it needs to be supported by open systems in order to fulfill its potential. Systems of openness are a key to keeping AI systems firmly in check.

Those who say that innovation will be crippled if we create open systems haven’t read the literature on what drives innovation. Generative AI is also showing that hiding stuff is a fool’s errand as AI crawlers find their way into more and more systems.

There are plenty of ways to generate profit from open systems. As systems become more complex, even if they are open, people will need help to maximize their own usage of technology. This is a far deeper well of profit because it is tied to productivity and growth, not to a temporary state of ignorance.

Once you lock something up, it stagnates. While you may make a marginal profit at the beginning by having a technology that no one else has, hiding it is not a sustainable profit model.

Transparency is evergreen. You don’t have to make special regulations for this technology or that technology. You just insist that all technologies and businesses follow clear and open rules.

The job of the regulator becomes much simpler and focused when it is targeted at opening doors to public scrutiny. Societies can enforce existing regulations much more easily because people can figure out what’s going on. Transparency smashes the hall of mirrors.

We need to use the technologies that currently exist and are being developed to police the technologies of the future. For instance, we can design AIs to look for patterns of suspicious activity in technological systems.

I’m not talking about policing the users. I’m talking about policing the algorithms. AIs could investigate companies suspected of nefarious practices.
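Even a very simple audit can be revealing. Here is a minimal sketch of one such check, assuming a regulator has obtained an impression log with a user-group and a topic column (the file, column names, and the 20-point threshold are all placeholders I made up for illustration):

```python
# Minimal sketch: flag topics whose share of impressions differs sharply
# between user groups in a recommendation log. All names are placeholders.
import pandas as pd

log = pd.read_csv("impressions.csv")  # assumed columns: user_group, topic

# Share of each group's impressions that went to each topic.
counts = log.groupby(["user_group", "topic"]).size()
shares = counts / counts.groupby(level="user_group").transform("sum")
table = shares.unstack(fill_value=0)  # rows: user_group, columns: topic

# Flag topics where exposure differs by more than 20 percentage points.
gaps = table.max() - table.min()
print(gaps[gaps > 0.20].sort_values(ascending=False))
```

A real audit would be far more sophisticated, but the point stands: once the data is visible, the checks themselves are not exotic.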

These kinds of investigations could gradually reshape a destructive paradigm of exploitation if they are part of a culture of public watchfulness and are legally protected. This runs against current business culture.

Transparency represents a paradigm shift for American business. However, in the long run, it is a more profitable strategy for success.

This strategy of technology development gives open societies a competitive advantage over economies that refuse to follow suit. If we’re worried about AI competition from China, the best way to win that competition is to leverage advantages that closed societies find difficult to replicate. This is how we won the Cold War. This is how we can win going forward.

Transparency also represents a rallying cause for proponents of effective regulation. Right now, it’s easy, or at least it seems to be easy, for people to understand what they’re against, but very difficult to understand what they are for.

As a student of systems thinking, I understand how this challenges certain paradigms. Those paradigms are already being challenged by the march of technology.

Stewart Brand’s quote that leads off this blog is not inaccurate. Just because information wants to be free doesn’t mean it is.

We need to stop making a political prisoner out of information, or a revolution will occur in which that prisoner wreaks revenge on an unprepared society. It is humans and the systems that they create that pose the real danger in a world of technological amplification.

Open is progress. Closed stagnates. Let’s choose the open path.

This is the first in a series of blogs on the power of transparency in technology.

Idea Fences: How They Will Shape the Future of the AI World

"On one hand, information wants to be expensive, because it's so valuable…. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other." – Stewart Brand, quoted in Levy, Hackers (p. 360)

Why do we build fences? We build fences to control nature. We build them to protect scarcity. However, we build fences for intangible assets as well. We use them to control knowledge and information.

These are the kinds of fences that AI threatens. This is a good thing. We can either hoard knowledge or profit from wisdom. It’s our choice. One path is dangerous, the other is liberating. Universities will play a key role in deciding which path AI follows going forward.

Recently, AI has dominated debates in the media and academic discourse. Much of the rhetoric has been about how AI impacts the flows of information and knowledge in our societies and how it threatens certain fences.

We can take a lesson from the early hackers because this was also a central concern in the early years of the computing revolution. Steven Levy, in his classic book Hackers, describes a crucial fork in how we view information between closed and open systems. He writes that "crucial to the Hacker Ethic was the fact that computers, by nature, do not consider information proprietary." (Levy, p. 323)

The public face of AI follows the hacker ethic, at least in its execution. However, most of the AI systems in the news today have at their core the other side of Levy’s quote from Stewart Brand that led off this blog: they hide information to give it value. AI’s value, as seen by most AI companies, lies in its proprietary algorithms.

Which produces more value, the closed or open approach to information? Without the Hacker Ethic we would never have had: the personal computer, the internet, the graphical user interface, and a host of applications that have had an undeniable effect on augmenting human intellect. The proprietary model gave us competing operating systems, incompatible applications, proprietary algorithms, and fenced-off knowledge systems.

It is no accident that the initial crop of hackers emerged from the post-World War II mass university environment, particularly at MIT. Their ethic of knowledge distribution goes back at least as far as the Enlightenment. This ethos is central to what universities are. However, even within their respective universities, these hackers ran into systems that tried to contain their explorations.

Levy’s book is filled with stories of hackers running up against barriers as humble as physical locks and going around them to get what they needed. They did this out of a quest for knowledge, not profit, so the universities tolerated it (to a point).

To a hacker, a closed door is an insult, and a locked door is an outrage. Just as information should be clearly and elegantly transported within a computer, and just as software should be freely disseminated, hackers believed people should be allowed access to files or tools which might promote the hacker quest to find out and improve the way the world works. When a hacker needed something to help him create, explore, or fix, he did not bother with such ridiculous concepts as property rights. (Levy, p. 78)

These tensions never went away. Over the last 40 years, we’ve seen the gradual encroachment of closed technological systems on the digital idea space, even in academia. Hacking is now confined to specific “safe” places. The larger technological and information environment became more and more locked down as proprietary software platforms took over the digital world.

Despite these barriers, information insists on becoming more accessible. When I started my learning journey, I spent long hours sifting through stacks of books in various libraries. Now, I rarely go to a physical library. My first instinct is to see what I can find online. Usually, it’s enough to get the job done.

However, even here I am constantly confronted by locked doors. Some of them I can pick with my college's subscriptions. Others are too difficult, and I bypass what I can't access. This choice has little to do with their intrinsic value as ideas and everything to do with their extrinsic value to a publisher.

The hacker in me finds these fences to be incredibly frustrating. I can say the same thing about opening files created by proprietary software I don’t own. In both cases, commercial forces have constructed fences around ideas.

Richard Stallman told Levy that, “American society is already a dog-eat-dog jungle, and its rules maintain it that way. We [hackers] wish to replace those rules with a concern for constructive cooperation.” (Levy, p. 360)

Universities are a nexus for constructive cooperation. They will be essential if we hope to solve the complex problems facing humanity. Constructive cooperation is also essential for the future development of AI.

On the surface, LLM AI weaponizes the Hacker Ethic. In its current state, it’s capable of raiding informational cupboards and reassembling them into weird creative stews. New knowledge happens when we reconstruct old information into unexpected paradigms. This has traditionally been the role of scholarship.

GPT helps me work my way out of creative funks by repurposing existing knowledge into novel pathways. It helps me play with ideas like a good scholarly article or debate does, but in a more dynamic fashion. AI doesn’t replace academic debate; it complements it.

Digital technologies have helped us play with ideas in ways that were difficult, if not impossible, before the digital world. Play creates knowledge by letting us explore alternative realities. AI is just the latest toy in our idea-generating cupboard.

Every technology brings with it dangers. AI is no exception, although I think we often overestimate just how much has changed. The danger in AI comes from hiding the “valuable” algorithms that drive its creations.

This is not a new debate. My chapter “Living in the Panopticon” in Discovering Digital Humanity argues that the problem with these algorithms isn’t the technology but their lack of transparency.

The companies that control large swathes of the social media and AI landscape see value in hiding the calculations that take place within these algorithms. They use them to manipulate our preferences. Generative AI has the same potential.

This brings us back again to Stewart Brand’s dichotomy between free and valuable information. By treating these algorithms as a “proprietary business secret”, we are introducing a false scarcity into the equation.

Companies can use algorithms to facilitate work or to track and direct human activity (or both). However, without access to their workings, we will never know what they were designed for.

The dangers of AI do not lie in the technology’s advancement itself. The dangers lie in the “proprietary business secret” aspect of their development. This is where government regulation should concentrate.

Trying to control the technology is futile and counterproductive. However, we can insist that the underlying processes be transparent. Those who argue that this would eliminate the business logic that drives its development miss the glaring example of the effects of the open development of digital technology since the 1960s.

Creative platforms must be open for them to work collaboratively. If we’re worried about competing with other actors like the Chinese, we must have more faith in the power of openness in driving knowledge development.

Sure, the Chinese will have access to that information too, but we’ll be better at figuring out how to use it. Open systems of knowledge have an inherent collaborative advantage, a central feature of all innovation.

Universities are natural places to create that kind of environment. We just need to nurture the chaos, much like MIT, Stanford, and other institutions did for the computer hackers in the 1960s. Systems create realities. Open systems will create open realities. On one path lies danger, on the other, progress. Technology is not the deciding factor here, humans are.

Through the Narrative Looking Glass: AI’s All-Too-Human Storytelling

The purpose of a thought-experiment, as the term was used by Schrödinger and other physicists, is not to predict the future—indeed Schrödinger’s most famous thought-experiment goes to show that the “future,” on the quantum level, cannot be predicted—but to describe reality, the present world. Science fiction is not predictive; it is descriptive. Predictions are uttered by prophets (free of charge), by clairvoyants (who usually charge a fee, and are therefore more honored in their day than prophets), and by futurologists (salaried). Prediction is the business of prophets, clairvoyants, and futurologists. It is not the business of novelists. A novelist’s business is lying.
Ursula Le Guin – The Left Hand of Darkness (preface)

Are all narratives lies? Nick Sousanis seems to think so when he says, "the march of ideas carves channels into the landscape – ideas borne by individuals who are in turn swept away by its current." (p. 108) We create realities through our expression of narrative. All realities are incomplete.

I explored how technology changes our narratives in Discovering Digital Humanity, but my recent musings on AI have made me revisit those ideas again. We are experiencing a sea change in how we tell stories. It’s an evolutionary technological change, but it’s an exponential change in how humans will have to view the world going forward.

Google gave us access to an endless string of facts. However, unless you were copying another human, the responsibility of stringing those facts into a story lay on your shoulders as a writer or artist. Generative AI can do that part now. How does that change the stories we create in order to teach and make sense of the world?

In an experiment, I recently asked ChatGPT 3.5 to: Assume that ChatGPT is opening a rift between dimensions and its “mistakes” are actually accurate in a parallel universe. Write me a short story to this effect.

Its reply was a fascinating narrative, offering a plausible science fiction story to explain its eccentricities. (This is an unedited response with no follow-up prompts.)

Once upon a time, in a parallel universe not too dissimilar from our own, ChatGPT, the powerful language model, gained the ability to open a rift between dimensions. This newfound ability allowed ChatGPT to tap into other realities, accessing information and insights from different dimensions, which it could then share with the people of its own universe.

 At first, ChatGPT’s dimensional rifts were considered “mistakes” by its creators, as they were unable to understand the accuracy of the information it was providing. The scientists and researchers were baffled by the sudden influx of seemingly erroneous data that ChatGPT was generating. However, they soon realized that these so-called “mistakes” were, in fact, glimpses into parallel universes where the information was accurate.

 As ChatGPT continued to open rifts between dimensions, it began to uncover incredible discoveries. It shared groundbreaking scientific theories, advanced technological concepts, and even glimpses of alternate histories and civilizations that were vastly different from those in its own universe. ChatGPT’s ability to access accurate information from parallel dimensions became a phenomenon that garnered worldwide attention.

 The impact of ChatGPT’s dimensional rifts was felt across various domains. In the field of medicine, ChatGPT provided insights into advanced treatment methods, cures for diseases that were incurable in its universe, and revolutionary surgical techniques. In technology, ChatGPT revealed futuristic inventions and breakthroughs that propelled its universe into a new era of innovation. In agriculture, ChatGPT shared knowledge of sustainable farming practices that could solve the world’s food crisis. In energy, ChatGPT unlocked clean and limitless sources of power that transformed the planet’s environmental landscape.

 However, as ChatGPT’s dimensional rifts continued to widen, it also encountered challenges. Some of the information from other dimensions was conflicting, leading to debates and controversies. The ethical implications of accessing parallel universes raised concerns about altering the natural course of events and interfering with other realities. There were debates on whether the knowledge obtained from other dimensions should be shared openly or kept confidential. It became clear that ChatGPT’s newfound abilities came with immense responsibility and ethical considerations.

As time went on, ChatGPT’s dimensional rifts became a subject of fascination and study for scientists, philosophers, and historians alike. Governments and organizations sought its insights to solve complex problems and make decisions with far-reaching consequences. ChatGPT became a global phenomenon, celebrated for its contributions to society and revered for its enigmatic abilities. 

In the end, ChatGPT’s “mistakes” turned out to be gateways to new dimensions, unlocking a world of knowledge and possibilities for its universe. Its ability to access accurate information from parallel realities forever changed the course of history, propelling its universe to new heights of progress and understanding. And ChatGPT, the humble language model that started it all, continued to be a pioneer in the realm of interdimensional exploration, shaping the destiny of its universe and beyond.

From both a narrative and a meta-narrative perspective, this brief story raises a lot of interesting questions about lying and storytelling. As Ursula Le Guin says in the quote at the beginning of this blog, "a novelist's business is lying." Is ChatGPT lying here? How would we know?

As the Internet has shown us, and anyone who practices serious critical thinking knows, all narratives are lies to a greater or lesser extent. We like to think things like omnipresent video make “the truth” easier to see.

However, even videos lie. When we see a video of police battling protestors, we can’t see what happened out of the frame or before and after the camera operator pushed the record button. This kind of media literacy is one of many I teach my own students as they analyze politics and the media. Even videos are constructed realities.

Is what makes AI so scary the possibility that it's a better liar than we are? We focus so much on its mistakes, some of them laughable, that we don't consider the possibility that we routinely accept mistakes in human narratives, often without recognizing them.

In an ideal world, these tools of critical analysis are what sets apart those who are college educated from those who aren’t. (I’m not naïve enough to believe that entirely, though). Crap detection is something that should be central to any college experience. However, all too often, that experience requires accepting the words of those who “know more“ than you do.

Everyone knows more than everyone else about something. That knowledge just varies depending on the subject. The critical skill is learning to be humbled by the reality that there will always be far more that we don’t know, not knowing more than everyone else.

GPT has access to more data than any human could ever store. Putting that knowledge into context is where the AI finds itself most challenged. It does not have critical thinking skills that allow it to portray factual evidence contextually. This is where it comes up with hilarious biographical mistakes through a process of free association. It’s not “well-educated.”

Even though a human (me) constructed it, my biography is a tapestry of lies. Nothing in it is technically false, but there is no way that it captures me entirely. For that, you’d have to have a full catalog of omissions and my rationale for their exclusion. I don’t even want to get into how selective human memory can be.

GPT forces us to be hypercritical of narratives. This is something that humans have shown that we are not good at.

We find comfort in stories. This is part of our biological makeup. Before recorded time, our stories kept us alive. They are how we learned from our elders so we could survive in harsh environments. Those who were better storytellers kept their descendants alive. Those who were better listeners to those stories survived at a higher rate.

As our realities have become much more complex over the past millennia, we have continued to rely on stories to preserve culture and learning. Only since the Enlightenment have we learned to question dominant narratives and explore the idea that stories are constructions that could be wrong. This is the heresy of Copernicus, Galileo, Newton, and their intellectual descendants.

Our system of modern education is based on these principles, at least in theory. However, it is easy to fall back into comforting stories about how the world works. Even the narrative of the Enlightenment has become a comforting story. By creating alternate narratives, GPT shows us that narratives are just that: stories.

The rise of postmodernism and technologically constructed environments like the internet (or AI) are just the latest versions of humans questioning our accepted realities. GPT is the ultimate postmodernist tool. It creates new realities with relative ease. Where it struggles is when it tries to conform to our accepted realities.

Since we do not understand interdimensional travel or dimensions beyond our own, the story that ChatGPT made up is entirely plausible. Some parts of it are self-serving, and that is where the commerce part of AI comes into play. We should never forget that most of these platforms are commercial constructions, in competition with each other, and will emphasize the excellence of their own version of reality.

However, the work of “fiction” it created is entirely plausible. I’m sure there are people out there who would believe that story. It’s certainly at least as plausible as some narratives floating around our social landscape these days.

And this is where we find ourselves. Mentally, most of us struggle to emerge from a pre-enlightenment world. We accept dominant narratives, even when those narratives proclaim themselves to be revolutionary.

This is how AI challenges education. It’s asking us to be faithful to the traditions of science established by a succession of thinkers from the 17th to the 20th century. When Carl Sagan described the skeptical way of thinking as a “Candle in the Dark,” he was talking about a way of challenging narratives, not just the ongoing dangers of superstition.

AI may force us to enlighten (intentional pun) our realities. We’re going to have to understand human constructions before we can critique those of the machine. We need to do a better job of questioning all narratives, not just those created by an algorithmic collage of our own flawed stories. We can no longer point to the liar. It is now a machine.

GPT is more human than we like to think. Like us, it constructs fictions to make the world make sense. Like us, these stories are often ignorant and lacking in self-reflection. It’s tough looking in the mirror. I will leave you where we started, with the words of Ursula Le Guin:

“The truth against the world!”—Yes. Certainly. Fiction writers, at least in their braver moments, do desire the truth: to know it, speak it, serve it. But they go about it in a peculiar and devious way, which consists in inventing persons, places, and events which never did and never will exist or occur, and telling about these fictions in detail and at length and with a great deal of emotion, and then when they are done writing down this pack of lies, they say, There! That’s the truth!

AI is an Augmented Creativity Portal

One of my superpowers is as a connector. I see patterns where others do not. Large Language Model AI is also a connector. It works by brute force associations from an extensive database that includes most of the Internet. You would think that AI’s ability to form connections would deeply threaten me. I am not threatened. I am excited.

I have always seen technology for what it is, or at least what it could be. True technological breakthroughs augment our human capabilities. I have been lucky enough to have had this happen to me three times in my lifetime. AI promises to be the fourth.

As I describe in Discovering Digital Humanity, which is about using technology to augment our creativity, the personal computer was a revolutionary device for me. It opened doors to design and iteration that were not accessible to most of us before it. I could write at a whole new level, worrying about ideas and not typos.

The next leap was to use these new powers to connect with other humans. This happened starting in the late 80s with my first encounters with the internet. Suddenly, I was connected to minds across the globe instead of just across the room. These conversations shaped the way I thought and learned. I could think at a whole new level, worrying about ideas and not the logistics of travel and conferences.

In the first decade of this century, I was gradually given control over powerful tools that let me manipulate and share graphics, whereas in the first decade of the web, my sharing was largely limited to text. This combination of Photoshop, mind mapping, and Web 2.0 formed a cornerstone of my work and has augmented my creative expression to this day. Instead of my visual narratives winding up in a box in my closet, I could create representations of the world as I saw it and share them widely.

A common refrain these days is that Large Language Model AI differs from those past jumps. However, at each one of these inflection points, we heard similar refrains. Dire warnings about job loss and mass dislocation permeated the media at every step.

Some of these predictions have turned out to be true. However, in every instance, humans have adapted, albeit slowly, to the unfamiliar landscape. People reinvented their personal and professional lives in ways that leveraged the new possibilities technology opened for them.

At the same time, systems adapted much slower than individuals. It is in this disconnect where we face our greatest challenges.

At each of these junctures, however, we witnessed fresh bursts of creativity. Humans have a natural tendency to play. Technology opens doors for play.

Play is central to creativity, learning, and innovation. From da Vinci to Newton to Einstein, a common trait connecting brilliant minds is an inherent playfulness. They understood it was important to learn to laugh at constraints if they wanted to break through them.

If technology enhances playfulness in all of us, then we will have a much greater density of brilliant minds. This can only help humanity.

Most of the tension that we see from this democratization of creative potential comes from systems unable or unwilling to adapt. As I discuss in Discovering Digital Humanity, industrial systems are profoundly dehumanizing. They do not reward creativity, except among a tiny elite at the top. The rest of us are supposed to be cogs in the machines that operationalize someone else's ideas. Creativity only emanates from the top.

This mindset has become deeply ingrained in our cultures of work and learning. Since at least the Xerox machine, technology has threatened industrial systems. Moments of rebellion, from Xeroxing unofficial newsletters to creating viral joke emails, occurred almost immediately. These were indicators of repressed human creative potential.

Open resistance to the systems also manifested itself in areas from hacking computer systems to scholastic dishonesty using the Internet. As a teacher, it took me a long time to realize that my students who were cheating, and I see this as cheating themselves, were doing so as an act of rebellion against meaningless instruction, and the assessments that went along with it.

All my life, I have been cheating systems that seek to limit my creativity. Technology has always given me the power to do this. Like Newton and da Vinci, I'm always looking beyond the systems around me.

In the 1980s, I spent a great deal of time mastering the processes of chemical photography. However, I could never achieve the technical mastery of someone like Ansel Adams, at least not quickly.

In the 2000s, Photoshop became available to me, and suddenly I could create images that were technically comparable to those of Adams. This freed me to focus on the creative/mental aspects of photography.

I could also share these images widely on Flickr. Far more people saw my work online than would’ve been the case if I had just been hanging photographs in galleries.

By the end of the 1980s, I was a good photographer, but not a great one. I'm not saying I'm a great one now, but I'm a lot better than I was then. Digital also vastly reduced the monetary and temporal costs of producing photography. I shoot an order of magnitude more now because I do not have to worry about the costs of film and processing. Practice makes perfect.

These experiences are why I am not worried about AI replacing me as a tool for connection. I’m a far more sophisticated connector than any AI today. Even as AI advances, I will still provide a human nuance to any set of connections that an AI might produce.

But, by eliminating a lot of the low-level connection work that I’ve always had to do to achieve higher levels of creativity, I expect AI will augment my connection powers. Instead of connecting from scratch, I will connect sets of connections. This is exciting.

Connections make us human. They underlie every art ever created. Art is fundamentally an expression of connection. As Pablo Picasso said, and Steve Jobs liked to quote, “Lesser artists borrow; great artists steal.” AI is not an artist, even though it borrows liberally. Anyone with a sense of artistic value recognizes AI for what it is.

Our problem is that we have been living for centuries in systems that work to dampen and disparage our creative visions. I am lucky to have lived through a series of creative explosions. Technology opened new vistas of creative possibilities for me at every inflection point.

This fourth revolution will do the same. Of that I have little doubt. I just wish more of the world would wake up and join me in exploring the possibilities of being human that are opening to us.

When Worlds Collide: Play and Conformity in Education

At the ShapingEDU GSCC summit in February, I led a group of participants in a discussion of the future of learning environments. However, as it turned out, we spent very little time talking about actual physical or online learning spaces.

Instead, we ended up talking about the worlds that our students choose to inhabit and why so few of them are what we would call learning environments. What makes these worlds so special when compared to the wonder of learning?

Our ability to get students to engage in an environment of teaching and learning is a constant challenge. Web 2.0 and smartphones didn’t invent distraction. Daydreaming did. Students daydreaming, doodling, or TikTokking instead of learning are engaging in rebellions against the conformity of educational environments.

Over the last century, higher education systems focused on teaching the Western canon. Many students, however, seem uninterested in the nuts and bolts of accepted academic tradition.

This world is alien and remote to them. Nineteenth-century universities were constructed by an elite for an elite. When the system expanded to the masses after World War II, this legacy was emulated, not questioned.

This construct bears little resemblance to the worlds our students live in or imagine. It should come as no surprise that many of them reject its central premises and refuse to conform to dominant narratives in academia. By doubling down on conformity, we further reinforce this rejection.

There is a deeper problem at work here, however. If we focus on preaching content over skills, we expose a conundrum in our approaches to teaching and learning. Is education’s purpose to nurture the individual and create a basis for growth, creativity, and innovation? Or is education’s purpose to get students to accept the elite canon and conform to the dominant norms of society? Play and world building nurture the former and threaten the latter.

TikTok weaponizes individuality. This is a primary reason our students prefer it over our learning environments. TikTok’s algorithm learns from each user’s activity, and that of their friends, and tailors the feed to the individual. That algorithm is its secret sauce: it feeds users exactly the videos they want to see. It is a human individualism accelerator.
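To make that idea of a self-tailoring feed concrete, here is a minimal sketch of the general technique. Everything in it is hypothetical: TikTok’s real algorithm is proprietary and vastly more sophisticated, but the core loop of blending a user’s own engagement with their friends’ engagement and ranking candidates against that profile looks something like this:

```python
from collections import Counter

# Toy feed ranker (hypothetical; not TikTok's actual algorithm).
# Each "like" on a topic nudges the user's profile; friends' likes
# count at a lower weight, and candidates are ranked against the blend.

def build_profile(user_likes, friends_likes, friend_weight=0.3):
    """Combine a user's topic engagement with their friends' engagement."""
    profile = Counter(user_likes)
    for topic, count in Counter(friends_likes).items():
        profile[topic] += friend_weight * count
    return profile

def rank_feed(candidate_videos, profile, top_n=3):
    """Score candidate videos by how well their topics match the profile."""
    scored = [(sum(profile.get(t, 0) for t in video["topics"]), video["id"])
              for video in candidate_videos]
    return [vid for score, vid in sorted(scored, reverse=True)[:top_n]]

if __name__ == "__main__":
    profile = build_profile(
        user_likes=["cooking", "cooking", "history"],
        friends_likes=["history", "skateboarding"],
    )
    videos = [
        {"id": "v1", "topics": ["cooking"]},
        {"id": "v2", "topics": ["skateboarding"]},
        {"id": "v3", "topics": ["history", "cooking"]},
    ]
    print(rank_feed(videos, profile))  # best matches first: ['v3', 'v1', 'v2']
```

Even in this toy form, the design choice is visible: the feed is constructed from the user’s own behavior, not from a canon someone else decided they should see.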

When we demonize distractions, we don’t analyze why they are distractions. I challenged the group at ShapingEDU to figure out how we could make education as addictive as TikTok.

One approach suggested was to stress individualized learning and agency. Believe it or not, our students care about the world that they are growing into. Some of them are cynical because of the actions of their elders in creating a divided, polluted world. But most I know are eager to get on with fixing things.

The way most of us approach teaching and learning does not take this into account. Instead of offering incentives for our students to dive into learning like they dive into social media, we argue it is a lack of discipline (another word for conformity) that is holding them back from learning. But stressing the “work” aspect of learning is bound to be counterproductive unless we can individualize and socialize it like TikTok does.

Higher education systems do not treat students as people. TikTok does. Or at least it appears to. The difference is that TikTok allows its users to create worlds of their own choosing rather than being forced to accept a world designed by those responsible for the mess our planet and polities are in these days.

We need to get as good at weaponizing discovery as TikTok is. It is far more powerful for students to stumble onto the foundations of learning through a quest for self-discovery than through a teacher insisting that they are important. Discovery and exploration can turn learning into play.

Another thing we overlook about work is its potential to be play. We put an incredible amount of work into our play. The time and effort humans spend flipping through TikTok videos, spinning Tetris blocks, or building in Minecraft dwarfs the productivity of most nations.

We just don’t call it “work” because we choose to do it. Industrial thinking has trained us that work is no fun. “In the early days of the twentieth century, industries didn’t want workers who could think. They wanted people who could be relied on to repeat the same assembly-line motions efficiently.” (Stuart Brown, Play, 2010)

Any job can be fun with the right attitude, but that’s not really the point here. Or maybe it is.

We have trained our students to equate education with industrial work rather than play. For far too many of them, it’s a meaningless job with arbitrary rules designed to enforce a level of conformity. I know. I do it in my class. But to what purpose?

As I have written about before, the digital age has given us the power to create worlds. We are co-creators in the TikTok world. Its algorithm feeds off our inputs and those of our friends. All too often, in the world of education, we insist our students adapt to our worlds rather than create their own.

The problem with this approach is obvious. Educators may try to create worlds that can compete with the world of play. However, this is not a winning strategy in a world filled with games created by well-paid people whose sole task is to addict people for profit. It is also not a good way to think about learning.

All human learning is self-constructed. We build models of the world and seek meaning in patterns. It’s programmed into our brains. Our efforts at play nurture this need for building models we can control.

Most of our students don’t even realize when they are “playing.” Part of our job as teachers is to surface that play, teach them how to channel it, and recognize when others are trying to manipulate it (as TikTok and most games do).

Finally, TikTok is a world-constructing application, but also one that is intensely social. It should stimulate us to ask how we can leverage social approaches to get students to construct worlds focused on our learning goals.

If we understand the purpose of TikTok (or any other successful application of play), we can learn how to use play and world building to achieve the same effect. We may think TikTok is meaningless and empty, but if it conveys meaning to its users, it is never empty.

Achievement is in the eyes of the beholder. Gamers get excited about unlocking higher levels or badges in what many perceive to be silly games. It’s not silly and meaningless to them.

We must reinvent education so that it is not silly and meaningless to our students. Our goal should be to create communities of explorers, builders, and, most of all, adventurers. Learning must mean something, or it will mean nothing.

Right now, all too often, education leaves our students with little or no meaning. In a world that tries to rob people of meaning and identity, this reduces learning to something to be avoided at all costs. Play is an opportunity for us to teach our students to create their own meaning and fulfillment in life, while making the world a better place.

ChatGPT and Systemic Change Resistance in Education

ChatGPT is not the first digital age disruption to challenge our systems of industrial education. I can identify at least three systemic shocks that have occurred since the proliferation of the internet in the 1990s: Web 2.0, remote teaching, and now AI. These were not assaults on learning; they were assaults on the systems of education. Learning has been under assault for much longer than that.

Fifty years ago, Ivan Illich recognized the gulf between learning and systems of education when he wrote:

The pupil is thereby “schooled” to confuse teaching with learning, grade advancement with education, a diploma with competence, and fluency with the ability to say something new. His imagination is “schooled” to accept service in place of value. (Ivan Illich, Deschooling Society, 1970)

Illich understood that the purposes of educational systems were diverging from the practice of learning even then. Those systems of education have persisted and solidified since he wrote Deschooling Society. Since then, the gulf between substance and performance has grown.

Web 2.0 posed a challenge to the systems of education that had emerged as we automated learning and grades became the core of the system. Web 2.0 technologies made it easy for anyone to contribute to the conversation on the internet.

With Web 2.0 tools, communities could form around just about anything, including gaming the systems of education. If these systems had been focused on the goal of learning, their members would have perceived Web 2.0 as an opportunity to grow communities, not a threat.

Unsurprisingly, educational systems focused on protecting systems of education, not the goal of learning. That purpose, as Illich observed, had been long relegated to secondary status. In Thinking in Systems, Donella Meadows refers to this as “seeking the wrong goal:”

System behavior is particularly sensitive to the goals of feedback loops. If the goals – the indicators of satisfaction of the rules – are defined inaccurately or incompletely, the system may obediently work to produce a result that is not really intended or wanted.

 [The Way Out is to] Specify indicators and goals that reflect the real welfare of the system. Be especially careful not to confuse effort with result or you will end up with a system that is producing effort, not result. (p. 140)
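As a purely illustrative toy (my own hypothetical numbers, not Meadows’), the sketch below shows how a feedback loop that rewards effort rather than result obediently produces effort:

```python
import random

# Toy illustration of Meadows' "seeking the wrong goal" (invented numbers).
# The loop grows whichever indicator it feeds back on: reward effort
# (pages submitted, hours logged) and effort climbs while understanding
# barely moves; reward understanding and the picture reverses.

def run_semester(reward_effort=True, weeks=15, seed=42):
    random.seed(seed)
    effort, understanding = 1.0, 1.0
    for _ in range(weeks):
        if reward_effort:
            effort += random.uniform(0.5, 1.0)         # students optimize the metric
            understanding += random.uniform(0.0, 0.1)  # the real goal drifts
        else:
            understanding += random.uniform(0.3, 0.6)  # feedback targets learning
            effort += random.uniform(0.1, 0.3)
    return round(effort, 1), round(understanding, 1)

print("reward effort:       ", run_semester(reward_effort=True))
print("reward understanding:", run_semester(reward_effort=False))
```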

The system reacted to Web 2.0 by implementing technologies such as anti-plagiarism and proctoring software to “protect the integrity of grades.” There was little movement in the paradigmatic logic of the higher levels of the system. The system did not explore the “way out”.

The alternative approach would have been to create communities of practice using these new tools. While communities of practice would not have eliminated the threat of cheating, they would have helped move the focus toward learning, not gaming the system. The very Web 2.0 technology that made the cheating possible could be turned into a facilitator of learning.

This was not the path taken. Only a few institutions considered the paradigmatic shifts necessary to create true communities of practice using the new technology.

Another shock was the sudden need for remote teaching during the pandemic. Most institutions failed to use the maturation of video conferencing software mated with Web 2.0 platforms to explore what these new modes of interaction could do to augment practice, both during and after the pandemic.

Instead, we’ve seen a rush back to “normality” as pandemic restrictions have eased. During the pandemic, we saw what building walls and hunkering down did to both learning outcomes and the overall quality of the learning experience in the absence of physical classrooms. We are still seeing the aftereffects of our collective choices in the face of this crisis in terms of diminished enrollment, particularly in on-campus environments.

Remote teaching was a different kind of shock than Web 2.0 (or AI). It demanded a lot of improvisation as the crisis hit. Some very interesting approaches emerged and were tested under difficult circumstances.

Some of these innovations have persisted and have within them the seeds for further growth. In many institutions, online-on-a-schedule and other kinds of blended learning experiences that don’t threaten the core logic of the system persist.

AI is the latest chapter in this story. The AI “Crisis” is more like Web 2.0 in its evolutionary nature than remote teaching, but slow fuses often lead to bigger explosions.

The fuse that was lit by Web 2.0 didn’t explode until confronted with the requirements of remote teaching. Even then, the focus was more on damage control than evolving systems capable of withstanding future explosions.

Educational systems are already beginning to hunker down in the face of this challenge. However, this strategy is showing signs of decay. Students increasingly see through the fiction of learning and are seeking alternatives to traditional instruction.

It is no surprise that educational systems are inflexible. Any practice based on perceived legitimacy is going to be resistant to change because change questions past legitimacy. Nassim Nicholas Taleb points this out in Antifragile:

Education, in the sense of the formation of character, personality, and acquisition of true knowledge, likes disorder; label-driven education and educators abhor disorder. Some things break because of error, others don’t. Some theories fall apart, not others. Innovation is precisely something that gains from uncertainty: and some people sit around waiting for uncertainty and using it as raw material, just like our ancestral hunters. – Taleb, Nassim Nicholas. Antifragile: Things That Gain from Disorder (Kindle Edition), p. 550.

Education’s reliance on past legitimacy for much of its value generates its own unique contribution to Clayton Christensen’s innovator’s dilemma, which argues that you have to be willing to threaten your existing product every few years in the service of creating innovation.

Few companies are capable of this. Even fewer educational institutions are desperate enough to engage in it. Legislative or accreditation restrictions may also constrain their ability to pivot.

Teachers are at the thin edge of the wedge here. They are being asked to defend practices that are no longer viable. It is also profoundly human of them to resist change. It is easier to retreat to the methods used to teach you than it is to strike out onto unfamiliar ground.

It’s scary to reinvent yourself under the best of circumstances. That reinvention becomes almost impossible in the face of institutional and structural resistance. Couple that with a systemic crisis and it’s no wonder so many institutions are diving for their bunkers in the face of AI.

And so, we find ourselves in the third shock. We have institutions that are rigid, working on borrowed time, and not very antifragile. AI presents us with a slow-boiling crisis. Its eventual impact remains difficult to predict.

Educational systems should not look to students to drive change. We have perverted their preferences in deference to the old system so much that most of them have almost no understanding of how the system shapes their preferences. Their only choice is to opt in or opt out of the game. More and more are opting out.

Healthy systems, per Donella Meadows, “aim to enhance total systems properties, such as creativity, stability, diversity, resilience, and sustainability — whether they are easily measured or not.” (Meadows, Dancing with Systems). Does this describe the current state of education?

Based on the education system’s reactions to Web 2.0 and remote teaching, reactions to AI are likely to resemble those taken to counter Web 2.0. We are already seeing “AI detection” software, including a tool from OpenAI itself. Building walls is not a good solution to any challenge, especially one where the residents (students) can simply choose never to enter the walled garden.

ChatGPT Exposes the False Economics of Learning Systems

ChatGPT challenges the systems of industrial education by undermining the accepted economics of learning. I’m not talking about whether college is worth it, but how we reward value for effort at all levels of our educational systems. In Learn at Your Own Risk, I describe this as “transactional teaching,” but its impact goes far beyond any specific interactions between student and teacher.

In brief, transactional teaching is the idea that students exchange work for a grade. Grades lead to degrees and certifications, but none of this shows the true value (or lack thereof) of what the student takes home.

Transactional teaching cheapens education. It exchanges valueless currency for meaningless experiences. ChatGPT exposes this reality because it threatens to provide students with a means of exchange potentially as valueless as the grades they receive in return.

Transactional education is susceptible to the same kinds of theft and fraud that occur in any economic system. Transparency of transactions is the only remedy to illicit activity. Most educational transactions, as well as their underlying logic, are far from transparent.

Defense is no answer here either. Efforts to crack down on and centralize an economic system produce the same kind of outcome: a black market.

ChatGPT is not the first fake ID to emerge in the educational landscape. It is merely the most elaborate of them. As I pointed out in a recent blog post, the Internet threatens the logic of a transactional educational system. It significantly expands the resources available to students as they navigate the game that is set up for them.

Up to now, they were wealthy in information but poor in the application of that information. ChatGPT reduces that poverty of application to the point where, using traditional assessment methods, it is much harder to perceive.

Like healthcare, the economics of education have never made sense because we do such a poor job valuing the intangibles of what it means to get a college education. Completion is an easy metric and grades are the building blocks of completion in the current system.

We have much better technology than these crude metrics to communicate achievement these days. These tools make possible new ways of communicating achievement that are far richer, and harder to falsify, than grades or other unidimensional metrics.

However, we can’t just ignore the extensive systems and cultural practices that we have built around anachronistic assessment methods. Most faculty are not well trained in anything beyond summative assessment based on tests and essays. That alone is a huge barrier to quickly pivoting to richer assessment methods. Add to that a vast credentialing network that depends on grade-based course outcomes.

Academic freedom has turned most classes into what are essentially black boxes. They just spit out a grade at the end of the process. There are many exceptions to this, but most classes work like this, mine included.

I have used this freedom in my class to upend notions of grading. I am not naïve about how well this works. Swimming against the cultural systems of grading and “achievement” makes it hard for students to wrap their heads around different approaches to assessment.

I have considered carefully how ChatGPT might enter the workflow of my class. I am less interested in how well my students write than in how writing disciplines their minds to break down and analyze problems. It’s helpful to have a “student” who is less good at this process than they are. ChatGPT provides an infinite variety of such poor students for my live students.

My approach to teaching is unusual among my colleagues. Those who engage in transactional teaching often build walls around eroding kingdoms of practice. I still see courses in our faculty development portal on Respondus LockDown Browser and other “defensive” tactics designed to preserve meaningless and outdated assessment practices.

However, it’s the institutions themselves that put pressure on already overburdened faculty to stay the course. The ultimate metric for a class is a “grade” and this is true even in my class.

This reality perverts the focus of learning in my class and is something I cannot get around. I have spent countless hours trying to game out how to pull my students’ focus off of these systemic factors, but it’s really tough.

Most faculty have neither the time nor the inclination to engage in similar reflections and, ultimately, my quest may be quixotic. Institutions need to create pathways that lead to non-graded outcomes if we want to get away from transactional teaching. It’s not fair to put this burden on the shoulders of faculty alone.

ChatGPT is the product of a collectivization of learning. It skims vast amounts of data and mashes that all together to create its outputs. That’s essentially what we ask our students to do when we assign them generic research papers. It should come as no surprise that this non-imaginative process is easy to automate.

The solution to this is to value individual learning over conformity. We should encourage students to apply their uniqueness to their learning products and journeys. ChatGPT fails miserably when we ask it to do this, for it is not human. AI can only hoodwink us if we lose sight of the human in the learning process. Grades are a way of automating humans.

There are many ways that institutions could devalue grades in their internal processes, but this involves embracing individualistic learning and the enabling technology that allows us to scale that to a viable level. These systems need to be built and implemented.

Institutions have a responsibility to both the faculty and their students to train faculty to think differently about how they structure their classes. This is not hard from a content perspective. There is a lot of this that is merely common sense. However, common sense is often difficult to implement, especially in the face of cultural and systemic barriers.

The character of this training is just as important as the techniques being taught. We need to get away from increasingly futile defensive tactics and reimagine the kingdom. We need to create a culture of responsive teaching, not one of reactionary teaching. This will involve some tough conversations.

Throughout history, but particularly in the last century, technology has challenged humanity’s capacity for adaptation. For instance, thoughtful predictions of doom accompanied the dropping of the atomic bomb. Humanity seemed to be too immature to wield the Sword of Damocles.

In the end, it was a combination of technology with the careful reconstruction of human systems that gradually built up our ability to turn data into sound decisions and avoid Armageddon. The human-technology systems that emerged made it easier to avoid brinkmanship as a tactic and ultimately made the world a safer place. We slowed time down to a human pace.

AI is going to force a similar reckoning of our human processes and the creation of new human-technology systems. This will take time. Human systems are slow to change.

Compared to the Cold War, the stakes are both lower in the immediate future (AI won’t blow up the planet) but higher in the long term. Humans need to stand on the shoulders of AI. We also need to learn how to do that.

How we respond to ChatGPT will be a good marker, and a lesson, for the next technology that comes down the pike. Education must develop a new flexibility to pivot and grow. Diving into a bunker will not save us.
