LLMs are too good to be used merely to automate existing tasks
Ethan Mollick writes:
A lot of the focus on AI use, especially in the corporate world, has been stuck in what I call the “automation mindset” - viewing AI primarily as a tool for speeding up existing workflows like email management and meeting transcription.
This perspective made sense for earlier AI models, but it's like evaluating a smartphone solely on its ability to make phone calls.
The Gen3 generation gives us the opportunity for a fundamental rethinking of what's possible.
As models get larger, and as they apply more tricks like reasoning and internet access, they hallucinate less (though they still make mistakes) and they are capable of higher order “thinking.”
For example, in this case we gave Claude a 24-page academic paper outlining a new way of creating teaching games with AI, along with some unrelated instruction manuals for other games.
We asked the AI to use those examples and write a customer-friendly guide for a game based on our academic paper.
The results were extremely high-quality.
To do this, the AI needed to abstract both the ideas in the paper and the patterns and approaches from the other instruction manuals, and then build something entirely new.
This would have been a week of PhD-level work, done in a few seconds.
The post also shows an excerpt from another PhD-level task: reading a complex academic paper and checking its math and logic, as well as the implications for practice.
This shift has profound implications for how organizations should approach AI integration.
First, the focus needs to move from task automation to capability augmentation.
Instead of asking “what tasks can we automate?” leaders should ask “what new capabilities can we unlock?”
And they will need to build the capacity in their own organizations to help explore and develop these changes.
Here is the full post: