TV networks should have an extra channel that shows exclusively portrait mode content.
i.e. BBC Vertical.
Also: Netflix Vertical, National Geographic Vertical, ESPN Vertical, etc.
Why?
- Because there’s demand
- To discover how to make good portrait-mode content
- To promote new consumer electronics
- To counter slop.
Good for phones, and big black bars down the sides on regular TVs, at least until they ship new ones that hang differently.
Strategic play, innit.
Why don’t networks have vertical channels already?
They should.
It’s how most people consume most video, judging by how I see people watching on the train.
Sometimes I do see people watching in landscape on the train, it’s true. But it’s always something massively popular like live football or Love Island. Anything less, people avoid regular horizontal TV. (I’ll come back to why that is.)
I think there’s an assumption from the commissioners at traditional TV that portrait mode = short form = story mode. So they look down on it and that’s why they don’t produce any long-form vertical content.
That was the mistaken assumption also made by Quibi (Wikipedia) which launched in April 2020 with a ton of traditional content partners and investor backing of $1.75bn. It flamed out by the end of the year.
The premise was portrait mode, mobile-first TV and ALSO in 10 minute segments.
Too much. To me their approach over-determined the format. (Also I feel like it had misaligned incentives: the only way Quibi would have worked long-term is if viewers habitually opened the Quibi app first and what, content partners are going to cannibalise their existing audiences by pointing them at this upstart rival brand?)
The core premise - that portrait mode TV should be a thing - is good.
But forget the “10 minute segments” constraint. Forget story mode.
So why do people avoid watching horizontally?
Well, it’s so distancing to watch footie or regular telly on a horizontal phone, you feel so far away. It’s un-engaging.
But more than that, it’s uncomfortable…
My dumb explanation of portrait mode dominance is that phones are easier to hold that way, and physically uncomfortable to hold in landscape – increasingly so as they get bigger (compensating for that feeling of being far away). It’s just the shape of our hands and our wrists and the way our necks attach to our heads.
And the only screen-filling content available when you have your phone vertical is story mode content.
But that doesn’t intrinsically mean that it’s the best we can do.
It wouldn’t be enough to take regular TV and cut the sides. Vertical formats will be different.
This is the kind of content that you’d want on BBC Vertical:
- long-form drama
- long-form video podcasts
- news
- live sport
Live sport in particular. The age of blockbuster MCU-style movies is over. Spoilers are on the socials instantly, there are no surprises in cinematic universes anymore; the real time spent is in the fandoms. You can get an AI to churn out wilder special effects than anything Hollywood can do. Or see it on the news. Who cares about a superhero thumping another superhero.
But live sport? Genuine surprise, genuine drama, genuine “appointments to view” and social shared experiences.
But vertical live sport will need to be shot differently. Different zoom, different angles.
Ditto drama. While vertical drama is not shorter necessarily, it is more first person.
e.g. what the kids are watching:
I walked into a room of tweens watching YouTube on the big screen in the dark at a party yesterday and that’s how I learnt about NEN FAM.
The NEN FAM YouTube channel has over 3m subscribers and a ton of vids with over 7m views, which is more than a lot of popular TV shows.
(Sidenote: But no Wikipedia page. Wikipedia is stuck with topics which are notable for millennials and older, right? This is a sign that it has ossified?)
afaict NEN FAM is a group of 16 kids in Utah who live in the same house, and they get up to hi-jinks like sneaking out at midnight but telling their parents first (YouTube, 1 hour) and hanging out at a playground, all filmed like a super vapid Blair Witch Project. It’s v wholesome.
Join me next week for another instalment of Matt Encounters Modern Culture
Anyway formats will be different is what I’m saying.
So NEN FAM and its like is the future of entertainment, given what kids do today is what mainstream culture is tomorrow.
Honestly I think the only reasons that NEN FAM is not vertical-first are
- The YouTube algo pushes against that
- Big screens remain obstinately horizontal.
But how would you do vertical Pride & Prejudice?
omg can you even imagine?
House of Cards with its breaking of the fourth wall, and mix in characters carrying the phone POV-style. It would be amazing.
All of this needs to be figured out. Producers, directors, writers, actors, camera crew, lighting – this won’t happen automatically. There needs to be distribution (i.e. the new channels I propose) and the pump needs to be primed so everyone involved has room to learn.
None of this will happen without vertical big screens in the front room.
And this is where there’s opportunity.
Samsung, Sony, LG, all the new Chinese consumer electronics manufacturers: every single one of them is looking for the trigger for a refresh super cycle.
So here’s my proposal:
The consumer electronics companies should get together and start an industry consortium, say the Campaign for Vertical Television or something.
C4VTV would include a fund that they all chip in to.
Then any broadcast or streaming TV network can apply for grants to launch and commission content for their new channels.
It’s a long-term play, sure.
But it has a way better chance of shifting consumer behaviour (and the products we buy) than - to take an example - that push for home 3D a decade ago. At least this pump-priming push is in line with a consumer behaviour shift which is already underway.
The imperative here is that vertical video is getting locked in today, and a lot of it is super poisonous.
Story mode is so often engagement farming, attention mining capitalist slop.
And we’ve seen this week two new apps which are AI-video-only TikTok competitors: Sora from OpenAI and Vibes by Meta AI.
Look I love AI slop as previously discussed.
But what is being trained with these apps is not video quality or internal physics model consistency. They’re already good at that. What they’re pointing the AI machine at is learning how to get a stranglehold on your attention.
The antidote? Good old-fashioned content from good old-fashioned programme makers.
But: vertical.
Not just television btw.
Samsung – when you establish the Campaign for Vertical Television, or whatever we call it, can you also throw some money at Zoom and Google Meet to get them to go portrait-mode too?
Honestly on those calls we’re talking serious business things and building relationships. I want big faces. Nobody needs to devote two thirds of screen real estate to shoulders.
Auto-detected kinda similar posts:
- Drones and renders (30 Jan 2015)
- Now people are comfortable with video (31 Mar 2020)
I looked in my home directory on my desktop Mac, which I don’t do very often (I run a tidy operation here), and I found a file I didn’t recognise called out.html.
For the benefit of the tape: it is a half-baked GeoCities-style homepage complete with favourite poems, broken characters, and a "This page is best viewed with Netscape Navigator 4.0 or higher!" message in the footer.
The creation date of the file is March of this year.
I don’t know how it got there.
Maybe my computer is haunted?
I have a vague memory of trying out local large language models for HTML generation, probably using the llm command-line tool.
out.html is pretty clearly made with AI (the HTML comments, if you View Source, are all very LLM-voice).
But it’s… bad. ChatGPT or Claude in 2025 would never make a fake GeoCities page this bad.
So what I suspect has happened is that I downloaded a model to run on my desktop Mac, prompted it to save its output into my home directory (lazily), then because the model was local it was really slow… then got distracted and forgot about it while it whirred away in a window in the background, only finding the output 6 months down the line.
UPDATE. This is exactly what happened! I just realised I can search my command history and here is what I typed:
llm -m gemma3:27b 'Build a single page HTML+CSS+JavaScript UI which looks like an old school GeoCities page with poetry and fave books/celebs, and tons and tons of content. Use HTML+CSS really imaginatively because we do not have images. Respond with only the HTML so it can be run immediately' > out.html
And that will have taken a whole bunch of time so I must have tabbed elsewhere and not even looked at the result.
Because I had forgotten all about it, it was as if I had discovered a file made by someone else. Other footprints on the deserted beach.
I love it.
I try to remain sensitive to New Feelings.
e.g…
The sense of height and scale in VR is a New Feeling: "What do we do now the gamut of interaction can include vertigo and awe? It’s like suddenly being given an extra colour."
And voice: way back I was asked to nominate for Designs of the Year 2016 and one of my nominations was Amazon Echo – it was new! Here’s part of my nomination statement:
we’re now moving into a Post PC world: Our photos, social networks, and taxi services live not in physical devices but in the cloud. Computing surrounds us. But how will we interact with it?
So the New Feeling wasn’t voice per se, but that the location of computing/the internet had transitioned from being contained to containing us, and that felt new.
(That year I also nominated Moth Generator and Unmade, both detailed in dezeen.)
I got a New Feeling when I found out.html just now.
Stumbling across littered AI slop, randomly in my workspace!
I love it, I love it.
It’s like having a cat that leaves dead birds in the hall.
Going from living in a house in which nothing changes when nobody is in the house to a house which has a cat and you might walk back into… anything… is going from 0 to 1 with “aliveness.” It’s not much but it’s different.
Suddenly my computer feels more… inhabited??… haunted maybe, but in a good way.
Three references about computers being inhabited:
- Every page on my blog has multiplayer cursors and cursor chat because every webpage deserves to be a place (2024) – and once you realise that a webpage can show passers-by then all other webpages feel obstinately lonely.
- Little Computer People (1985), the Commodore 64 game that revealed that your computer was really a tiny inhabited house, and I was obsessed at the time. LCP has been excellently written up by Jay Springett (2024).
- I wrote about Gordon Brander’s concept for Geists (2022). Geists are/were little bots that meander over your notes directory, "finding connections between notes, remixing notes, issuing oracular provocations and gnomic utterances."
And let’s not forget Steve Jobs’ unrealised vision for Mister Macintosh: "a mysterious little man who lives inside each Macintosh. He pops up every once in a while, when you least expect it, and then winks at you and disappears again."
After encountering out.html I realise that I have an Old Feeling which is totally unrecognised, and the old feeling, which has always been there it turns out, is that being in my personal computer is lonely.
I would love a little geist that runs a local LLM and wanders around my filesystem at night, perpetually out of sight.
I would know its presence only by the slop it left behind, slop as ectoplasm from where the ghost has been,
a collage of smiles cut out of photos from 2013 and dropped in a mysterious jpg,
some doggerel inspired by a note left in a text file in a rarely-visited dusty folder,
if I hit Back one too many times in my web browser it should start hallucinating whole new internets that have never been.
More posts tagged: ghosts (7).
Auto-detected kinda similar posts:
- Three feelings that I don’t have words for (29 Sep 2020)
- Filtered for stream of machine consciousness (12 Oct 2017)
- I wish my web server were in the corner of my room (10 Oct 2022)
- Music for microwaves (6 Apr 2023)
- Filtered for space (5 Oct 2015)
1.
Researchers studying birdsong in the San Francisco Bay found the sparrows’ mating calls became quieter, more complex, and just generally “sexier” now that they don’t have to compete with the sounds of cars and cellphones, says study co-author Elizabeth Derryberry.
A side-effect of the pandemic:
Without cars, mating calls travel twice the distance, and also more information can be transmitted.
Does this analogy work? It’s the one they give: "As the party winds down and people go home, you get quieter again, right? You don’t keep yelling, and you maybe have your sort of deeper conversations at that point."
I would love to see a follow-up study? For a brief period, long-form discursive song was favoured. So was there a generation of famous sparrow rhetoricians, like the orators of Ancient Greece? Do they look back on the early 2020s as the golden age of sparrow Homer?
PREVIOUSLY:
Just pandemic things (2023).
2.
Parrots taught to video call each other become less lonely, finds research (The Guardian, 2023): "In total the birds made 147 deliberate calls to each other during the study."
Some would sing, some would play around and go upside down, others would want to show another bird their toys.
More at the FT: Scientists pioneer ‘animal internet’ with dog phones and touchscreens for parrots (paywall-busting link).
26 birds involved … would use the system up to three hours a day, with each call lasting up to five minutes.
Why? Because pet parrots:
typically live alone in their owners’ homes though their counterparts in the wild typically socialise within large flocks.
Flocks.
You know, although the 1:1 parrot-phone is interesting, I wonder whether a Zoom conference call would be more appropriate? Or, better, an always-on smart speaker that is a window to a virtual forest, collapsing geography.
Another project, mentioned in that same article:
Ilyena Hirskyj-Douglas, who heads the university’s Animal-Computer Interaction Group, started by developing a DogPhone that enables animals to contact their owners when they are left alone.
Ref.
Birds of a Feather Video-Flock Together: Design and Evaluation of an Agency-Based Parrot-to-Parrot Video-Calling System for Interspecies Ethical Enrichment, CHI ‘23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
3.
Empty cities, virtual forests… this bird sings across time.
Lubman first became intrigued by reports of a curious echo from the Mayan pyramid of Kukulkan at Chichen Itza, in Mexico’s Yucatan region. The odd “chirped” echo resounds from the pyramid’s staircases in response to hand claps of people standing near its base. To hear for himself, Lubman packed up his recording gear and traveled to Chichen Itza last January.
After studying the staircases and analyzing his recordings and sonograms of the echoes, Lubman came back convinced that this was no architectural freak. In his paper, Lubman argued that the design of the staircases was deliberate and that the echo is an ancient recording, coded in stone, of the call of the Maya’s sacred bird, the quetzal.
omg too sublime, I am a sucker for avian archaeoacoustics
PREVIOUSLY:
Microdosing cathedrals and the synthetic acoustic environment of the ancient world (2022).
4.
You will NEVER GUESS what I found out about CATNIP this week.
You know when cats go crazy about catnip and start rubbing their faces all over it?
Yeah well catnip is a mosquito repellent. So it’s evolutionary behaviour to avoid mosquito-borne parasites.
Catnip, Nepeta cataria, contains nepetalactol:
Rubbing behavior transfers nepetalactol onto the faces and heads of respondents where it repels the mosquito, Aedes albopictus. Thus, self-anointing behavior helps to protect cats against mosquito bites.
Not quite the same but:
Tonic water contains quinine which is used to treat malaria and gin & tonic was invented by the East India Company to keep its colonising army safe in India.
Bet the covid vaccine would have been more popular if it also got you drunk. A lesson for next time around.
Ref.
Uenoyama, R., Miyazaki, T., Hurst, J. L., Beynon, R. J., Adachi, M., Murooka, T., Onoda, I., Miyazawa, Y., Katayama, R., Yamashita, T., Kaneko, S., Nishikawa, T., & Miyazaki, M. (2021). The characteristic response of domestic cats to plant iridoids allows them to gain chemical defense against mosquitoes. Science advances, 7(4), eabd9135.
More posts tagged: filtered-for (118).
I suggested last week that Claude Code needs elevator music…
Like, in the 3 minutes while you’re waiting for your coding agent to write your code, and meanwhile you’re gazing out the window and contemplating the mysteries of life, or your gas bill, maybe some gentle muzak could pass the time?
WELL.
Mike Davidson a.k.a. Mike Industries only went and built it.
Get Claude Muzak here (GitHub):
Elevator music for your Claude Code tasks! Automatically plays background music while Claude is working, because coding is better with a soundtrack.
Easy to install. Includes three pitch-perfect elevator music MP3s to get you started.
Wild. Wonderful.
Although:
Boris Smus adds a wrinkle: "I do my Claude codes at least two at a time. Is there polyphonic elevator music?"
YES PLEASE.
btw I’ve heard this a few times, Claude Code experts do indeed task up multiple claudes in parallel. Either working on different parts of the same app, or entirely different projects.
I would love to see what a custom UI to manage multiple claudes even looks like. Just, zero downtime, giving go-ahead permissions to one instance, pulling the throttle on another, instructing yet another…
I have a buddy who builds UI for traders at a global bank. The way he describes it, the traders are no longer doing individual stock trades. Instead they have a whole bunch of algorithms written by the quants running high-frequency trading on the market.
And the traders’ job is to sit there taking in the various signals and numbers, I imagine like Ozymandias bathing in the screens, "just me and the world," and instinctively steer the ship by ramping different HFT algorithms up and down.
The UI is to facilitate this fugue state crossed with an airplane pilot console situation.
One day driving our claude swarms will work like this.
So polyphonic Claude Codes?
Imagine that each claude instance plays just one track each and they come together…
I’m imagining Steve Reich’s Music for 18 Claudes.
Or to put it another way, something like video game music which is layered and looped and adaptive and I have discussed before (2020). In Red Dead Redemption,
the music reacts in real-time to the player’s actions … a more foreboding sound is required during moments of suspense. All of these loops have to segue into each other as events evolve on screen.
So the experience of playing RDR is that you’re galloping along at sunset in this beautiful American south-west landscape, and you notice the music quietly weave in an ominous new refrain… so you look out for what’s about to happen before it happens.
I want my polyphonic Claude Codes to intensify the minor key rhythms as the code tests fail and fail and fail again, before it is even escalated to me, drumbeats over the horizon, the distant rumble of thunder, approaching…
Also mentioned in that post:
Matthew Irvine Brown’s project Music for Shuffle (2011) which he composed for the wonderful belt-clip-mounted iPod Shuffle:
music specifically for shuffle mode – making use of randomness to make something more than the sum of its parts.
18 tracks sub 10 seconds each that can be played continuously in any order.
Like… Anthropic and OpenAI and Google, they must have arts programs, right? Are they commissioning for projects like this? Because they should.
Writing code with Claude Code doesn’t feel the same as any other way of writing code.
Quick background: it’s an AI tool that you install and run in your terminal in a code repository. It’s a little like a chat interface on the command line, but mainly you type what you want, and then Claude churns away for a few minutes, looking up docs on the web, checking through your files, making a to-do list, writing code, running tests and so on, until it’s done.
This is not the same as other AI code-writing tools such as GitHub Copilot (as previously discussed (2023)), which is a joy – you stride along 20 auto-generated lines at a go – but is ultimately just very, very good auto-complete.
No, spending a morning coding with Claude Code is different.
You just loop: one minute composing a thoughtful paragraph for the agent, then three minutes waiting, gazing out the window contemplating the gentle breeze on the leaves, the distant hum of traffic, the slow steady unrelenting approach of that which comes for us all.
Yes yes other terminal-based coding agents are available. Claude Code made it work first and it’s the one I’ve used most.
Writing that “thoughtful paragraph”…
The trick with Claude Code is to give it large, but not too large, extremely well defined problems.
(If the problems are too large then you are now vibe coding… which (a) frequently goes wrong, and (b) is a one-way street: once vibes enter your app, you end up with tangled, write-only code which functions perfectly but can no longer be edited by humans. Great for prototyping, bad for foundations.)
So the experience is that, before you write, you gaze into space, building castles in the imagination, visualising in great detail the exact contours of the palisades and battlements, the colours of the fluttering flags, turning it round in your head, exploring how it all fits together. Then you have to narrate it all in clear paragraphs, anticipating Claude’s potential misunderstandings, stating precisely the future that you want.
You can and should think hard about your exact intent: here’s a wonderful (and long) case study (taylor.town) and you can see there are pages and pages and pages of careful design and specification documents, before Claude is even allowed to touch code.
Claude Code didn’t work well for me the first few times I used it. I asked for too much or too little. It takes a while to calibrate your seven-league boots.
So the rhythm is slower versus the regular way.
I’m interested in the subjective feeling of coding (2023) because (to me) firmware feels like precision needlework, nested parentheses feel like being high up, etc.
I think a lot of this is about breath?
Conventionally: I’m sure I hold my breath when I’m midway through typing a conditional, just a little. The rhythm of my breath takes on the rhythm of writing code.
Many years ago, Linda Stone observed email apnea (2014):
I noticed, almost immediately, that once I started to work on email, I was either shallow breathing or holding my breath.
She studied it. 80% of people experienced compromised breathing working on email (the 20% who didn’t had, in their regular lives, been taught breathing techniques, and were unconsciously managing it).
BUT, "cumulative breath holding contributes to stress-related diseases. The body becomes acidic" – there’s feedback; when you shorten your breath, even if the cause was not initially stress, you become stressed.
WHEREAS:
With Claude Code, I don’t have that metronome shortening my breath. I do not subject myself to “code apnea.”
So it becomes calm, contemplative.
New job concept: a hold music composer for the 3 minute waits while Claude Code is Thinking…
Analogy: elevator music.
I’ve been reading about the company Muzak, the subscription music company founded by George Owen Squier in 1934. The History of Muzak:
In the early 1920s, Squier discovered a method of transmitting information via electrical wires and realized that this new method could be used to distribute music.
But:
Even in the 1930s, music licensing was a difficult beast to tame. At the time, music played on the radio was broadcast live, while recorded music was only licensed for personal use at home on gramophones.
And so Muzak boiled the ocean and simply recorded their own music, hundreds of musicians over the 1930s, "sometimes capturing as many as twelve tracks in a day."
And then piped music into:
- factories
- restaurants
- hotels
- elevators: "It was a fairly common practice to play music in elevators to both soothe passengers and pass the time since elevators were not as smooth or as fast as they are today."
Music has a psychological effect, promoted by Muzak in the 1950s:
The basic concept of Stimulus Progression is the idea of changing the styles and tempos of music to counterbalance and influence the natural rhythms of the human body. Studies showed that employee production would dip during certain times of the day; before and after lunch, for example. Muzak playlists were then programmed against those patterns by playing more upbeat music during the slower times of the day, and vice versa.
Anyway.
Muzak, elevator music, has a reputation for being bland and beige.
But it is functional: Stimulus Progression, see. (Calm shoppers buy more.)
And it conceives of the elevator as a space to be filled with music; for all its liminality it is a space which we inhabit and do not simply pass across.
And so: when Claude Code is elevating my code, we should not be waiting… we should fill the space!
ChatGPT now has the ability to change the accent colour of the chat UI. Same same. Give me light! Give me sound!
A Social History of Background Music (2017):
In the 70s, Brian Eno sat in an airport for a few hours waiting for a flight and was annoyed by the canned background music. In 1978 he produced Ambient 1: Music For Airports, a mellow, experimental soundscape intended to relax listeners.
Who will be the Brian Eno of coding agent hold music?
Music for Claude Coding.
I also use Claude Code in the process of writing normal words.
Code is text, words are text. So they built it for code but it can work just the same.
As you can see in my colophon I keep a lot of notes going back a couple decades, and these notes are a big folder of Markdown text documents. (I use iA Writer these days.)
So I pop open the root directory in the terminal and init Claude Code.
Then I say: "please look over the 30-40 most recent files in the blog posts folder and - concentrating on the ones that aren’t like finished posts (because I will have published those) - give me half a dozen ideas of what to write a blog post about today"
I don’t use it to do any actual writing. I prefer my words to be my own. But it’s neat to riff over my own notes like this.
So you don’t actually sit and do nothing for 3-4 minutes.
While it works, Claude runs commands on your computer which do anything from editing code and searching the web to, uh, deleting all files in your home directory (it can make mistakes). Fortunately it asks each time for permission. And you respond each time from a menu:
- Yes
- Yes and don’t even ask me next time
- No but here’s what to try instead
So your inner loop interaction with Claude Code is approval, course correction, and Claude accelerating in autonomy and power as your approvals accrete.
It’s a loop built around positive reinforcement and forward motion. And, because of this, you personally end up building a lot of trust in Claude and its ability to plan and execute.
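That inner loop can be sketched as a toy model – my own sketch with invented names, not Claude Code’s actual implementation – where "yes and don’t ask me next time" accretes into an allowlist, so the agent prompts less and less as your approvals accrue:

```python
# Toy model of the permission loop (illustrative only, not the real thing).
# decide(call) stands in for the human, answering "yes", "always" or "no".
def permission_loop(tool_calls, decide):
    allowlist = set()   # calls approved with "don't ask me next time"
    executed = []
    for call in tool_calls:
        if call in allowlist:
            executed.append(call)      # runs without prompting at all
            continue
        answer = decide(call)          # "yes" | "always" | "no"
        if answer == "always":
            allowlist.add(call)        # trust accretes here
        if answer in ("yes", "always"):
            executed.append(call)
        # "no": skipped; in real use you'd offer a course correction instead
    return executed, allowlist
```

The point of the sketch is the one-way ratchet: every "always" shrinks the set of things you’re ever asked about again, which is exactly why the trust builds so fast.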
What you want to do but absolutely MUST NOT do is start Claude Code with the flag --dangerously-skip-permissions
which slams it into yolo mode.
Don’t do it! But you know you want to.
Then of course you want to put Claude Code in control of everything else.
e.g. Claude on the web can now deal with spreadsheets.
So could we give it a Hugging Face robot arm and stick the arm on Roomba and let it loose in my front room?
claude "tidy my house" --dangerously-skip-permissions
Claude Code when pls
I’m on the tube a bunch right now (cooking something new and borrowing desks, thanks!) and one of the frustrating bits of the commute is getting from street level to underground. Escalators are slooooow.
(Whereas being static on trains is fine as I can tap blog posts with my thumbs while standing/sitting. Evidence A: you’re reading it.)
So I wonder if there’s a radically quicker way to descend.
Falling would be quickest (gravity does all the work), but then there’s stopping.
A net would be difficult because of standing up after. You’d get hit by the next person while you were untangling. So individual transit would be quicker but overall throughput lower because you need to add buffer time.
But maybe jets of compressed air could help?
So what I’m imagining is a pit that you step into and simply drop, with AI-controlled jets of compressed air all the way down that
- control your attitude (no tumbling pls)
- rapidly decelerate you at the end
- and direct you off to the side (rotating around the base clockwise, person by person, to avoid collisions) to step away and walk to the platform.
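As a sanity check, here’s the back-of-envelope arithmetic – with made-up numbers, since I’m assuming a 25 m deep station and a 5 m braking zone at the bottom:

```python
import math

# Drop-pit physics, illustrative numbers only (not any real station):
# free fall from street level, then air jets brake you over the last stretch.
g = 9.81            # m/s^2
depth = 25.0        # assumed street-to-platform depth, metres
brake_dist = 5.0    # assumed deceleration zone at the bottom, metres

fall_dist = depth - brake_dist
v = math.sqrt(2 * g * fall_dist)    # speed entering the brake zone
t_fall = v / g                      # seconds of free fall
decel = v**2 / (2 * brake_dist)     # constant deceleration required to stop

print(f"{t_fall:.1f} s of free fall, arriving at {v:.0f} m/s, braking at {decel/g:.1f} g")
```

That comes out at about two seconds down and roughly 4 g on the brakes (the g-ratio is just fall distance over braking distance), which is peak-roller-coaster territory. Firm, but not obviously absurd.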
An alternative to air jets:
Sufficiently powerful magnets can also spontaneously create magnetism in flesh (or other nonferrous material) because electron orbitals are current loops. This is diamagnetism.
e.g. in 1997 Nobel laureate Andre Geim put a live frog in a 16 tesla magnetic field and made it float: "all one needs to levitate a frog is a magnetic field 1,000 to 10,000 times stronger than a refrigerator magnet, or 10 times stronger than an MRI machine."
So superconductors and 16T magnets could be used for horizontal underground tunnels too.
Although anything metallic like smartphones and earrings, well, I don’t know what would happen. Like bullets in a rifle, probably.
Let’s stick with air jets.
We should be using AI for weird new physics (2022). Why not free-fall pits with too-hard-to-model-by-humans AI-controlled air jets? WHY NOT? Cowards if we don’t, that’s what I think.
So Simon Willison always asks AI models to draw a pelican riding a bicycle. It gives him a way to track performance. Here are pelicans for the first 6 months of 2025.
My personal benchmark is to ask deep research AI agents to give me an R&D plan and investment deck for a space elevator.
(A space elevator is an interesting task because it requires breakthroughs in materials science but is not fundamentally impossible, and the investment viability requires a reach into the future, to show when it becomes profitable. So the challenge is to break it up into steps where each is viable in its own right and de-risks the next. It’s multidisciplinary and complicated, but this kind of breadth-first, highly parallel search through the idea maze is precisely what AI should be good at.)
I feel like the threshold for AGI is not whether the AI can do the task, but whether it can show it to be so economically inevitable that it happens immediately via capitalism.
If a space elevator to the Kármán line is too much of a stretch then I would settle for a pneumatic elevator to the Northern line.
Auto-detected kinda similar posts:
- A series of tubes (16 Jun 2022)