Tech Book of the Month

September 2023 - The Mountain in the Sea by Ray Nayler

This month we dive into a book that can only be described as philosophical, futuristic AI fiction.

Tech Themes

  1. Drones. Altantsetseg controls an autonomous drone fleet to attack, defend, and protect Evrim and Ha. Nayler describes her controlling the swarm "as though it were a symphony orchestra," moving the drones with a fluidity that makes them seem less like tools and more like extensions of her own nervous system. She directs them through a haptic interface, waving her arms to coordinate multiple attacking drones working in unison (a toy sketch of this one-gesture-to-many-drones idea follows after this list). Drone warfare has increased significantly in recent years. While the first unmanned aerial vehicles (UAVs) were used as early as the mid-1800s, the first big modern drone launch was the DJI Phantom, which included GPS and a mount for a GoPro action camera. DJI is a Chinese company founded in 2006 by Frank Wang, who began tinkering with aerospace components in college. It was initially an enormous runaway success, but became more controversial after its drones were banned by the US Army in 2017 over cybersecurity concerns, and the Commerce Department added DJI to its economic blacklist in 2020, though the company was eventually allowed to operate in the US again. Anduril, the defense company founded by Palmer Luckey, now offers drones and its Lattice OS to the US and UK militaries. Drone warfare is set to increase because drones are dispensable and discreet.

  2. Artificially intelligent beings. Evrim, the genderless, conscious android that Ha befriends on the island, begins our philosophical journey into a future where Artificial General Intelligence has been reached, and the world is not too happy about it. Evrim is relegated to the island after Dr. Arnkatla Mínervudóttir-Chan is unsure what to do with the pinnacle of her scientific achievement. Evrim is all-knowing, and Ha constantly wonders about Evrim's memory and what it means to be human if all is known. Evrim's capabilities call to mind Moravec's Paradox, coined by Carnegie Mellon professor Hans Moravec in 1988: "It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility." Today, robots still struggle with seemingly basic physical tasks. One reason is that robots generally lack any sense of touch or skin, whose powerful receptors let us sense and manipulate our environment (see the grip-control sketch after this list). While Boston Dynamics has been early with animal-like robots (with mixed success as a business), others like Figure AI have gone for fully humanoid robots. I remain skeptical that humanoid robots will be widely deployed any time soon. It feels like we are designing robots in our own image, rather than designing them to be maximally efficient at precisely the things we humans cannot do.

  3. AI Maximizing Robotic Ships. While Evrim's god-like mental and physical capabilities push the reader to consider them "human," Eiko's journey aboard the Sea Wolf, a fishing vessel whose AI drives enslaved workers, pushes us to consider what it means to be human. These ships were built by DIANIMA to be fully intelligent auto-fishers that could scrape the ocean of its lucrative protein. However, the complexity of robotic arms and the high cost of maintenance led the fishing companies to replace the robots with human slaves, watched over by hired thugs to enforce compliance. Here, the humans are the simple input into the AI algorithm that maximizes output at the cheapest cost. We are already seeing a version of this today in the knowledge economy, with Mercor, Scale AI, Surge, and Micro1 all offering human-generated "high-skill" data sets. As we move deeper into the LLM world, will we be driving the inputs to the model or directing the outputs?
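To make the drone-coordination idea in theme #1 concrete, here is a minimal, purely illustrative sketch (mine, not the book's, and not any real drone SDK): a single gesture displacement fans out into coordinated waypoints for a small formation, which is roughly the one-to-many mapping a haptic interface has to perform.

```python
# Toy sketch of one-gesture-to-many-drones control. Everything here is
# hypothetical: no real drone API is being used.
import math

def swarm_waypoints(gesture_dx, gesture_dy, num_drones, spacing=5.0):
    """Translate a single 2-D gesture displacement into per-drone waypoints.

    Each drone holds a fixed offset on a circle around the gesture target,
    so one operator movement shifts the whole formation in unison.
    """
    waypoints = []
    for i in range(num_drones):
        angle = 2 * math.pi * i / num_drones
        waypoints.append((
            gesture_dx + spacing * math.cos(angle),
            gesture_dy + spacing * math.sin(angle),
        ))
    return waypoints

# One sweep of the arm ("move 10m east, 4m south") repositions four drones at once.
print(swarm_waypoints(10.0, -4.0, num_drones=4))
```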
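And for theme #2, a crude, hypothetical grip-control loop (not any real robotics stack) shows why touch sensing matters: without slip and pressure feedback, a robot can only guess at a fixed grip force, which is exactly the "easy for a one-year-old, hard for a robot" territory Moravec's paradox describes.

```python
# Illustrative only: a simple closed-loop grasp controller driven by touch signals.
def adjust_grip(force, slip_detected, contact_pressure, max_force=20.0, step=0.5):
    """Tighten the grip when the object slips; relax when pressure gets high.

    `contact_pressure` is a normalized tactile reading (1.0 = crushing).
    Without these two signals there is nothing to close the loop on.
    """
    if slip_detected:
        return min(force + step, max_force)
    if contact_pressure > 0.8:
        return max(force - step, 0.0)
    return force

force = 2.0
for slip, pressure in [(True, 0.2), (True, 0.4), (False, 0.9), (False, 0.5)]:
    force = adjust_grip(force, slip, pressure)
    print(round(force, 2))  # 2.5, 3.0, 2.5, 2.5
```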

Business Themes

  1. Natural Resource Depletion. Nayler pictures a future where ALL of our natural resources are either used as inputs to an algorithmic machine or protected for their uniquely valuable resources. There is no uncommercialized area of Earth. The ocean has been overfished by AI sea vessels. The archipelago has been cleared by DIANIMA, which paid the locals to vacate the land so it can hunt octopuses to improve its AI models. What a world we live in! This is extractive capitalism. The AI models don't even care about marginal costs: as the fish become more depleted, the AI gets even more brutal in its resource-seeking behavior.

  2. Losing Control. Dr. Mínervudóttir-Chan loses control of her company. Dr. Arnkatla Mínervudóttir-Chan is the archetype of the "Philosopher-CEO." She reminds me of a Steve Jobs-type character, whose primary goal is beauty. However, her control over her creation - and her company - is an illusion. Throughout the book, she operates from a fortress of intellectual superiority, believing she owns the "IP" of the octopuses and Evrim. But when Mínervudóttir-Chan shows up unexpectedly, we find out that DIANIMA is the subject of a hostile takeover. Amazingly, it is a coordinated takeover by several smaller shareholders. "If we knew that, there might be a chance we could stop it. But that isn't the way this kind of thing works. Whoever it is, they not only appear to have enough money to buy us out, one subsidiary company at a time - they also have enough money to hide the ownership of the companies they are using to do it; to play a shell game a thousand shells deep. Every time we try to figure out who is behind it all, we end up with empty shells - holding companies registered to independent nations declared on the rusty platforms of abandoned oil rigs, names of CEOs that trace back to cemeteries, and more names, and more shell companies, apparently rich on nothing at all. We've dug and dug, but they have the money not only to buy us, but to escape our spies' investigations. That kind of money is frightening. By the time we know who they are, it will be too late." Evrim pushes back: "How could you not know what subsidiaries you own?" To which Dr. Mínervudóttir-Chan concedes that she "never cared about any of that" and effectively let it happen under her science-focused, absent-minded watch of the company. Dr. Mínervudóttir-Chan, who thought she was the puppet master, realizes she is just another cog in the machine she built. She cannot protect her creations because the corporate entity she founded has an algorithm of its own: profit protection. I couldn't help but feel that this hostile takeover is somewhat implausible. First of all, in the US, the Williams Act of 1968 established that anyone acquiring more than 5% of a company's stock must disclose their identity (a toy sketch after this list shows how stakes routed through shell companies still aggregate back to one beneficial owner). The law was specifically proposed to combat the so-called "Saturday Night Special," in which a corporate raider would acquire a large chunk of stock on a Saturday and then launch a tender offer for the rest of the company over a set period of time. These tactics would put pressure on a company or its board to look for a White Knight to save the company from the corporate raid. On top of this, there are Know-Your-Customer (KYC) and Anti-Money Laundering (AML) laws specifically aimed at corporate transparency. Furthermore, the sale of subsidiaries would require people on the inside working in concert with these external shareholders. I find it surprising that Dr. Mínervudóttir-Chan would be in a position of power yet helpless against this sort of takeover attempt.

  3. Hacking. Rustem's storyline feels somewhat random throughout the book, until all three threads converge and we learn that Evrim's AI has been breached by an unknown entity. Rustem enters Evrim's mind not through code but by wandering through it like a cathedral. One area of LLM research that has become more interesting is mechanistic interpretability, or "why did the model do what it did?" Anthropic's "Golden Gate Claude" demo identified a feature (a recurring pattern of neuron activations) that encapsulates the concept of the Golden Gate Bridge. They could then turn up the strength of that feature, and the LLM would consistently respond with comments about the Golden Gate Bridge even when not prompted (a toy sketch of this kind of feature steering follows after this list). As we build larger LLMs, understanding why models behave the way they do will become increasingly important.
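To illustrate the Williams Act point in theme #2, here is a hypothetical sketch (the holdings data is invented) of how stakes routed through shell companies still aggregate back to one beneficial owner once you can walk the ownership graph; the hard part in the novel is that the graph itself is hidden.

```python
# Hypothetical example: aggregating a raider's indirect stake in a target
# through chains of shell companies. Assumes the ownership graph is acyclic.
def beneficial_stake(holdings, target, owner):
    """Sum `owner`'s direct and indirect fraction of `target`.

    `holdings` maps (holder, company) -> fraction of `company` owned by `holder`.
    """
    total = 0.0
    for (holder, company), fraction in holdings.items():
        if company != target:
            continue
        if holder == owner:
            total += fraction
        else:
            # stake held via an intermediate shell: scale by the shell's share
            total += fraction * beneficial_stake(holdings, holder, owner)
    return total

holdings = {
    ("ShellA", "DIANIMA"): 0.03,
    ("ShellB", "DIANIMA"): 0.04,
    ("Raider", "ShellA"): 1.00,
    ("Raider", "ShellB"): 1.00,
}
stake = beneficial_stake(holdings, "DIANIMA", "Raider")
print(f"Raider's aggregate stake: {stake:.0%}")  # 7% -- above the 5% disclosure threshold
```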
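And for theme #3, a toy version of feature steering (my own illustration, not Anthropic's actual code or model internals): add a scaled "Golden Gate Bridge" feature direction to a layer's activations, and the concept is expressed more strongly.

```python
import numpy as np

# Toy feature steering. The vectors here are random stand-ins, not real
# model activations or a real learned feature.
rng = np.random.default_rng(0)

hidden_size = 8
activations = rng.normal(size=hidden_size)       # one token's hidden state
bridge_feature = rng.normal(size=hidden_size)    # stand-in "Golden Gate Bridge" direction
bridge_feature /= np.linalg.norm(bridge_feature)

def steer(acts, feature, strength):
    """Push the activations along `feature` so the concept fires more strongly."""
    return acts + strength * feature

for strength in (0.0, 5.0, 10.0):
    steered = steer(activations, bridge_feature, strength)
    # projection onto the feature grows with the steering strength
    print(strength, round(float(steered @ bridge_feature), 2))
```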

    Dig Deeper

  • From Oculus to Anduril: Palmer Luckey on Power, Technology, and the Future

  • Ray Nayler answers your questions about The Mountain in the Sea!

  • The Insane Biology of: The Octopus

  • The Headache of Taking a Company Private - Hostile Takeovers to Poison Pills Explained

  • The AI Arsenal That Could Stop World War III | Palmer Luckey | TED

tags: Ray Nayler, Palmer Luckey, AI, Anthropic, DJI, Boston Dynamics, Figure AI, Mercor, Surge AI, Scale AI
categories: Non-Fiction
Thursday 09.14.23
Posted by Tyler Zon