We think we're the first advanced earthlings - but how do we really know?

Imagine if, many millions of years ago, dinosaurs drove cars through cities of mile-high buildings. A preposterous idea, right? Over the course of tens of millions of years, however, all of the direct evidence of a civilization -- its artifacts and remains -- gets ground to dust. How do we really know, then, that there weren't previous industrial civilizations on Earth that rose and fell long before human beings appeared?

Evidence Other than Artifacts

It's a compelling thought experiment, and one that Adam Frank, a professor of physics and astronomy at the University of Rochester, and Gavin Schmidt, the director of the NASA Goddard Institute for Space Studies, take up in a paper published in the International Journal of Astrobiology.

"Gavin and I have not seen any evidence of another industrial civilization," Frank explains. But by looking at the deep past in the right way, a new set of questions about civilizations and the planet appear: What geological footprints do civilizations leave? Is it possible to detect an industrial civilization in the geological record once it disappears from the face of its host planet? "These questions make us think about the future and the past in a much different way, including how any planetary-scale civilization might rise and fall."

The Anthropocene is the period in which fossil fuels will dictate the footprint humans leave on Earth. (Image: CC0)

In what they dub the "Silurian Hypothesis," Frank and Schmidt define a civilization by its energy use. Human beings are just entering a new geological era that many researchers refer to as the Anthropocene, the period in which human activity strongly influences the climate and environment. In the Anthropocene, fossil fuels have become central to the geological footprint humans will leave behind on Earth. By looking at the Anthropocene's imprint, Schmidt and Frank examine what kinds of clues future scientists might detect to determine that human beings existed. In doing so, they also lay out what evidence might be left behind if an industrial civilization like ours had existed millions of years in the past.

Fossil Fuel Imprint

Human beings began burning fossil fuels more than 300 years ago, marking the beginnings of industrialization. The researchers note that emissions from burning fossil fuels have already altered the carbon cycle in a way that is recorded in carbon isotope records (a rough, back-of-the-envelope version of that isotopic shift is sketched after the list below). Other ways human beings might leave behind a geological footprint include:

  • Global warming, from the release of carbon dioxide and perturbations to the nitrogen cycle from fertilizers
  • Agriculture, through greatly increased erosion and sedimentation rates
  • Plastics, synthetic pollutants, and even things such as steroids, which will be geochemically detectable for millions, and perhaps even billions, of years
  • Nuclear war, if it happened, which would leave behind unusual radioactive isotopes
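
As a purely illustrative aside, the sketch below shows why burning fossil carbon leaves an isotopic trace: fossil fuels are depleted in carbon-13, so adding their carbon to the atmosphere drags the atmospheric 13C/12C ratio down. The reservoir sizes and delta-13C values are rough, assumed figures, not numbers from Frank and Schmidt's paper, and the two-reservoir mass balance deliberately ignores ocean and biosphere uptake, so it overstates the real-world shift; only the direction and rough scale of the fingerprint matter here.

```python
# Illustrative two-reservoir mass balance for the fossil-fuel carbon-isotope
# fingerprint (the "Suess effect"). All numbers are rough assumptions, not
# values from the paper, and ocean/biosphere uptake is ignored on purpose.

ATMOSPHERE_GTC = 600.0   # assumed pre-industrial atmospheric carbon, gigatonnes
ATM_D13C = -6.5          # assumed pre-industrial atmospheric delta-13C, per mil
FOSSIL_D13C = -28.0      # fossil-fuel carbon is strongly depleted in carbon-13


def mixed_d13c(fossil_added_gtc: float) -> float:
    """Delta-13C of the atmosphere after mixing in a slug of fossil carbon."""
    total_carbon = ATMOSPHERE_GTC + fossil_added_gtc
    weighted_sum = ATMOSPHERE_GTC * ATM_D13C + fossil_added_gtc * FOSSIL_D13C
    return weighted_sum / total_carbon


if __name__ == "__main__":
    for added in (0, 100, 250, 500):
        print(f"{added:3d} GtC of fossil carbon -> atmospheric d13C ~ {mixed_d13c(added):5.1f} per mil")
```

Even this crude model shows the kind of signature a future geologist could look for: the more isotopically light fossil carbon is added, the lower the delta-13C recorded in sediments of that age.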

Industrialized town in Germany, circa 1870.

"As an industrial civilization, we're driving changes in the isotopic abundances because we're burning carbon," Frank says. "But burning fossil fuels may actually shut us down as a civilization. What imprints would this or other kinds of industrial activity from a long dead civilization leave over tens of millions of years?"

The Astrobiological Perspective

The questions raised by Frank and Schmidt are part of a broader effort to address climate change from an astrobiological perspective, and a new way of thinking about life and civilizations across the universe. Looking at the rise and fall of civilizations in terms of their planetary impacts can also affect how researchers approach future explorations of other planets.

"We know early Mars and, perhaps, early Venus were more habitable than they are now, and conceivably we will one day drill through the geological sediments there, too," Schmidt says. "This helps us think about what we should be looking for."

Schmidt points to an irony, however: if a civilization is able to find a more sustainable way to produce energy without harming its host planet, it will leave behind less evidence that it was there.

"You want to have a nice, large-scale civilization that does wonderful things but that doesn't push the planet into domains that are dangerous for itself, the civilization," Frank says. "We need to figure out a way of producing and using energy that doesn't put us at risk."

That said, the Earth will be just fine, Frank says. It's more a question of whether humans will be.

Pripyat town square. Abandoned ghost town in northern Ukraine. (CC BY-NC-SA 2.0)

Can we create a version of civilization that doesn't push the Earth into a domain that's dangerous for us as a species?

"The point is not to 'save the earth,'" says Frank. "No matter what we do to the planet, we're just creating niches for the next cycle of evolution. But, if we continue on this trajectory of using fossil fuels and ignoring the climate change it drives, we human beings may not be part of Earth's ongoing evolution."


    Forum:Saiyans = Humans

    I know this may sound weird, but I think that Saiyans are Human. This is because Saiyans and Humans can reproduce and make fertile offspring (Gohan, for example), and if two beings are able to do that, then they are of the same species. Therefore I believe that Saiyans are just an isolated race of Humans that have developed unique traits, but they are human nonetheless. They are comparable to pygmies in that they are still human, but because they have been isolated, they developed differently from other humans. I have already changed the pages for Saiyans and Humans to be more accurate according to this. Tell me what you think about this. 71.194.99.209 14:49, 15 July 2009 (UTC)


    Call participants:

    Annie Leschin -- Head of Investor Relations

    Dan Springer -- Chief Executive Officer

    Cynthia Gaylor -- Chief Financial Officer

    Sterling Auty -- JPMorgan Chase & Co. -- Analyst

    Karl Keirstead -- UBS -- Analyst

    Alex Zukin -- Wolfe Research -- Analyst

    Tyler Radke -- Citi -- Analyst

    Stan Zlotsky -- Morgan Stanley -- Analyst

    Scott Berg -- Needham & Company -- Analyst

    Rishi Jaluria -- RBC Capital Markets -- Analyst

    Kirk Materne -- Evercore ISI -- Analyst

    Brad Sills -- Bank of America Merrill Lynch -- Analyst

    Jake Roberge -- William Blair & Company -- Analyst


    End the Alien threat

    There is nothing valuable we can learn from the alien scum coming here, except perhaps effective ways to disincentivize future visitations. Any "thing" that comes here is coming here for profit, power, control, and/or territory, not to teach "primitive earthlings" how to effect a better existence. This is why we should vehemently oppose their presence: it is abusive in nature, intended solely to serve an Alien agenda.

    To understand this, we can look to any colonial or profiteering endeavour in which people have travelled vast distances, and the effect it has had on native peoples. You can also look to our own historical accounts of alien encounters to know that they just want to mate with us, eat us, control us, use us, or all of the above (Sumerian Kings anyone? Sacrifices to the "Gods"?).

    You can't trust anything the alien says or does, because it lies to suit its own agenda of control and abuse. You can't relate to the alien, because it is unrelatable: it lacks compassion, ethics, and dignity, having rejected any notions of compassion, virtuous living, and divinity. They are purely beasts with technology, so do not admire the alien for its technology; it is a distraction from the truly contemptuous, beastly, debased, and materialistic nature of the Alien.

    The alien presence on Earth is analogous to the Belgian colonists who lied, abused, and used the Congolese people to serve their deviant material greed, but it is many orders of magnitude more sophisticated and manipulative. The Congolese were not even considered "human" in the eyes of the abusers, and therefore not worthy of respect or consideration. Look to our own history as an analogue to understand the true nature of the Alien threat, and also look to our own historical accounts of alien encounters with humanity to see how we have been abused, brutalised, and fooled in the past. The example of the Congolese and the Belgians does have its limits, because the potential for abuse between an alien interloper and human beings is far greater, owing to the fundamental differences in nature and capability.

    All Aliens coming here should be rejected outright. Any humans cooperating with a dangerous alien agenda should be treated as a fifth column and dealt with appropriately and swiftly (this goes for all abductees, humans who have participated in hybridisation and the re-introduction of hybrids, hybrids themselves, and governments that have made deals with the treacherous alien scum). Such humans are the worst of our species and the greatest threat to our survival and future autonomy, as they facilitate a controlling and abusive Alien agenda; they are sheep, sycophants, psychopaths, narcissists, and deluded head cases of the worst order.

    We need to take action before it is too late, as we are quickly running out of time.


    How to Think Outside Your Brain

    Ms. Paul is a science writer who has reported extensively on cognition and learning.

    Years ago, when I was in college, I visited the dorm room of a fellow student I was dating. On the wall above his desk he had posted a handwritten sign. “Just do it,” it read, in blocky letters. Nike’s slogan was intended to capture an attitude toward athletic endeavors, but this undergrad was applying it to mental exertions. I pictured him sitting at his desk, working hour after hour on his German verb conjugations or econ problem sets. At some point he would become restless, lose focus — then look up at his sign, set his jaw and turn back to his studies, determined to crush them like a 100-meter dash.

    My classmate back then was doing exactly what our culture commands when we are faced with challenging cognitive tasks: Buckle down, apply more effort, work the brain ever harder. This, we’re told, is how we get good at thinking. The message comes at us from multiple directions. Psychology promotes a tireless kind of grit as the quality essential to optimal performance; the growth mind-set advises us to imagine the brain as a muscle and to believe that exercising it vigorously will make it stronger. Popular science accounts of the brain extol its power and plasticity, calling it astonishing, extraordinary, unfathomably complex. This impressive organ, we’re led to understand, can more than meet any demands we might make of it.

    In the 25 years since I graduated from college, such demands have relentlessly ratcheted up. The quantity and complexity of the mental work expected of successful students and professionals have mounted; we’ve responded by pushing ever harder on that lump of gray matter in our heads. This tendency became more pronounced during the Covid-19 pandemic, when many of us had to take on new duties or adjust to new procedures. Without even a commute or a coffee-station chat to provide a break in our cognitive labors, we’ve been forcing our brains to toil continuously from morning till night.

    The result has not been a gratifying bulking up of our neural “muscle.” On the contrary, all the mental effort we’ve mustered over the past year has left many of us feeling depleted and distracted, unequal to the tasks that never stop arriving in our inboxes. When the work we’re putting in doesn’t produce the advertised rewards, we’re inclined to find fault with ourselves. Maybe we’re insufficiently gritty; maybe, we think, we’re just not smart enough. But this interpretation is incorrect. What we’re coming up against are universal limits, constraints on the biological brain that are shared by every human on the planet. Despite the hype, our mental endowment is not boundlessly powerful or endlessly plastic. The brain has firm limits — on its ability to remember, its capacity to pay attention, its facility with abstract and nonintuitive concepts — and the culture we have created for ourselves now regularly exceeds these limits.

    The escalating mental demands of the past quarter-century represent the latest stage of a trend that has been picking up speed for more than 100 years. Starting in the early decades of the 20th century, school, work and even the routines of daily life became more cognitively complex: less grounded in the concrete and more bound up in the theoretical and abstract. For a time, humanity was able to keep up with this development, resourcefully finding ways to use the brain better. As their everyday environments grew more intellectually demanding, people responded by upping their cognitive game. Continual engagement with the mental rigors of modern life coincided in many parts of the world with improving nutrition, rising living conditions and reduced exposure to pathogens. These factors produced a century-long climb in average I.Q. scores — a phenomenon known as the Flynn effect, after James Flynn, the political philosopher who identified it.

    But this upward trajectory is now leveling off. In recent years, I.Q. scores have stopped rising or have even begun to drop in countries like Finland, Norway, Denmark, Germany, France and Britain. (The reverse Flynn effect has not yet been detected in the United States.) Some researchers suggest that we have pushed our mental equipment as far as it can go. It may be that “our brains are already working at near-optimal capacity,” write the neuroscientist Peter Reiner and his student Nicholas Fitz in the journal Nature. Efforts to wrest more intelligence from this organ, they add, “bump up against the hard limits of neurobiology.” This collision point — where the urgent imperatives of contemporary life confront the stubbornly intractable limits of the brain — is the place where we live at the moment, and rather unhappily. Our determination to drive the brain ever harder is the source of the agitation we feel as we attempt the impossible each day.

    Fortunately, there is an alternative. It entails inducing the brain to play a different role: less workhorse, more orchestra conductor. Instead of doing so much in our heads, we can seek out ways to shift mental work onto the world around us and to supplement our limited neural resources with extraneural ones. These platforms for offloading, these resources for supplementation, are readily available and close at hand.

    They fall into four categories, the first and most obvious being our tools. Technology is designed to fulfill just this function — who remembers telephone numbers anymore, now that our smartphones can supply them? — and we’re accustomed to using our devices to both unburden the mind and augment its capacity.

    But there are other resources, perhaps even more powerful, that we often overlook. For example, our bodies. The burgeoning field of embodied cognition has demonstrated that the body — its sensations, gestures and movements — plays an integral role in the thought processes that we usually locate above the neck. The body is especially adept at alerting us to patterns of events and experience, patterns that are too complex to be held in the conscious mind. When a scenario we encountered before crops up again, the body gives us a nudge: communicating with a shiver or a sigh, a quickening of the breath or a tensing of the muscles. Those who are attuned to such cues can use them to make more-informed decisions. A study led by a team of economists and neuroscientists in Britain, for instance, reported that financial traders who were better at detecting their heartbeats — a standard test of what is known as interoception, or the ability to perceive internal signals — made more profitable investments and lasted longer in that notoriously volatile profession.

    The body is also uniquely capable of grounding abstract concepts in the concrete terms that the brain understands best. Abstract concepts are the order of the day in physics class; conventional modes of instruction, like lectures and textbooks, often fail to convey them effectively. Some studies in the field of physics education have found that students’ understanding of the subject is less accurate after an introductory college physics course. What makes a difference is offering students a bodily experience of the topic they’re learning about. They might encounter torque, for example, by holding an axle on which two bicycle wheels have been mounted. When the wheels are spun and the axle is tilted from horizontal to vertical, the student handling it feels the resistive force that causes objects to rotate. Such exposures produce a deeper level of comprehension, psychological research has found, leading to higher test scores, especially on more challenging theoretical questions.

    Another extraneural resource available for our use is physical space. Moving mental contents out of our heads and onto the space of a sketch pad or whiteboard allows us to inspect them with our senses, a cognitive bonus that the psychologist Daniel Reisberg calls “the detachment gain.” That gain was evident in a study published in 2016, in which experimenters asked seventh- and eighth-grade students to illustrate with drawings the operation of a mechanical system (a bicycle pump) and a chemical system (the bonding of atoms to form molecules). Without any further instruction, these students sketched their way to a more accurate understanding of the systems they drew. Turning a mental representation into shapes and lines on a page helped them to elucidate more fully what they already knew while revealing with ruthless rigor what they did not yet comprehend.

    Three-dimensional space offers additional opportunities for offloading mental work and enhancing the brain’s powers. When we turn a problem to be solved into a physical object that we can interact with, we activate the robust spatial abilities that allow us to navigate through real-world landscapes. This suite of human strengths, honed over eons of evolution, is wasted when we sit still and think. A series of studies conducted by Frédéric Vallée-Tourangeau, a professor of psychology at Kingston University in Britain; Gaëlle Vallée-Tourangeau, a professor of behavioral science at Kingston; and their colleagues has explored the benefits of such interactivity. In these studies, experimenters pose a problem; one group of problem solvers is permitted to interact physically with the properties of the problem, while a second group must only think through the problem. Interactivity “inevitably benefits performance,” they report.

    This holds true for a wide variety of problem types — including basic arithmetic, complex reasoning, planning and challenges that require creative insight. People who are permitted to manipulate concrete tokens representing elements of the problem to be solved bear less of a cognitive load and enjoy increased working memory. They learn more and are better able to transfer their learning to new situations. They are less likely to engage in symbol pushing, or moving numbers and words around in the absence of understanding. They are more motivated and engaged and experience less anxiety. They even arrive at correct answers more quickly. (As the title of a research paper that the Vallée-Tourangeaus wrote with Lisa G. Guthrie puts it, “Moves in the World Are Faster Than Moves in the Head.”)

    One last resource for augmenting our minds can be found in other people’s minds. We are fundamentally social creatures, oriented toward thinking with others. Problems arise when we do our thinking alone — for example, the well-documented phenomenon of confirmation bias, which leads us to preferentially attend to information that supports the beliefs we already hold. According to the argumentative theory of reasoning, advanced by the cognitive scientists Hugo Mercier and Dan Sperber, this bias is accentuated when we reason in solitude. Humans’ evolved faculty for reasoning is not aimed at arriving at objective truth, Mercier and Sperber point out; it is aimed at defending our arguments and scrutinizing others’. It makes sense, they write, “for a cognitive mechanism aimed at justifying oneself and convincing others to be biased and lazy. The failures of the solitary reasoner follow from the use of reason in an ‘abnormal’ context” — that is, a nonsocial one. Vigorous debates, engaged with an open mind, are the solution. “When people who disagree but have a common interest in finding the truth or the solution to a problem exchange arguments with each other, the best idea tends to win,” they write, citing evidence from studies of students, forecasters and jury members.

    The minds of other people can also supplement our limited individual memory. Daniel Wegner, a psychologist at Harvard, named this collective remembering “transactive memory.” As he explained it, “Nobody remembers everything. Instead, each of us in a couple or group remembers some things personally — and then can remember much more by knowing who else might know what we don’t.” A transactive memory system can effectively multiply the amount of information to which an individual has access. Organizational research has found that groups that build a strong transactive memory structure — in which all members of the team have a clear and accurate sense of what their teammates know — perform better than groups for which that structure is less defined. Linda Argote, a professor of organizational behavior and theory at Carnegie Mellon University, reported last year that results from an observational study showed that when a trauma resuscitation team developed a robust shared memory system and used it to direct tasks to the team members most qualified to take them on, their patients had shorter hospital stays.

    All four of these extraneural resources — technology, the body, physical space, social interaction — can be understood as mental extensions that allow the brain to accomplish far more than it could on its own. This is the theory of the extended mind, introduced more than two decades ago by the philosophers Andy Clark and David Chalmers. A 1998 article of theirs published in the journal Analysis began by posing a question that would seem to have an obvious answer: “Where does the mind stop and the rest of the world begin?” They went on to offer an unconventional response. The mind does not stop at the usual “boundaries of skin and skull,” they maintained. Rather, the mind extends into the world and augments the capacities of the biological brain with outside-the-brain resources.

    Much of the initial reaction to their thesis focused on disputes over whether the stuff of the world could really constitute an element of the thought process. For a culture so neurocentric — so brain-bound, as Mr. Clark later called it — this was an insupportable notion, a bridge too far. But their claim acquired more plausibility as daily life in the digital age provided a continuous proof of concept, with people extending their minds with their devices. Initially derided as wacky, the theory of the extended mind eventually came to seem rather prescient. Ned Block, a professor of philosophy at New York University, said that Mr. Clark and Mr. Chalmers’s thesis was false when it was written but subsequently became true.

    Mr. Block’s quip notwithstanding, the fact is that humans have been extending their minds for millenniums. Ancient peoples frequently engaged in offloading their mental contents and augmenting their brainpower with external resources, as evidenced by objects they left behind. Sumerians employed clay tokens to keep track of livestock and other goods when trading; Incas tied knots in long cords, called quipus, to memorialize events; administrators and merchants across a broad swath of the ancient world used abacuses and counting boards. Likewise, the notes and sketches of artists and thinkers over the centuries bear testament to “that wordless conversation between the mind and the hand,” as the psychologist Barbara Tversky puts it in “Mind in Motion: How Action Shapes Thought.” When Leonardo da Vinci sought to understand “the flow of blood in arteries and the flow of water in rivers,” Dr. Tversky observed elsewhere, he leaned on both body and space, using “the actions of his hand as he drew as if they were mirroring the actions of nature.” And of course, history offers a rich record of how groups of people thinking together have managed to do what a single person could not. The unaccommodated brain is a poor, bare thing indeed. Mental extension is involved in most of humanity’s feats, from the transcendent to the mundane.

    We, too, extend our minds, but not as well as we could. We do it haphazardly, without much intention or skill — and it’s no wonder this is the case. Our efforts at education and training, as well as management and leadership, are aimed principally at promoting brain-bound thinking. Beginning in elementary school, we are taught to sit still, work quietly, think hard — a model for mental activity that will dominate during the years that follow, through high school and college and into the workplace. The skills we develop and the techniques we are taught are mostly those that involve using our individual, unaided brains: committing information to memory, engaging in internal reasoning and deliberation, mustering our mental powers from within. Compared to the attention we lavish on the brain, we expend relatively little effort on cultivating our ability to think outside the brain.

    The limits of this approach have become painfully evident. The days when we could do it all in our heads are over. Our knowledge is too abundant, our expertise too specialized, our challenges too enormous. The best chance we have to thrive in the extraordinarily complex world we’ve created is to allow that world to assume some of our mental labor. Our brains can’t do it alone.

    Annie Murphy Paul (@anniemurphypaul) is a Learning Sciences Exchange fellow at New America and the author of “The Extended Mind: The Power of Thinking Outside the Brain,” from which this essay is adapted.


    Why do we want to live forever?

    Although the quest for immortality is as old as humanity itself, it's surprisingly hard to find across the diverse natural world. Truth be told, evolution doesn't care about how long we live, so long as we live long enough to pass on our genes and to make sure our children are vaguely looked after. Anything more than that is redundant, and evolution doesn't have much time for needless longevity.

    The more philosophical question, though, is why do we want to live forever? We're all prone to existential anguish, and we all, at least some of the time, fear death. We don't want to leave our loved ones behind, we want to finish our projects, and we much prefer the known life to an unknown afterlife. Yet, death serves a purpose. As the German philosopher Martin Heidegger argued, death is what gives meaning to life.

    Having the end makes the journey worthwhile. It's fair to say that playing a game is only fun because it doesn't go on forever, a play will always need its curtain call, and a word only makes sense at its last letter. As philosophy and religion have repeated throughout the ages: memento mori, or "remember you'll die."

    Being mortal in this world makes life so much sweeter, which is surely why lobsters and tiny jellyfish have such ennui.

    Jonny Thomson teaches philosophy in Oxford. He runs a popular Instagram account called Mini Philosophy (@philosophyminis). His first book is Mini Philosophy: A Small Book of Big Ideas.


    Tracking transits to find other planets

    Before we talk about how to hide a planet from distant voyeurs, consider the best way we’ve figured out to find one.

    Humanity’s most successful technique for detecting other planets is the transit method. A transit occurs when a planet appears to pass in front of its parent sun, blocking out some of its starlight for a few hours. So if we have our telescopes trained at one part of the universe and a star seems to fade out for part of a day, that tells us that a planet has temporarily come between us as it goes about its orbit.
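
    To make the geometry concrete, here is a minimal, hedged sketch in Python of the dip a transit carves into a star's light curve. The simple box-shaped transit model, the Jupiter-sized planet, the Sun-like star, and the timing values are all assumptions chosen for illustration, not figures from this article. The key relation is that the fractional drop in starlight is roughly the planet's disk area divided by the star's, i.e. (R_planet / R_star)^2.

```python
# Minimal box-model transit light curve (illustrative sketch only).
# Assumes a Jupiter-sized planet crossing a Sun-like star; all values are examples.

R_STAR_KM = 696_000.0     # assumed stellar radius (roughly the Sun)
R_PLANET_KM = 71_500.0    # assumed planetary radius (roughly Jupiter)
TRANSIT_START_H = 2.0     # hours into the observation when the transit begins
TRANSIT_END_H = 5.0       # transits typically last a few hours

# Fraction of starlight blocked while the planet sits in front of the stellar disk.
TRANSIT_DEPTH = (R_PLANET_KM / R_STAR_KM) ** 2   # about 0.01, i.e. a ~1% dip


def relative_flux(t_hours: float) -> float:
    """Brightness relative to the out-of-transit baseline at time t_hours."""
    in_transit = TRANSIT_START_H <= t_hours <= TRANSIT_END_H
    return 1.0 - TRANSIT_DEPTH if in_transit else 1.0


if __name__ == "__main__":
    # A coarse light curve: the star dims for a few hours, then recovers.
    for hour in range(0, 8):
        print(f"t = {hour} h   flux = {relative_flux(hour):.4f}")
```

    Real transit searches amount to hunting for repeated dips of the same depth and period in a star's measured brightness; the depth hints at the planet's size, and the period at its orbit.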

    It seems likely that any advanced civilization would be aware of this simple method. Each time a planet transits its star, its existence is essentially being advertised to all points lying along the same plane as the planet and star.

    An advanced civilization might be okay having its planet’s location, size and even atmospheric chemistry advertised across the cosmos. Or it might wish to conceal its presence. If the latter, it might choose to build a cloak.


    The Claim: Protect Ecosystems, Not Species

    For Peter Kareiva, the president and CEO of the Aquarium of the Pacific, the term biodiversity crisis wrongly inflates the role of individual nonhuman species in human well-being and prosperity, when in reality, holistic ecosystems are more important. A marshland that stifles tidal waves may not need 16 different species of shellfish to hold it together. It’s a “dramatic extrapolation,” he says, to conclude that the extinction of one species would imperil the coastline and thus human well-being.

    Kareiva adheres to a metaphor developed by biologist Paul Ehrlich: Nature is like an airplane, and species are the rivets that hold it together. Without a few rivets, a plane can still fly, but if you take too many out, the plane will fall apart and crash. The trouble is, Kareiva says, we don’t know how many rivets we can take out. Instead of trying to nail down that number, we would be better served focusing on an ecosystem’s functions — that is, keeping the plane in the sky, rather than saving every rivet. “The first question you ask is, if this species goes functionally extinct, what will be different about the world?”

    A scientist’s role, he says, should be to answer that question, providing the evidence society uses to decide whether a species needs to be saved. Some conservation biologists, however, have turned into activists, when instead it is up to society to weigh social and cultural values against what it would take to protect a species. Often that means deciding whether to spend hard-to-find money on conservation or elsewhere.


    What should we believe?

    Both Carroll and Rovelli are master expositors of science to the general public, with Rovelli being the more lyrical of the pair.

    There is no resolution to be expected, of course. I, for one, am more inclined to Bohr's worldview and thus to Rovelli's, although the interpretation I am most sympathetic to, called QBism, is not properly explained in either book. It is much closer in spirit to Rovelli's, in that relations are essential, but it places the observer on center stage, given that information is what matters in the end. (Although, as Rovelli acknowledges, information is a loaded word.)

    We create theories as maps for us human observers to make sense of reality. But in the excitement of research, we tend to forget the simple fact that theories and models are not nature but our representations of nature. Unless we nurture hopes that our theories are really how the world is (the Einstein camp) and not how we humans describe it (the Bohr camp), why should we expect much more than this?