AI technology: what it is and what it’s not, and how it can (potentially) help us solve the climate crisis
Prepared by: Tom Hengl (OpenGeoHub), Davide Consoli (OpenGeoHub), Marina Bagić (FER), Luca Brocca (CNR) and Martin Herold (GFZ)
AI (Artificial Intelligence) technology, with the launch of OpenAI’s ChatGPT (the fastest-growing app ever) and similar tools, is now the buzz: a new technological leap for the human race, but potentially a Pandora’s box for information manipulation and misuse. AI could soon replace thousands of jobs and revolutionize how we travel (self-driving cars), purchase items, do admin / office work, communicate with computers (and people), but also how governments fight wars and control people. AI is making a lot of people enthusiastic, but even more nervous. We review the potentials and perils of AI tech, and how it can also help us with extremely important things such as solving the climate crisis and better monitoring and conserving natural resources. Links and references are extensive and will hopefully motivate you to read more on the topic.
AI vs Machine Learning
“Stop Calling Everything AI.” (Michael I. Jordan)
Intelligence is the capacity of a system to learn, adapt and make decisions that help solve problems and find optimal solutions, especially in “uncharted territories”: the more complex and out-of-box the problems, the higher the intelligence. For biological beings, intelligence is in principle a survival skill and is part of a more complex system that also includes senses, memory, instincts, emotions, consciousness etc. Artificial Intelligence (AI) is synthetic intelligence (“the intelligence of computers”) that has been designed and implemented by humans.
Most AIs in today’s world are in essence software solutions (computer programs) combined with High Performance Computing platforms and large training data / large data pools. AI is, in principle, not possible without Machine Learning (ML) and high-level programming. So, in summary, an AI system consists of: (1) software / functionality / rules, (2) hardware, i.e. computers, sensors and sensor networks, (3) training data, i.e. knowledge, and (4) a team maintaining and moderating the system.
The term “AI” has arguably been somewhat overhyped, as now everyone using some basic ML might believe that they are also developing AI technology. It would be more correct to say that AI is (only) technology that has the capacity of an automated decision system, i.e. that can solve human-level tasks without human intervention. Typical examples of operational AIs are: self-driving cars, operational chat-bots (ChatGPT, Siri, Alexa etc), chess masters, Google’s digital doctor, and automated door-opening based on facial recognition. The typical (complexity) stages of AI include:
- Rule-based AI systems: the lowest stage of AI, where decisions are based on pre-programmed rules; all decisions are basically deterministic and the system is not able to navigate and make decisions outside of those rules (example: a robotic vacuum cleaner);
- Context-awareness and retention AI systems: have the capacity to adapt to specific situations and learn from e.g. user interactions, so they can even deliver personalized decisions (examples: Siri, Google Assistant, Alexa etc);
- Domain-specific mastery AI systems: the current standard state of AI; the system basically uses all available data to train the algorithm, and decisions are sometimes unpredictable, but the system typically performs better than a standard human, although only within a narrow domain of work (examples: AlphaGo, ChatGPT);
- Thinking and reasoning AI systems: these systems will mimic human brain functioning and reasoning, will depend on complex sensor and memory systems, and should possess the ability to adjust and adapt to complex, even out-of-box, problems (examples: Tesla’s self-driving car, ChatGPT >3; in principle still at an experimental stage);
- Artificial General Intelligence: unlike narrow or specialized mastery AI systems, AGI would have the capacity to tackle any problem and adapt to new conditions; most likely it will require self-awareness (no examples yet exist);
- Artificial Super Intelligence: this intelligence would not only surpass human intelligence, but most likely no living human would be able to fully follow its reasoning and decisions, i.e. it would begin to transcend our own capacity of understanding (no examples yet exist).
So in summary: AI builds upon strong ML, but it is usually an order of magnitude more complex and requires much more complex programming than ML. We can try to define AI stage 3 in the year 2023 using the following three key aspects:
- It is operational, i.e. it does not require human intervention and typically outperforms humans: AI is a system that can replace human labor in an operational setting, e.g. a self-driving car that has (on average) a lower chance of failure or accident than an average human. AlphaGo plays against people from start to finish without intervention by the AlphaGo developers.
- It is complete and current, i.e. it builds upon all available data and hence represents the cutting-edge summary of human knowledge on the topic: for example, Deep Blue and AlphaGo used virtually all recorded chess and Go games in human history to train their algorithms.
- There is a development team and a plan behind the AI tool / project, i.e. there is a system design, a development agenda and a plan for the future.
This might appear to be a somewhat strict definition, but we propose it here as a practical solution to avoid overusing the term AI. Being an ML expert, even without building your own AI, is still quite an achievement!
AI vs human intelligence
“For the foreseeable future, computers will not be able to match humans in their ability to reason abstractly about real-world situations… We need to understand that the intelligent behavior of large-scale systems arises as much from the interactions among agents as from the intelligence of individual agents.” (Michael I. Jordan)
“It’s hard not to say “AI” when everybody else does too, but technically calling it AI is buying into the marketing. There is no intelligence there, and it’s not going to become sentient. It’s just statistics, and the danger they pose is primarily through the false sense of skill or fitness for purpose that people ascribe to them” (Eugen Rochko)
The second biggest myth about AI is that it already thinks for itself. As we slowly develop more and more complex AI systems that can replace more and more human processes, can we also expect that we will soon be able to create Synthetic Conscious Intelligent Entities (SCIE)? This is also referred to as “Artificial General Intelligence”, i.e. the capacity of a computer system to try to solve ANY complex problem, including ones requiring “out-of-box” solutions. To Roger Penrose, intelligence without awareness (consciousness) and understanding is superficial and only serves some limited specific purposes. Problem solving and automated decision systems are in principle computational: one can program software to beat all humans in chess or similar games with strict rules by using a lot of training data. But consciousness and/or deeper self-awareness is most likely not computational, and in principle we still know very little about it. It is reasonable to think that current computers / technology will not match the ability of humans to reason, develop a personality and be aware of themselves (not to mention have empathy or pose existential questions) in the near future. Machine Learning, for example, is known to perform poorly outside of the range of the training data used to build the models. In fact, many studies have shown that ML can perform even worse than some simple models based on simple assumptions, indicating that the ML tech of today (e.g. random forest, neural nets, gradient boosting algorithms) might never give us usable solutions for out-of-box problems. Machine Learning is definitely not magic. Albert Einstein doing thought experiments and consequently deriving the theory of relativity in his head out of pieces of information, however, is definitely as close as one gets to magic.
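To make this limitation concrete, here is a minimal sketch (a hypothetical toy example of ours, not from the original article) in which a random forest trained on a simple linear trend fails as soon as it must predict outside the range of its training data, while a simple linear model keeps working because its assumption happens to hold:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Simulate a simple linear process, observed only for x in [0, 10].
rng = np.random.default_rng(42)
x_train = rng.uniform(0, 10, 200).reshape(-1, 1)
y_train = 2.0 * x_train.ravel() + rng.normal(0, 0.5, 200)

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(x_train, y_train)
lm = LinearRegression().fit(x_train, y_train)

# Ask both models to extrapolate far outside the training range.
x_new = np.array([[20.0]])
print("true value:       ", 2.0 * 20.0)            # 40.0
print("random forest:    ", rf.predict(x_new)[0])  # plateaus near ~20, the edge of what it saw
print("linear regression:", lm.predict(x_new)[0])  # close to 40, its simple assumption holds
```

Tree-based learners can only recombine target values they have already seen, so their predictions flatten out at the edge of the training range; this is one concrete sense in which ML “performs poorly outside of the training data”.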
But today there are increasingly realistic chat-bots that emulate human behavior, leading to the Turing test (Wikipedia: “Turing test is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human”) now being routinely passed. The most famous example is probably the Google engineer Blake Lemoine going public with his concerns that the LaMDA language model is sentient. But we know with 100% certainty that NONE of today’s chat-bots / AI systems in the world have consciousness, even though they might appear to be self-aware, i.e. they pass the Turing test. If a machine passes a Turing test today, it is because it is a really excellent simulation, but it is still only a behavioral zombie. Any behavioral zombie will eventually fail the Turing test, as long as you know how to test its empathy, individuality, fears and similar. A zombie is a zombie, and even if it is most impressive, it is OK to turn it off or not take its opinions too seriously: while the power of AI models looks mind-blowing, for computer scientists this technology is currently in its infancy and there is much room for improvement due to numerous limitations; e.g. Large Language Models (LLMs) might generate outputs that are biased or don’t make sense because of the data they learned from or the complex nature of human language. Moreover, these LLMs often face difficulties in grasping subtle context, resulting in errors in their answers and interpretations.
So when can we expect the first SCIEs? Most likely not in the foreseeable future. Many computer scientists probably heavily under-estimate the complexity of biological intelligence. We humans have about 100 billion neurons in our brain, interconnected into a complex network that has been shaped through >2 billion years of biological evolution. These are all very large numbers, and hence we should not underestimate the complexity of consciousness or assume that we can reach it simply with ever more servers. On top of everything, even if we manage to mimic nano-scale neural network systems that can understand and have fantasies, such a SCIE would likely also need to go through a personal evolution, i.e. be mentored, learn from mistakes, even go through a “digital puberty”.
At the heart of present-day AI systems is the matrix multiplication behind the famous transformer architecture (from the paper “Attention is all you need” by Vaswani et al.). For example, a language model can be thought of as a mathematical function that predicts the next word remarkably well given some context or input. Thanks to these mathematical operations, an LLM acts as a reasoning engine, but it is computationally very expensive, as it has been trained on huge amounts of data, resulting in a vast number of parameters. For example, GPT-3 (Generative Pre-trained Transformer 3) has 175 billion parameters and was trained on 570 gigabytes of text; GPT-4 is estimated at more than a trillion parameters. We can loosely imagine these parameters as the neurons in our brain. Notably, OpenAI currently lacks a solution to control a potentially superintelligent AI and prevent it from going rogue; their existing techniques, like human-feedback-based alignment, won’t work reliably with much smarter AI, necessitating new breakthroughs to ensure control. But as we change the way we program software to teach itself, and as we learn more about how the brain works, we might eventually develop a capacity to mimic some of these systems and build SCIEs. Since these would be deeply rooted in our human experience and backgrounds, they could be considered synthetic humans and could potentially become a more advanced version of humans, i.e. the next step in evolution. Hopefully more empathic, more self-sustainable and more energy-efficient than the current biological version.
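To make the “matrix multiplication” point concrete, here is a minimal sketch of the scaled dot-product attention operation at the core of the transformer (Vaswani et al., 2017), written in plain NumPy; the toy dimensions and random inputs are ours, purely for illustration:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every token to every other token
    weights = softmax(scores)        # each row becomes a probability distribution
    return weights @ V               # each output is a weighted mix of value vectors

# Toy self-attention: 4 tokens, each embedded as an 8-dimensional vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = attention(X, X, X)
print(out.shape)  # (4, 8): one context-mixed vector per input token
```

In a real LLM this operation is repeated over dozens of layers and attention heads, with learned weight matrices (the billions of parameters mentioned above) producing Q, K and V from the token embeddings.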
Digital humans
“It could be the evolution of intelligence that becomes much more responsible than humans ever will be (and hopefully guides us along in a positive way)” (Brian Mitchell)
Nature is our best teacher: by building AI we imitate biological systems. In a way, AI is our attempt to build artificial thinking entities that could potentially go beyond us in evolution. But how well do we understand living beings and human intelligence in the first place? We currently understand that Life is primarily associated with nucleic acids, i.e. molecules that have the capacity of self-replication. One could say that Life, in essence, is driven by three basic circular self-perpetuating forces:
- Self-conservation through self-copying of DNA;
- Free movement and exchange with the environment / metabolism, i.e. the supply of energy and matter needed for self-copying;
- Self-conservation through protection mechanisms such as cell membranes, immune systems, sensory warning systems and similar;
These three basic principles are common to all living organisms, from the most simple to the most complex. One highly sophisticated protection and navigation mechanism in living organisms is the system of networked neural cells, eventually leading to what we call a brain. Biological beings whose brains are complex enough to be used beyond purely navigating simple processes, i.e. that appear to be aware of themselves and are eager to learn, we consider to be self-conscious intelligent beings. The topic of consciousness, however, is subject to many controversies. Nevertheless, we do consider our species (plus several other species) to be conscious and capable of thinking, being aware and solving out-of-box problems. We just seem to know that we are conscious, even if we do not yet know how to prove or measure consciousness.
The purpose of self-conscious intelligent entities also builds on the self-conservation and supply of energy and matter (including information) listed above, but it brings a number of additional higher-level objectives (from the Wikipedia meaning-of-life article):
- To find and realize one’s potential and ideals,
- To evolve, or to achieve biological perfection,
- To seek wisdom and knowledge,
- To love, to feel, to enjoy the act of living,
- To be socially accepted and valued / to live with other beings in harmony,
- To have power.
Thus, in the future, only an AGI system that ticks all the boxes above could be considered a SCIE. It’s not just the Turing test! Various groups are now looking at formalizing tests for consciousness, which should of course be exact, reproducible and objective. What is also important to emphasize here is that we humans have only managed to reach some higher-level culture due to our social capacity to share and pass knowledge from generation to generation, and not as individual beings. Had we not discovered ways to store and conserve information and to pass knowledge and culture on as a society, we could already be an extinct species (which is exactly what happened to some hominids, such as the Neanderthals).
It is thus reasonable to expect that SCIEs, too, will depend on the social intelligence of a federation of systems, and it is also likely that there will be some natural selection, with many experiments and technologies failing and being replaced. Not to mention that AI tech might get embedded and mixed in with our biological systems, so that it might even become impossible to draw a line between the two. This idea of technology-biology fusion is one of Ray Kurzweil’s most significant predictions of the most probable future; it is also recognized by James Lovelock in his book “Novacene: The Coming Age of Hyperintelligence”.
The bottom line is: consciousness is not just a matter of building ever bigger supercomputers crunching ever bigger digital data. What makes us humans able to think out-of-box and solve amazing mysteries is not only our IQ / brain capacity, but also our social IQ, cultural heritage, motivation, instincts, passion and character. We are a product of billions of years of biological evolution, with trillions of trial-and-error iterations and possibly many lucky chances. Maybe it is an order of magnitude more difficult to mimic the human brain and our social and emotional intelligence than most computer scientists are currently fantasizing? Maybe we will actually never get there with AI.
AI and supporting technology in 2023
AI technology is, of course, not an independent, isolated technology: it develops in parallel and/or in synergy with other similarly revolutionary technologies. In fact, no one can fully advance AI technology without the co-evolution of all the other information technologies that use and/or extend it. Here are some cutting-edge technological developments that connect to AI technology:
- Robotics: robots are mechanical / synthetic devices that can be programmed to do labor, move and navigate through space, and replace humans as a workforce. Robotics combined with miniaturization and 3D printing is especially exciting, as we could print millions of micro-drones to help us move massive and complex pieces of materials or living beings.
- Nanotechnology and nanobots, i.e. miniaturization: nanotechnology is the final frontier of robotics. We are still probably decades away from having nano-robots that print highly sophisticated instruments, but miniaturization is simply inevitable and would likely lead to higher energy efficiency and higher precision, and would also allow us to “hack” living structures at the level of cells.
- Internet-of-things (IoT): a multitude of devices and sensors connected directly into complex networks and exchanging information seamlessly, even without human intervention. This largely mimics complex biological networks, including ant colonies and similar.
- Super-high-speed internet: especially thanks to fiber-optic cable technology, we are on the brink of exponentially increasing bandwidth and transfer speeds, allowing us to pass ever more data across the globe.
- Blockchain and encryption algorithms: although blockchain technology seems to be somewhat specific, it is also a revolutionary technology because it helps reduce security risks, eliminate fraud, and bring transparency at scale.
- Quantum computing: even though still quite experimental, quantum computers could perform some calculations exponentially faster than any modern “classical” computer. This would revolutionize AI technology, especially the Machine Learning part, which is heavily computational.
- Laser technology: used today in many fields, from measuring distances and scanning, to information transfer, 3D printing and general industry.
- LiDAR technology: “light detection and ranging” (LiDAR) instruments can be used to make digital 3-D representations of objects, areas and similar. LiDAR is an essential sensor component of self-driving cars and robots, but it is also already above our heads, e.g. on NASA’s ICESat.
- Remote sensing / Earth Observation (EO) / hyper-spectral imaging: modern sensors allow us to see beyond visible light and beyond passive sensors. There are already multiple hyper-spectral imaging missions run by ESA (e.g. EnMAP, CHIME) and NASA that produce hundreds of bands with the capacity to detect types of material on the Earth’s surface, biological species and similar, directly from space. Our view of the processes at the Earth’s surface is becoming increasingly transparent.
AI technology becomes especially important for the development of complex machines, as some systems in the near future will only be operable using AI tech. For example, one can build a drone that is controlled by a human co-pilot, but having millions of micro-drones that need to jointly navigate through an area and complete complex tasks in near-real-time, with continuous adjustments, will most likely only be possible using AI technology and supercomputers. Likewise, as we collect larger and larger volumes of increasingly complex data (hyper-spectral images of Earth, acoustic readings of activities at the sea bottom and similar), we will become increasingly dependent on AI systems processing, filtering and summarizing such data, extracting knowledge and supporting decisions.
The evolution of AI from ML to AGI seems to be running slowly, but it is really the synergy of various technological breakthroughs that might surprise us. Imagine a breakthrough in understanding how the human brain works (based on cellular biology, cell physics and complex-systems science), combined with nano-tech, super-fast networks and quantum computing, all happening within the next 10+ years. Such parallel breakthroughs and synergies between our software development skills, computing and life-mimicking skills could speed up the evolution of AGI by an order of magnitude, but it is not something that happens automatically or without innovation geniuses able to connect all the dots. Ray Kurzweil’s “prophecy” of the singularity, i.e. “a machine waking up” and leaving all standard v1.0 humans behind by 2045 (or earlier), might still happen.
Importance of quality of training data in AI
In any Machine Learning system, beyond the algorithm used, the power and quality of the system is directly related to the quality, availability and transparency of the training data. In recent years, AI has been especially criticized for propagating discrimination and various types of cultural biases. One example is the use of AI for hiring new employees, where models trained on historic data effectively concluded that some races and disabilities do not help increase profits. We should not blame AI for these outcomes, but possibly the AI development teams that allowed such biases by using biased training data. This is a known problem not only for AI but for statistical testing in general, except that with AI we can propagate such problems at an order of magnitude larger scale.
We recommend the following rules of thumb for organizing training data to help produce more reliable, non-discriminatory models (see also the sketch after this list):
- Training data should be based (as much as possible) on probability / unbiased sampling (rather than on blindly feeding ML with large datasets from a single data producer) and should also represent the time-domain of interest. Ideally, the results of training should be documented using Open Science principles, with input data, models (software) and output results all available as open source / open data, preferably on not-for-profit repositories, e.g. Zenodo.org or similar, allowing anyone to reproduce and/or scrutinize the results.
- Training data should be large enough and representative of the target populations and geographical areas. If there are serious gaps in the data, then it is probably better to collect new data to reduce the gaps than to force algorithms to adjust decisions.
- Parts of the data that are clearly biased or against local and/or international conventions (human rights and similar) should not be used for training algorithms that are applied to human populations. Special attention needs to be given to all possible cultural biases. This means that professionals such as anthropologists, psychologists, political scientists, sociologists and similar should be included in the teams where training data is selected for ML.
- Data scientists need to take special care that raw, original measurements and observations are not mixed in with manipulated, gap-filled and/or AI-generated data, as these can potentially lead to artifacts and bias. Scientists at Rice and Stanford University have discovered that AI-generated content eventually erodes data quality in a self-consuming loop: “without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease. We term this condition Model Autophagy Disorder (MAD)”.
- Important metadata on the training data should be documented in detail, so that misinterpretation and misuse can be avoided.
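As a small illustration of the representativeness checks above, the following sketch (the group names and shares are hypothetical, invented purely for this example) compares the composition of a training sample against known population shares and flags badly under-represented groups:

```python
import pandas as pd

# Known share of each group in the target population (hypothetical numbers).
population_share = pd.Series({"group_a": 0.48, "group_b": 0.40, "group_c": 0.12})

# A training sample that heavily under-samples group_c.
train = pd.DataFrame({"group": ["group_a"] * 700 + ["group_b"] * 280 + ["group_c"] * 20})

report = pd.DataFrame({
    "population": population_share,
    "training": train["group"].value_counts(normalize=True),
}).fillna(0.0)
report["ratio"] = report["training"] / report["population"]

print(report)
# Flag groups represented at less than half of their population share:
print("Under-represented:", list(report.index[report["ratio"] < 0.5]))  # ['group_c']
```

A real audit would of course cover many more variables (geography, time, sensor type etc.), but even this kind of one-line ratio check catches the grossest sampling bias before a model is trained.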
On one hand, it is scary how much the tech giants know about us (they even hold our most private information!); on the other hand, we can all still help make better training data for various ML applications, which can help increase everyone’s quality of life. You can also contribute to making better AI by contributing data, especially ground observations and measurements, and by helping with large modeling projects through citizen science and similar infrastructures. OpenAI, among other inputs, is largely based on Wikipedia data (“Without Wikipedia, generative A.I. wouldn’t exist”), which is basically citizen-science data, available as open data under the CC-BY-SA license. When it comes to environmental data, you can install the iNaturalist app on your phone and use it to detect plant species directly from photographs, use Geo-wiki apps to track land-cover changes, track the biomass of trees by taking simple photographs with your phone, or contribute to OpenStreetMap to help build a better open map of the world. Such data contributions can help make better prediction models, which then lead to more accurate decisions and less risk-taking. If your data is registered as open data, it can be used by anyone in the world, including SMEs generating new innovations and products.
The perils of AI technology (vs side-effects)
Many correctly recognize that AI technology can be compared to the invention of the nuclear bomb. The speed of development of AI tech is also scary, as it appears that various groups in the world are now experimenting with tools that we do not even fully understand. The biggest threats of AI in the modern world seem to be (in order of importance):
- Use of AI technology for mass manipulation. Next-generation deep-fakes, i.e. totally manipulated images, videos, news items and conversations, undetectable by current technology, can be used as an information-war weapon to manipulate large masses and even turn people against themselves. Have no doubt that every significant intelligence agency / secret service in the world is already testing AI technology for information wars, cyber attacks and similar.
- AI will become more and more powerful in monitoring and analyzing humans. The 20th-century lie detectors might soon be replaced with highly effective AI imaging systems that allow for no anonymity and no hidden messages or opinions. If all this falls into the hands of totalitarian governments and/or is based on significantly biased training data, the consequences could be enormous.
- Fully automated killer machines (built by militaries or terrorist organizations) that are produced in massive quantities and can eventually cover all traces (as illustrated so accurately in the Black Mirror episode “Metalhead”). These would have no empathy and are especially dangerous because there would eventually be no witnesses left at all, so we would not even be able to imagine the horror.
- Uncontrolled experiments with AI that lead to massive disasters that cannot be turned off anymore. AI could totally disrupt the internet, crash bank security systems, design powerful chemical toxins that become a super-weapon, and send us back to the stone age.
- Paradoxically, AI might decrease our own intelligence, as we would largely replace the need to learn and exercise our own brains. Understanding math, psychology or any similarly developed scientific field requires intense reading and training, just as it did 50 years ago (computers help a bit, but >90% is still just good old reading, understanding, memorizing and practicing); if we pass all the work to programs, we might in fact decrease our own intelligence (as in the movie “Idiocracy”).
- If the world’s inequality of wealth continues to increase, this unfortunately means that the value of human labor will significantly deflate, as those who own most of the capital and pay the salaries will only hire human staff if they are still cheaper than robots plus AI. As tech becomes cheaper and cheaper, large masses of people could soon become totally unemployable. This is an order of magnitude scarier than any future food crisis, as we might end up with billions of people that no market really needs. Again, this dark projection should not be blamed on AI tech, but on the political systems perpetuating inequality and the infinite growth of wealth.
It is important to emphasize that the threats of AI should not be mixed up with the negative side-effects of AI. For example, research shows that AI could replace and leave large parts of the population without work, especially women: nearly 80% of women’s jobs could be lost in the next 10+ years, bringing us basically back to the 19th century. It is certainly not fair to blame some software developers and electrical engineers for rising unemployment and gender inequality. Alan Finkel: “Calls by technology leaders for a hiatus in AI development are not the answer. Regulations that only affect the well-intentioned, high-integrity developers and users of AI are not the answer”. We believe that the solution to increasing the well-being, security and equality of citizens lies in the political evolution of societies, especially via basic income arrangements, gender equality, and women-in-STEM programmes. The solution to the problem of AI taking 80% of women’s jobs should definitely not be burning down servers or arresting AI developers.
How can AI technology help us combat the climate crisis?
“I’m actually excited about AI in this regard, while also bracketing, yeah, I understand there’s also risks and people are terrified of AI. But I actually think it is quite interesting this moment in time that we may have in the next 50 years to really, really solve some really long-term human problems, for example, in health. The progress that’s being made in cancer treatment, because we are able to, at scale, model molecules, and genetics, and things like this, it gets huge. It’s really exciting. So, if we can hang on for a little while, and certain problems that seem completely intractable today, like climate change may end up being actually not that hard.” (Jimmy Wales)
Human-induced ecosystem degradation and the connected climate disturbances are among the biggest challenges of the human race. So far, researchers have widely embraced ML / AI, hoping that AI tech could help us solve many of the climate-crisis problems. Here are some key review papers that summarize the opportunities of using AI tech:
- Aguilar-Lazcano, C. A., Espinosa-Curiel, I. E., Ríos-Martínez, J. A., Madera-Ramírez, F. A., & Pérez-Espinosa, H. (2023). Machine Learning-Based Sensor Data Fusion for Animal Monitoring: Scoping Review. Sensors, 23(12), 5732. https://doi.org/10.3390/s23125732
- Chen, L., Chen, Z., Zhang, Y., Liu, Y., Osman, A. I., Farghali, M., … & Yap, P. S. (2023). Artificial intelligence-based solutions for climate change: a review. Environmental Chemistry Letters, 1–33. https://doi.org/10.1007/s10311-023-01617-y
- Isabelle, D. A., & Westerlund, M. (2022). A review and categorization of artificial intelligence-based opportunities in wildlife, ocean and land conservation. Sustainability, 14(4), 1979. https://doi.org/10.3390/su14041979
- Thomas, L. B., Mastorides, S. M., Viswanadhan, N. A., Jakey, C. E., & Borkowski, A. A. (2021). Artificial intelligence: review of current and future applications in medicine. Federal Practitioner, 38(11), 527. https://doi.org/10.12788%2Ffp.0174
- Tuia, D., Schindler, K., Demir, B., Camps-Valls, G., Zhu, X. X., Kochupillai, M., … & Schneider, R. (2023). Artificial intelligence to advance Earth observation: a perspective. arXiv preprint arXiv:2305.08413. https://doi.org/10.48550/arXiv.2305.08413
Concrete applications of, and enthusiasm over, using AI technology to monitor the environment, optimize transportation and food production systems and prevent natural hazards are obviously extensive. AI may help us “save the planet”; key areas where AI is used for operational work include e.g.:
- Autonomous and connected electric vehicles,
- AI-Optimized distributed energy grids,
- Smart agriculture and food systems,
- Next generation weather and climate prediction,
- Smart disaster response systems,
- AI-designed intelligent, connected and livable cities,
- A transparent digital Earth,
- Reinforcement learning for Earth sciences breakthroughs,
- Systems for monitoring climate anomalies,
- Systems for monitoring carbon footprints,
- Automated monitoring of changes in land use and soil health,
- Systems for generating new eco-friendly ideas,
- Automated environmental hazard warning systems,
These are just some known examples. In summary, some very exciting potential developments of AI technology, especially in helping combat the climate crisis and protect biodiversity, include:
- AI tech in combination with micro-drones (imagine massive numbers, e.g. millions and millions of drones) could be used to speed up re-greening projects, especially to help plant forest species, combat fires and natural disasters, and similar. Such micro-drones would be both networks of sensors helping us monitor natural landscapes and tools to reach the most complex, most remote terrains. We could literally expect to re-green the Sahara, i.e. geoengineer the world’s ecosystems. Once the number of micro-drones becomes massive, and once the amount of data starts exceeding what our semi-automatic systems can process, only AI technology would be able to optimize such large systems and extract decisions.
- Some researchers suggest that the key to CO2 sequestration today could be massive sea farming, iron fertilization and vertical farms covering large parts of the oceans. To manage such remote and underwater farms, we would again need a large fleet of drone-submarines, live data sensor-networks and similar, which would require a powerful automated AI to track such mega-farms.
- AI technology, in general, allows us to speed up modeling and prediction, especially of future states of the environment, i.e. it could help make forecasting highly accurate. This is crucial for minimizing risks to citizens and helping save millions of lives, as well as for ecosystem health. For example, across the European Union, multiple project teams are now working on developing Digital Twin Earth systems that are largely based on AI technology combined with High Performance Computing.
- AI technology could also be extremely beneficial for the education of new generations and for bridging the knowledge gap. In principle, education is still exclusive to the wealthy, with large publishing corporations owning the copyrights. The next generation of AI-based virtual teachers could help educate the masses and reduce the knowledge gap by an order of magnitude. One year of study at Stanford or Harvard costs circa $50,000 to $75,000 in tuition fees alone. AI teachers could democratize education and bring top classes to the masses. It is maybe naive to think that such education centers could come at no cost (hence the Wikipedia model), but it would certainly be hundreds of times more cost-effective to set up AI education centers than to pay the salaries of all the staff of Harvard University or similar. In addition, AI advisors running on a mobile phone (or an earplug, i.e. a voice in your ear), such as virtual botanists, ecologists and landscape designers, could be used by rural / remote populations, indigenous nations and similar, to educate and better understand land-climate-life interactions.
- AI technology will help reduce the information gap, i.e. the gap between the wealth of data we have and our capacity to extract useful information from it. This is especially true for EO systems, which now generate terabytes of images / Earth-surface scans on a daily basis. Our capacity to visually interpret data grows only linearly and has probably already been exhausted. AI can help find hidden patterns and connections, possibly across petabytes of data and possibly in close-to-real time. Imagine AI-based digital doctors, not only for humans but also for ecosystem health, soil health and the health of groups of species, rapidly detecting problems and warning land managers. So much loss of biodiversity and land degradation could be prevented just with good diagnostics.
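As a tiny sketch of what such automated diagnostics could look like, consider screening a hypothetical, simulated NDVI vegetation-index time series for one field with an isolation forest; a sudden drop in greenness is flagged as an anomaly worth a land manager’s attention (data and settings invented purely for illustration):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Simulated NDVI for one field: healthy values around 0.7, then a sudden
# drop that could indicate disturbance or degradation (numbers invented).
rng = np.random.default_rng(1)
ndvi = np.concatenate([rng.normal(0.7, 0.03, 100), [0.35, 0.30, 0.28]])

model = IsolationForest(contamination=0.05, random_state=0)
labels = model.fit_predict(ndvi.reshape(-1, 1))  # -1 marks outliers

print("Anomalous time steps:", np.where(labels == -1)[0])  # should include 100, 101, 102
```

Scaled up from one toy series to millions of pixels and dozens of indicators, this is essentially what an “AI digital doctor” for ecosystems would do: continuous screening, with humans called in only where something looks wrong.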
Overall, it is good to be enthusiastic about technology, especially its potential for geoengineering, nature conservation and increasing quality of life, including global GDP. Energy-efficient robots that reduce or eliminate human labor (especially repetitive labor, which does not help us develop socially or intellectually) should bring us all closer to basic-income societies where everyone is focused primarily on their own emotional and intellectual growth. Hopefully this would then also give us more time to teach our kids a restoration culture, and to evolve into primarily creation cultures rather than today’s consumption cultures.
Urgent actions needed to prevent negative effects of AI technology
“Artificial intelligence has hacked the operating system of human civilisation… a halt must be put to deploying AI tools in the public sphere” (Yuval Noah Harari)
When the first atomic bomb was constructed and used as a military weapon (which unfortunately resulted in the loss of hundreds of thousands of innocent civilians), it became clear to all the scientists who had helped develop that weapon that they needed to respond and distance themselves from such use of technology based on their research. Albert Einstein said specifically after the Hiroshima and Nagasaki bombings: “The time has come now, when man must give up war. It is no longer rational to solve international problems by resorting to war”. Likewise with AI: its key brains and innovators need to step up, think of all the things that could possibly go wrong, and send unequivocal messages about how this technology should be used correctly and how to avoid misuse and risks.
We believe that the following five actions should be at the forefront of the adoption of AI in everyday life and the protection against its possible tragic, apocalyptic effects:
- New legislation that protects citizens from mass manipulation by AI is needed as soon as possible. People have a right to know whether they are dealing / communicating with an AI, whether and how their content is misused, and what their citizen rights are during the process. In the EU, we already have the GDPR, which aims at protecting our data and privacy. We now also need urgent protection from any AI-generated outputs, including Confidence Building Measures. The bare minimum is that any person in the world has a right to know whether he/she is communicating with a real person or a chatbot, and whether an artwork, book, visualization, piece of music or video they are interested in was generated by a real person or a computer; if the content is a remix from multiple sources, then the original licenses should be respected (e.g. the CC-BY-SA license used by most Wikipedia content). The European Commission and European Parliament are building what they call the “AI Act”, a set of legislation to urgently deal with the main problems and to avoid the mistakes we made because of delays with the GDPR. Zoom and many other big-tech companies are now ruthlessly mining our videos and text, i.e. the content we produce, without people even being aware of it. This needs to be regulated at the highest level, not just through some microscopic “terms of use”.
- AI technology (together with nuclear weapons and similar weapons of mass murder) should be completely banned for the purposes of killing civilians, designing biological weapons and similar. As soon as some military develops killer robots and starts using them to exterminate massive groups of people, those groups / military organizations should be considered enemies of human civilization, i.e. of the human race in general, and this should be regulated via the UN and similar international conventions. Because the technology is developing rapidly, we must also react rapidly: once some military develops efficient killer robots, it will only take another few years until they can produce robots that could exterminate all humans on the planet.
- Policy-makers should predominantly promote, fund and subsidize open source solutions for AI technology, especially to support interpretable ML, reproducible research and decentralized use of the technology. Transparency of AI is an order of magnitude more important than the profit large corporations might make for themselves. Concentration of powerful black-box technology in the hands of a few private corporations carries serious risks for society in terms of monopolization and misuse. The highly popular ChatGPT is a product of a company called “OpenAI”, but the catch is that none of OpenAI’s technology is open source, and it is hard to argue that the company is open in any broader sense. As an alternative to the so-called “OpenAI”, the Open Source Initiative is now building a framework for “Open Source AI”; Hugging Face, Creative Commons and others are encouraging more support for the open-source development of various AI models. Such truly open source initiatives are possibly our best bet for keeping AI transparent, openly discussed, scrutinized and hence democratic.
- AI should not, however, be blamed, banned or constrained for some side-effects of the use of AI and robotization. There needs to be a clear distinction between cause and effect, i.e. between the tool and the crime. AI technology should not be directly blamed for increasing gender inequality, loss of jobs and/or automation of work. If anything, AI helps reveal (fake) gender-equality systems and hidden institutionalized discrimination. In parallel to developing AI technology, we also need to debug the critical cultural and organizational causes of inequality and help societies transition to jobs with less manual work, basic income systems and fair trade, giving everyone fully equal opportunity and leaving no one behind.
- In addition to AI legislation to protect people, we need to start thinking ahead and already now begin preparing legislation and guidelines on how to build ethical AI, and how to protect future SCIEs. Yes, a future HAL 9000 should also have its own rights (as should any self-aware entity passing the highest-level Turing test!), and no other entity should be allowed to damage / control it outside of a court decision or similar.
Recently, to minimize societal damage and improve the interoperability of AI governance frameworks, Singapore started developing an AI governance testing framework and toolkit that enables industries to demonstrate their deployment of responsible AI. Their framework is based on the following principles:
- Transparency: making sure that all processes are documented and that consumers are aware of the use and quality of AI systems.
- Explainability, i.e. repeatability / reproducibility: making sure that AI operations / results are explainable, accurate and consistent. Here, the availability and transparency of the training data is the critical issue.
- Safety + Robustness: ensuring AI systems are reliable and will not cause harm.
- Fairness: ensuring that user data is used in a fair way, preventing any possible discrimination and following regulations and consumer rights.
- Accountability: ensuring human accountability and control.
Another important aspect of AI technology is its carbon footprint. While AI systems have the potential to help with climate change, it should be noted that today’s AI systems contribute significantly to the carbon footprint, particularly when data centers rely on fossil fuels for electricity generation. As AI models continue to grow, it is necessary to explore ways to mitigate their negative impact on the environment to pave a sustainable path forward. Additionally, the hardware supporting these models has a limited lifespan, generating electronic waste. To address these concerns, efforts should be focused on energy-efficient algorithms and hardware, transitioning data centers to renewable energy, employing model-compression techniques, and promoting responsible AI practices that prioritize sustainability.
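The scale of this footprint is easy to reason about with a back-of-the-envelope calculation; the sketch below uses invented but plausible numbers, and the formula (energy use times grid carbon intensity, with a data-center overhead factor) follows common practice in the literature:

```python
# Rough carbon estimate for one hypothetical training run (all numbers invented).
gpu_power_kw = 0.4         # average draw per GPU, in kW
n_gpus = 64                # GPUs used in parallel
hours = 24 * 14            # two weeks of training
pue = 1.5                  # power usage effectiveness: data-center overhead factor
grid_kg_co2_per_kwh = 0.4  # grid carbon intensity; varies strongly by country

energy_kwh = gpu_power_kw * n_gpus * hours * pue
co2_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000.0
print(f"{energy_kwh:,.0f} kWh, about {co2_tonnes:.1f} t CO2e")  # ~12,902 kWh, ~5.2 t CO2e
```

Running the same job on a low-carbon grid (say 0.05 kg CO2e/kWh) would cut the emissions by almost an order of magnitude, which is why the location and energy mix of the data center matter so much.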
In the years to come, we can expect that standardized guidelines and tools for using AI will become ever more important, especially for helping reduce bias in training data and enabling open development communities that make sure the technology is used for global good. The key to putting such principles into action is to have efficient, easy-to-use tools that can automatically test and confirm the safety, use of content, reproducibility etc. of any AI.
How can you start using AI technology for your work already today?
“Understand it, control it, don’t fear it. The development of AI is inevitable, for both good and evil. Let the good prevail.” (John McFarlane)
Some practical steps for following and understanding the development of AI technology are:
- Test ChatGPT within your own field (see the sketch after this list). Follow some online courses, e.g. https://www.deeplearning.ai/. Be critical and try to understand where and why it fails. Here is an example of prof. Alex Brenning testing whether ChatGPT understands geography.
- Subscribe to and follow movements and programs such as the AI Act, ethical AI and AI4good. Educate yourself to understand what a behavioral zombie is and how AI can be misused. Work on boosting your own social and emotional IQ so you can run the highest-level Turing and consciousness tests. Paradoxically, AI is expanding, but in this Brave New World you will need more of your own IQ, especially emotional and social IQ!
- Educate yourself on 21st-century computer and information science, especially open source software: learn how to use decentralized, open source solutions so you can always remain independent of big-tech monopolies and government-controlled systems. Switch to Linux, Firefox, NextCloud, Zenodo, Mastodon and similar genuine open source solutions to store, share and organize your data.
- Always check your human and consumer rights. Protect your own content from being used without your knowledge and approval, and without respecting your data license.
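For the first tip above (systematically testing ChatGPT in your own field), queries can also be scripted; below is a minimal sketch using the OpenAI Python client (v1+; the exact interface has changed over time and requires an API key, so treat this as an illustration rather than a recipe, and the example question is ours):

```python
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable to be set

# Ask a domain question from your own field and inspect the answer critically.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content":
               "Which soil properties most strongly control water retention, and why?"}],
)
print(response.choices[0].message.content)
```

Asking the same question several times, and in several phrasings, is a quick way to probe how stable and how well-grounded the answers really are.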
Beyond this, you can also consider contributing to better AI models that are used for the public good: contribute data to large open data projects, either by registering your own data and attaching an open license, or by contributing to data editing, filtering and testing; if you develop code, join open development communities and contribute through open source programming languages such as R, Python, Julia and similar.
Summary points
“The intelligence of the universe is social” (Marcus Aurelius)
So, in summary, the four biggest myths connected to AI need special attention: (1) the term AI has probably been over-hyped, and we should consider using it only for operational automated decision systems (and rather use the term “Machine Learning” for all training and prediction operations); (2) Siri, Alexa, ChatGPT and similar are all amazing developments, but they are still only behavioral zombies: these systems have no character and are not self-aware, however convincing they might seem; we are not even close to building Synthetic Conscious Intelligent Entities (SCIE), so any fear of AGI technology “waking up and exterminating us” is probably not rational; (3) AI technology should not be blamed for the inequality and discrimination problems of many societies, as it is not their cause; and (4) the biggest threats of AI technology today are probably mass manipulation and its use in information wars, not AI taking our jobs.
Even though the technology is developing rapidly, there is still a lot of work for everyone in helping build the various components and train ever more accurate automated systems: systems that can replace all repetitive tasks and most routine tasks, and help humans focus on emotional and intellectual growth instead of driving for hours through busy metro-cities to do work that machines could already do efficiently today.
Likewise, we have probably still heavily under-estimated the complexity of living beings, ecosystem networks and self-organizing societies. We may have managed to hack Machine Learning, and this is a powerful technology, but life, biodiversity, consciousness and biological networks are probably more complex (and more magical) than we think! We need to support future scholars to study biology, evolution, ecology, bio-geochemistry and similar fields with the same passion with which we innovate computer science and make cool robots. The solution to building more accurate and more self-sufficient automated decision systems lies both in excelling at computer science and in excelling at knowledge of biology, ecology, ecological networks and Earth System science.
AI technology, micro-robots, the internet of things, super-high-speed internet and similar could help significantly improve human health and well-being. Specifically, these technologies could help us combat the climate crisis, and do the geo-engineering needed to massively re-green the planet and reduce desertification and loss of biodiversity. AI technology could also literally save our lives, e.g. from an asteroid falling from the sky. But we do not yet have all the solutions to combat climate change and environmental degradation, and some solutions we might not reach in the foreseeable future (they might be >50 years away); hence we probably need to take ALL possible actions in our power to combat the climate crisis:
- Significantly reduce GHG emissions, proportionally to the sources of emission, on the shortest possible timeline,
- Reduce the carbon footprint of all citizens without leaving anyone hungry or under-developed,
- Protect biodiversity with significantly stricter sanctions and more robust monitoring and warning systems,
- Quickly innovate and develop technology (including AI) for re-greening the planet, land restoration and geoengineering, including industrial CO2 sequestration,
- Educate a new generation of youth to understand that happiness is not only GDP, and to value and understand land restoration as much as they love and understand computer games, Instagram or similar.
As Samie Dorgham puts it: “AI is a transformative technology that will change human society in ways we can’t yet imagine. It will also provide surprising solutions to parts of the climate crisis and will be useful in meeting our Net-Zero targets. However, it can only ever play a supporting role in the big picture of reducing our emissions: restoring nature and natural carbon sinks and rolling out renewable energy as fast as we can, is ultimately a buck that stops with us.”
If you want to take concrete actions in relation to the evolution of AI technology, here are a few final tips:
- Consider all the ethical issues of using AI technology. Test your moral dilemmas using https://www.moralmachine.net/ or similar. Consider that moral dilemmas and ethics can be highly complex and are also culturally determined. Vote for political parties that recognize the right of citizens to know whether they are communicating with AI, and that push for governments supporting open source versions of AI technology.
- Consider that the use of AI for mass manipulation, mass murder and the design of biological weapons should be banned and sanctioned across borders, including through UN conventions and similar.
- Point your children / your colleagues to the right literature: make them read and educate themselves about AI and other emerging tech before they are left behind. Start reading about the Novacene and the Singularity; watch the AlphaGo documentary and Yuval Harari’s talk about AI, and inspire youth to innovate!
Please cite as:
Hengl, T., Consoli, D., Bagić, M., Brocca, L., & Herold, M. (2023). “AI technology: what it is and what it’s not, and how it can (potentially) help us solve the climate crisis” (v0.1). OpenGeoHub foundation. Published in MLearning.ai; https://doi.org/10.5281/zenodo.8300534
OpenGeoHub and partners are actively working on using high-level Machine Learning for mapping the dynamics of the environment (so-called “spatiotemporal ML”), using data fusion and ensemble methods, through our projects Global Pasture Watch and Open-Earth-Monitor Cyberinfrastructure. We are also actively working on developing automated soil health diagnostics and chat-bots to support in-field land managers and agronomists through our AI4SoilHealth project. Sign up for our newsletter and connect with us via our public workshops and discussion forums.