
May 20, 2024

What data would ChatGPT add to train a weather model

My prompt

If you were going to train a model to predict rain showers, what data might you use that is currently not being widely used?

ChatGPT 4o

Many advanced weather models incorporate a variety of data sources, but some of the more innovative and less traditional data types mentioned are not commonly used in current weather prediction models. Here are some of the data types from the previous list that are typically underutilized or not used at all:

Environmental and Geographical Data

  1. Soil Moisture Levels:
    • While some models use soil moisture data, it is not as widely incorporated or detailed as it could be.
  2. Vegetation and Land Cover Data:
    • Detailed vegetation and land cover data are often not integrated into weather prediction models at the level of granularity that might be beneficial.
  3. Surface Roughness:
    • Data on surface roughness is generally underutilized in most weather prediction models.

Atmospheric Data

  1. Aerosol Concentrations:
    • Detailed aerosol concentration data, particularly on a fine scale, is not commonly used in many weather prediction models.
  2. Atmospheric Composition:
    • Comprehensive data on various gases beyond the primary greenhouse gases (like ozone and carbon dioxide) are generally not integrated into weather prediction models.
  3. Cloud Microphysics:
    • Detailed cloud microphysics data, such as cloud droplet size distributions and ice crystal concentrations, are typically underutilized.

Human and Technological Data

  1. Pollution Levels:
    • Pollution data, including detailed air quality indices, are not commonly used in weather prediction models.
  2. Traffic and Transportation Data:
    • Vehicle emissions and heat data from traffic and transportation are rarely used in current models.

Temporal and Historical Data

  1. Real-time Social Media and Crowd-sourced Data:
    • While there are some experimental uses, real-time social media and crowd-sourced weather observations are not widely used in standard weather models.

Biological and Ecological Data

  1. Plant Phenology:
    • Data on plant life cycles and their impact on local humidity and temperature are generally not used in weather prediction models.
  2. Animal Movement Patterns:
    • Animal behavior in response to atmospheric conditions is not utilized in current weather models.

Advanced Technological Data

  1. LiDAR Data:
    • While LiDAR is used in some atmospheric research, it is not commonly integrated into standard weather prediction models.

Integration of Multidisciplinary Data

  1. Health and Epidemiological Data:
    • Data on health conditions correlated with atmospheric changes are generally not used in weather prediction models.
  2. Economic Activity Data:
    • Information on industrial activity and energy consumption is typically not integrated into weather prediction models.

Summary

To improve weather prediction, especially for phenomena like rain showers, integrating these underutilized data sources could provide a more comprehensive and accurate understanding of atmospheric processes. Leveraging innovative data sources such as aerosol concentrations, detailed cloud microphysics, and real-time social media observations could enhance the predictive power of weather models.
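To make the suggestion concrete, here is a minimal, hedged sketch of what "integrating these underutilized data sources" might look like: a toy rain-shower classifier whose feature table mixes standard meteorological inputs with a few of the underused ones listed above (soil moisture, aerosol concentration, a vegetation index). The column names, synthetic data, and model choice are illustrative assumptions, not anything taken from the post or from an actual forecasting system.

```python
# Hypothetical sketch only: synthetic data standing in for real observations.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Standard meteorological inputs plus a few of the "underutilized" features.
df = pd.DataFrame({
    "temperature_c":         rng.normal(18, 8, n),
    "relative_humidity":     rng.uniform(20, 100, n),
    "pressure_hpa":          rng.normal(1013, 9, n),
    "soil_moisture":         rng.uniform(0.05, 0.45, n),   # volumetric fraction
    "aerosol_optical_depth": rng.gamma(2.0, 0.1, n),
    "ndvi":                  rng.uniform(0.1, 0.9, n),     # vegetation index
})

# Synthetic target: showers more likely with high humidity, moist soil, low pressure.
logit = (0.06 * (df["relative_humidity"] - 60)
         + 4.0 * (df["soil_moisture"] - 0.25)
         - 0.05 * (df["pressure_hpa"] - 1013))
df["rain_shower"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = df.drop(columns="rain_shower")
y = df["rain_shower"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
print(dict(zip(X.columns, model.feature_importances_.round(3))))
```

In a real setting the synthetic frame would be replaced by gridded reanalysis or station data, and the point of the exercise would be to check whether the nonstandard columns add predictive skill beyond the standard ones alone.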


Categories: ai Tagged with: ai • complexity • knowledge • machine learning • weather Date: May 20th, 2024 dw


February 11, 2018

The story of lead and crime, told in tweets

Patrick Sharkey [twitter: patrick_sharkey] uses a Twitter thread to evaluate the evidence about a possible relationship between exposure to lead and crime. The thread is a bit hard to get unspooled correctly, but it’s worth it as an example of:

1. Thinking carefully about complex evidence and data.

2. How Twitter affects the reasoning and its expression.

3. The complexity of data, which will only get worse (= better) as machine learning scales up the size and complexity of the data it can handle.

Note: I lack the skills and knowledge to evaluate Patrick’s reasoning. And, hat tip to David Lazer for the retweet of the thread.


Categories: ai, science Tagged with: 2b2k • ai • complexity • machine learning Date: February 11th, 2018 dw


September 15, 2015

Against hard cases

I was talking with someone the other day who was telling me about her response to the “trolley” problem that professors in a surprising number of different fields like to pose. (It was first posed by the philosopher Philippa Foot in 1967.) In essence: you’re standing next to a switch. A trolley is barreling down the tracks. Weirdly, there are five people tied up at the end of the tracks. You can pull the switch to put the trolley onto a new track but — what a coincidence! — there’s a man on that track also. What do you do?

This woman said she’d throw herself in front of the trolley. Creative and noble, but …

…Even assuming that that would stop the trolley and that it wouldn’t overturn the trolley, which happens to be full of the most adorable babies who would all have grown up to be Nobel Prize winners, it only affects decisions if morality is the adherence to principles or is the outcome of personal virtues, or some such. Whether you or the solitary man on the track dies is of no interest to the utilitarian calculus, unless you throw in some more information, such as that you are a reprobate who only has two weeks to live anyway, and the man on the tracks is an adorable baby whom we know will grow up to be the greatest Nobel Prize winner of them all.

But the real problem is that the woman I was speaking with violated the Rule of Hypotheticals: The person who makes the hypothetical gets to define the hypothetical.

Hypotheticals in moral reasoning are often intended to confound us. The trolley case challenges our intuition: Of course the rational action would be to sacrifice the one for the many, but if we vividly put ourselves in the position of the person at the switch, we may find it hard to imagine ourselves taking an action that we know will kill someone. Variations of this try to make it even harder for us to imagine ourselves taking that step: Suppose we could push someone in front of the trolley to save the many? Suppose the person we pushed were young and healthy, in a wheelchair, fat? (Fat? Yup, the person has to have sufficient mass to stop the trolley, but, really, the Hypothesizer could just have specified that even a thin person would stop this particular trolley and avoided any implication that the weight of the person has something to do with her/his value.)

So, we construct hypotheticals, making them as weird as we need, in order to show that a moral principle or guideline is unreliable. In the classic case, we first convince our students that utilitarianism makes sense. Then we give them a hypothetical in which it’s pretty clear that utilitarianism leads to an unjust outcome. The canonical example is a sheriff who hangs an innocent man because it’s the only way to reassure a terrified town that a killer has been caught; the sheriff knows the real killer drowned but can’t prove it for some unlikely hypothetical reason. (And thus was rule utilitarianism born.)

I am very sympathetic to the idea that moral reasoning is premised on moral empathy: to be moral requires recognizing that we share a world with people to whom that world matters differently but equally. But I have problems with morality-by-extreme-hypothesis.

These hypotheses are extreme on purpose. They want to clarify our thinking, so they remove all extraneous context and they remove every possible escape from the dilemma. For example, we’re asked to imagine that a terrorist has planted a dirty bomb in NYC and the only way to get the information out of him (inevitably a him because that’s more “neutral”) is to torture him. “But how do we know that he has the information?” “A reliable informant.” “How do we know the informant’s reliable?” “That’s part of the hypothetical.” Oooookay, but life doesn’t work that way.

Because hypotheticals are usually weird — if they weren’t, we wouldn’t need them — it’s hard to know that we can trust our reactions to them, and it’s hard to know if the right action in that case generalizes to all cases.

I suppose these hypotheses can disprove that any particular moral theory is sufficient for all cases. But once we give up on that idea, the question becomes: What’s the right thing to do? What’s the right theory — or theories — to apply in this non-hypothetical case?

Ultimately, I believe that as sentient creatures we have the obligation to do right, but there isn’t a right thing to do. Why would we think that there is? The people affected, and even those who merely observe, are right to carry on their arguments and to make their positions and their plights clear. We should and will never stop. But there can be no resolution because every aspect of our existence as individuals and as groups is in play and has its own interests. So we do the best we can. But the notion that there is a single right answer to any sufficiently complex moral question strikes me as wishful thinking. There is no single action that is all right because the world is not the same to any one of us.

That’s the real problem I have with these sorts of hypotheticals. Their virtue is clarity and simplicity, which means they miss the essential reality of our lives as moral creatures.

So, my answer to the trolley question is: Pull the switch. Sacrifice the one for the many. Then grieve for the rest of your life because it’s never enough just to be right.


Categories: philosophy Tagged with: complexity • morality Date: September 15th, 2015 dw


