How are you handling shading for dense wooded areas? I'm looking around at some neighborhoods with dense redwood trees in northern California. I know that even in the middle of the day a decent amount of shadow will still cover the area, owing to the density of the forest and the height of the trees. From what I'm seeing, the shading is not accurate to the wooded areas I frequent.
But that's probably a really hard problem to tackle. If there's no data on tree height, it seems impossible to accurately extrapolate shadows for forests, especially since forests can change with high frequency.
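For what it's worth, the geometry itself isn't the hard part; a first-order shadow length for a known tree height is just trigonometry (the hard part, as I said, is the height data). A rough sketch with made-up inputs:

    import math

    def shadow_length(tree_height_m, solar_elevation_deg):
        # First-order shadow length for a single tree on flat ground.
        # Ignores terrain slope, canopy shape, and overlapping shadows,
        # all of which matter a lot in a dense redwood forest.
        if solar_elevation_deg <= 0:
            return float("inf")  # sun at or below the horizon
        return tree_height_m / math.tan(math.radians(solar_elevation_deg))

    # Hypothetical numbers: a 90 m redwood with the sun 60 degrees
    # above the horizon at midday still throws a ~52 m shadow.
    print(shadow_length(90, 60))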
Super cool project, I hope this continues to grow!
Really interesting article, excited to see the followup!
A few things that I think the author could take into consideration: What is one country's population actively discussing compared to other countries'? Is there a difference between direct and indirect exposure to problems that may influence a person's stance?
What type of company does China keep?
Last I checked, they were pretty buddy-buddy with Russia, with North Korea on an estranged leash. If we know other countries fund terrorist organizations with the intent of undermining Western influence, is it that big of a stretch to say China is probably doing the same thing?
You can't use 'direct conflict' as a measure in this type of game. Russia has done damning harm to American politics, but it's not clear how to measure that effect.
That's a bit of a ridiculous standard.
Mostly because I don't think China is liable to hand out the records they've been gathering to cross-check findings from other studies.
You don't need records to conduct research into whether a specific message is being spread on TikTok beyond what chance would predict, at least enough to back up an otherwise unsubstantiated theory, even if that wouldn't be practical in a court of law.
For anyone who uses TikTok regularly, it's evident that political content outright contradicting China's positions frequently spreads unfettered through the platform.
Even if there is zero evidence supporting an influence campaign on the platform, the ease of collecting user data or spying on users is something I would expect an active adversary to exploit. Like it or not, China and America are at odds with each other, and it's almost silly to assume that China would not be exploiting a successful tool to its own ends.
I wish the editor (or team) did a better job of proofreading the article. The multiple spelling and grammar mistakes make me question their credibility as reporters.
Just skimming two parts of the second-to-last paragraph, these two sentences, "I am often desperately looking inward to find the strength to go on" and "Accepting a constant stream of low quality things for most of my life, including interactions with other people, only contributed to the way my life has gone and the way I feel now", were really striking glimpses into the author's day-to-day. It sounds like he's struggling to do what he needs to do and is content with the way things are. We should want more for ourselves, and for our relationships to have meaning.
I'm curious as to what makes a good size of N. To me, N=10 would be too small, N=48 seems a little under what it should be, and N=100 seems sufficient, but I don't have a real basis for this (in fact, it's probably the original count that had me settle on a min and max of 10 and 100). Maybe a better question is: what factors do studies consider, besides the logistics you rightly pointed out, in determining a sufficient N?
That depends on the size of the population being sampled from, the margin of error, and the confidence level.
For a huge effect like the one shown in the study, where one side performed 2x as well as the other, a sample size of 48 is more than large enough to say that the result is statistically significant. If there were a small effect, that wouldn't be the case.
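As a rough illustration with made-up numbers (I don't have the study's actual table in front of me), an exact test on a 2x effect across 48 participants already clears the usual significance bar:

    from scipy.stats import fisher_exact

    # Hypothetical 2x2 table: two groups of 24, one performing
    # roughly twice as well as the other.
    table = [[20, 4],    # group A: 20 successes, 4 failures
             [10, 14]]   # group B: 10 successes, 14 failures
    odds_ratio, p_value = fisher_exact(table)
    print(p_value)  # well below 0.05: significant even with n=48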
Put it this way. You want to find whether people from California prefer Taco Bell or Pizza Hut, so you randomly sample 100 people. If all 100 people say Taco Bell, then you can be reasonably confident that more people from California prefer Taco Bell. Because if at least 51% of your population preferred Pizza Hut, the odds of not getting one of those people in your sample are minuscule (the odds of getting all Taco Bell people in your sample if 49% of the population prefers Taco Bell is 0.49^100).
If 51 of your sample prefer Taco Bell and 49 prefer Pizza Hut, your confidence level is too low to be useful--you need a larger sample size.
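To put numbers on both cases (a sketch using the standard normal-approximation sample-size formula, which is what the margin-of-error and confidence-level tradeoff boils down to):

    import math

    # Lopsided case: the chance of drawing 100 straight Taco Bell fans
    # when only 49% of the population prefers Taco Bell.
    print(0.49 ** 100)  # ~1e-31, effectively impossible

    # Close case: sample size needed to resolve a 51/49 split.
    # n = z^2 * p * (1 - p) / e^2
    z = 1.96   # 95% confidence
    p = 0.5    # worst-case variance
    e = 0.01   # +/-1% margin of error, to separate 51% from 49%
    print(math.ceil(z ** 2 * p * (1 - p) / e ** 2))  # 9604 respondents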
I answered it in another response, but I think this is why we're seeing a rise in meta-analysis papers: take a bunch of small N's, consolidate them, and analyze their trend. This analysis can also be strengthened by evaluating the effect size of the phenomenon [1]. However, I would say using effect sizes in meta-analysis is a complex approach that limits the set of researchers who could conduct the analysis appropriately.
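For the curious, the basic fixed-effect consolidation is just an inverse-variance weighted average of the per-study effect sizes; a sketch with hypothetical (effect, variance) pairs from three small-N studies:

    # Fixed-effect meta-analysis: pool per-study effect sizes,
    # weighting each by the inverse of its variance.
    studies = [(0.45, 0.04), (0.60, 0.09), (0.30, 0.02)]  # hypothetical

    weights = [1 / var for _, var in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5

    print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f}")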
It's always amusing how different institutions number their classes. At the Claremont Colleges, I took a class, Math 103 - Fundamentals of Mathematics, which was an upper division course geared towards preparing students for analysis, abstract algebra, etc. Because I started college having completed my math coursework through linear algebra and differential equations, this was the lowest-numbered math course on my transcript from Claremont, and when I started grad school for a teaching credential a couple decades later, the program director thought it was a remedial math course.
Recent Pomona College alum here checking in to say that the course numbering system has not changed (though Math 103 is now Intro to Combinatorics), and anecdotally it's still a point of confusion for those who go on to the grad schools the Colleges feed into.
I've never before paid course numbers much mind, but it does surprise me that there's not yet some widespread standard to help graduate admissions officers, graduate advisors, and grad students themselves when determining prerequisite eligibility.
Yeah, I was looking for the class to see what the number was and it doesn't appear to exist anymore. I remember being amused that at Mudd, Calculus was Math 1a/b back in my day. It appears to have been renumbered a bit higher since then, and now they only offer one semester of calculus (back in the 80s it was radical that Mudd did the calculus sequence in two rather than three semesters, although I noticed that our local high school offers a third-year high school calculus class covering multivariable calculus).
It's surprising to me - 300+ level courses were part of my required undergrad coursework - though of course I wasn't studying at Stanford. Are course numbers standardized across academia, or unique to each institution?
Course numbers are not standardized, although there are common numbering schemes. There are some uncommon ones such as MIT's, which uses a number and a dot instead of a subject name. And I've never known what institution the typical "CS 101" numbering scheme applies to.
Afaik they're unique—and not monotonically increasing in difficulty either. Here's Stanford's numbering system for reference: https://cs.stanford.edu/academics/courses