
Wasn't Silicon Valley born as a military R&D cluster? DARPA, Cold War defense contracts, and space-race funding built the region. To me, the consumer-internet phase from the late 1990s to the early 2000s was actually the anomaly. The so-called pacifist tech culture was a product of “peak liberalism,” the unipolar moment of roughly 1991–2001, between the fall of the Soviet Union and 9/11. US tech companies operated in a geopolitical environment without military rivals, so they could afford to frame themselves as apolitical, globalist, and focused on connecting the world. That cultural posture began to collapse after 9/11, and it has been eroding ever since under the pressures of great-power competition, terrorism, and the realization that software and chips are strategic assets. The pendulum is swinging... We aren't at McCarthyism yet, but we are on a path to it.


It predates the Cold War: it comes from WWII, when the US Navy was throwing money around to get better code cracking, radar, gunnery computers, and so forth.


Steve Blank's 'Secret History of Silicon Valley' is great background here. [0]

[0] https://steveblank.com/secret-history/


I don't know; my neighbors never talked about it.

But one guy came back from retirement after 9/11 and moved into a trailer in the Lockheed parking lot.


From the article:

>Silicon Valley’s militarization is in many ways a return to the region’s roots.

>Before the area was a tech epicenter, it was a bucolic land of fruit orchards. In the 1950s, the Defense Department began investing in tech companies in the region, aiming to compete with Russia’s technological advantages in the Cold War. That made the federal government the first major backer of Silicon Valley.

>The Defense Advanced Research Projects Agency, a division of the Department of Defense, later incubated technology — such as the internet — that became the basis for Silicon Valley’s largest companies. In 1998, the Stanford graduate students Sergey Brin and Larry Page received funding from Darpa and other government agencies to create Google.


What do you mean we aren’t at McCarthyism yet? We have blown well past McCarthyism. The US is a bona fide fascist dictatorship RIGHT NOW. Anonymous state agents are kidnapping US citizens off the street for deportation to third world countries. This is wildly and transparently illegal, but there is nothing in place to enforce the law because they’ve corrupted the DOJ, which is supposed to enforce the law.

They haven’t destroyed every vestige of liberal democracy — the states can fight and there are still courtroom battles — but the fascists own the enforcement mechanisms for justice, and don’t feel bound by the outcomes. The guardrails are gone.

Do not harbor illusions that we’re going to return to normality in our lifetimes. It’s possible something new and better will take over once the Trump regime is somehow ended, but I doubt it; they’re trashing the place pretty thoroughly.


Agree... In Pharma it is already happening: Niche drugs market looks set to balloon | Financial Times https://www.ft.com/content/110846d4-06a4-11e8-9e12-af73e8db3...


Link to the bill: https://app.leg.wa.gov/billsummary?BillNumber=5001&Year=2019... (for those who don't read the article and only follow the discussion)


I think they should be legal but both parties need to agree to it after a lawsuit is filed. The courts may approve or deny this request on the basis of public policy.

I wonder if a political party (the Democrats) will take it upon themselves to restrict forced arbitration between employers and employees. I think it is just a question of time before someone like @AOC starts talking about it.


The CFPB tried to make a rule preventing forced arbitration in financial services agreements. You will be unsurprised to hear that Republicans in Congress struck that rule down.


University censorship has always existed. I think right-wing universities will start to emerge all around the U.S., far more prevalent than they are now, similar to how Fox News emerged. Instead of coexisting with another point of view, we drive them out, segregating and polarizing the other side.

I remember when Kissinger came to campus. A group of students protested all week before and during his four-hour stay. The event almost got cancelled.

I think it is healthy to hear the arguments of people you don't agree with and come up with counter-arguments proving them wrong, instead of shunning them away.

(English is not my first language - sorry for the grammar mistakes.)


> I think it is healthy to hear the arguments of people you don't agree with and come up with counter-arguments proving them wrong, instead of shunning them away.

There may be a small problem with that. It is easy for people to keep making the same arguments over and over and over again. You will get tired of coming up with counter-arguments. Eventually you will quit, and they will be the only people talking.

It is especially difficult if the other person chooses their argument so that it is easy to say, but difficult to explain why it is wrong. An example might be: Eating eggs is unhealthy because there is a lot of fat. It's an easy argument to make and defend, but it is too simple to be true. Eating eggs can be very healthy in many circumstances, but to explain why requires a lot of technical explanation.

Usually people will prepare ways to defend their argument. For example, they will collect a list of papers that show that eating fat is unhealthy. It's easy to find hundreds of them. If you say, "We need to eat some fat", then they can list a hundred papers and say, "You have to argue against all of these papers". Of course it is impossible.

Because of this, it is often best to avoid discussing things with people who have no intention of listening to you.


> Because of this, it is often best to avoid discussing things with people who have no intention of listening to you.

You are right: ignore them. But banning the event is not a solution.


So, in your opinion, it's alright to ban people from speaking because it gets tiring when they're obviously wrong and still refuse to stop talking?


I was responding to the OP who said that it's better to hear people and try to counteract their argument. I think that's a bad idea. Instead I recommend ignoring them.

I would ban people on a private system if they were causing a problem with the service. If I were running the service, the criterion I would use is whether or not it's impacting how I run the service. Probably to your horror, I would think very hard about banning people whom I don't personally want to attract and who are chasing away the people whom I do want to attract. As much as possible I would avoid it, but a big part of running a good establishment is choosing your clientele. It is analogous to refusing to serve someone who is loud and belligerent in a fine dining establishment, while encouraging that person in a raucous bar.

I feel that people should be able to say whatever they want in their own homes. They should be able to say whatever they want in their own establishments. They should be able to publish what they want and sell/give it away to people who want it.

However, I think there are limits to what should be accepted in public spaces. I think that people should not be forced to publish things they disagree with, unless they have a monopoly or near-monopoly for publishing in a medium. Controversially, I think that people should be allowed to refuse service for any reason, unless they have a monopoly or near-monopoly on the service (and I happen to live in a country where this is the case).

So, that's my opinion. Believe it or not, I'm not actually interested in debating it, but because you asked me what my opinion was, I gave it to you. I suspect it differs substantially in some ways from your opinion and I have absolutely no problem with that.


Yes, banning trolls is morally fine. If you want to design a system to automatically detect them, that's where things get hard.


The way you said it, it sounds like we can just model the system on your understanding of what's right or wrong. Or maybe you have some other infallible person in mind?


But what if right-wing universities do spring up and start de-platforming the viewpoints you side with?

I think what bothers me most these days is people cheering for censorship just because it's currently being used against people they don't like, without thinking ahead to the day the power structure shifts and the same powers are used against them.


GCU is that "alternative facts," anti-science, magical thinking "university." They have the money and the Christian cult (Chris Hedges' words) to pull it off.


Do you know why they are doing this? Is lower orbit cheaper?


Low orbit is cheaper to get a satellite to, considering launch costs alone (less fuel, easier first-stage recovery), but as siblings have mentioned, it's more expensive overall, particularly because you need many more satellites to provide coverage.

Their plan to launch O(1000) satellites is to get lower latency and higher bandwidth, which would render the current generation of satellite internet obsolete.

It's a great example of the sort of business plan that's only possible with cheap launches that SpaceX's reusable rockets have provided.

Here's a more detailed primer:

https://arstechnica.com/information-technology/2016/11/space...


No, it would not render the current generation obsolete. Not only that, but they are years away from having a fleet up there, and a decade from having the full fleet. Satellite technology progresses a lot in a decade, so you're doing the equivalent of comparing the iPhone 12 to the Galaxy S2.


> No, it would not render the current generation obsolete.

Why not? Just saying "you are wrong" doesn't really add much to the conversation. I'm interested to know more.


You're right; I should have explained better. If you take a look at many of the other satellites announced that don't get the same hype as SpaceX, you'll notice that they have comparable capacity, or lower advertised capacity with other tradeoffs: SES mPOWER, Viasat-3, and Jupiter 3, to name a few. The companies that stick with GEO satellites maintain that LEO satellites waste a lot of capacity over water, and there's nothing you can do about that. Not only water: underserved areas that can't afford satellite technology are also included. Depending on the numbers you look at, this could mean the effective capacity of the satellite is ~10% of what is advertised.
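The ~10% figure above can be reproduced with a back-of-envelope sketch. The coverage fractions below are illustrative assumptions for the sake of the arithmetic, not published numbers:

```python
def effective_capacity_tbps(advertised_tbps, land_fraction=0.29,
                            serviceable_fraction=0.35):
    """Rough sellable capacity of a uniform-coverage LEO constellation.

    Earth is ~29% land, and only some fraction of that land contains
    customers who can actually buy service; both fractions here are
    illustrative assumptions, not measured figures.
    """
    return advertised_tbps * land_fraction * serviceable_fraction

# With these assumptions, 32 Tbps advertised yields roughly 3.2 Tbps
# of sellable capacity, i.e. about 10% of the headline number.
print(round(effective_capacity_tbps(32), 1))  # 3.2
```

Change either fraction and the "effective ~10%" claim moves accordingly; the point is only that uniform coverage discounts the advertised number heavily.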

The other issue with LEO is that if you want to double your capacity, you need to launch twice as many satellites as you currently have in the sky, instead of just one or two more large ones. This presents logistical problems, and technical ones as well to some extent.

When people refer to GEO satellites, they often unfairly assume that the old crop of GEO satellites is where the technology is today; namely, fixed, low-capacity satellites. This is not the case. With HTS (high-throughput satellites) and XTS (extreme-throughput satellites) that have movable capacity, not only do you have an amount of usable capacity comparable to LEO, but you can also move it as business needs change. The latency issue will never change, but as I've said in my other comments, I'm skeptical they'll be able to achieve the latency everyone is quoting.


Very interesting, thanks for sharing.


What is O(1000) satellites?


It's an unnecessary misuse of Big O notation, which is used to establish bounds on functions. I think they just meant up to 1000 satellites.


The author means literally 1000 satellites.


It's borrowed from comp sci, but you can read O(x) as "on the order of x", so maybe 900, maybe 1200, but somewhere around there.


In Comp Sci O() notation has a very specific meaning and "on the order of" does not approximate it. I think it was probably just a misuse of the notation.


In informal contexts it's also used as a fudge/handwave, purely as a questionable analogy that nobody is expected to take too seriously. `theptip` almost certainly knows that O(1000)=O(1), they were just being playful.


Exactly.


I have seen it used in many places, and it's understood as "order of." O(n) means the worst-case number of steps needed to complete an algorithm would be c·n + a. So it is a completely valid use, since O(1000) would mean it can be c·1000 + a, which actually means on the order of 1000.


This is also an inaccurate description of the property O().



https://en.wikipedia.org/wiki/Big_O_notation

> Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.

I.e., it's an asymptotic upper bound.
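Formally, f(n) is O(g(n)) iff there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0. A minimal teaching sketch of that definition (checking the bound over a finite sample, so it can refute a claim but never prove one):

```python
def bound_holds(f, g, c, n0, n_max=10_000):
    """Check f(n) <= c*g(n) on the finite range [n0, n_max).

    A finite sample can only refute a big-O claim, never prove it;
    this illustrates the definition rather than serving as a proof tool.
    """
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

# f(n) = 3n + 5 is O(n): the constants c=4, n0=5 witness the bound.
print(bound_holds(lambda n: 3 * n + 5, lambda n: n, c=4, n0=5))   # True

# n^2 is not O(n): no fixed c works; c=100 already fails by n=101.
print(bound_holds(lambda n: n * n, lambda n: n, c=100, n0=1))     # False
```

Note that the definition says nothing about "roughly 1000": any constant function is O(1), which is why O(1000) is formally the same class as O(1).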

It's also interesting to compare with Ω() (Big Omega, an asymptotic lower bound) and Θ() (Big Theta, which is big-O AND big-Omega): https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachm...

A good textbook on this subject is Introduction to Algorithms: https://www.amazon.com/dp/0262033844/ .


What I described is the same thing in layman's terms. "Worst case" is the colloquial phrase for an upper bound. And in that example, n was approaching 1000.

If you want to be a purist, the only fault in my definition is that instead of using a generic function I assumed a linear one; but that's for explaining the colloquial use.


It's just an asymptotic upper bound (though often used to imply tight bounds).

It's most commonly applied to worst case running time, but is often applied to expected running time ("hash table insertions run in O(1)"), space complexity, communication overhead, numerical accuracy and any number of other metrics.


Yep. Often people mean the narrower big-theta when they express complexity in terms of big-O.


"The letter O is used because the growth rate of a function is also referred to as the order of the function"

order of...order of

yeah I know there is no "growth"


You've taken an informal and sloppy summary of big-O notation, isolated a single English expression, and construed that a similar English expression used in a different context with entirely different meaning is an accurate approximation of the formal meaning. You would be incorrect.


I guarantee you can find dozens of examples of CS-type people using O() in non-algorithmic-complexity topics as "on the order of" or "approximately" right here on HN within the past 6 months. They aren't all misusing that notation - it is being co-opted into a more general lexicon, whether you like it or not.

And I wasn't saying that "on the order of" is an approximation of what O() actually is in CS, merely that that is how OP used it.


Geosynchronous satellite latency in best case: 250ms

LEO latency best case: 6ms

I've used those Hughes satellite connections before, I never got anything close to 250ms. More like 400ms.


The absolute minimum latency you'll ever see with geostationary is about 489ms. That's assuming 1:1 dedicated transponder capacity and something like a higher end SCPC terminal and modem, accounting for latency and modulation/coding/FEC by the satellite modems on both ends.

Consumer grade hughesnet stuff will vary anywhere from 495ms in the middle of the night to 1100ms+ during peak periods due to oversubscription.


This should probably tell you something about SpaceX's claims as well. The actual latency is never just the slant range; there's a ton of processing and network switching too.


I am pretty optimistic about SpaceX's claims for what the space-segment latency will be. If you look at the system architecture for current generation high-bandwidth Ka-band geostationary services, which has dozens of spot beams on North America, there's about 30 teleports spread out around the US and Canada. These allow Viasat and Hughesnet customers, and similar, to consume capacity in the same spot beam as the teleport they're uplinked from (vs the satellite cross-linking a set of kHz from one Ka-band spot beam to another). For example, customers in really rural areas of Wyoming are going to connect to a teleport that's in Cheyenne, which will usually be in the same spot beam. Sites in Cheyenne near the railway have really good terrestrial fiber capacity for an earth station operator to buy N x 10 or 100GbE L2 transport links to the nearest major city.

It would be technically possible, but uneconomical and an inefficient use of space segment transponder kHz to have customers in Wyoming moving traffic through a teleport in the Chicago area. Here's an illustration of Ka-band spot beams on a typical state of the art geostationary satellite:

http://www.southwestsatelliteinternet.com/images/Ka-band-spo...

Applying the same concept to starlink, telesat's proposed system, and oneweb, if they build a number of teleports geographically distributed near rural areas, it will allow individual satellites to serve as bent-pipe architecture from CPE --> Teleport, within the same moving LEO spot beams, or to have customer traffic take only one hop through space to an adjacent satellite before it hits the trunk link to an earth station. For example customers in a really rural area of north Idaho along US95 might "see" a set of moving satellites that also have visibility to an earth station in Lewiston, ID, where carrier grade terrestrial fiber links are available. Or a customer in a remote mountainous area of eastern Oregon may uplink/downlink through a teleport in Bend.

The ultimate capacity of the system will be determined by how few hops through space they can get the traffic to do. Since every satellite will be identical and capable of forming a trunk link to a starlink-operated earth station, when it's overhead of it, they have an incentive to build a large number of earth stations geographically distributed around the world.

It's basically the same idea as o3b's architecture but at a much smaller scale.


I don't doubt the latency in space numbers. What I don't believe is using a theoretical distance to compute latency. As you said, a LOT of that latency can come from scheduling inefficiencies and congestion. Each of their satellites has a relatively small amount of bandwidth, so if you happen to be in a beam with a lot of people, you'll be hit hard by this. As far as I know, their satellites are not capable of steering beams, and rely purely on the placement directly down from where they are.

Another consideration: adding another 50ms to GEO latency isn't really going to change anyone's opinion. It's still targeted towards streaming, and latency doesn't matter as much since they're not targeting real-time gamers. SpaceX needs the latency to be very low to hit that market. There's a world of difference going from a 30ms ping to an 80ms ping, and once you're past a certain point, it puts you in the same camp as GEO.


> As far as I know, their satellites are not capable of steering beams, and rely purely on the placement directly down from where they are

This is wrong. From their FCC filing (1), they use AESA phased-array antennas, and each satellite is capable of simultaneously maintaining "many" (unspecified) steered beams that are <2.5 degrees wide.

Also, the receiver is capable of distinguishing between multiple beams covering it so long as there is more than 10 degrees of angular separation between them from its point of view. If I understood it correctly, this will allow nearly every visible satellite at the same orbital height (except the ones very nearest the horizon) to communicate with targets that are geographically very near each other at full bandwidth. After the very first phase has been launched, they can provide a total of ~500 Gbps of downlink bandwidth to any spot target between 40 and 60 degrees latitude. The later additions at high orbits help with total capacity, and especially with targeting multiple targets relatively near each other, but do not provide more bandwidth per city, as that is limited by the 10-degree angular separation requirement.

The VLEO (330km-ish) constellation will help with that by reducing the size of each spot.

(1): https://cdn3.vox-cdn.com/uploads/chorus_asset/file/8174403/S...


One noteworthy item from the filing is that they intend to build 200 gateway earth stations just within the continental United States, which means that the vast majority of satellites will be functioning as bent-pipe repeaters. I don't think there will be a lot of traffic traveling satellite-to-satellite in a multiple-hop arrangement. 200 sites for their Ka-band trunk links from satellite to earth station mean that a CPE terminal in, for example, rural NW Montana might have 25-30ms latency to a gateway in Spokane, and from there the latency to internet destinations will be all fiber-based, same as with any existing ISP.

If I had to guess at the earth station siting, they are picking medium-sized cities with decent terrestrial fiber connectivity that fall within the same satellite view footprint as adjacent rural areas. An earth station in Boise, say, may serve mountainous remote areas of ID.

This 200-earth-station figure also leads me to believe that the first manufacturing run of satellites may not have any satellite-to-satellite trunk-link ability at all, but that they will ALL be bent-pipe architecture. This means that if SpaceX wants to serve a particular area, they need an earth station on terrestrial fiber in the same region, simultaneously visible to satellites and end users.


I think that's a very good observation, especially given the recent news that as part of musk firing some of the leadership on the project, he wants the satellites to be significantly simpler.


If that's the case then the problem becomes exponentially more complex than I was thinking, and the technical challenges are going to be far harder than I'd first thought. Doing frequency reuse and interference mitigation at the rates they need to if they're going to steer the beams is enormously complex.


I'm in agreement about the technical challenge - doing it with "low cost" phased array CPE is challenging. If I had to engineer it I'd design something with a pair of highly shielded, tight focal axis parabolic antennas (basically a miniaturized o3b terminal), like two 60cm size on two-axis tracking motorized mounts. But there's no way that sort of setup with a unique rf chain for each of two dishes would be under $5000.


One of my favorite industry analysts just wrote about this. You might find it interesting:

http://tmfassociates.com/blog/2018/11/09/the-new-new-space-t...


Even if it is 150-250ms to terrestrial internet connections, it will be a lot better than consumer-grade geostationary. The unfortunate economics of launching 3500-6000kg things into geostationary orbit mean that transponder capacity on current satellites used for consumer-grade VSAT services is horribly oversubscribed. You're not going to get very good satellite service with the current cost structure and tech for $80 to $150/mo on a 3-year contract. One needs to look at figures like $400-500/mo, and a 1.8m antenna for more complex modulation, before VSAT access is really "good".

If the space segment only adds 120ms to what would be an otherwise same latency rtt ping, it's not so bad, people in the US have been spoiled by having CDNs very near all major ix points.


I disagree with that. I think there's a latency line beyond which certain functions are no longer possible. Real-time gaming is one of those, and VoIP as well at a slightly higher latency. You are at a complete disadvantage playing real-time games if your ping is 150ms compared to someone else at 30ms, to the point where you may as well not play those types of games.

I'm not sure what the justification is for assuming that Starlink will not be horribly oversubscribed, either. Last I checked it was supposed to be about 32Tbps with all satellites operational. A substantial amount of that is completely wasted over water, so the effective capacity for customers who actually have the money for SpaceX to generate revenue is very small. The types of services people need in remote areas, whether on a plane or in a village that has never had internet, are not those that require low latency. They are either streaming media (plane) or web browsing. In that sense, I don't see how Starlink has an advantage there.

I would be shocked if they could deliver something better than cable on DOCSIS 3 to even 10% of cable customers with comparable service. My guess is it will be tailored more to high-paying customers that happen to not be able to get decent cable.


Probably will be radically oversubscribed in order for the economics to work, and will be a shittier service than a properly implemented vdsl2, g.fast or docsis3/3.1 last mile (nevermind gigabit class gpon or active Ethernet ftth), but significantly better than small geostationary service vsat. And will be higher latency and with worse GB/mo bandwidth quotas compared to a modern technology WISP for last mile.

A lot of the technology press has misunderstood the most desirable applications and locations for it. People think it's going to compete for residential internet service in a suburb of a city like Portland, Sacramento, or Denver. If you can get 300 megabit per second DOCSIS3 service in one of those locations, that will be drastically better. Where it is going to be a game-changer is all the locations that right now depend on highly oversubscribed geostationary small-VSAT services, extremely rural areas where there isn't even a single last-mile terrestrial wireless ISP, and ships in the middle of the ocean, if the gigabyte-per-dollar cost is significantly less than Inmarsat or other options.


I think I understand what you are saying, but I'm confused at this comment compared to your others. If you believe, as I do, that the user terminal is going to be extremely expensive, how will that compete with the current small vsat terminals?

I agree that the service could be better in theory, but at the same time, existing satellite internet service also could be better by taking on fewer customers and not being as congested. But that's a cost trade-off. And in this case, I believe SpaceX has a higher cost per customer to recoup, so it seems in their best interest to also be congested to increase revenue.


I think that there is a good chance the terminal will be expensive, but the cost hidden or eaten by spacex to gain market share. Building a phased array thing with sufficient gain that can track two LEO satellites can't be cheap. But maybe their terminal hardware engineers have come up with something truly revolutionary, and we will all be surprised. I am hoping but skeptical that it could have similar hardware costs to a viasat small ka-band vsat terminal, in the range of $800-900 for rooftop equipment + modem.


> Last I checked it was supposed to be about 32Tbps with all satellites operational.

It will be ~80Tbps after the first three phases (the LEO constellation), ~240Tbps after VLEO.

I agree with you that they probably cannot offer enough bandwidth to compete with residential internet in densely inhabited areas.(1) The system is really interesting in less densely inhabited places, and for backhaul. The complete system has more transcontinental bandwidth between almost any two (distant enough) places than all the submarine cables between them put together. This alone will likely pay for the whole system, with plenty to spare.

(1) With a few exceptions. After the full constellation is up, New Zealand will have ~30 times more downlink capacity than the entire bandwidth use of the country as of right now, and will also have tens of times more connecting capacity with North America and Australia than it currently has. But that requires a country of only 6 million in the starlink sweet spot that gets all of the bandwidth of all visible satellites to the east of it.


I agree. I think the capacity on paper is very impressive, and the transcontinental capacity will be useful. I'm just more skeptical that they'll easily find a residential market willing to pay the price it will cost. Satellite internet is more expensive than cable, and I don't see any way that will change, considering their hardware will be more expensive than current satellite internet hardware (just based on the phased array). 1Gbps cable isn't unheard of these days, but the question is whether there's really a market for it. At some point you are past the speed where it has any material effect on what you're doing, and SpaceX needs a small number of customers paying a large amount of money due to the equipment costs. What they can't have is lots of customers on 50Mbps plans with identical equipment to someone paying 10x more for 1Gbps.


> LEO latency best case: 6ms

Light travels about 1800km in 6ms, but that's just one way. Straight up and straight down at 550km is 3.6ms.
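Spelling out that arithmetic (straight-line paths at vacuum light speed, satellite directly overhead, zero processing delay, so these are hard geometric lower bounds, not expected figures):

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def bent_pipe_rtt_ms(altitude_km):
    """Minimum round-trip time for user -> satellite -> gateway ->
    satellite -> user (four altitude traversals), assuming the satellite
    is directly overhead and ignoring all processing, queuing, and modem
    delays, which dominate in practice.
    """
    return 4 * altitude_km / C_KM_S * 1_000

print(round(bent_pipe_rtt_ms(550), 1))     # LEO at 550 km: ~7.3 ms
print(round(bent_pipe_rtt_ms(35_786), 1))  # GEO: ~477.5 ms
```

The ~477 ms geometric floor for GEO lines up with the ~489 ms figure quoted elsewhere in the thread once modem coding overhead is added; real LEO latency will likewise sit well above its geometric minimum.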


Light is slower in the atmosphere; did you adjust for that?


Speed of light in air is still ~0.9997c (and approaches 1 as altitude increases), so it's a much less noticeable difference than it is for copper or glass


Cheaper how? I would assume with a lower orbit they experience more drag and deorbit faster meaning they need to be replaced more often.

Maybe using the ones in lower orbit to cover more densely populated locations?


The original orbits would take ~100 years to decay, which is way longer than the life of the satellite. The new orbit is around 5 years (not taking into account maneuvers from the fuel on board). Makes a lot of sense, especially since it looks like they'll be iterating the design as they go (two that jumped out at me: initially not all satellites will be taking advantage of both Ku and Ka bands, and not all satellites will have phased array antennas)


That's really interesting, thank you for sharing.


This is hollywood material.


No idea what is going on but interested in the commotion...


Peter - Can a large company like Amazon sponsor an individual for a green card directly, without going through an H1B visa?


Yes, any company can and the size of the company isn't the most critical factor.


According to my understanding, a GC has nothing to do with an H1B visa. How will you work in the US while you obtain your GC? You can work at their foreign offices until your priority date is current and opt for consular processing.


Would a TN visa allow you to do this? As in obtaining a GC while staying with a TN? Thanks


All I could think of is fake news... I used to read the NYT religiously and believed their news was pretty good. Now, I just don't believe any newspaper.

One example: http://nymag.com/daily/intelligencer/2018/05/trump-childrens... now read this: https://www.wsj.com/articles/congress-leery-of-trumps-cuts-t...


You start off talking about how you used to trust the NYT, but you link to something from New York Magazine (an article ostensibly contradicted by the WSJ article you link to)? NYT and NYM are not the same thing, though the NYT does have a Sunday magazine, which largely operates independently of the general NYT staff.


Could you elaborate on how this shows NYT's journalism in a negative light, aka fake news?



Please go on for at least a paragraph, if you can. I don't think these pictures say much.


ID one or more people not mentioned here:

http://gawker.com/here-are-some-top-n-y-times-editors-and-st...

and I will say more than it's a frat, a cult, a club. It has nothing to do with informing people; quite the opposite, they are a flock of mockingbirds.


What? Am I supposed to go ID some people off images on the internet? Is that your idea of supporting your argument?


Why bother? I could post a pic of the people writing the articles you read imitating Heaven's Gate and you would still want me to "support" my assertion that all is not well at the NYT. What do you need? Mock murder reenactments?

