
Focus on automating human labour as much as possible. Problems in the world all trace back to lack of resources. A rich man can afford to be 'nice to the environment', a poor man can't.

The problems below are all easier machine learning problems than self-driving cars, yet no big tech companies or national initiatives are focused on aggressively applying machine learning to them.

Likely a couple of billion dollars, a year of work, and a 100-person lab would 'solve' each specific problem.

1. A robot that cooks meals and cleans the dishes afterwards. That saves billions of hours daily.

2. Robotic self-cleaning toilets. Saves another billion hours daily.

3. Robots that can dig up dirt and build a house from that dirt.

4. An app that can teach anyone anything like a teacher would - literally - a talking avatar and cameras and voice output and machine learning powered dialogue.

5. Home manufacturing 'box' that can make 95% of anything that anyone typically wants (some arrangement of 3D printer/laser cutter/PCB placement/wood router machines etc. that can take plastics/wood/metal/electronic components and output a gadget/furniture etc.)

The above 5 give the equivalent of a 'basic income' for everyone (if distributed to everyone, and assuming the finished gadget is about the size and complexity of an automobile).

Then the input/output problem of energy, raw materials, and waste needs to be solved. Disregarding scientific advances like fusion power (which might require more than $100 billion, or may not be possible), a drone distribution platform could handle moving the energy and matter (inputs and waste). To do this (as above: 100 people, a year, $1 billion):

6. A p2p aerial surveillance system for air traffic management of millions of drones. Basically, a sky-pointing camera gadget that analyzes and broadcasts what it sees. Millions/billions of these camera gadgets airdropped every few hundred meters.

7. A drone that can carry 100 kilos and drop-ship materials/waste p2p using the p2p air traffic control. The drones are battery operated, with a range of a couple of kilometers.

8. A drone that can mid-air 'refuel' the above drones. Basically a flying battery that recharges the larger cargo drones.

Summary - 'gadgetize' every problem (it becomes a self-contained mechano-electrical desktop/fridge-size thing that a team of 100 people can rapidly iterate on) and throw machine learning at it as heavily as possible. Seek to eliminate human labour as fast as possible.


1 rich person saving his money so his 3 children never have to work is bad for the economy.

It's better if the 1 rich person is forced to consume that money in his lifetime. By doing so, the money is redistributed back into the economy, and his 3 kids are also productive.

In scenario 1, only 1 out of 4 people works. In scenario 2, 4 out of 4 people work.

Also - it's hypocritical to preach 'pull yourself up by your bootstraps' to children born poor and 'guaranteed basic income' to children born rich. That is more dehumanizing than not being allowed to pass on an inheritance.


It's hard for me to imagine being so full of envy that you would like to see the children of rich people stripped of everything they have. Equality means nothing if you achieve it by pulling everyone down to the lowest level.

> 1 rich person saving his money so his 3 children never have to work is bad for the economy.

"Your rights are bad for the economy, we are taking everything from you. Back to work, pleb".


> Its hard for me to imagine being so full of envy that you would like to see the children of rich people stripped of everything they have.

I think you misread the sentiment. What a lot of us would like to see is a more meritocratic system, where children start off on more equal footing and either earn their wealth or not. But inherited wealth works against that. It is difficult to listen to arguments against taxing the wealthy heavily "because they earned it" while at the same time hearing support for what amounts to aristocracy.


Right, but you're trying to achieve your meritocracy by pulling people down instead of helping people up. This isn't a footrace, and it's not a zero sum game. When you say something like "we need to take money away from rich kids so they have to work like everyone else", you're doing nothing but harm. You may feel the people you're harming have it coming, but that's still what you're doing.

The idea that we have to come up with a good reason _not_ to heavily tax people is backwards. The only reason you need is that people have rights. A man owns what is his and you have absolutely no right to it.

This is all beside the point anyway. Increasing tax revenue will not solve any of the problems you are trying to solve. As I said elsewhere, you could tax 100% of every billionaire in this country, leaving them homeless, and you would not have collected enough money to fund the federal government for even one year. You could sell off your entire "aristocracy" and it wouldn't make a dent. The strategy of "just spend more money" does not work.


> Right, but you're trying to achieve your meritocracy by pulling people down instead of helping people up.

And how are we to do that without taking money from somewhere and using it to help people up?

> The idea that we have to come up with a good reason _not_ to heavily tax people is backwards. The only reason you need is that people have rights. A man owns what is his and you have absolutely no right to it.

Case in point. You can't take money away from rich kids via inheritance tax (nevermind that they'd still have innumerable advantages from growing up in a wealthy household), you can't tax the wealthy, but you're somehow supposed to help poor children up anyway. Who's paying for that?

Furthermore, I disagree with the statement on the grounds that national infrastructure and protections are doubtless large contributors to anyone being able to amass that kind of wealth in the first place, therefore it is not unreasonable to expect that a portion of that wealth be contributed back.


Children already start off pretty well in most parts of the world. Every child gets an education, for example.

I think you would find that to make everything equal, you would have to take children away from their parents at an early age. Because having caring, loving parents is also an advantage. At least so far, not all kids in a school turn out the same, even though they all have the same teachers and lessons. It must be the parents that make the difference.

I don't think that would be desirable. I don't want my children to be raised in a government institution to make them exactly the same as everybody else, just for the sake of an arbitrary metric ("equality" measured in some arbitrarily chosen terms).

In fact why not get rid of parents altogether, and create children in labs? I bet that is the socialist dream fulfilled.


"Back"?

(Sorry about the cheap comment - I couldn't resist)


You misunderstand the nature of money. Money means society owes something to the holder of the money. By not spending money, society is temporarily richer.

Also, sorry, but I don't care how productive other people's kids are, or "the economy". I care about the well-being of my kids. Yes, I want everybody to be happy if possible, but my kids have priority to me.

And what exactly is "hypocritical"? What are you referring to? You assume everybody should have the same starting position in life, and I reject that notion.

Children of course are not at fault for the actions of their parents (like if they are drug addicts and have an unwanted child they can't afford, the child is not to blame - but neither are other people's children or parents). But that doesn't mean every child should automatically get the same "starting money". This is not a game of monopoly.

As I said - people compete to give their children the best odds. That is not even unique to humans, it is true for all of nature.

If you take that away, why bother with anything? Why should you bother to get a good job to be able to support your family? Just fuck and live the good life, and dump your children on society to be taken care of.

Already in the west everybody gets a pretty good deal in the form of an education.


When rich person pays others to, let's say, build a luxurious castle, the money is both redistributed back to the economy AND the castle is there to use, for a few centuries at least. That's how investing* makes the society wealthier over time, and promoting consumption instead is nuts.

* There is a caveat that it should be a new investment rather than a purchase of the existing one, the latter is zero-sum.


In the latter case, the person who sold the castle can use the money to build a new castle.


Sure, but they can also choose not to do that, so the purchase doesn't necessarily increase overall wealth.


Do they need to be typical houses?

About 10 million cars are produced in the USA every year. An RV/Trailer type thing is like a car - so quite easily you could imagine producing that many of these 'tiny houses' every year with car style factories.

About 600k people are homeless in the USA. That's about equal to the number of RV's produced every year.

So that solves 'physical' homelessness quite quickly, if there were an effort to. (That disregards that some homeless choose to be homeless out of mental illness etc.)


Weighing the Justice :

'I like the way my lawn is, and I don't like construction noise'

vs

'I can't get a job because it's too expensive to rent or buy where the jobs are, so I live in poverty' (or I can't start a family because I can only afford a room-share, or any number of other effects of NIMBYism)


So if people make babies, they are entitled to take your stuff away? I think that is too simplistic.


Basic Income is the same thing as Welfare + Taxation, but done more efficiently.

Prominent economist - Greg Mankiw https://www.youtube.com/watch?v=4cL8kM0fXQc

Removing the needs assessment also removes the geographic location requirement, and that allows deflating the land-value inflation that is the real cause of inequality today.


The critical difference is the amount per person and the number of people included.

Welfare spending for 2020 is estimated to be around $1.9T, around $1.0T of which is means-tested, and $0.7T is Medicaid. Total federal receipts were around $3.4T.

If you believe that the entire adult population should receive the equivalent of a $15/hr minimum wage job that was worked 2000 hours per year, you need to fund $30K/adult. There are around 250M adults living in the US, meaning that paying each of them $30K would require more than twice the total federal tax receipts for this program alone. If you restrict it to citizens only (rather than all lawful residents), it's still basically double.

Even assuming you could cut every other federal function in half, you would still need to raise taxes by +150% (to 250% of current) just to pay for a $15/hr equivalent to adults only. Add in a smaller amount per child and you could be looking at tripling current tax levels. With the top marginal rate already higher than 33%, it's obvious that those tax increases will not hit only "the rich".

Then, does $30K/adult provide for everyone to "enjoy a more fulfilling life" given the inflation that would occur to pay for the UBI? I think it does not.

There is no doubt that UBI and taxation is more efficient; it's the simple multiplication that is a problem.
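The multiplication above is easy to check. A back-of-the-envelope sketch using the comment's own figures (approximate adult population and 2020 federal receipts, not official budget data):

```python
adults = 250e6             # approx. US adult population (comment's figure)
ubi_per_adult = 30_000     # $15/hr * 2000 hrs/yr
federal_receipts = 3.4e12  # approx. 2020 total federal receipts (comment's figure)

ubi_cost = adults * ubi_per_adult  # total annual cost of the program
ratio = ubi_cost / federal_receipts

print(f"UBI cost: ${ubi_cost / 1e12:.1f}T, {ratio:.1f}x total federal receipts")
# → UBI cost: $7.5T, 2.2x total federal receipts
```

So even before adding anything for children, the program alone exceeds twice current total receipts, which is the "simple multiplication" problem.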


Something is wrong with that argument. Welfare + Taxation = Basic Income.

Already we have it, by definition - it's an equation.

So some part of your calculation is wrong.

Likely you're over-inflating the amount paid out (it's not going to be $15/hr per person) and undercounting current payments (maybe disability/housing/local programs).

UBI given at equivalent levels to today's welfare would be strictly cheaper because of the reduced administration cost.

(This, though, is different from the question of whether payments should be higher and paid for by increased taxation.

And, off-topic, but taxation doesn't need to be increased - it can be paid for by helicopter money, and actually not cause inflation, because of geographic mobility deflating that aspect of the economy.)


Constraint programming is the new way, with Minizinc as the language. OR-tools from google is the best current solver.

https://www.minizinc.org/
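For a flavor of what such a model looks like, here is a toy constraint problem brute-forced in plain Python (the variables, bounds, and constraints are made up for illustration); in practice you would state only the constraints in MiniZinc and let a solver like OR-tools search intelligently:

```python
from itertools import product

# Toy CSP: x, y, z in 0..9, all different, x + y == z, x < y.
# A real solver (OR-tools, driven from MiniZinc) would prune this
# search with constraint propagation instead of enumerating all
# 1000 candidate tuples as we do here.
solutions = [
    (x, y, z)
    for x, y, z in product(range(10), repeat=3)
    if len({x, y, z}) == 3 and x + y == z and x < y
]

print(len(solutions), solutions[0])  # → 16 (1, 2, 3)
```

The declarative part (the `if` clause) is all you would write in MiniZinc; the search strategy is the solver's job.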


SAT, SMT, and constraint programming lie on a continuum. SMT is SAT plus machinery from constraint programming. I'm not an expert on this, just relaying info from this video. According to him, constraint programming will win in the long run because constraint programming has larger structures which can be exploited with global constraints (SAT sees only a flat sequence of clauses).

https://www.youtube.com/watch?v=YVbbNeM74lc


Vowpal Wabbit has been doing this 'hashing trick' since the 2000s.

It also has feature interactions, which are the same thing as a layer in transformers (an all-against-all matrix).

So it seems like they are still catching up to where John Langford and crew were over a decade ago.

And, the vowpal wabbit approach is extremely fast to train because it's only doing stochastic gradient descent on a linear function - linear regression. Transformers are much slower to train.

EDIT: Downvoters, please see my last leaf to see why they're effectively the same. The guy responding here seems unfamiliar with all the functionality of vowpal wabbit.


The Google paper's hashing has, as best I can see, nothing to do with the Vowpal Wabbit's 'hashing trick.'

The VW hashing trick is about hashing your input data (ie: words, fields, etc.) into an array to lower storage requirements and deal with novel data at run time.

The google paper is about ordering the intermediate states of the neural network (ie: vectors) while preserving distance. This is done so you can chunk the resulting ordered list and perform computations on individual chunks (and their neighbors).

The only thing in common I see is the fact they both use the word hashing.


They are doing the same thing - using less memory by hashing.

The hashing trick in VW hashes multiple words into one integer. That's not the same as the Reformer, but it is similar to how the Reformer puts similar vectors together.

With VW's ngram/skipgram features, you get the same kind of effect - similar strings hash into the same hash.

So locality-sensitive hashing is roughly the same thing as ngrams/skipgrams on strings plus the hashing trick.
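A sketch of that combination (n-grams followed by the hashing trick). The `n_bits` value and the CRC32 hash are illustrative choices, not VW's exact internals:

```python
import zlib
from collections import Counter

def hashed_features(tokens, n_bits=18, max_ngram=2):
    """N-grams followed by the hashing trick: every feature string is
    mapped into a fixed 2^n_bits index space, so inputs that share
    n-grams also share weight-vector indices (at the cost of some
    collisions)."""
    dim = 1 << n_bits
    feats = list(tokens)
    for n in range(2, max_ngram + 1):
        feats += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    # Sparse representation: hashed index -> feature count
    return Counter(zlib.crc32(f.encode()) % dim for f in feats)

x = hashed_features("the cat sat the cat".split())
```

Repeated features (here the bigram "the cat") land on the same index, which is the memory-saving collision behavior the comment is describing.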


Except in Google's paper the hashing does not directly reduce memory usage in any way. It's a lossless operation on the original vectors unlike VW's lossy operation. Google's representation allows for memory reduction down the line but those mechanisms have nothing to do with hashing.


Let's think through this clearly.

Locality-sensitive hashing is a way to put similar vectors into the same buckets - by omission etc. It does this by hashing, but the intent is to approximate nearest neighbours.

Skipgrams/ngrams turn features into other features by omission etc., and so make similar things the same. The hashing trick then reduces memory usage.

So yes, you're right the hashing in locality sensitive hashing is different in intent, but my point is, that both these approaches are designed to be more memory and compute efficient.

And vowpal's feature interactions give you transformer layers.

Add all of these together, and they have about the same net effect.


You keep insisting that they're the same when they're not, and then you try to subtly expand your original claim of "using less memory by hashing" to "to be more memory and compute efficient" (emphasis mine), just to force them into the same bucket.

Yes, obviously locality sensitive hashing is a form of hashing. The fact that it's locality sensitive is important for this application, but you'd rather ignore that and insist on labeling them as the same thing just because they're both hashing.


http://matpalm.com/resemblance/simhash/

The SimHash algorithm, the LSH I knew about (which I mistakenly thought was LSH in general), works exactly like VW. It is ngrams + hashing.


Downvoters, please see

http://matpalm.com/resemblance/simhash/

https://en.wikipedia.org/wiki/SimHash

SimHash, a type of locality-sensitive hashing - applying hash functions to ngrammed data.

That is exactly what Vowpal Wabbit does.
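A minimal SimHash over character n-grams, to make the "ngrams + hash" claim concrete (MD5 stands in for the hash function here; real implementations use faster hashes, and the test strings are arbitrary):

```python
import hashlib

def ngrams(s, n=3):
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def simhash(tokens, bits=32):
    # Each token's hash casts a +1/-1 vote per bit position; the
    # majority sign of each position becomes that bit of the fingerprint.
    votes = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

near = hamming(simhash(ngrams("the cat sat on the mat")),
               simhash(ngrams("the cat sat on a mat")))
far = hamming(simhash(ngrams("the cat sat on the mat")),
              simhash(ngrams("an entirely unrelated string")))
```

Strings sharing most of their n-grams share most fingerprint bits (`near` is small), while unrelated strings do not - the locality-sensitive property.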


I'm going to write this out more clearly, because I'm still getting downvotes for my correct answer.

Why neural networks? https://en.wikipedia.org/wiki/Universal_approximation_theore...

Can polynomials do this? (Yes) https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theo...

What is transformer and attention? https://pathmind.com/wiki/attention-mechanism-memory-network

Attention = Polynomial (x2,x3 etc.)

Polynomial = interaction. VW flag -interaction

1 layer transformer = xx. (x^2)

2 layer transformer = xxx. (x^3)

3 ... etc

What is reformer? Transformer where LSH is applied.

One type of LSH is SimHash. ngrams of strings, followed by 32 bit hash.

Vowpal Wabbit -n flag for ngrams.

vw -interact xxx -n2 -n3 and you get ngrams + 32 bit hash doing SGD over a vector.

This vector is equivalent to a 2 layer reformer.

Non-linear activation is not needed because polynomials are already nonlinear.

So vw + interact + ngrams (almost)= reformer encoder. (if reformer uses SimHash, then they are identical).

Transformer/Reformer have an advantage, the encoder-decoder can learn from unlabeled data.

However, you can get similar results from unlabeled data using preprocessing such as introducing noise to the data, and then treating it as noise/non-noise binary classification. (it can even be thought of as reinforcement learning, with the 0-1 labels as the reward using vw's contextual bandits functionality. This can then do what GAN's do - climb from noise to perfection).
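The claim that interaction features make a linear learner non-linear can be checked on a tiny example: plain linear regression cannot fit XOR, but add the pairwise product feature (the spirit of VW's interaction flags; the SGD loop and learning rate here are illustrative, not VW's implementation) and SGD on a linear model fits it exactly:

```python
import itertools

def interact(x):
    # Augment features with all pairwise products (degree-2 polynomial
    # features) plus a bias term - analogous in spirit to VW interactions.
    return x + [a * b for a, b in itertools.combinations(x, 2)] + [1.0]

def sgd_fit(data, lr=0.1, epochs=500):
    # Plain squared-loss SGD on a linear model over the expanded features.
    w = [0.0] * len(interact(data[0][0]))
    for _ in range(epochs):
        for x, y in data:
            f = interact(x)
            err = sum(wi * fi for wi, fi in zip(w, f)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

# XOR: not linearly separable, but x1 + x2 - 2*x1*x2 fits it exactly.
xor = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
       ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]
w = sgd_fit(xor)
preds = [sum(wi * fi for wi, fi in zip(w, interact(x))) for x, _ in xor]
```

The learned weights approach (1, 1, -2, 0): the non-linearity lives entirely in the product feature, with no activation function needed, which is the point being argued.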


> This vector is equivalent to a 2 layer reformer.

There is no feed forward layer, no skip connections and no layer normalization in VW. In the reformer, hashing is followed by dot products. In VW hashing just collides some tokens, followed by a linear layer.

Also, 2 layers of transformer is a little shallow. In practice it's 12-14 layers or more.

In order to be equivalent, there would need to be equally good results on translation from VW, but I've never seen it used for translation. I'm wondering why?


- You said that in the transformer, hashing is followed by dot products.

- you were doing dot products at each layer to introduce non-linearity in transformer (and neural nets in general). Polynomials are already non-linear, so you don't need that. Transformer and vw -interact are polynomials. Maybe the feedforward layers and skip connections are not actually needed.

- 12 layers ? vw -interact xxxxxxxxxxxxx is 12 layers. You need a lot of memory for that, but in principle vw interact can do any number of them

These results are coming from google and their massive compute resources. If they ran vw with -interact x^13 they might get similar results.

We're really talking about polynomial approximation here, both transformer and vw used in this way. And that is in theory able to approximate any continuous function (just like neural networks).


I guess the simpler proof that they are the same thing would be: Do they work the same?


Domain randomization works well for robotics tasks.


references?



Good and convincing example. Thanks for this robotics perspective.


Random forests split the features. This splits the outcomes.

So each tree in RF only looks at a few features. In this, each model looks at all the features.

RF can handle multiclass problems of tens to hundreds (maybe thousands). This MACH algo can handle multiclass problems of millions/billions (extreme classification).
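A sketch of the bucketing-and-decoding idea behind that kind of extreme classification (the bucket count `B`, repetition count `R`, and CRC32 bucketing are illustrative choices; the real models output per-bucket probabilities, replaced here by given score arrays):

```python
import zlib

B, R = 16, 3  # buckets per small model, number of independent repetitions

def bucket(label, rep):
    # Each repetition hashes the class label into one of B buckets with a
    # different seed, so label collisions differ across repetitions.
    return zlib.crc32(f"{rep}:{label}".encode()) % B

def decode(scores_per_rep, labels):
    # Recover the most likely original class: sum, over the repetitions,
    # the score assigned to the bucket that class hashes into.
    return max(labels,
               key=lambda lab: sum(scores[bucket(lab, r)]
                                   for r, scores in enumerate(scores_per_rep)))

# Pretend each small model put all its score mass on the true class's bucket.
true_label = 7
labels = range(1000)
scores_per_rep = [[1.0 if b == bucket(true_label, r) else 0.0
                   for b in range(B)] for r in range(R)]
recovered = decode(scores_per_rep, labels)
```

Each small model only ever solves a B-way problem, yet combining R of them distinguishes up to roughly B^R classes, which is how millions of classes become tractable.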

