Hacker News | WORLD_ENDS_SOON's comments

Deep learning combines linear functions, but the way in which they are combined is always nonlinear. If the way in which they were combined were linear, the resulting function would also be linear (a linear combination of linear functions is linear), and you'd have linear regression. Specifically, the final layer of a deep neural network is often linear, but the nodes in the middle layers are often Rectified Linear Units (ReLUs), which apply a nonlinear "max" function to a linear function. Without these "max" functions you'd be able to flatten all the layers into one. Other common nonlinear functions you'll find in a deep neural net are sigmoid functions like tanh. These functions are sometimes called "activation functions".
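A tiny sketch of the "flattening" point above: two linear layers with no activation collapse into a single matrix, while inserting a ReLU between them breaks that equivalence. (This is my own toy illustration, not code from any particular framework.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" as weight matrices (biases omitted for brevity).
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

# Without an activation, two linear layers collapse into one:
# W2 @ (W1 @ x) is exactly (W2 @ W1) @ x for every input x.
two_layers = W2 @ (W1 @ x)
flattened = (W2 @ W1) @ x
assert np.allclose(two_layers, flattened)

# With a ReLU (the "max" function) in between, the composition is no
# longer a single linear map, so the layers can't be flattened.
relu = lambda z: np.maximum(z, 0.0)
nonlinear = W2 @ relu(W1 @ x)
```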


Yes, there is a non-linear function between each linear function. The alternating back and forth between linear and non-linear is important.


Have you tried out the ChucK language? I haven't tried Extempore yet, but it looks like the way they handle timing could be similar. ChucK has a very elegant system for synchronizing synthesizers and sequencers where you can write code in a simple imperative style (within a sequencer you can "wait" until the next note needs to be played). The language runtime keeps all of the coroutines running in sync with a global clock. I don't know of other languages that use this style of timing, so I'd be very curious to hear how they compare.
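To make the timing style concrete, here is a rough Python sketch (not ChucK code) of the idea: each sequencer is a coroutine that "waits" by yielding how long to sleep, and a scheduler advances a shared clock, waking coroutines in time order, which is roughly what ChucK's shreduler does with sample accuracy.

```python
import heapq

def metronome(name, beat_ms, events):
    """A toy sequencer coroutine: record a note, then 'wait' one beat."""
    while True:
        events.append(name)  # play a note "now"
        yield beat_ms        # wait until the next beat

def run(shreds, until_ms):
    # Priority queue of (wake_time, tiebreak, coroutine): the scheduler
    # owns the global clock, so all coroutines stay in sync.
    queue = [(0, i, s) for i, s in enumerate(shreds)]
    heapq.heapify(queue)
    while queue:
        now, i, shred = heapq.heappop(queue)
        if now >= until_ms:
            break
        wait = next(shred)
        heapq.heappush(queue, (now + wait, i, shred))

events = []
run([metronome("kick", 500, events), metronome("hat", 250, events)], 1000)
# kick fires at t=0 and 500; hat at t=0, 250, 500, and 750
```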


I think it is because many labs in CS departments do very little research involving human subjects (e.g. a machine learning lab or a theory lab), so within those labs there isn't really an expectation that everything goes through IRB. Many CS graduate students likely never have to interact with IRB at all, so they probably don't even know when it is necessary to involve IRB. The rules for what requires IRB involvement are also somewhat open to interpretation. For example, surveys are often exempt depending on what the survey is asking about.


Machine learning automatically being exempt is a huge red flag for me. There are immense repercussions for the world on every comp sci topic. It's just less direct, and often "digital," which seems separate but isn't.


It's very sad Rosenblatt did not live to see the resurgence of neural networks and his perceptron algorithm. The perceptron algorithm isn't exactly what we use to train neural networks today, but it's similar enough in theory and practice that it still feels very fundamental to understanding machine learning.
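For anyone curious, the perceptron algorithm itself is only a few lines: on each misclassified example, nudge the weights toward (or away from) that example. Here's a minimal sketch on a toy linearly separable problem (my own example data, labels in {-1, +1}).

```python
import numpy as np

def perceptron_train(X, y, epochs=20):
    """Rosenblatt's perceptron rule: for each mistake, move the
    decision boundary toward the misclassified example."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified (or on the boundary)
                w += yi * xi
                b += yi
    return w, b

# Toy separable data: an AND-like split of four points.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
w, b = perceptron_train(X, y)
preds = np.sign(X @ w + b)
```

The convergence theorem guarantees this stops making mistakes after finitely many updates whenever the data is linearly separable, which is part of why it felt so promising at the time.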


And at least in the CogSci space, Rosenblatt's work was instrumental for the PDP (Parallel Distributed Processing) working group in the mid eighties that led to backpropagation methods.


In the United States a photograph of a public domain 2D image is still also public domain if the photograph is considered a faithful reproduction of the public domain 2D image: https://en.wikipedia.org/wiki/Bridgeman_Art_Library_v._Corel....

However, I think the laws for this vary quite a bit across countries, and in many countries the photograph is considered a new copyright work. In general it's pretty frustrating how many legal barriers there are to accessing and reusing old works of art. Thankfully a growing number of museums have made things easy with clear copyright releases (Rijksmuseum, Paris Musées, the MET), but others seem more interested in preserving their ability to sell prints.


This sort of research is maybe less flashy than, say, using machine learning to automatically generate game assets from photos, but I think this kind of computer-aided game design is possibly the biggest way machine learning will transform video games. As games become bigger and more complicated, the problem of tuning the various gameplay parameters explodes exponentially. And this sort of tuning can have a huge effect on player retention and overall game quality.

In this research the machine learning is being used to balance the game across different asymmetric strategies (different decks in the card game), but you could imagine using similar techniques for balancing and tuning content for single player games as well. Once you have a reasonable model of the player's behavior, you can do all sorts of automatic tuning like balancing the difficulty of jumps in a platformer, tuning enemy positions in an FPS, etc.
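As a toy illustration of that last point (my own construction, not the paper's method): once you have even a crude player model, tuning a single gameplay parameter against a design target is just root-finding. Here the hypothetical model says the chance of clearing a jump falls off logistically with gap width, and we bisect to hit a target clear rate.

```python
import math

def player_model(gap_width, skill=1.0):
    """Hypothetical player model: probability of clearing a jump
    falls off logistically as the gap widens relative to skill."""
    return 1.0 / (1.0 + math.exp(4.0 * (gap_width - 2.0 * skill)))

def tune_gap(target_success, lo=0.0, hi=5.0, iters=50):
    """Success rate decreases monotonically with gap width, so bisect
    on the gap until the modeled clear rate matches the target."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if player_model(mid) > target_success:
            lo = mid  # too easy: widen the gap
        else:
            hi = mid  # too hard: narrow it
    return (lo + hi) / 2

gap = tune_gap(0.8)  # aim for an 80% clear rate
```

With a richer player model (e.g., one learned from playtest data), the same loop generalizes to tuning enemy placement, difficulty curves, and so on.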


I'm guessing information isn't put in the email because of federal privacy laws around education (FERPA). Email isn't considered a secure communication protocol since it isn't necessarily encrypted in transit, so schools can't email grades or other information related to education records. I'm guessing sending all messages through a secure web portal is just an easy way to avoid FERPA liability from the school's perspective.


I'm not in the US but you make a good point. They probably need to confirm parents' identities. An easy and well advertised low tech solution would be good for low tech parents though.


Dip pens are still used quite a bit for artwork. Until very recently (the last 10 years or so), dip pens were very much the standard for inking certain types of art, manga especially. The main advantages of a dip pen are the amount of control you have over the line by varying pressure and angle (like a flexible-nib fountain pen) as well as the ability to use pigment-based inks that will clog most fountain pens. Of course digital tablets have pressure and tilt sensors, but without the tactile feedback of the nib flexing and pushing back against your hand, it can be much harder to control. I'm guessing that drawing tablets will eventually have haptic feedback for this reason. The ability to swap out nibs or use a paint brush with the same ink is also very useful.


Not sure about tattoo artists, but Disney has been known to pursue cake decorators / bakeries that use their IP without a license.


What is the standard rate for putting a Disney character on a cake?


Complete speculation, but I wonder if it's using the same mechanism explored in the famous upside-down goggles experiment: https://en.wikipedia.org/wiki/Upside_down_goggles

Gizmodo article talking about the effect: https://io9.gizmodo.com/does-your-brain-really-have-the-powe...

Basically, if you wear goggles that distort your vision for long enough, your visual processing adapts and "corrects" for the distortion such that you can function normally. Then if you remove the goggles, your brain still tries to correct for a distortion that isn't there (for a while at least), so the world appears distorted without the goggles.

