
sin() is typically implemented with (more sophisticated versions of) exactly the same "tricks"; most library implementations trade off a little speed for more accuracy (and obviously support inputs from the whole number line, not just (-pi,pi)), but honestly they're pretty fast already.

There are also some libraries that provide vectorized implementations, often with a few options for different accuracy/performance tradeoffs.



Is the typical implementation you're referring to CORDIC[1]?

[1] https://en.wikipedia.org/wiki/CORDIC


No. Typical implementations use polynomial evaluation and range reduction. The advantage of CORDIC is that it only uses shifts and additions, and no multiplications. The disadvantage of CORDIC is that it converges very slowly. On modern hardware where multiplication is blazingly fast, CORDIC no longer makes sense.
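To give a flavor of the approach, here's a minimal sketch: a naive reduction into roughly [-pi, pi] followed by a short odd polynomial. The coefficients are just truncated Taylor terms for illustration; a real libm uses minimax coefficients and a much more careful reduction.

  fn sin_approx(x: f64) -> f64 {
      use std::f64::consts::PI;
      // Naive range reduction: fine for modest |x|, inaccurate for huge
      // arguments (see the argument-reduction discussion further down).
      let k = (x / (2.0 * PI)).round();
      let r = x - k * (2.0 * PI);

      // Odd polynomial: sin(r) ~ r - r^3/3! + r^5/5! - r^7/7!
      let r2 = r * r;
      r * (1.0 - r2 / 6.0 * (1.0 - r2 / 20.0 * (1.0 - r2 / 42.0)))
  }

  fn main() {
      println!("{} vs {}", sin_approx(1.0), 1.0_f64.sin());
  }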



Sometimes Padé, more often minimax or Carathéodory-Fejér.


For comparison, these are the results I got from benchmarking this implementation against Rust's standard .sin() on a Core i3 M 380 (fast_sine is the code from the blog, normal_sine is .sin()):

  test fast_sine   ... bench:     746,638 ns/iter (+/- 2,489)
  test normal_sine ... bench:   1,535,147 ns/iter (+/- 13,731)


These ns/iter values seem crazy (a sin function that takes 1.5 ms would be really bad). There is an implicit loop when using Rust's benchmark feature, so there's no need to put in your own loop.


I didn't realize it loops implicitly; that was my first time using the Rust benchmark tool. Doing only a single sine calculation per iteration yields

  test fast_sine   ... bench:           6 ns/iter (+/- 0)
  test normal_sine ... bench:          13 ns/iter (+/- 0)
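
For reference, the fixed benchmark body looks roughly like this (nightly-only test crate; only the standard .sin() case shown, since fast_sine is the blog's code):

  #![feature(test)]
  extern crate test;

  #[cfg(test)]
  mod benches {
      use test::{black_box, Bencher};

      #[bench]
      fn normal_sine(b: &mut Bencher) {
          // b.iter already runs the closure many times and averages,
          // so the body should contain just a single sine call.
          b.iter(|| black_box(0.5_f64).sin());
      }
  }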


Indeed; the reciprocal throughput of sin() for small arguments should be a few tens of ns at worst.


What platform (Rust just uses the host system's libm)?


> not just (-pi,pi)

If you can do [0,pi], a few operations give you the entire domain, no?


Which "few operations" do you have in mind? Accurately reducing an arbitrary argument into (-pi, pi) in double precision requires over 1000 bits of pi--you don't use all of them for any one input, but you need different bits depending on the exponent, and in the worst a bit more than 150 bits contribute meaningfully to the reduced result (see, e.g. https://www.csee.umbc.edu/~phatak/645/supl/Ng-ArgReduction.p...)


I'm confused. Floating point math only represents integers exactly up to 2^53 anyway. Any higher than that, and rounding errors in the argument are going to vastly overwhelm any inaccuracy of the sine function. E.g. if your input is 2^54+5, floating point will round it to 2^54+4 (the nearest representable double) before it even goes into the sine function. Which will give a totally wrong answer, even if the sine function is 100% perfect! So there's no point in handling numbers that large.
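
For example, just looking at how the input itself gets rounded before any sine is computed:

  fn main() {
      let x = 2.0_f64.powi(54) + 5.0; // not representable; rounds to 2^54 + 4
      let y = 2.0_f64.powi(54) + 4.0; // exactly representable (spacing is 4 here)
      assert_eq!(x, y);
      // The argument is already off by 1 radian before sin() ever sees it,
      // which changes the answer completely no matter how accurate sin() is.
  }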


How often do you really care about the sin of large floating-point numbers?


If you care about writing standard library functions that conform to a certain error bound, always. If one does not care, one should reconsider whether the routine belongs in a standard library.


As far as I can tell, the standard library functions don't generally handle numbers that large. Try it yourself in the browser, e.g. `Math.sin(Math.pow(10, 15))`. Floating point rounding errors make computing the sine pointless.


When working in computer graphics, it is really easy to end up taking the sin of a very large angle in some form; if you imagine a simple cube spinning over time, it's easy for the cube's angle to simply increase without bound. I believe it's common practice to try to truncate the angles in that case, but in real programs, where you're dealing with arbitrarily complicated networks of operations involving billions of numbers, it's not that hard for some to slip through.


From my own experience, if you're using angles much at all you're doing it wrong. First, the cube's angle should be reduced to a reasonable range quite often. Better in 3D is to use a different representation for orientation - a matrix or a quaternion are common, and both should be "normalized" regularly. These also have the advantage of eliminating trig functions from the code.

In my day job doing motor control, I track shaft angle using a 16 bit fixed-point integer. It wraps around perfectly and I've got an awesome sin(x) function that takes -32768 to 32767 as input (equivalent to -pi to +pi) and returns a fixed point value in a few operations.

It's all a matter of using a representation that makes whatever you're doing easy to compute.
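
Roughly what that kind of representation looks like, as an illustrative sketch only (binary-angle wrap-around via 16-bit overflow plus a small quarter-wave lookup table; a production version would interpolate the table or use a fixed-point polynomial):

  // -32768..=32767 maps to [-pi, pi); adding angles with wrapping
  // arithmetic wraps the angle for free.
  fn add_angle(a: i16, b: i16) -> i16 {
      a.wrapping_add(b)
  }

  // 257-entry quarter-wave sine table in Q15, built at startup here for
  // clarity; an embedded target would bake it in as a const array.
  fn make_table() -> [i16; 257] {
      let mut t = [0i16; 257];
      for i in 0..=256 {
          let x = (i as f64) / 256.0 * std::f64::consts::FRAC_PI_2;
          t[i] = (x.sin() * 32767.0).round() as i16;
      }
      t
  }

  // sin of a binary angle, returned in Q15 (-32767..=32767).
  fn sin_fixed(angle: i16, table: &[i16; 257]) -> i16 {
      let a = (angle as i32) & 0xFFFF;      // reinterpret as a fraction of a turn
      let quadrant = (a >> 14) & 0x3;
      let idx = ((a >> 6) & 0xFF) as usize; // 8-bit index within the quadrant
      match quadrant {
          0 => table[idx],
          1 => table[256 - idx],
          2 => -table[idx],
          _ => -table[256 - idx],
      }
  }

  fn main() {
      let table = make_table();
      let quarter = add_angle(0, 16384);          // +pi/2 in this encoding
      println!("{}", sin_fixed(quarter, &table)); // ~32767, i.e. ~1.0 in Q15
  }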


For such a use case, I highly recommend reframing all of your trigonometry problems as vector problems.

If you really must work in terms of angle measures, use “binary degrees” https://en.wikipedia.org/wiki/Binary_scaling#Binary_angles


[0, pi/2] will do.



