3. Embedded Flash implementation (which doesn't exist on mobile anyway).
4. Google API keys.
If what you care about is security auditability, that's pretty good. If you care about running only open source software, that's going to be very hard to do in the Android/Google-Play ecosystem.
Main advertisement? I just went to android.com and developer.android.com; android.com advertises "Google built in" and lots of platforms, with a very small link to AOSP at the bottom of the page; developer.android.com has an AOSP link buried in its menus.
the 1st block is about the Nexus One (marketed as open, but not on this page), and look! the second item on the page reads "Access to the entire platform source and information on how to contribute."
guess they forgot an asterisk there saying that "the entire platform" means some of the platform.
I believe only Chrome has built-in PDF viewing too, which can be nice. The page you linked has people saying there are Chromium plugins for it, or you can install a dedicated PDF viewer and it will probably embed itself in the browser when downloading PDFs.
Is the Android chrome (in the UI sense) now open source as well? If that's the case, we could finally get the option to disable third-party cookies...
What protects you in that scenario is those apps aren't really vulnerable to leaking anything in the first place. Malicious ads in games can only see what other ads you've viewed, it's not like you're signed in to your bank website in Angry Birds.
And if your banking app with its embedded webview has its site compromised, you're already fucked without even opening the app.
According to the Wikipedia page on speedometers, under the Error heading...
'Vehicle manufacturers usually calibrate speedometers to read high by an amount equal to the average error, to ensure that their speedometers never indicate a lower speed than the actual speed of the vehicle, to ensure they are not liable for drivers violating speed limits.', although no citation is currently provided.
Error is introduced by tire diameter being different than what was assumed for initial calibration... i.e. wear on the tires or tires that are under/over-inflated.
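That tire-diameter effect is just a ratio. A tiny sketch (the speeds and diameters are illustrative numbers, not from any manufacturer):

```python
def indicated_speed(true_speed, assumed_diameter, actual_diameter):
    """Speedometers count wheel revolutions, so the reading scales
    with the ratio of the assumed to the actual rolling diameter."""
    return true_speed * assumed_diameter / actual_diameter

# Worn tires (smaller actual diameter) push the reading high.
# Diameters are in meters, speed in km/h; both values are made up:
reading = indicated_speed(100.0, 0.632, 0.620)
```

At a true 100 km/h, a tire worn down by about 12 mm of diameter makes the needle read roughly 2% high, on top of whatever deliberate bias the manufacturer calibrated in.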
Laptop or mobile? How much time did you spend watching the examples? What is your battery's storage capacity (milliamp-hours, usage hours, etc.)? Does it have a GPU? CPU usage? Is that battery drain consistent w/ other things that use that amount of CPU? etc.
Totally agree. Under Chrome / OS X, the fans went crazy on my rMBP 15 (iGPU), and the CPU rose to 80 °C just watching a few slides of that site.
I closed the tab to stop overheating.
Huh. I use Safari / OS X on a 4-year-old MacBook Pro. I keep it locked on the integrated GPU and it worked fine, barely increasing the system load. Only one or two of the effects near the end made any noticeable difference.
Of course you're pushing 4x as many pixels as me.
Try Safari (you have to enable WebGL in the Develop menu). I wonder if it's a Chrome issue.
I am a hardware engineer in this industry, and I know Haskell; I even built my site with it. But I do not understand why Haskell would be used for hardware design.
Designing the hardware is much more important than describing the hardware logic itself. IMO, Visio and Excel are the tools for designing hardware logic, not Verilog or something like CLaSH.
But if this kind of HDL can be used alongside Verilog, it might be helpful for building verification IPs.
Wait, so what are you using to describe the circuit on the logic or RTL level? Do you have VLSI engineers using Cadence or something? From my understanding, using Verilog in ASIC design (and not just verification) is pretty widespread in industry.
I think what he's saying is the architectural decisions outweigh the implementation language.
When I was taught Verilog for IP implementation, one thing I noticed is that people get caught in the trap of trying to abstract away the hardware or approach it from a higher level. Haskell/Verilog 2001/SystemVerilog all give us tools to do this. However, when trying to make real silicon, you need to understand what is actually getting built (i.e. know exactly how many flip flops you're creating and how they fan out) and then use the language to describe it. If you use a 'for' loop to try to do computation, as you might in a programming language, you could end up with something entirely unexpected or unsynthesizable.
Traditionally you first design your module conceptually on a whiteboard (or Excel, Visio, etc.), then implement it in an HDL. Because of the influx of software engineers trying to get into hardware (via FPGAs, etc.), there has been a trend of trying to obfuscate away the details of the implementation, and this can cause a lot of confusion.
That said, I've heard of projects that already translate native Haskell to HDL with some success. I'm not a programmer so I don't claim to understand if it's a good idea, but I still think understanding exactly what's being output is important to knowing if it can perform in a reasonable way, especially if you're doing something of any complexity.
FWIW, it is quite easy to write Verilog code that ends up being unsynthesizable, since the language wasn't originally designed to be an HDL. Many of the alternative HDLs, such as UC Berkeley's Chisel (https://chisel.eecs.berkeley.edu/) are designed with the express goal of making it impossible (or at least quite difficult) to write unsynthesizable code.
Also, though figuring out what Verilog to write is not difficult if you've properly thought out the microarchitecture, it can be rather tedious and error-prone to actually write it. I'm not sure how CLaSH works, but Chisel allows you to essentially script generation of hardware using Scala. This removes some of the tedium of writing Verilog and also encourages code reuse (for instance, by allowing you to generate a 32-bit adder and an 8-bit adder using the same code but with different parameters).
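The kind of parameterized generation described above can be sketched in plain Python (used here only to keep the examples uniform; Chisel itself uses Scala, and the module shape below is illustrative, not Chisel's actual output). One generator function replaces a family of hand-written modules:

```python
def ripple_adder_verilog(width, name=None):
    """Emit Verilog source for an n-bit ripple-carry adder.

    A toy stand-in for parameterized hardware generation:
    the loop is unrolled at generation time, one full adder per bit.
    """
    name = name or "adder_%d" % width
    lines = [
        "module %s(input [%d:0] a, input [%d:0] b," % (name, width - 1, width - 1),
        "          output [%d:0] sum, output cout);" % (width - 1),
        "  wire [%d:0] c;" % width,
        "  assign c[0] = 1'b0;",
    ]
    for i in range(width):
        lines.append("  assign sum[%d] = a[%d] ^ b[%d] ^ c[%d];" % (i, i, i, i))
        lines.append("  assign c[%d] = (a[%d] & b[%d]) | (c[%d] & (a[%d] ^ b[%d]));"
                     % (i + 1, i, i, i, i, i))
    lines.append("  assign cout = c[%d];" % width)
    lines.append("endmodule")
    return "\n".join(lines)

# The same generator yields an 8-bit or a 32-bit adder:
adder8 = ripple_adder_verilog(8)
adder32 = ripple_adder_verilog(32)
```

The point is code reuse: one parameter change, not a copy-paste-and-edit of a whole module.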
Thank you for explaining my thought; I'm not good at English.
In my experience, it is quite easy to describe the hardware logic if the architecture is designed well. So what I meant by "Visio and Excel are much more important" is that the architecture should be concise and cycle-accurate. Then the Verilog coding is just a piece of cake.
The problem is that despite the Verilog being relatively easy, it's still incredibly tedious and error prone.
It's amazing how Verilog manages to be too low level and too high level at the same time. It's a simulation language not originally intended for synthesis, so it doesn't have access to hardware primitives, and requires you to write specific patterns to ensure they're inferred correctly. But at the same time, it's too low level to even allow you to abstract those patterns.
The need to know exactly what is being built is not completely incompatible with the notion of abstraction. Sure, trying to apply software ideas to hardware with no understanding is a recipe for disaster, but that's not what people are suggesting. The goal is to recognise and abstract patterns in hardware design.
Your example of for loops being fragile is actually a good argument for higher level abstractions: maps and folds are much better tools for working with hardware, since they constrain you to a specific hardware layout, and make it clear what's happening.
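That point can be made concrete with a fold, sketched here in Python for uniformity with the other examples (in Haskell it would be a `foldl`). Folding a full-adder cell over the bit pairs pins down exactly one layout, a linear carry chain, i.e. a ripple-carry adder:

```python
from functools import reduce

def full_adder(state, bits):
    """One cell of the carry chain: (carry_in, sums) -> (carry_out, sums + [s])."""
    carry, sums = state
    a, b = bits
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return (carry_out, sums + [s])

def ripple_add(a_bits, b_bits):
    """Fold the cell over the bit pairs, LSB first: the fold *is* the layout,
    one full-adder instance per bit, carries chained linearly."""
    carry, sums = reduce(full_adder, list(zip(a_bits, b_bits)), (0, []))
    return sums, carry

# 5 + 3 on four bits, LSB first: [1,0,1,0] + [1,1,0,0]
bits, cout = ripple_add([1, 0, 1, 0], [1, 1, 0, 0])
```

Unlike an arbitrary `for` loop, there is no way to write this fold that accidentally describes something unsynthesizable: the cell count and fan-out are fixed by the input width.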
>Traditionally you first design your module conceptually on a whiteboard (or Excel, Visio, etc.), then implement it in an HDL.
So wouldn't it be nice if the language you used could express the same concepts you use in your higher level diagrams?
How is this any different from compiling C to assembly? Why would higher level languages create unsynthesizable circuits? You trust the C compiler to create the proper instructions for your target architecture then I don't see why the same can't be done with a Haskell DSL that compiles to Verilog.
From memory, in several HDLs like VHDL and Verilog a "boolean" signal can have more than two values: Verilog has four (0, 1, X, Z), and VHDL's std_logic has nine ('U', 'X', '0', '1', 'Z', 'W', 'L', 'H', '-'), with weak high and low states distinct from plain true and false.
My experience with Verilog is that it's very easy to write things which look fine and simulate correctly, yet fail in hardware; the semantics of the languages are just wrong.
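A toy model of Verilog's four-valued AND (0, 1, X for unknown, Z for high-impedance), in Python purely as an illustration. This X-propagation is one reason a simulation can look fine while the synthesized hardware does something else:

```python
def v_and(a, b):
    """Verilog-style four-valued AND over {'0', '1', 'x', 'z'}.

    A 'z' input behaves like 'x'; a '0' dominates, so 0 & x == 0,
    but 1 & x == x: unknowns propagate through the logic.
    """
    a = 'x' if a == 'z' else a
    b = 'x' if b == 'z' else b
    if a == '0' or b == '0':
        return '0'
    if a == '1' and b == '1':
        return '1'
    return 'x'
```

Two-valued intuition says "AND with an unknown is unknown", but the table is asymmetric: the known-zero case short-circuits, which is exactly the kind of detail that differs between simulation and silicon.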
Higher level languages inevitably come with built in semantics that the programmer takes for granted, but can't be synthesized directly to hardware. In C, it's the function call stack. In Haskell it's higher order types and recursive data structures (and more). You could in theory create some runtime package that you'd compile to hardware for your program to be synthesized to, or "run on".....but then you'd just be making a straight up computer, wouldn't you. ;-)
None of these things are relevant in most HDLs implemented as DSLs in high level languages. The point of most of the HDL work in Haskell (for example Lava, and Bluespec) is to provide primitives to talk about hardware and to use a sane language as a way to manipulate them to build larger specifications. It is embarrassing that people use tools that allow you to write un-synthesizable code.
A computer is a much simpler abstraction and much less leaky than a circuit model.
Yes, in theory a computer could take a high-level description of a circuit and turn it into a very efficient hardware implementation. In practice our computers are not good enough, the same way they were not good enough for compiling high-level languages in the '70s, when people wrote assembly by hand.
Hum, no. Sandy/Ivy Bridge can only execute 4 double-precision FLOPs per cycle per core with SSE, in the form of two 128-bit instructions per cycle (one doing adds, the other doing muls, executed by different units).
Doing 8 double-precision FLOPs per cycle would translate to either four 128-bit SSE instructions or two 256-bit AVX instructions per cycle, which is not possible (unless I have not kept track of the latest AVX capabilities).
It should read 8 FLOPs per cycle in double precision. So a 3 GHz, 4-core Ivy Bridge processor could theoretically peak at 96 GFLOPS double precision, or 192 GFLOPS single precision.
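The arithmetic behind those peak figures, as a sketch (the 3 GHz clock and 4-core count are just this example's numbers):

```python
def peak_gflops(clock_ghz, cores, flops_per_cycle):
    """Theoretical peak = clock rate x core count x FLOPs issued per cycle."""
    return clock_ghz * cores * flops_per_cycle

# With AVX, Sandy/Ivy Bridge can issue one 256-bit add and one 256-bit
# multiply per cycle: 8 double-precision (or 16 single-precision)
# FLOPs per cycle per core.
dp = peak_gflops(3.0, 4, 8)
sp = peak_gflops(3.0, 4, 16)
```

These are issue-width peaks; real code rarely sustains them, since it has to keep both the add and multiply units busy every cycle.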
> But when we consider the consuming power of ASIC platform, I think this board has strength.
No, it can't possibly.
SHA is half bit-shifts by constants; on an ASIC, those reduce to plain wiring, essentially no-ops. There is no way, no how, general-purpose hardware could ever get anywhere near even a piss-poor special-purpose ASIC for this task. If you think otherwise, you simply don't understand the domain. Those 600-watt ASIC systems contain multiple chips and run at tens of GHashes/s. That 5-watt chip, if it's very, very good, might maybe break 40 MHash/s.
It's nowhere near fast enough. My 7970s can push out about 1.3Ghash/s and combined they are capable of around 7 TFLOPs. When (/if) they release the BFL Jalapeño it'll run at 5 Ghash/s and be powered by USB. 90 GFLOPs is equivalent to a decent processor, but nowhere near powerful enough for bitcoin mining.
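Putting the thread's rough numbers side by side (assuming "tens of GHash/s" means about 30 GHash/s for the 600 W rig; both figures are order-of-magnitude guesses from the comments above, not benchmarks):

```python
def mhash_per_watt(rate_mhash_s, watts):
    """Energy efficiency: MHash/s per watt, i.e. MHash per joule."""
    return rate_mhash_s / watts

asic_rig = mhash_per_watt(30_000.0, 600.0)   # 600 W ASIC rig, ~30 GHash/s
small_chip = mhash_per_watt(40.0, 5.0)       # 5 W chip, optimistic 40 MHash/s

throughput_gap = 30_000.0 / 40.0             # raw speed gap
efficiency_gap = asic_rig / small_chip       # per-joule gap
```

Even granting the 5 W chip its most optimistic figure, the ASIC rig is hundreds of times faster outright and still several times more efficient per joule, which is what decides mining profitability.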
I'm using Hakyll, which was inspired by Jekyll, together with a Git server.
It is quite appropriate for a simple personal website like mine.
To edit a page on the website, I just open a terminal, edit, and git commit & push. That's all. After the Git server accepts the push, it rebuilds and publishes the website automatically, like GitHub does with Jekyll.
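That push-to-publish flow is usually wired up as a git post-receive hook. A minimal sketch, assuming a bare repo on the server, a working checkout at /srv/site, and the conventional Hakyll `site` executable (all three paths and names are assumptions about this particular setup, not part of Hakyll itself):

```shell
#!/bin/sh
# hooks/post-receive in the bare repo on the Git server (hypothetical paths)
set -e
WORKTREE=/srv/site        # checkout that gets rebuilt
GIT_DIR=/srv/site.git     # the bare repo being pushed to

git --work-tree="$WORKTREE" --git-dir="$GIT_DIR" checkout -f master
cd "$WORKTREE"
./site build              # Hakyll regenerates the HTML into _site/
# then point the web server at _site/, or rsync it to the docroot
```
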
The literal translation is almost the same; the meaning, however, is quite different.
In Chinese, 망양보뢰 (亡羊補牢) means that it is not too late to mend the sheep pen after a sheep has been lost. In Korean, however, the same phrase stresses the uselessness of mending it at that point; it connotes regret.