This is an interesting book! I skimmed some parts and read through his web pages. Seems like most of the value of this book is gathering together and comparing all the various possible definitions of DG concepts. It’s explicitly not a textbook, and I wouldn’t recommend trying to learn from it. The book aims to “fill in the gaps” of all the things usually left out in the usual treatments, but it seems to me you’d want to involve a computer in that–even the author admits that his book will probably never be peer-reviewed.
Oddly - this sounds like the author has just discovered dependency injection? I would have thought that concept would translate pretty well to Go. I’ve written a lot of Go, but I cut my teeth largely on C, C++, and C#, so dependency injection has always been on my radar; when I learned Go, I largely applied the lessons from those languages.
Due to compiler constraints and the language’s ethos, global state and `func init` feel weird in Rust (my current language). You can’t, without jumping through hoops, create objects that are partially constructed (e.g. using `Option<T>` for uninitialized fields). That said, even if you’ve got struct members of type `Option<T>`, you are still initializing them to something - sometimes it’s just `None`.
I don’t have enough context in Go land to know why this author’s argument might be a novel conclusion. Does anyone have some context? I’d love to learn more.
Many Go programmers seem to feel very comfortable with global state. When I join new organizations or projects, I often find myself needing to educate and socialize the problems that come from that. This post is just a formalization of the things I’ve been saying informally for a long time.
I wish I knew why this was so relatively prevalent in the Go ecosystem. If I had to guess, I’d speculate that it’s because a lot of the example code in e.g. books and tutorials doesn’t really shy away from global state.
It’s also related to the standard library itself having lots of globals, which itself leads to bad things, like the cryptographic randomness source being trivially modifiable: https://github.com/golang/go/issues/24160
The Go community has a strong culture of writing application-specific code that is “good enough”, and tends to err strongly on the side of avoiding premature abstraction. For a significant number of use cases, globals (combined with a reasonable amount of documentation, testing, and good discipline) tend to be the “good enough” solution.
The thesis of Go’s culture is that premature abstraction often costs more than rewriting code. The argument is that you often know your domain better after writing a “good enough” first version, and that premature abstractions lock you into specific designs that may be harder to change down the line.
It’s definitely not a conventional position, but it’s not indefensible – it’s hard to argue that Go developers are not pragmatic (or not productive).
Yup, this was my comment when this appeared a year ago on HN:
In other words, use a pure dependency-injection style, and ZERO globals.
This can actually be done in linear time, as I wrote about long ago: https://www.akalin.com/longest-palindrome-linear-time
There’s no way I’d expect someone to know/come up with Manacher’s algorithm (as the linear-time algorithm is known) in an interview setting, though.
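For the curious, here is one sketch of Manacher’s algorithm in Go (function name mine; it operates on bytes, so it assumes ASCII input for simplicity):

```go
package main

import "fmt"

// longestPalindrome returns a longest palindromic substring of s in
// O(n) time using Manacher's algorithm.
func longestPalindrome(s string) string {
	if len(s) == 0 {
		return ""
	}
	// Interleave separators so every palindrome in t has odd length,
	// with sentinels that never match: "abba" -> "^#a#b#b#a#$".
	t := make([]byte, 0, 2*len(s)+3)
	t = append(t, '^')
	for i := 0; i < len(s); i++ {
		t = append(t, '#', s[i])
	}
	t = append(t, '#', '$')

	p := make([]int, len(t)) // p[i]: palindrome radius centered at t[i]
	center, right := 0, 0
	for i := 1; i < len(t)-1; i++ {
		if i < right {
			// Reuse the mirrored center's radius where it is known valid.
			if m := p[2*center-i]; m < right-i {
				p[i] = m
			} else {
				p[i] = right - i
			}
		}
		// Expand past the known radius; the sentinels stop this loop.
		for t[i+p[i]+1] == t[i-p[i]-1] {
			p[i]++
		}
		if i+p[i] > right {
			center, right = i, i+p[i]
		}
	}

	best, bestI := 0, 0
	for i, v := range p {
		if v > best {
			best, bestI = v, i
		}
	}
	// Map the center/radius in t back to a slice of s.
	start := (bestI - best) / 2
	return s[start : start+best]
}

func main() {
	fmt.Println(longestPalindrome("abba")) // abba
	fmt.Println(longestPalindrome("babad"))
}
```

The `right` boundary only ever moves forward, which is where the amortized O(n) bound comes from.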
I tried Duplicity with GPG but sadly I found it lacking, even for rarely looked at archives. I eventually moved to restic and it works splendidly.
I also do backups using restic against cloud storage (in my case a Ceph cluster), which has two advantages:
My biggest complaints about restic are the lack of access controls and slow pruning of data. Perhaps those may be fixed one day.
Not the OP, but the fact that you can’t delete intermediate incremental backups is pretty bad… Pruning is a pretty key aspect of most backup strategies (I want daily backups going back N days, weekly going back N weeks, monthly going back N months, etc.). Also, duplicity would run out of memory for me (but restic would too – I eventually settled on the free-to-use-but-not-free-software duplicacy, as I wrote about at https://dbp.io/essays/2018-01-01-home-backups.html – some more details about the OOM issues are in the lobsters thread: https://lobste.rs/s/pee9vl/cheap_home_backups )
For one, being able to restore single files without scanning through the archive. The duplicity guys do know about the problems with using tar, but I don’t know when they’ll be able to move away from it.
I’m not 100% sure, I’m just going by my limited knowledge of the tar format and what my link says:
Not seek()able inside container: Because tar does not support encryption/compression on the inside of archives, tarballs nowadays are usually post-processed with gzip or similar. However, once compressed, the tar archive becomes opaque, and it is impossible to seek around inside. Thus if you want to extract the last file in a huge .tar.gz or .tar.gpg archive, it is necessary to read through all the prior data in the archive and discard it!
My guess is that –file-to-restore has to search for the file in the .tar.gz. If you find otherwise, I’d be interested to know!
For example, in the equation above, the first step would be to subtract \(3\) times the first equation from the second equation to yield
\[ \begin{aligned} y_1 &= x_1 + 2 \cdot x_2 \\ y_2 - 3 \cdot y_1 &= -2 \cdot x_2\text{.} \end{aligned} \]
Then, add the second equation back to the first equation:
\[ \begin{aligned} y_2 - 2 \cdot y_1 &= x_1 \\ y_2 - 3 \cdot y_1 &= -2 \cdot x_2\text{.} \end{aligned} \]
Finally, divide the second equation by \(-2\):
\[ \begin{aligned} y_2 - 2 \cdot y_1 &= x_1 \\ (3/2) \cdot y_1 - (1/2) \cdot y_2 &= x_2\text{.} \end{aligned} \]
This is equivalent to the matrix equation
\[ \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix} \cdot \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\text{,} \]
so
\[ M^{-1} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}\text{.} \]
I’m guessing this is some sort of MathML-in-JavaScript?
Yeap, it’s KaTeX: https://khan.github.io/KaTeX/
A few inaccuracies in this. Having IsNaN isn’t go-specific, since NaN == NaN evaluates to false by IEEE 754. x != x would work, but it’s IMO less clear than just math.IsNaN.
go does have an ‘xor’ operator for boolean values – it’s spelled ‘!=’. Although it does lack a ‘^=’ equivalent.
go does have math.NaN(), but not sure if that’s what she was getting at re. overflowing numeric constants.
Edit: “Fact #6: in go, integer division is a Euclidean domain, except when the dividend is nonnegative and the divisor is a constant power of two.” seems wrong, too.
I have no idea what a Euclidean domain is, but I think her statement must be related to the integer arithmetic section of the Go spec:
If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation
Although I can’t think how that would ever produce anything different from normal division. It’s only negative dividends that pose a problem.
Yeah, that’s the conclusion I reached, too. She might have gotten confused and thought that the optimization is applied for negative dividends/divisors, too, but I don’t think that’s true.
Also, technically, fixed-width integers aren’t even a Euclidean domain; she probably just meant having a division and mod operator such that a = (a/b) * b + (a%b), and (a%b) >= 0.
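The difference between the two conventions is easy to see with a negative dividend:

```go
package main

import "fmt"

func main() {
	// Go specifies truncated division: the quotient rounds toward
	// zero, so the remainder takes the sign of the dividend.
	fmt.Println(-7/2, -7%2) // -3 -1

	// The identity a == (a/b)*b + a%b always holds...
	fmt.Println((-7/2)*2 + -7%2) // -7

	// ...but Euclidean division would give -7 = (-4)*2 + 1, with a
	// nonnegative remainder. For nonnegative dividends and power-of-two
	// divisors, the shift/AND optimization agrees with plain division,
	// which is why no observable difference shows up there.
	fmt.Println(7/2, 7%2, 7>>1, 7&1) // 3 1 3 1
}
```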
Terry Tao wrote a nice eulogy for her on his blog.
So all he wants is to be able to replace
void f(void) { g(x,y,z); }
with something that allows him to avoid the definition of f? Really?
Also “partial function” to mean “curried function” is confusing as hell.
It’s confusing only if you conflate “curried function” with “partial(ly applied) function”; remember that currying a function converts (A, B, …) -> Z to A -> B -> … -> Z. See https://en.wikipedia.org/wiki/Currying#Contrast_with_partial_function_application
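The distinction is clearer in code. A hedged sketch in Go (names invented): currying changes the *shape* of a function into a chain of one-argument functions, while partial application keeps the original function and just pins one of its arguments:

```go
package main

import "fmt"

// A plain two-argument function.
func add(a, b int) int { return a + b }

// curriedAdd is the curried form: (int, int) -> int becomes
// int -> (int -> int).
func curriedAdd(a int) func(int) int {
	return func(b int) int { return a + b }
}

// partial performs partial application: it fixes the first argument
// of a two-argument function, leaving a one-argument function. The
// original add still takes both arguments at once.
func partial(f func(int, int) int, a int) func(int) int {
	return func(b int) int { return f(a, b) }
}

func main() {
	fmt.Println(curriedAdd(1)(2))   // 3
	fmt.Println(partial(add, 1)(2)) // 3
}
```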
“Sites that only support Flash are exempted, as are the top 10 sites on the web for a year: YouTube.com, Facebook.com, Yahoo.com, VK.com, Live.com, Yandex.ru, OK.ru, Twitch.tv, Amazon.com, and Mail.ru”
Ok, so they didn’t really change anything for the majority of browsing. They really just “turned on the youtube html5 beta but for all websites for all users”, i.e. it will prefer html5, but sites can still veto it and do whatever they feel like. I thought they could have done this step years ago, and at this point they should be treating flash and flash video with a “do you want to enable flash for this site: just this once, or always?” banner.
According to Peter Kasting (a Chrome dev) all the news sites are getting it wrong: https://plus.google.com/+PeterKasting/posts/5ioK3cbucKz?sfc=true
Exempting the top 10 websites seems pretty silly too; they’re allowed to have shitty security and the rest of us aren’t?
In my experience, it drives users to other websites. Bear in mind the fact that the ‘average’ user doesn’t know the difference between Google, Google Chrome and the Internet.
If over 50% of web users couldn’t use their sites, I’m sure the big websites would come up with an html5 player very quickly.
For what it’s worth, twitch took about a year to develop an HTML5 player. It’s in general availability now, but it took a long time to fully develop.
Relevant links: https://blog.twitch.tv/html5-player-turbo-beta-starts-today-135d1b7baa65#.95js7ln4u https://help.twitch.tv/customer/portal/articles/2477288
Yeah, html5 playback wasn’t a massive priority at that time. I understand that development can take some time, but we’ve known about the demise of flash for a long time now. And users having to click a button to activate flash isn’t a huge deal; the site is still usable.
Depends on the website.
I’d use something else instead of youtube/amazon - there’s always somewhere else to get entertainment/shop, but there isn’t an alternative to facebook - it’s what too many social groups use, and if you want to keep being included in those groups you have to use it.
Counterpoint: https://www.chrisstucchio.com/blog/2016/propublica_is_lying.html
I’d be interested to hear if there’s something wrong with Chris’s analysis.
A while ago, I finished my blog post on why the quintic is unsolvable. It got a pretty good showing on r/math, which I was happily surprised by! I’m not entirely satisfied with the performance of the interactive visualizations, and have some half-baked ideas to redo them in WebGL, but I’ve decided to move on for now.
This week I’m focusing on finishing the second problem set from a class I’m auditing on algebraic topology. Maybe when the class is done I’ll write a blog post about it, since it’s a topic that lends itself to computer graphics. :)
I’ve been working on a blog post using interactive visualizations to explain why there is no quintic formula. My goal is to have it be convincing with the reader just having some familiarity with basic group theory and complex functions; this is possible because the proof I’m basing this on is topological and sidesteps all the hard Galois theory stuff.
As part of this, I also ported my blog to use KaTeX instead of MathJax, which makes it much faster to render math. :) The only downside is that KaTeX doesn’t have quite the full range of available commands that MathJax has, so I had to find a bunch of workarounds.
(This is my first lobsters post!)
Presentation and expectations about this aside, it teaches a really important technique that took me way too long to pick up in mathematics: guess at what the solution looks like and work backwards.
It’s based on knowing the fundamental theorem of algebra to come up with the
(x - A) (x - B)
formulation, but you can suspect that will be the shape of solutions to problems like this long before you get around to proving the fundamental theorem. You don’t even need to assume the FTA – you go through the derivation with the assumption that it’s only valid if the quadratic does have two roots, but once you have the expressions for the roots, you can substitute them back into any quadratic to show that they’re indeed both roots.
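For reference, here is a sketch of the work-backwards derivation for the monic quadratic \(x^2 + bx + c\). Expanding the guessed shape,
\[ (x - A)(x - B) = x^2 - (A + B) \cdot x + AB\text{,} \]
so matching coefficients gives \(A + B = -b\) and \(AB = c\). Then
\[ (A - B)^2 = (A + B)^2 - 4AB = b^2 - 4c\text{,} \]
so \(A - B = \pm\sqrt{b^2 - 4c}\), and combining this with \(A + B = -b\) yields
\[ A, B = \frac{-b \pm \sqrt{b^2 - 4c}}{2}\text{.} \]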
Alternatively, you can show that any (real or complex) quadratic has a complex root, which is much simpler than the full FTA. You have to be careful to avoid a circular argument, though, since the quadratic formula already implies this!