Monday Links 15

Empowering your teams to tackle legacy code: Five episodes from LeadDev on ways of thinking about legacy code and techniques for tackling it. Spoiler alert: frame it as skill development, start by building confidence, and keep it going by making sure the old code works with the cool new tools.

When you choose KRs poorly, but achieve really impressive results (via SWLW)

GitHub Copilot Investigation: Folks are worried that open source projects may be harmed by the likes of Copilot, and that it may actually start to produce bad code. So they are investigating whether a lawsuit challenging the legality of how the model was created is in order.

The legality of Copilot must be tested before the damage to open source becomes irreparable. That’s why I’m suiting up.

Monday Links 14

In 2013, Mike Hoye wrote a blog post about why most programming languages index arrays starting at 0.

I’ve spent far more effort than is sensible this month crawling down a rabbit hole disguised, as they often are, as a straightforward question: why do programmers start counting at zero?

So: the technical reason we started counting arrays at zero is that in the mid-1960’s, you could shave a few cycles off of a program’s compilation time on an IBM 7094. The social reason is that we had to save every cycle we could, because if the job didn’t finish fast it might not finish at all and you never know when you’re getting bumped off the hardware because the President of IBM just called and fuck your thesis, it’s yacht-racing time.

Recently, Hillel Wayne took issue with Hoye’s post, arguing that there may have been valid technical reasons for zero-indexing in addition to the historical one Hoye identifies.
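For context, the classic technical argument for zero-indexing (the kind of point raised on Wayne’s side of the debate) is that it maps directly onto address arithmetic: the address of element i is simply base + i × size, with no correction term. A minimal C sketch of that idea (my illustration, not code from either post):

    #include <stdio.h>

    int main(void) {
        int a[4] = {10, 20, 30, 40};

        /* Zero-based: the address of a[i] is base + i * sizeof(element). */
        for (int i = 0; i < 4; i++)
            printf("a[%d] = %d (offset %zu bytes)\n", i, *(a + i), i * sizeof *a);

        /* A one-based scheme would need base + (i - 1) * size instead: an
           extra subtraction on every access, unless the compiler biases the
           base pointer at allocation time. */
        return 0;
    }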

This week Hoye published another post, somewhat rebutting the rebuttal while celebrating contrarianism:

I recognize enough contrarian in myself that I guess I’m obligated to be charitable with anyone pointing their contrarian at me. And charitable can be a job, for sure, but fair’s fair and a job’s a job. But there’s one larger point here, the real point that I wanted to make then and want to make now, that I’m not going to let go:

“Hoye’s core point is it doesn’t matter what the practical benefits are, the historical context is that barely anybody used 0-indexing before BCPL came along and ruined everything. […] I may think that counting 0 as a natural number makes a lot of math more elegant, but clearly I’m just too dumb to rise to the level of wrong.”

My point is not, my point has never been, that “zero indexing is bad”. My point is “believing and repeating uninterrogated stories because they sounded plausible to you” is bad, and I’m saying that because it’s really, really bad.

Monday Links 13

Learning from the incident you didn’t have

When an incident happens in an organization, the traditional response is to identify ways to prevent the incident from happening again in the future. The community around this website takes a different approach towards incident analysis. To paraphrase the late computer scientist Edsger Dijkstra, incident analysis is no more about incidents than astronomy is about telescopes. Instead of focusing on prevention, we seek to leverage incidents as an opportunity to learn as much as possible about how work is done within the organization.

Import AI Newsletter on the debate over the safety of releasing AI models without restrictions

Rep. Anna Eshoo (a Democrat from California) has sent a letter to the White House National Security Advisor and Office of Science and Technology Policy saying she has “grave concerns about the recent unsafe release of the Stable Diffusion model by Stability AI”. The letter notes that Stable Diffusion can be used to generate egregiously violent and sexual imagery, and - due to eschewing the kinds of controls that OpenAI uses for its commercial product DALL-E2 - the freely accessible model represents a big problem.

Letters like this are indicative of a culture war brewing up among AI researchers; on one side, groups want to slowly and iteratively deploy new technologies via APIs with a bunch of controls applied to them, while on the other side there are people who’d rather take a more libertarian approach to AI development; make models and release the weights and ride the proverbial lightning.

Andrew Huberman on the optimal morning routine

Programmers are at their best when they can focus deeply on a task. Managers thrive when they have the energy to context switch. Good sleep and an efficient morning routine help a lot in both cases.

Andrew Huberman is a professor in the Department of Neurobiology at the Stanford University School of Medicine. In this chat with Jocko Willink, Huberman discusses what happens in your body when you sleep and wake, and talks about practices that can help set you up for a day of good work.

Among Huberman’s tips:

  • Aim to have a good sleep 80% of the time
  • Get natural light in your eyes within an hour of waking up
  • Exercise early
  • Try cold water exposure to raise your core body temperature
  • Delay your caffeine intake until 90 minutes after waking