Programming languages typically make a distinction between normal program actions and erroneous actions. For Turing-complete languages we cannot reliably decide offline whether a program has the potential to execute an error; we have to just run it and see.
In a safe programming language, errors are trapped as they happen. Java, for example, is largely safe via its exception system. In an unsafe programming language, errors are not trapped. Rather, after executing an erroneous operation the program keeps going, but in a silently faulty way that may have observable consequences later on. Luca Cardelli’s article on type systems has a nice clear introduction to these issues. C and C++ are unsafe in a strong sense: executing an erroneous operation causes the entire program to be meaningless, as opposed to just the erroneous operation having an unpredictable result. In these languages erroneous operations are said to have undefined behavior.
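To make "undefined behavior" concrete, here is a small sketch in C (the helper name and example are mine, not from the article). Signed integer overflow is undefined in C, so a correct program must reject the operation *before* performing it:

```c
#include <limits.h>
#include <stdbool.h>

/* Signed overflow is undefined behavior in C: if a + b overflows int,
 * the entire program becomes meaningless, not just this one operation.
 * The defensive check must therefore happen before the addition. */
bool checked_add(int a, int b, int *result) {
    if ((b > 0 && a > INT_MAX - b) ||
        (b < 0 && a < INT_MIN - b)) {
        return false;        /* would overflow: report, don't compute */
    }
    *result = a + b;         /* now guaranteed well-defined */
    return true;
}
```

A safe language would trap the overflow at runtime; in C the burden of writing this guard falls entirely on the programmer.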
I just spent more than two hours troubleshooting a seemingly simple HTML problem. When I copied and pasted a small section of HTML, the web browser displayed the newly-pasted section differently from the original. The horizontal spacing between some of the elements was slightly different, causing the whole page to look wrong. But how could this be? The two HTML sections were identical – the new one was literally a copy of the old one.
This simple-sounding problem defied all my attempts to explain it. I came up with lots of great theories: problems with my CSS classes, or with margins and padding. Mismatched HTML tags. Browser bugs. I tried three different browsers and got the same results in all of them.
Feeling very confused, I looked again at the two sections of HTML in the WordPress editor (text view), and confirmed they were exactly identical. Then I tried Firefox’s built-in web developer tools to view the page’s rendered elements, and compared all their CSS properties. Identical – yet somehow they rendered differently. I used the developer tools to examine the exact HTML received from my web server, checked the two sections again, and verified they were character-for-character identical. Firefox’s “page source” tool also confirmed the two sections were exactly identical.
Build systems were developed to simplify and automate running the compiler and linker, and they are an essential part of modern software development. This blog post is a precursor to future posts discussing our experiences refactoring the training projects to use the CMake build-system generator.
Like them or not, null-terminated strings are essential to C, and working with them is necessary in all but the most trivial programs. While C-style strings are a fundamental part of using the language, manipulating them is a common source of security bugs and lost performance. One of the most common operations is copying a string from one buffer to another, and there are a variety of string functions that claim to do this in C. Anecdotally, however, there is much confusion about what they actually do, and many people desire a string copying function with the following properties:
The function should accept a null-terminated source string, a destination buffer, and an integer representing the size of the destination buffer.
Upon return the function should ensure that the destination buffer points to a null-terminated string containing a prefix of the source string when possible (specifically, when the destination buffer has a non-zero size) to avoid issues in the future with unterminated strings. (While string truncation has its own issues, it is often a fairly reasonable fallback.)
The function should indicate how many characters it copied from the source, as well as indicate if an overflow occurred. (This allows for dealing with the overflow, if desired.)
The function should be efficient, and it should not read or write memory that it does not have to. These go partially hand-in-hand: the function should run in a single pass, not write to the destination buffer past the NUL byte it places, or read characters from the source string once it’s determined that it has filled the destination buffer. Ideally, the implementation would be vectorizable (relaxing some of the previous constraints slightly to within platform alignment guarantees).
That is, what is often necessary is the function below, which we’ll call
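A minimal sketch satisfying the properties above might look like the following (the name `bounded_copy` is mine, and real implementations such as OpenBSD's `strlcpy` differ in their return-value conventions):

```c
#include <stddef.h>

/* Copies src into dst (capacity dstsize), always NUL-terminating when
 * dstsize > 0, in a single pass that never writes past the NUL it places.
 * Returns the number of characters copied (excluding the NUL);
 * *truncated reports whether the full source string did not fit. */
size_t bounded_copy(char *dst, size_t dstsize,
                    const char *src, int *truncated) {
    size_t i = 0;
    if (dstsize > 0) {
        while (i < dstsize - 1 && src[i] != '\0') {
            dst[i] = src[i];
            i++;
        }
        dst[i] = '\0';
    }
    /* Reads at most one character beyond what was copied, which is the
     * minimum needed to detect truncation. */
    *truncated = (dstsize == 0) || (src[i] != '\0');
    return i;
}
```

Note how the three requirements interact: the single pass, the guaranteed terminator, and the truncation report each constrain the loop, which is exactly why the standard library's `strcpy`/`strncpy`/`snprintf` each fail at least one of them.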
In January 2020, I told two members of Racket’s core team that I would no longer be contributing to Racket or participating in the Racket community. Why? Because of a history of intentional, personalized abuse and bullying directed at me by another member of the Racket core team: Matthias Felleisen.
In the first article in this series on developing for Apple Silicon Macs using assembly language, I built a simple framework AsmAttic to use as the basis for developing ARM assembly language routines. In that, I provided a short and simple demonstration of calling an assembly routine and getting its result. This article starts to explain the mechanics of writing your own routines, by explaining the register architecture of ARM64 processors.
Haskell offers ample opportunities for aha! moments, where figuring out just how some function or feature works can unlock a whole new way of thinking about how you write programs. One great example is the moment you first start to understand fixed points: why you might want to use them, and how exactly they work in Haskell. In this post, you’ll work through the fixed point function in Haskell, building several examples along the way. By the end of the post you’ll come away with a deeper understanding of recursion and of how Haskell’s lazy evaluation changes the way you can think about writing programs.
If you already have some experience with Haskell, you may want to skip the first section and jump directly into learning about fix.
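As a taste of what's ahead, here is a small sketch (my own example, not taken from the post) using fix from Data.Function to define factorial without writing the recursion explicitly:

```haskell
import Data.Function (fix)

-- fix f computes a fixed point of f: fix f = f (fix f).
-- Laziness lets us tie the recursive knot: the lambda receives
-- "the rest of the recursion" (rec) as an ordinary argument.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

main :: IO ()
main = print (factorial 5)  -- 120
```

The surprising part, explored in the post, is that this works at all: in a strict language, fix f = f (fix f) would loop forever before f was ever applied.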
There are about six major conceptualizations of memory, which I’m calling “memory models”², that dominate today’s programming. Three of them derive from the three most historically important programming languages of the 1950s — COBOL, LISP, and FORTRAN — and the other three derive from the three historically important data storage systems: magnetic tape, Unix-style hierarchical filesystems, and relational databases.
These models shape what our programming languages can or cannot do at a much deeper layer than mere syntax or even type systems. Mysteriously, I’ve never seen a good explanation of them — you pretty much just have to absorb them by osmosis instead of having them explained to you — and so I’m going to try now. Then I’m going to explain some possible alternatives to the mainstream options and why they might be interesting.
While others may see Rust and Go as competing programming languages, neither the Rust nor the Go teams do. Quite the contrary, our teams have deep respect for what the others are doing, and see the languages as complementary, with a shared vision of modernizing the state of software development industry-wide.
In this article, we will discuss the pros and cons of Rust and Go and how they supplement and support each other, and our recommendations for when each language is most appropriate.
Companies are finding value in adopting both languages and in their complementary strengths. To move from our opinions to hands-on user experience, we spoke with three such companies, Dropbox, Fastly, and Cloudflare, about their experience using Go and Rust together. Quotes from them appear throughout this article to give further perspective.
Recently I had to parse some command line output inside a C++ program. Executing a command and getting just the exit status is easy using std::system, but also getting the output is a bit harder and OS-specific. By using popen, a POSIX C function, we can get both the exit status and the output of a given command. On Windows I’m using _popen, so the code should be cross-platform. This article starts off with a Stack Overflow example that gets just the output of a command, and builds on that to a safer version (with null-byte handling) that returns both the exit status and the command output. It also involves a lot of detail on fgets and how to handle binary data.
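A minimal sketch of the approach (my own condensed version, not the article's final code): run a command with popen, read its output with fgets, and recover the exit status from pclose.

```cpp
#include <array>
#include <cstdio>
#include <stdexcept>
#include <string>
#include <utility>

// Runs a shell command and returns {exit status, captured stdout}.
// Assumes a POSIX system; on Windows you would swap in _popen/_pclose.
std::pair<int, std::string> exec(const std::string& command) {
    std::array<char, 256> buffer{};
    std::string output;
    FILE* pipe = popen(command.c_str(), "r");
    if (!pipe) throw std::runtime_error("popen() failed");
    while (fgets(buffer.data(), buffer.size(), pipe) != nullptr) {
        output += buffer.data();  // note: fgets stops at '\0', so this
    }                             // simple version is not binary-safe
    int status = pclose(pipe);    // on POSIX, unwrap with WEXITSTATUS
    return {status, output};
}
```

The binary-safety caveat in the comment is exactly the gap the article's safer version closes by tracking byte counts instead of trusting fgets to produce C strings.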
This might seem an odd article: every tutorial on the internet teaches you that three point perspective is just the art term for “regular 3D”, where you set up a camera, tweak its distance, FOV, and zoom, and you’re done. The vanishing points that you use when using pen and paper correspond to where the X, Y, and Z axes intersect your clipping plane, and that’s all she wrote… Except that’s not “true” three point perspective. That’s the easy-for-computer-graphics version of three point perspective: the strict version is quite a bit trickier.
The thing that makes it tricky is that in a strict implementation of three point perspective, your vanishing points have to literally be vanishing points: they don’t represent intersections of axes that run to infinity and a clipping plane somewhere off in the distance relative to your camera, the vanishing points are the exact points where all parallel lines to infinity converge. Which is a problem for computer graphics because that means we’re not dealing with linear space, which means we can’t use linear algebra to compute nice “3D world coordinates to 2D screen coordinates” using matrix operations. Which is a slight problem given that that’s the fundamental approach that allows efficient 3D computer graphics on pretty much any modern hardware.
So let’s look at what makes this so crazy, and how we can implement it anyway.
It was 2005, and I felt like I was in the eye of a hurricane. I was an independent performance consultant and Sun Microsystems had just released DTrace, a tool that could instrument all software. This gave performance analysts like myself X-ray vision. While I was busy writing and publishing advanced performance tools using DTrace (my open source DTraceToolkit and other DTrace tools, aka scripts), I noticed something odd: I was producing more DTrace tools than were coming out of Sun itself. Perhaps there was some internal project that was consuming all their DTrace expertise?
Undefined behavior ranks among the most baffling and perilous aspects of popular programming languages. This installment of Drill Bits clears up widespread misconceptions and presents practical techniques to banish undefined behavior from your own code and pinpoint meaningless operations in any software—techniques that reveal alarming faults in software supporting business-critical applications at Fortune 500 companies.
Early in the history of programming languages, two schools of thought diverged. Quicksort inventor C.A.R. Hoare summarized one philosophy in his Turing Award lecture [7]: The behavior of every syntactically correct program should be completely predictable from its source code. For the sake of safety, security, and programmer sanity, it must be impossible for a program to “run wild.” Ensuring well-defined behavior imposes runtime overheads (e.g., array bounds checks), but predictability justifies the cost. Today, “safe” languages such as Java embody Hoare’s advice.
The Unix shell is a powerful, ubiquitous, and reviled tool for managing computer systems. The shell has been largely ignored by academia and industry. While many replacement shells have been proposed, the Unix shell persists. Two recent threads of formal and practical research on the shell enable new approaches. We can help manage the shell’s essential shortcomings (dynamism, power, and abstruseness) and address its inessential ones. Improving the shell holds much promise for development, ops, and data processing.
This paper describes the development of the programming language Erlang during the period 1985-1997.
Erlang is a concurrent programming language designed for programming large-scale distributed soft real-time control applications.
The design of Erlang was heavily influenced by ideas from the logic and functional programming communities. Other sources of inspiration came from languages such as Chill and Ada which are used in industry for programming control systems.
Postgres has had “JSON” support for nearly 10 years now. I put JSON in quotes because, well, 10 years ago when we announced JSON support we kinda cheated: we validated that the JSON was valid, then put it into a standard text field. Two years later, in 2014 with Postgres 9.4, we got more proper JSON support with the JSONB datatype. My colleague @will likes to say that the B stands for better. In Postgres 14, JSONB support is indeed getting way better.
I’ll get to this small but pretty incredible change in Postgres 14 in just a minute; first, though, it’s worth a quick summary of the difference between JSON and JSONB. JSON still exists within Postgres, and if you do:
CREATE TABLE foo (id serial, mycolumn JSON);

You’ll get a JSON datatype. This datatype will ensure you insert valid JSON into it, but will store it as text. This is quite useful if you don’t want to index most of the JSON and just want to quickly insert a ton of it (a great example use case is recording API/log input, where you may want to replay requests).
JSONB, unlike JSON, compacts the data down and does not preserve whitespace. JSONB also comes with better indexing ability via GIN indexes: while you can index JSON, you have to index each path. From here on I’ll use JSON and JSONB interchangeably, but please, in your app, mostly use JSONB unless you explicitly mean the simpler JSON text format.
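As a sketch of that indexing difference in practice (the table and column names here are mine, purely for illustration):

```sql
-- A JSONB column with a GIN index: one index covers containment
-- queries over the whole document, no per-path indexes needed.
CREATE TABLE events (id serial PRIMARY KEY, payload JSONB);
CREATE INDEX events_payload_idx ON events USING GIN (payload);

-- The index can serve containment queries such as:
SELECT id FROM events WHERE payload @> '{"type": "login"}';
```

With a plain JSON column you would instead create an expression index per extracted path, which quickly becomes unwieldy.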
Enter my venerable Holden VZ Ute daily driver. From the factory, it came with a rubbish four-speed automatic gearbox. During 18 months of ownership, I destroyed four gearboxes. I could not afford a new vehicle at the time, so I had to get creative. I purchased a rock-solid, bulletproof, six-speed automatic gearbox from another car. But that’s where the solutions ended. To make it work, I had to build my own circuit board, computer system, and firmware to control the solenoids, hydraulics, and clutches inside the gearbox, handle user input, perform shifting decisions, and interface to my car by pretending to be the four-speed automatic.
I’m quite proud of my solution. It can perform a shift in 250 milliseconds, which is great for racing. It has a steep first gear, giving it a swift takeoff. It has given some more powerful cars a run for their money. It’s got flappy paddles, diagnostic data on the screen, and the ability to go ahead and change the way it works whenever I want.
This will be a long wall of text, and kinda random! My main points are:
C++ compile times are important,
Non-optimized build performance is important,
Cognitive load is important. I don’t expand much on this here, but if a programming language or a library makes me feel stupid, then I’m less likely to use it or like it. C++ does that a lot :)
This post describes my current approach to testing. When I started programming professionally, I knew how to write good code, but good tests remained a mystery for a long time. This is not due to a lack of advice — on the contrary, there’s an abundance of information & terminology about testing. This celestial emporium of benevolent knowledge includes TDD, BDD, unit tests, integrated tests, integration tests, end-to-end tests, functional tests, non-functional tests, blackbox tests, glassbox tests, …
Knowing all this didn’t help me create better software. What did help was trying out different testing approaches myself, and looking at how other people write tests. Keep in mind that my background is mostly in writing compiler front-ends for IDEs. This is a rather niche area, which is especially amenable to testing: compilers are pure, self-contained functions. I don’t know how best to test modern HTTP applications built around inter-process communication.
Without further ado, let’s see what I have learned.
As soon as Bas Nieuwenhuizen mentioned that he was working on support for Vulkan Raytracing in RADV, my curiosity as to whether this feature could be brought to older generations of AMD hardware was piqued.
Yesteryesterday and yesterday I decided to implement some of the missing pieces for exposing Vulkan Raytracing on older generations of AMD hardware, such as Vega, Polaris and the original Navi.
The work is currently available here if you wish to try it at your own risk.