Canvas_ity: A tiny, single-header <canvas>-like 2D rasterizer for C++ (github.com)
nicoburns 1 day ago [-]
The list of "recommended reading" from one of the issues looks great:

https://github.com/a-e-k/canvas_ity/issues/11#issuecomment-2...

lioeters 19 hours ago [-]
Quoting the list here for visibility and archival purposes.

* [Euclidean Vector - Properties and Operations](https://en.wikipedia.org/wiki/Euclidean_vector#Properties_an...) - I assume you know all this already since it's pretty fundamental, but just in case, you'll want to be really comfortable with 2D vector math, and the [dot product](https://en.wikipedia.org/wiki/Dot_product) especially. In 2D graphics, I also find uses for the ["perp dot" product](http://cas.xav.free.fr/Graphics%20Gems%204%20-%20Paul%20S.%2...) all the time. (I maintain that in graphics, if you're calling trig functions too much, you're probably doing it wrong!)

* [W3C HTML5 2D canvas specification](https://www.w3.org/TR/2015/REC-2dcontext-20151119/) - Obviously, this provides the basis that I was trying to adapt closely to a C++ API. There's a lot of detail in here, including some abstract descriptions of how the implementations are supposed to work.

* [ISO Open Font Format spec](http://wikil.lwwhome.cn:28080/wp-content/uploads/2018/06/ISO...), [Apple TrueType Reference Manual](https://developer.apple.com/fonts/TrueType-Reference-Manual/), [Microsoft OpenType Spec](https://learn.microsoft.com/en-us/typography/opentype/spec/) - These are the references that I consulted when it came to adding font and text support. In particular, the descriptions of the internal tables were useful when I was writing the code to parse TrueType font files.

* [Circles in Angles](http://eastfarthing.com/blog/2018-12-27-circle/) - This was a blog post that I wrote after working out the math for how to calculate where to put the center of a circle of a given radius that is inscribed in an angle. This is needed for the `arc_to()` method, but also for computing miter joins for lines.

* [Drawing an Elliptical Arc Using Polylines, Quadratic or Cubic Bezier Curves](https://web.archive.org/web/20210414175418/https://www.space...) - In my implementation, all shapes get lowered to a series of cubic Bezier splines. In most cases the conversion is exact, but in the case of circular arcs it's approximate. I used this reference for the original implementation for `arc()`. I later changed to a slightly simpler home-grown solution that was more accurate for my needs, but this was a good start.

* [Converting Stroked Primitives to Filled Primitives](https://w3.impa.br/~diego/publications/Neh20.pdf), [Polar Stroking: New Theory and Methods for Stroking Paths](https://developer.download.nvidia.com/video/siggraph/2020/pr...) - This pair of papers dealing with modern stroke expansion were published concurrently at SIGGRAPH 2020 and were very useful background when I was writing the stroke expansion code. They both have some great examples of how naive methods can fail in high-curvature cases, and how it can be done correctly. I didn't use the full-fat version of these, but I did borrow some ideas (especially from Fig. 10 of the first) without trying to be clever about simplifying the path. I also borrowed some of the nice test cases to make sure my code handled them correctly. (It's surprising how many browser canvas implementations don't.) It's also worth learning something about Gaussian curvature, if you don't know it already; both papers give some background on that.

* [De Casteljau's Algorithm](https://en.wikipedia.org/wiki/De_Casteljau%27s_algorithm) - I use recursive tessellation for flattening cubic Bezier splines to a series of line segments (forming polygons). De Casteljau's algorithm is the basis of this, where it recursively splits Bezier splines in half by computing successive midpoints. (A minimal sketch follows this list.)

* [Adaptive Subdivision of Bezier Curves](https://agg.sourceforge.net/antigrain.com/research/adaptive_...) - This is a nice writeup by the late author of Anti-Grain Geometry that goes into more details of the recursion, with some ideas about choosing where to split. Adaptive subdivision methods choose whether to recurse or stop based on some estimate of error. I don't use the exact approach here, but instead a conservative estimate of the maximum distance from the curve, plus a maximum angular turn (determined by solving for the [sagitta](https://en.wikipedia.org/wiki/Sagitta_(geometry)) so that stroke expansion from the tessellated line segments is of sufficient quality).

* [Reentrant Polygon Clipping](https://dl.acm.org/doi/pdf/10.1145/360767.360802) - While I could just rasterize the entire set of polygons and skip over any pixels outside the screen window (and I did exactly this for a large part of the development), it's a lot more efficient to clip the polygons to the screen window first. Then rasterizing only worries about what's visible. I used the classic Sutherland-Hodgman algorithm for this. (One clipping pass is sketched after this list.)

* [How the stb_truetype Anti-Aliased Software Rasterizer v2 Works](https://nothings.org/gamedev/rasterize/) - I drew inspiration from this for rasterization with signed trapezoidal areas, but implemented the trapezoidal-area idea rather differently. Still, this should give you an idea of at least one way of doing it.

* [Physically Based Rendering (4th Ed), Chapter 8, Sampling and Reconstruction](https://pbr-book.org/4ed/Sampling_and_Reconstruction) - This is stuff I already knew very well from my day job at the time writing 3D renderers, but the material here, especially Section 8.1, is useful background on how to resample an image correctly. I used this kind of approach to do high-quality resampling of images for pattern fills and for the `draw_image()` method.

* [Cubic Convolution Interpolation for Digital Image Processing](https://ncorr.com/download/publications/keysbicubic.pdf) - When you hear of "bicubic interpolation" in an image processing or picture editing program, it's usually the kernel from this paper. This is the specific kernel that I used for the resampling code. It smoothly interpolates with less blockiness than bilinear interpolation when magnifying, and it's a piecewise polynomial approximation to the sinc function so it antialiases well when minifying. (The kernel is sketched after this list.)

* [Theoretical Foundations of Gaussian Convolution by Extended Box Filtering](https://www.mia.uni-saarland.de/Publications/gwosdek-ssvm11....) - Naive Gaussian blurring (for soft drop shadows here) can be slow when the blur radius is large, since each pixel will need to be convolved with a large Gaussian kernel. It's separable, so instead of doing full 2D convolutions, you can do a pass of 1D convolutions on all the rows, then all the columns, or vice versa. But that's still slow. However, iterated convolution of a box kernel is a very good approximation (think of how summing dice approaches a Gaussian distribution). And box blurring is very fast regardless of the kernel size, since everything has the same weight - you just add to and subtract from a running sum. This paper is about quickly approximating Gaussian blurs with iterated box-like blurs. (A running-sum pass is sketched after this list.)

* [Compositing Digital Images](https://graphics.pixar.com/library/Compositing/paper.pdf) - Porter-Duff forms the basis for the core compositing and blend modes in vector graphics, and is referenced directly by the Canvas spec. For my implementation, I break down the choices of the parameters to use into four bits and encode them directly into the enum of the operation. That way I can implement all the Porter-Duff operations in just 7 lines of code. (I'm pretty proud of that!) (A guess at this kind of encoding is sketched after this list.)

* [sRGB](https://en.wikipedia.org/wiki/SRGB) - The Canvas spec - transitively, via reference to the [CSS color spec](https://www.w3.org/TR/css-color-3/) - defines that input colors are in sRGB. While many vector graphics implementations compute in sRGB directly, operating in linearized RGB is a hill I'll die on. (I don't go crazy about color spaces beyond that, though.) If you don't, you'll end up with odd-looking gradients, inconsistent appearance of antialiased thin line widths and text weights, different text weights for light-on-dark vs. dark-on-light, and color shifts when resizing. [Here are some examples](https://blog.johnnovak.net/2016/09/21/what-every-coder-shoul...). I do all my processing and storage in linear RGB internally and convert to and from sRGB on input and output. (The standard conversions are sketched after this list.)

* [GPUs prefer premultiplication](https://www.realtimerendering.com/blog/gpus-prefer-premultip...) - Premultiplied alpha is also important for correct-looking blending. The Canvas spec actually dictates _non_-premultiplied alpha, so this is another case where I convert to premultiplied alpha on input, do everything with premultiplied alpha internally, and then un-premultiply on output.

* [Dithering](https://en.wikipedia.org/wiki/Dither) - I use floating point RGB color internally and convert and quantize to 8-bit sRGB on output. That means that the internal image buffer can easily represent subtle gradients, but the output may easily end up banded if there are too few steps in the 8-bit sRGB space. My library applies [ordered dithering](https://en.wikipedia.org/wiki/Ordered_dithering) to its output to prevent the banding. (A sketch follows this list.)
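
A few of the items above lend themselves to short code sketches, so here are some minimal, illustrative versions - not the library's actual code. First, De Casteljau flattening: this splits a cubic at t = 0.5 until the control points sit within a tolerance of the chord. (A simple control-point-to-chord flatness test is assumed here, rather than the sagitta-based criterion described above.)

    // Sketch of De Casteljau flattening for a cubic Bezier; assumes a
    // simple control-point-to-chord flatness test.
    #include <cmath>
    #include <vector>

    struct Point { float x, y; };

    static Point midpoint( Point a, Point b )
    {
        return Point{ 0.5f * ( a.x + b.x ), 0.5f * ( a.y + b.y ) };
    }

    // Distance from point p to the infinite line through a and b.
    static float line_distance( Point p, Point a, Point b )
    {
        float dx = b.x - a.x, dy = b.y - a.y;
        float length = std::sqrt( dx * dx + dy * dy );
        if ( length == 0.0f )
            return std::hypot( p.x - a.x, p.y - a.y );
        // The perp dot product gives twice the signed triangle area.
        return std::fabs( dx * ( p.y - a.y ) - dy * ( p.x - a.x ) ) / length;
    }

    // Append points approximating the cubic (p0, p1, p2, p3) with lines.
    static void flatten( Point p0, Point p1, Point p2, Point p3,
                         float tolerance, std::vector< Point > &out )
    {
        if ( line_distance( p1, p0, p3 ) <= tolerance &&
             line_distance( p2, p0, p3 ) <= tolerance )
        {
            out.push_back( p3 );
            return;
        }
        // One De Casteljau step: three layers of midpoints split the
        // curve into two half-curves that share the point "middle".
        Point p01 = midpoint( p0, p1 ), p12 = midpoint( p1, p2 );
        Point p23 = midpoint( p2, p3 );
        Point p012 = midpoint( p01, p12 ), p123 = midpoint( p12, p23 );
        Point middle = midpoint( p012, p123 );
        flatten( p0, p01, p012, middle, tolerance, out );
        flatten( middle, p123, p23, p3, tolerance, out );
    }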
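
Next, one pass of the Sutherland-Hodgman clipper: each pass clips the polygon against a single half-plane (here x >= edge), and a full window clip chains four such passes, one per window edge.

    // Clip a polygon against the half-plane x >= edge.  A full window
    // clip runs one such pass per window edge (Sutherland-Hodgman).
    #include <vector>

    struct Point { float x, y; };

    static std::vector< Point > clip_against_left(
        std::vector< Point > const &polygon, float edge )
    {
        std::vector< Point > result;
        size_t count = polygon.size();
        for ( size_t index = 0; index < count; ++index )
        {
            Point current = polygon[ index ];
            Point previous = polygon[ ( index + count - 1 ) % count ];
            bool current_in = current.x >= edge;
            bool previous_in = previous.x >= edge;
            // Crossing the edge emits the intersection point.
            if ( current_in != previous_in )
            {
                float t = ( edge - previous.x ) / ( current.x - previous.x );
                result.push_back( Point{ edge,
                    previous.y + t * ( current.y - previous.y ) } );
            }
            if ( current_in )
                result.push_back( current );
        }
        return result;
    }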
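
The "bicubic interpolation" item refers to Keys' kernel; with the usual a = -1/2 it is the textbook form below. (The library's actual resampling code may differ in details.)

    // Keys' cubic convolution kernel with a = -1/2.  Resampling takes a
    // weighted sum of the 4x4 neighborhood, with weights from this
    // function evaluated per axis.
    #include <cmath>

    static float cubic_weight( float x )
    {
        float const a = -0.5f;
        x = std::fabs( x );
        if ( x < 1.0f )
            return ( ( a + 2.0f ) * x - ( a + 3.0f ) ) * x * x + 1.0f;
        if ( x < 2.0f )
            return ( ( a * x - 5.0f * a ) * x + 8.0f * a ) * x - 4.0f * a;
        return 0.0f;
    }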
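
The running-sum box blur from the Gaussian item, as a single 1D pass; iterating roughly three passes per axis closely approximates a Gaussian. (Clamp-to-edge boundary handling is an assumption here.)

    // One 1D box-blur pass with a running sum: constant cost per pixel
    // no matter how large the radius.  Edges clamp to border samples.
    #include <algorithm>
    #include <vector>

    static void box_blur_1d( std::vector< float > const &in,
                             std::vector< float > &out, int radius )
    {
        int size = static_cast< int >( in.size() );
        float sum = 0.0f;
        // Prime the running sum with the window centered on pixel 0.
        for ( int offset = -radius; offset <= radius; ++offset )
            sum += in[ std::clamp( offset, 0, size - 1 ) ];
        out.resize( in.size() );
        for ( int index = 0; index < size; ++index )
        {
            out[ index ] = sum / static_cast< float >( 2 * radius + 1 );
            // Slide the window: add the entering sample, drop the leaving one.
            sum += in[ std::clamp( index + radius + 1, 0, size - 1 ) ];
            sum -= in[ std::clamp( index - radius, 0, size - 1 ) ];
        }
    }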
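
For the Porter-Duff item, here is one plausible version of the bit-encoding trick. The library's actual enum layout isn't shown in this thread, so treat the encoding below as a guess at the idea: two bits choose the source factor and two bits the destination factor, and a single blend expression then covers every operator. Colors are premultiplied-alpha RGBA, matching the Porter-Duff formulation.

    // Hypothetical encoding: bits 0-1 select the source factor and bits
    // 2-3 the destination factor, each from
    // { 0, 1, other alpha, 1 - other alpha }.
    struct rgba { float r, g, b, a; };

    enum composite_op
    {
        source_over      = 1 | ( 3 << 2 ),  // src + dst * (1 - a_src)
        destination_over = 3 | ( 1 << 2 ),  // src * (1 - a_dst) + dst
        source_in        = 2 | ( 0 << 2 ),  // src * a_dst
        destination_out  = 0 | ( 3 << 2 ),  // dst * (1 - a_src)
        lighter          = 1 | ( 1 << 2 ),  // src + dst
    };

    static float factor( int bits, float other_alpha )
    {
        return bits == 0 ? 0.0f : bits == 1 ? 1.0f :
               bits == 2 ? other_alpha : 1.0f - other_alpha;
    }

    static rgba composite( composite_op op, rgba src, rgba dst )
    {
        float fa = factor( op & 3, dst.a );          // source factor
        float fb = factor( ( op >> 2 ) & 3, src.a ); // destination factor
        return rgba{ src.r * fa + dst.r * fb, src.g * fa + dst.g * fb,
                     src.b * fa + dst.b * fb, src.a * fa + dst.a * fb };
    }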
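
The sRGB item comes down to the standard transfer functions, applied per channel on input and output (these are the textbook formulas; the library's own conversion code may differ in details):

    // Standard sRGB transfer functions: decode to linear on input,
    // encode back to sRGB on output, and do all the math in between in
    // linear RGB.
    #include <cmath>

    static float srgb_to_linear( float s )
    {
        return s <= 0.04045f ? s / 12.92f
                             : std::pow( ( s + 0.055f ) / 1.055f, 2.4f );
    }

    static float linear_to_srgb( float l )
    {
        return l <= 0.0031308f ? l * 12.92f
                               : 1.055f * std::pow( l, 1.0f / 2.4f ) - 0.055f;
    }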
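
And for the dithering item, a minimal ordered-dither quantizer built on the classic 4x4 Bayer matrix (again a sketch of the technique, not the library's kernel):

    // Quantize an sRGB-encoded [0, 1] value to 8 bits with ordered
    // dithering: a position-dependent threshold from the 4x4 Bayer
    // matrix breaks up banding in smooth gradients.
    #include <algorithm>

    static unsigned char dither_to_byte( float value, int x, int y )
    {
        static int const bayer[ 4 ][ 4 ] = { { 0, 8, 2, 10 },
                                             { 12, 4, 14, 6 },
                                             { 3, 11, 1, 9 },
                                             { 15, 7, 13, 5 } };
        float threshold = ( bayer[ y & 3 ][ x & 3 ] + 0.5f ) / 16.0f;
        int level = static_cast< int >( value * 255.0f + threshold );
        return static_cast< unsigned char >( std::clamp( level, 0, 255 ) );
    }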

quietbritishjim 13 hours ago [-]
I found the "perp dot product" an interesting one. It's a pity the description is in a massive PDF (though it looks like a great book). The top Google result is the MathWorld page [1], but it's very brief.

Here's how that PDF describes it. It first defines the perpendicular operator on a 2D vector x as

  x⟂ := (-x_2, x_1)
which is x rotated 90 degrees anticlockwise. Then the perp dot product of two 2D vectors is defined as

  x⟂ . y
This has a few interesting properties, most notably that

  x⟂ . y = |x| |y| sin θ
For example, the sign of the perp dot product tells you whether you need to rotate clockwise or anticlockwise to get from x to y. If it's zero then they're parallel – they could be pointing in the same or opposite directions (or one or both are zero).

In this Reddit post [2] about it, again not much is said, but a redditor makes the astute observation:

> The perp dot product is the same as the cross product of vectors in a plane, except that you take the magnitude of the z component and ignore the x/y components (which are 0).

[1] http://mathworld.wolfram.com/PerpDotProduct.html

[2] https://www.reddit.com/r/learnmath/comments/agfm8g/what_is_p...
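
For anyone who wants it in code form, the whole thing collapses to a single expression; a trivial sketch:

    // perp_dot(a, b) = (a rotated 90 degrees anticlockwise) . b
    //                = |a| |b| sin(theta) = z of the 3D cross product.
    struct Vec2 { float x, y; };

    static float perp_dot( Vec2 a, Vec2 b )
    {
        // a-perp is ( -a.y, a.x ), so the dot product with b is:
        return a.x * b.y - a.y * b.x;
    }

A positive result means b lies anticlockwise of a, negative means clockwise, and zero means parallel.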

a_e_k 21 hours ago [-]
Author here. What a pleasant surprise to see this trending on the front page!

(I did post a Show HN at the time of the original release, https://news.ycombinator.com/item?id=33148540, but it never gained traction.)

Just to answer some comments that I see:

1. This was absolutely not vibecoded!

I'd originally started with a different version control system and was still getting used to Git and GitHub at the time that I'd released this. (I was a latecomer to Git just because I hated the CLI so much.) It was easiest for me just to drop the whole thing as a snapshot in a single commit.

But my private repo for it actually started in May 2017, and it had 320 commits leading up to its release, all human-written.

For the v2.0 that I have in mind, I'm thinking of force-pushing to migrate the full development history to the public repo.

And finally I'll add that I'm a graphics engineer by education and career. Where would the fun be in vibe-coding this? :-) Oh, and this compiles down to just ~36KiB of object code on x86-64 last I checked. Good luck vibe-coding that constraint.

2. Why a single header with `#define CANVAS_ITY_IMPLEMENTATION`?

I was inspired by the STB header libraries (https://github.com/nothings/stb) and by libraries inspired by those, all of which I've found very convenient. In particular, I like them for small utilities written in a single .cpp file where I can just `g++ -O3 -o prog prog.cpp` or such to compile without even bothering with a makefile or CMake.

Since the implementation here is all within a single #ifdef block, I had figured that anyone who truly preferred separate .cpp and .h files could easily split it themselves in just a few minutes.

But anyway, I thought this would be a fun way of "giving back" to the STB header ecosystem and filling what looked to me like an obvious gap among the available header libraries. It started as something that I'd wished I'd had before, for doing some lightweight drawing on top of images, and it just kind of grew from there. (Yes, there were Skia and Cairo, but both seemed way heavier than they ought to be, and even just building Skia was an annoying chore.)

----

Since I mentioned a v2.0, I do have a roadmap in mind with a few things for it: besides the small upgrades mentioned in the GitHub issues to support parts of newer <canvas> API specs (alternate fill rules, conic gradients, elliptical arcs, round rectangles) and text kerning, I'm thinking about porting it to a newer C++ standard such as C++20 (I intentionally limited v1.0 to C++03 so that it could be used in as many places as possible), possibly including a small optional library on top of it to parse and rasterize a subset of SVG, and an optional Python binding.

PaulDavisThe1st 23 hours ago [-]
And thus random 2D drawing APIs begat Cairo, and then Cairo begat the Canvas, and thus the Canvas begat Canvas_ity, which looked almost like its grandparent, and yet was very much its own self.
msephton 1 day ago [-]
The project is great. The HN comments are embarrassing. Isn’t it ironic to imply laziness by chiming in with “vibe coded”, which is itself such a lazy reaction?
injidup 14 hours ago [-]
I remember an old greybeard calling me lazy because I programmed in C++ instead of assembler. Using LLMs has pushed this attitude up a few abstraction layers.
Lerc 1 day ago [-]
It would be interesting to compile to WASM to compare side by side for performance and accuracy.
a_e_k 21 hours ago [-]
Author here. I have a JavaScript port of my automated test suite (https://github.com/a-e-k/canvas_ity/blob/main/test/test.html) that I used to compare my library against browser <canvas> implementations. I was surprised by all of the browser quirks that I found!

But compiling to WASM and running side-by-side on that page is definitely something that I've thought about to make the comparison easier. (For now, I just have my test suite write out PNGs and compare them in an image viewer split-screen with the browser.)

jurschreuder 15 hours ago [-]
Wow, very nice work, I really like it!

Very clean :) I will use it!

We made our own OpenCV alternative at Kexxu; I'll put it in :) It's exactly what it still needed for a bit of basic drawing.

ddtaylor 1 day ago [-]
Thank you for sharing. The only thing I don't understand is why this is a header-only implementation with a macro that goes in a C++ file.

    #define CANVAS_ITY_IMPLEMENTATION
erwincoumans 1 day ago [-]
It is common for header-only libraries: you include this header in exactly one C++ file with the macro defined so that the implementation gets compiled and linked (don't define the macro in other C++ files, to avoid duplicate symbols). In C++, you can declare a function as many times as you want, but you can only define it (write the actual body) once in the entire project.
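
Concretely, the pattern looks like this (assuming the header is named canvas_ity.hpp, as in the repository):

    // main.cpp: exactly one translation unit defines the macro, so the
    // function bodies get compiled into this object file.
    #define CANVAS_ITY_IMPLEMENTATION
    #include "canvas_ity.hpp"

    // other.cpp: every other file includes the header plainly and only
    // sees the declarations.
    #include "canvas_ity.hpp"

No build-system support is needed beyond compiling the files: the one unit with the macro provides the definitions at link time.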
ddtaylor 24 hours ago [-]
I understand that part, but I don't see why you'd do this instead of a basic Makefile or CMake setup. It seems like more work than a regular linker setup at that point. For what purpose?
elteto 24 hours ago [-]
Because not everyone is using Makefiles or CMake.

A true header-only library should be build-system agnostic and this is one way to do that.

We can argue about build systems for C++ all day long and never come to an agreement. With this approach this piece of code can be used anywhere.

pjmlp 17 hours ago [-]
We can also argue that C++ is not a scripting language, which is what this approach is all about.

When C and C++ were the main programming languages during the 1990s and commercial compilers abounded, strangely enough we managed to handle all those build system approaches.

elteto 4 hours ago [-]
I think you are romanticizing the past. I’m pretty sure integration of third-party C or C++ code sucked as much back then, if not more. You just didn’t have as many of those dependencies, because open source was in its infancy and the number of available libraries was much smaller too.
socalgal2 1 day ago [-]
That's a common pattern in C++ land because there is no standard way to use libraries in C++.

https://github.com/p-ranav/awesome-hpp

pjmlp 16 hours ago [-]
It is a common pattern among those that don't want to learn build systems, which isn't exactly the same.
socalgal2 15 hours ago [-]
In my experience, I've run into the issue quite often. You find some library, and it has its own build system (meaning not the one you're using), its own special rules, etc. Integrating it into your build system is time-consuming and frustrating: compiler errors from includes, linker errors, and so on.

None of that happens with a single-file C++ library.

pjmlp 15 hours ago [-]
Yeah, and?

There isn't a single build system without issues, other than siloed languages without language standards.

Header libraries only started to be a thing when the scripting generation, educated in Python and Ruby during the 2010s, turned to compiled languages.

chris37879 8 hours ago [-]
And what build system do you recommend the entire ecosystem support? Well, I choose (an arbitrarily different and incompatible one, to prove a point). Do you see the problem?
pjmlp 8 hours ago [-]
I see a learning problem, because there is hardly a programming language in widespread use that uses a single one.

And if you're going to point out Go or Rust, it kind of works as long as nothing else is used and they don't need to interact with platform SDKs.

capisce 16 hours ago [-]
Or among those who want to support any build system
pjmlp 15 hours ago [-]
We managed during the last 20 years just fine.
on_the_train 5 hours ago [-]
I wrote a mildly popular header-only library with exactly this pattern once. Then a guy came along and added CMake. I didn't really know it back then, so I didn't stop him. The downloads went to literally zero lol. CMake is an extremely vocal minority.
pjmlp 5 hours ago [-]
Who said anything about CMake specifically?
CyberDildonics 7 hours ago [-]
Everything is easier with single-file headers. This idea that a build system and single-header libraries are mutually exclusive is silly.

You can take multiple single-file libraries and put them into one compilation unit. Then you have minimal files and minimal compilation units. Compilation is faster, everything is simpler.

pjmlp 5 hours ago [-]
What is silly is pretending everything is a scripting language.
CyberDildonics 4 hours ago [-]
I don't know what that is supposed to mean and I don't think it means anything.

I'm talking about real pragmatic benefits and you're making some vague hand wavy judgement without any tangible explanation.

taminka 23 hours ago [-]
i swear if someone starts another single header vs other options debate in this comment section i'm gonna explode
pjmlp 16 hours ago [-]
Boom! C and C++ aren't scripting languages.
MomsAVoxell 4 hours ago [-]
It has to be said that one of the reasons a single-header library is so useful in the C/C++ world is that it makes interfacing to Lua so much sweeter.

Lua(C,C++) = nirvana

BRB, off to add Canvas_ity to LOAD81 ..

taminka 5 hours ago [-]
what do you mean by that?
Keyframe 8 hours ago [-]
really nice! I'd prefer to see a C89 version though, however arcane you might think that is.
MomsAVoxell 4 hours ago [-]
Would be a bit more practical and a lot more portable ..
Keyframe 4 hours ago [-]
I forked it and gave the robot work to do through my structured way of things. Seems to be working so far, with some added extensions to the original: https://github.com/Keyframe/canvas_ity_c89
lioeters 2 hours ago [-]
> This fork ports everything to strict C89, adds a backend abstraction layer, and replaces CMake with a plain Makefile.

Blessings, this is exactly what I wanted when I saw the project. Please give my regards to the robot who helped. ;)

disqard 18 hours ago [-]
Thank You For Making And Sharing, a_e_k!
ranger_danger 1 day ago [-]
vibe-coded?
nicoburns 1 day ago [-]
Most likely not, seeing as the commit containing the bulk of the implementation dropped in 2022.
ranger_danger 1 day ago [-]
maybe just the README then
a_e_k 21 hours ago [-]
Author here. No vibe-coding, all human-written. Are you thinking of my use of GitHub emoji on the section headings in the README? I just found they helped my eye pick out the headings a little more easily and I'd seen some other nice READMEs at the time do that sort of thing when I went looking for examples to pattern it off of. I swear I'd had no idea it would become an LLM thing!
flowerbreeze 1 day ago [-]
The README is older than ChatGPT too. It's very unlikely that it's vibe coded or vibe written.
peter-m80 1 day ago [-]
Would that be an issue?
Amlal 1 day ago [-]
Yes, it's a canvas library; there are a lot of risks in including unreviewed AI-generated code in a rasterizing library.
a_e_k 21 hours ago [-]
Author here. There's no AI-generated code in this. But yes, security hardening this has not been a priority of mine (though I do have some ideas about fuzz testing it), so for now - like with many small libraries of this nature - it's convenient but best used only with trusted inputs if that's a concern.
ivanjermakov 1 day ago [-]
A lot of risks compared to what? I imagine bugs in kernel drivers or disk utilities would be riskier.
JoeyJoJoJr 1 day ago [-]
Such as?
1bpp 1 day ago [-]
Yes.