So, you’ve written a Racket library and now you’d like to make it widely available. How to go about it? This post documents a widely used (yet apparently undocumented) best practice.
This is a rather long post, so although it is written sequentially, if you already know what you’re trying to do you could read in a goal-oriented manner instead. By way of signposting: the first few sections below provide some background and rationale. If you just want a quick summary, jump to the review. For the how-to, go straight to How do I adopt the composable (lib/test/doc) approach? If you’re interested in migrating your existing library to using lib/test/doc, go to How do I migrate my existing library? Finally, we’ll also look at some alternative approaches.
A Quick Exercise to Gain Intuition
Before we can appreciate this best practice, we’ll need to understand some basic concepts governing how package management works in Racket. There are many resources that will help you get an in-depth understanding (we’ll point you to some at the end), but for a working intuition, we are going to go through a quick exercise where we’ll see how Racket’s notions of “modules,” “collections,” and “packages” are analogous to the familiar OS notions of files, folders, and ZIP archives. Follow along with the example below to see how these exist in almost exact analogy. Otherwise, if you’re already convinced, skip ahead to Review and Key Aspects.
Imagine there are two friends named Alice and Bob who collaborate on Racket projects. Alice has written some code that she’d like to share with Bob. This includes (1) a data structure called “fancyqueue,” and (2) some useful functions in a folder called “foo”. She has these organized as follows on her machine:
/foo
    /a.rkt
    /b.rkt
/data/fancyqueue
    /a.rkt
    /b.rkt
She’d like to send these over to Bob in such a way that the file and directory structure are preserved. How should she do it?
One common and convenient way is to send over a ZIP archive. Let’s do this.
Note: the Shell instructions below are UNIX-oriented — if you are using Windows you can still follow along but the commands probably wouldn’t be the same — please comment if you run into any difficulties, or if you have Windows-specific commands to share!
Setup: Create Alice and Bob’s home directories
Enter a command line shell and go to a sandbox folder where you can try things out. Then:
$ mkdir learn-packages
$ cd learn-packages
$ mkdir alice bob
Setup: Create Alice’s folder structure
$ cd alice
$ mkdir -p foo data/fancyqueue
$ touch foo/a.rkt foo/b.rkt data/fancyqueue/a.rkt data/fancyqueue/b.rkt
Now, your alice folder has the structure we previewed above, as the following command shows.
$ find .
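Assuming the commands above ran as shown, the output should look something like this (the exact ordering may vary):

.
./foo
./foo/a.rkt
./foo/b.rkt
./data
./data/fancyqueue
./data/fancyqueue/a.rkt
./data/fancyqueue/b.rkt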
Let’s create the ZIP archive. While still in the alice folder:
$ zip -r alice2bob.zip .
And “send this to Bob”:
$ cp alice2bob.zip ../bob
Now pretend you’re Bob, and extract the archive:
$ cd ../bob $ unzip alice2bob.zip
And look:
$ find .
Bob now has Alice’s code, exactly as it was on her machine.
Nothing new, of course. But now let’s try a small variation. Let’s say that Alice hasn’t sent Bob anything yet, and that it is instead Bob who wants to send something over to Alice. Bob wants to send over a “coolgraph” data structure that he wrote, along with another “foo” folder containing useful functions. That is, Bob’s folder structure looks like:
/foo
    /c.rkt
    /d.rkt
/data/coolgraph
    /a.rkt
    /b.rkt
Note that it is fairly similar to Alice’s folder structure, unbeknownst to her.
Let’s set up this scenario. While in the learn-packages folder:
Restore the initial state of Bob’s home directory:
$ rm -rf bob/*
$ cd bob
Setup: Create Bob’s Folder Structure
$ mkdir -p foo data/coolgraph
$ touch foo/c.rkt foo/d.rkt data/coolgraph/a.rkt data/coolgraph/b.rkt
Take a look:
$ find .
We note that Bob has a folder named foo just like Alice does, but it contains files with different names. What do you think will happen when Bob sends it to Alice? Let’s see.
$ zip -r bob2alice.zip .
$ cp bob2alice.zip ../alice/
$ cd ../alice
Assuming that Alice’s files are the same as they were earlier, before unzipping the archive, Alice sees:
$ find .
And then:
$ unzip bob2alice.zip
$ find .
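If everything went as above, the combined listing should look roughly like this (omitting the ZIP archives themselves; ordering may vary):

.
./foo
./foo/a.rkt
./foo/b.rkt
./foo/c.rkt
./foo/d.rkt
./data
./data/fancyqueue
./data/fancyqueue/a.rkt
./data/fancyqueue/b.rkt
./data/coolgraph
./data/coolgraph/a.rkt
./data/coolgraph/b.rkt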
We see that Bob’s files were placed at the correct paths, while preserving the files that Alice already had there. If there did happen to be conflicts — existing files with the same names at the same paths — unzip would warn you and ask for confirmation before overwriting them.
The Global, Shared, Collection Namespace
With the above example in mind, to a first approximation, Racket “modules” correspond to files, Racket “collections” are analogous to folders in the filesystem, and “packages” are analogous to Zip archives. Note that these “collections” have nothing to do with data structures like lists and hashes which are also referred to as “collections.”
The way in which you share your code with others is via packages, analogous to Zip archives. Packages, once registered on the Package Catalog, place modules into collections. Unlike a Zip archive where the archive is a file containing the files of interest, a package is simply a URL where these files may be found. This URL is typically a Git repo hosted on GitHub, GitLab, or Bitbucket.
The important thing to know about collections is that the “filesystem” that packages place modules into is a global, shared namespace — like a hard drive that the whole Racket community shares. If you provide a file a/b.rkt in one of your packages — that is, a file b.rkt in a folder a — then it is now the b module in the a collection. Everyone in the community would be able to access it via (require a/b), and no one else can provide a file at the same path[1].
[1] – Similarly for nested files, as you would expect, a/b/c.rkt in a package would be available as (require a/b/c). Here, a is the collection, while b is a sub-collection of a.
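As a minimal sketch, assuming the a and b placeholders from above and a hypothetical greet function, the file a/b.rkt in your package might look like this:

#lang racket/base
;; a/b.rkt: provided by your package, so it becomes the module b in the global collection a
(provide greet)
(define (greet name)
  (string-append "Hello, " name))

Once the package is installed, anyone, on any machine, could then use it like so:

#lang racket/base
(require a/b)    ; the same global collection path on every machine
(greet "Alice")  ; => "Hello, Alice"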
Review and Key Aspects to Remember
When you want to share your code with others in the Racket community, you put it in a publicly hosted source repo. This repo consists of a filesystem hierarchy in which you have your source files. In order to make your code available within the global collection namespace, you register a package on the Racket Catalog by providing a URL to your repo, and this package will map the filesystem hierarchy of your repo to the global, shared, collection namespace. That is, the package will provide modules (every Racket file is implicitly considered a module — but a file could also contain multiple modules) that will be placed in collections which are globally shared.
Worth remembering: Every source module in your package gets mapped to this global collection namespace, with no exceptions. This is in contrast to conventions in other programming languages such as Python, where a “manifest” file in the repo can specify which files in the repo are included or excluded.
Finally, since a package can include files under more than one base path in separate folders (just like our ZIP example above, where we had files in multiple top-level folders), any given package can provide modules in any number of collections. By the same token, any collection in the global namespace may consist of modules that come from any number of packages — packages that could be independently authored by any number of people.
These last two points form the basis for the lib/test/doc best practice employed in the community, which we’ll now go over. But first …
One Last Thing to Know About info.rkt Files
When you share a ZIP file with someone, the contents are simply extracted with reference to the containing folder as the base path. When you provide a Racket package, though, the files are going to be mapped to the global collection namespace and not some arbitrary, unknown path. The way in which the file tree in your repo is mapped to this global collection namespace is controlled using info.rkt files — specifically, you indicate the base path to use in the global namespace via the collection directive in an info.rkt file at the top level of the URL you registered the package with. If you use the special declaration 'multi, as in (define collection 'multi), then your package is treated as containing a folder structure that will be mapped verbatim to the root path in the collection namespace. So if your package contains the paths data/hashtree/tree.rkt and awesome/file.rkt, then these will be available via (require data/hashtree/tree) and (require awesome/file), respectively. On the other hand, if you specify a collection name here as a string, e.g. (define collection "hash-util"), then the contents of your package will be made available as (require hash-util/data/hashtree/tree) and (require hash-util/awesome/file), respectively.
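As a concrete sketch of the two variants (the paths and the "hash-util" name are just the placeholders from above), the info.rkt at the package root could be either of these:

#lang info
;; Map the package's folder tree verbatim onto the collection namespace:
;; data/hashtree/tree.rkt -> (require data/hashtree/tree)
(define collection 'multi)

or, to prefix everything with a named collection:

#lang info
;; Everything lands under the "hash-util" collection:
;; data/hashtree/tree.rkt -> (require hash-util/data/hashtree/tree)
(define collection "hash-util")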
And now we’re ready.
What is the composable approach?
The composable or “lib/test/doc” approach involves making your library available via a combination of (usually) three standard packages, each of which provides modules in relevant collections. Assuming your library is called “foo,” the three standard packages are foo-lib, foo-test, and foo-doc — containing your library’s core functionality, tests, and documentation, respectively. While this breakdown is the most common one, the main idea is that the library is decomposed into component packages in a way that is natural for the specific library. All of these packages can be contained in the same source repo, however, so from a logistical standpoint it isn’t too different from having a single package. There are many examples of Racket libraries that do this, a few of which are linked in the Resources for reference.
Why would I do this?
- It keeps dependencies focused and lean.
Applications and other libraries depending on your library can rely specifically on the runtime functionality (and not e.g. tests and docs). This keeps running Racket programs (e.g. packaged binaries) lean.
- It minimizes and contextualizes points of failure.
For instance, if there is a problem with an upstream package that your docs rely on, it will break your docs but not the functionality provided by your library. Indeed, while there could be many ways to compose your library from independent packages, the choice of separating out tests and docs arises from the fact that almost every library can be decomposed into these components, and the dependencies for these particular components tend to have little overlap, making this a convenient breakdown.
Case study: Recently a widely used package, memoize, had a build failure. This had a ripple effect causing all dependent packages and their dependent packages to fail. As I had recently transitioned one of my libraries, Qi, to using the lib/test/doc approach, only the docs became unavailable for a few days while the upstream problem was addressed. The core library was unaffected. Had Qi been in a single package as it formerly was, it would have been completely unavailable during this time.
- It improves the efficiency of the Racket ecosystem
Every time someone installs a package that depends on your library, only the part that is actually needed gets retrieved from the network and built. This may save seconds or minutes and megabytes of downloads on each installation — savings in CPU cycles and bandwidth that add up, and that are perhaps worth pursuing if only to minimize our environmental footprint.
- You gain the flexibility to mix and match functionality
… allowing you to provide tailored packages for different use cases. For instance, for the common case of a user installing your library and using it directly (as opposed to indirectly via a dependency), you could offer a foo package that depends on foo-lib, foo-test, and foo-doc — that is, an aggregate package equivalent to the former “single package” way of doing it. It’s even better, in fact, since this aggregate package is no longer the primary way in which other packages rely on your library. Its responsibilities are now specific to the direct installation case and it can be optimized for this use case. You even have the flexibility to depend on third party packages, such as a foo-utils package authored by someone else, that offer conveniences for these “direct” users of the library. That is, foo can focus on what is useful, not just what is necessary.
Case study: The Qi library now consists of 6 packages — three are the standard lib/test/doc, one is a dedicated debugger for the language, one is a collection of scripts for use in an editor (DrRacket), and finally, the aggregate package qi includes all of these so that direct users of Qi can simply install one package to get everything they might need — including packages authored by third parties. Meanwhile, libraries relying on Qi functionality need only depend on qi-lib.
Why wouldn’t I do this?
- It’s more complicated.
- It’s extra work to set up.
- Every library you write could have three corresponding packages on the Catalog, and that may feel gratuitous.
I’ll be honest. I wasn’t a fan of the composable approach when I first learned of it, for these reasons. I have now come around to the view that the benefits provided by the decomposition far outweigh these superficial drawbacks, especially for complex libraries. I am curious to hear other perspectives on the subject, though, as I am writing all of this as a complete outsider to the thinking that went into instituting these conventions — if you are a holdout against, or an advocate of, the composable approach for reasons I haven’t covered, or if you believe I am mistaken somewhere, please share your thoughts in the comments below.
Update: While this post was still a draft, an active (independent) discussion on the topic arose on the Racket Discourse. Many Racketeers weighed in on the topic, and the broad sentiment seemed to be that this approach is overly complicated, while providing known benefits. Clearly, there is room for improvement here, and the discourse brought out some good ideas and thoughts that are worth reading to get a fuller picture of the perspectives on this complex topic, and maybe you can even contribute your own view there towards any necessary reforms.
When should I use this?
If your library happens to be small and simple, then it may be best to just structure it as a single package. While the conceptual benefits of the composable approach apply equally to simple libraries and complex libraries, the practical benefits are more significant for complex libraries due to the larger number of dependencies and the need to incorporate third party add-ons. You can also always transition from the former to the latter (and see the section below for a transition strategy) so it may be best to start simple for your first library to learn the ropes of getting a package out there at all, and then consider migrating to this structure at a future point. This is roughly also the recommendation in the official Racket docs. If you are opting to “start simple,” then head straight on over to the Resources for links that will help you with that. Word to the wise, though:
“The Top Level is Hopeless”
Whatever organizing strategy you choose for your library, you probably do not want to use the top level of your Git repo as a package URL, because every file in your repo would then be part of your Racket package, and you rarely want this to be the case. When I first learned that this is how it worked, I felt that this was inconvenient. But the beauty of this approach is that it delegates the inclusion or exclusion of files to an existing system designed for this — a filesystem! Thus, Racket solves through delegation to standard platforms what other ecosystems achieve through configuration (such as Python’s MANIFEST.in).
So instead, just use a subfolder in the repo to contain the actual package, and indicate that path when you register the package with the Package Catalog. This gives you the flexibility to store development scripts and other useful things in the repo that don’t need to be part of the package, without having to do any special configuration.
Just remember, as far as packages and your repo are concerned, “the top level is hopeless.”*
* – This alludes to, but has nothing to do with, the oft-repeated maxim in the Racket community regarding (I think) the REPL.
Okay, how do I adopt this composable approach?
Alright, now we’re gettin’ to the good stuff. This stuff took me literally days — spread out over weeks — to figure out and get right. But have no fear, we will get you sorted out in short order. And if you still have questions after going through all this, then comment below and we’ll figure it out together. Now, let’s do this.
Create the Component Packages
First of all, create the following folder structure:
foo-lib/
foo-test/
foo-doc/
foo/
Each of these is going to be a package. With that in mind, within these folders, structure your code within folders as you intend them to appear in the global collection namespace. Here is an example:
foo-lib/
    info.rkt
    bar/
        a.rkt
        b.rkt
        main.rkt
    data/
        pinetree.rkt
        pinetree/
            util.rkt
In this example, you would eventually be able to (require bar) and (require data/pinetree) and these would do what you expect. The info.rkt file should contain (define collection 'multi) here, to indicate that you want these paths to be placed at the root level in the collection namespace.
Note: main.rkt is only consulted for top level collections, not nested ones like data/pinetree. For such nested collections you will need a file resembling pinetree.rkt in the parent collection to serve the same role.
As an illustration, if you instead wanted (require foo/bar) and (require foo/data/pinetree) here, then use (define collection "foo"). In this case, note again that bar/main.rkt would not be consulted since bar is no longer a top level collection.
Your info.rkt file should also of course contain the package dependencies and other configuration — if you haven’t written an info.rkt file before, see the Resources section for helpful links.
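Putting this together, the foo-lib/info.rkt for the example layout above could look something like this; the version, description, and dependency list are placeholders to adapt to your library:

#lang info
(define collection 'multi)   ; map bar/ and data/ verbatim into the collection namespace
(define version "0.1")       ; placeholder
(define deps '("base"))      ; plus whatever your library actually depends on
(define pkg-desc "Core functionality for the foo library")  ; placeholder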
foo-test/
    info.rkt
    tests/
        foo.rkt
With (define collection 'multi) in this info.rkt file, the module “foo” is placed in the global collection “tests” — that’s right, your tests would be in Racket’s global collection namespace and anyone could find them at (require tests/foo). Kind of weird, but at least consistent and simple. It’s one convention followed in the Racket community — the other, and the one that I find myself favoring, is to use (define collection "foo") here, so that the tests are placed at (require foo/tests/foo).
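For reference, here is a minimal sketch of what the test package could contain; the dependency list is a typical one for test packages, and the greet function being tested is hypothetical:

;; foo-test/info.rkt
#lang info
(define collection "foo")   ; tests land at foo/tests/...
(define deps '("base" "foo-lib" "rackunit-lib"))

;; foo-test/tests/foo.rkt
#lang racket/base
(require rackunit
         bar)  ; a module provided by foo-lib (placeholder name)
(check-equal? (greet "world") "Hello, world")  ; placeholder test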
foo-doc/
    info.rkt
    scribblings/
        foo.scrbl
And with (define collection 'multi), that’s right, you guessed it, this puts a module named “foo” in the (global) collection named “scribblings”. Except, since it’s a Scribble module, it is treated in a special way and it doesn’t look like you can just import it as (require scribblings/foo) — not that you’d want to, but just in principle. Again, I prefer (define collection "foo") here to avoid any possibility of conflict in the global scribblings collection.
To ensure that your modules are treated as docs and not (just) as source files, you will need: (define scribblings '(("scribblings/foo.scrbl" ()))).
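So a foo-doc/info.rkt might look roughly like this; the build-deps shown are typical for Scribble docs, so adjust them to whatever your docs actually use:

#lang info
(define collection "foo")
(define deps '("base"))
(define build-deps '("scribble-lib" "racket-doc" "foo-lib"))
(define scribblings '(("scribblings/foo.scrbl" ())))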
By the way, for the longest time I was confused about the meaning of things like (tech #:doc '(lib "scribblings/reference/reference.scrbl") "blame object") while writing Scribble docs. What exactly is this “scribblings/reference/reference.scrbl” path? Well, it should make sense now — this points to the reference.scrbl module in the (global) scribblings/reference collection. If you happen to use (define collection "foo") in the info.rkt file for your package docs, other docs would refer to yours via foo/scribblings/foo.scrbl.
Documentation is handled in a special way, but from the perspective of modules, collections, and packages, Scribble modules are organized in the same way as Racket modules or those of any other language.
Now, you may be wondering: since foo-doc is a different package from foo-lib, how does the package catalog know that it contains the docs for foo-lib? Well, it turns out that it doesn’t! So when all is said and done, you will in fact see warnings on the foo-lib package to the effect that it “needs documentation.” This isn’t ideal, of course, but for what it’s worth, it’s also where the composite package comes in.
Create the Composite Package
And finally, the aggregate foo package:
foo/
    info.rkt
The contents of the info.rkt file here should be something like:
#lang info
(define collection 'multi)
(define deps '("base" "foo-lib" "foo-doc" "foo-test"))
(define implies '("foo-lib" "foo-doc" "foo-test"))
This essentially just configures this foo package as a composite of the other three so that, for instance, raco pkg install foo would install all of them. Note that the use of (define collection 'multi) here is arbitrary — if the package truly is a composite and does not contain any additional files in the folder besides info.rkt, then which collection you use makes no difference, since there are no modules that could be placed in collections. Technically, this option can even be left out*.
* – Leaving it out means that the collection name is implicitly equated with the package name, i.e. the name you use when you register the package with the Package Catalog. This generally shouldn’t cause problems, but it leaves open a loose end: a module added to the composite package in the future would have an implicit association with the package name, so that changing the package name at that stage would change the collection of this module as well. A remote possibility, to be sure, but it may be better if there were a way to explicitly indicate that a package is a composite so that extra modules could not accidentally be added to it. Simon Schlee in a Discourse post proposed (define collection 'none) to achieve this. Until such a time, I slightly prefer being explicit about the collection (even if it is arbitrary and disregarded) to leaving this option out altogether.
Test Everything Locally
Now that you have your basic package layout, install these packages locally, run the tests, and build the docs the way you usually do for packages, and confirm that they work as expected. Personally, I can never remember all the different flags to pass to raco to do all of these things, so I use a Makefile, which is just a way to manage your project using short aliases for commonly used commands. The Makefile I use in my projects is derived from a template by Greg Hendershott. You can also look at the Makefile for the Qi library for an example of one that uses the lib/test/doc structure.
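If you'd rather run the raco commands directly, something along these lines should work from the repo root; the package names are the ones from this example, and you may need to adjust the flags to your setup:

$ raco pkg install --auto --link foo-lib/ foo-test/ foo-doc/ foo/
$ raco test --package foo-test
$ raco setup --pkgs foo-lib foo-doc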
Register the Packages
Once everything looks good locally, push the changes to your source control host of choice (e.g. GitHub or GitLab) and then register the packages — all four of them, independently — on the Package Catalog, in the usual way (if you’ve never done this before, this section of Beautiful Racket walks you through it). Be sure to point each package URL to the respective subfolder in your repo. That should be all you need, but if you’d like a step-by-step guide for this part, you could follow along in the next section, which covers some of the same ground but from a different starting point. If you are going to do this, just remember that if your library is new, you don’t need to worry about downtime and therefore don’t need to worry about creating a fresh branch on your repo — just do everything on the main / master branch.
How do I migrate my existing library?
In order to migrate your existing library to the lib/test/doc structure while minimizing risk of build failures that may affect your users and the availability of your library, I recommend the following approach.
- First, create a separate lib-test-doc-restructure branch in your source repo for the migration
- Restructure the library into three packages in this branch, and also the fourth aggregate foo package, so that there should be at least 4 folders at the top level in your repo
- Register the three sub-packages on the package index — point them all to the appropriate paths in the lib-test-doc-restructure branch of your repo (not the main/master branch!)
- Update the URL of your existing foo package to also point to the lib-test-doc-restructure branch of your repo, to the foo folder.
- Wait ten minutes or so and verify that the package index pages show the correct files in the manifest for each of the four packages (If the manifests still don’t reflect the modules after waiting for a little while, try “rescan all of my packages” from your account menu (at the time of this writing, found in the dropdown in the top navbar). Also, note that the manifest for the aggregate package will typically be empty).
- Wait up to 24 hours to see whether the packages and docs all build correctly (the package index refresh cycle is currently 24 hours)
- If they don’t build correctly, restore the URL of the foo package to the main/master branch. You may also need to delete the other packages to avoid conflicts, since, for instance, the foo-lib package now provides some of the same modules as your restored foo package, and as we learned, these conflict in the shared global collection namespace
- Fix the errors you encountered and try steps 3-6 again until everything looks good.
- Once everything looks good, merge the lib-test-doc-restructure branch into your main branch.
- Update the URLs on all four packages to use the main branch.
- One thing to keep in mind is that the package server may not rebuild your package if the checksum (i.e. the commit) hasn’t changed. So if for instance you encounter a failed build on the package index and realize that you hadn’t set the paths correctly, then fixing the path in the package metadata alone would not trigger a rebuild! In these cases you may need to push a fresh commit in order for the package to get rebuilt.
- Wait and ensure everything looks good.
It’s obviously not ideal that we need to “debug in production” like this with such long feedback cycles, but I’m not aware of a better option at the moment. Sounds like there are big plans to improve the package index infrastructure, though, so there may be better options available in the not-too-distant future.
What else do I need to know?
In general I recommend setting up automated testing for your project with your CI service of choice, for obvious reasons, but also because, with the Racket package server’s 24 hour build cycle, you may otherwise need to wait a whole day to find out if a commit broke your build.
If you happen to be using GitHub, I wrote another post that can help you get set up here. Note: although it is titled “migrating your project from Travis,” it is really more about mimicking a longstanding and popular Travis workflow (in the Racket community) in the newer GitHub Actions service.
If you are using SourceHut, Stefan Schwarzer has instructions for setting up CI on this platform.
Frequently Asked Questions
What’s the difference between single and multi-collection packages?
Packages and collections are orthogonal concepts — a single package can provide modules in many collections and a single collection can be made up of modules provided by different packages. This aspect is a source of confusion for Racketeers for at least this reason: when you say (define collection "my-collection"), i.e. using a string name for the collection, you are always creating a single collection package. But another way to write a single collection package is to use (define collection 'multi) and just put modules in a my-collection subfolder. As we saw earlier, 'multi is treated as the “root” path in the collection namespace. Simply using 'multi does not necessarily mean, therefore, that the package contains modules in multiple collections — it just means that it can.
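As a sketch, these two layouts should make the same (require my-collection/util) work; the names here are placeholders:

;; Option 1: single collection package via a string collection name
;; my-pkg/info.rkt
#lang info
(define collection "my-collection")
;; my-pkg/util.rkt  ->  (require my-collection/util)

;; Option 2: the same thing via 'multi and a subfolder
;; my-pkg/info.rkt
#lang info
(define collection 'multi)
;; my-pkg/my-collection/util.rkt  ->  (require my-collection/util)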
If you still have questions
Please comment below, or ask on the Racket Discourse — in either case, if you feel there is something missing here, please consider commenting so that I can update the post if necessary for the benefit of the next person.
Alternative Approaches
Is lib/test/doc More Trouble Than It’s Worth?
This Discourse topic raised by Simon Schlee discusses this subject, and asks whether there is a missing abstraction connecting the lib/test/doc packages.
Are Collections Fundamentally Flawed?
While Racket’s overall approach to package management has a certain simplicity to it, it isn’t the only way it could be done. There are some who say, well, why should there be a shared collection namespace at all? If you write a library that provides some useful functionality and someone else wants to use it, couldn’t they just download it to wherever they want it and then use it at that path? Why is it necessary that they only use it at a globally unique path, especially considering that their local machine has no need to remain globally consistent with all libraries in the ecosystem? After all, if someone were to send you a ZIP file containing code they wrote, you could extract it into whatever folder you wanted it in and the code would be fully usable at that path. Just what do we gain by imposing the additional constraint of the association of libraries with a global namespace path?
Now certainly, having a global shared namespace is in line with conventions in other languages: in Python, for instance, a declaration like import datetime means the same thing on every machine running Python, since datetime is in, essentially, a global, shared namespace. The main aspect that adopting such a shared namespace lends to the library ecosystem is that library imports will necessarily mean the same thing on every machine and that there can be broad agreement on names given to various libraries. Perhaps the question to ask ourselves here is not so much whether this is a desirable quality or not, but rather, whether it is necessary to couple this notion of addressing with the notion of functional dependency management — or whether it would be better in fact to treat these as two entirely independent things, which could be addressed in independent ways. Sage Gerard’s Denxi appears to be a proposed solution to the latter here — functional dependency management without reference to addressing. As this approach is more focused, it should enable greater flexibility and variety in the kinds of solutions we find here. At the same time, decoupling these two notions — of dependencies and shared names — shouldn’t necessarily mean that they are incompatible, and I do think that an approach like Denxi’s could be compatible with a certain idea of shared namespaces, which are less rigid and more “open” than the current way. That is all perhaps far outside the scope of the present post, however. Look for more about this in a future post.
Resources
These resources are organized into Examples for you to study, References for additional context and information, and Infrastructure containing code, tools and recipes you can use in your projects.
Examples
The threefold lib/test/doc is a widely applicable breakdown, but the most natural make-up varies from library to library. Here are some examples of projects using variations of lib/test/doc that you could study to find what’s right for yours:
- pict
- marionette
- megaparsack — this library is provided via 5 packages (not counting the composite), as of this writing.
- gui-easy — provided via 2 packages.
- sketching — provided via 3 packages but not quite the usual ones.
References
Package Management in Racket — The official docs. In particular, Package Metadata and Controlling raco setup with info.rkt files document info.rkt flags and how they are used in raco setup to build your package.
Tutorial: Creating a Package — A tutorial on creating a single collection package. This is what I followed when I put up my first Racket library. If you choose to “start simple” then this is a great post — just keep in mind the caveat above re: the top level being hopeless.
Beautiful Racket: The Info.rkt File — A great resource to gain more insight into packages and collections, and configuring packages using info.rkt files.
Beautiful Racket: The Package Server — Learn more about registering your package on the package server, and how the package server works.
Single Package vs Multiple (-lib -doc -test) & Convenience — A Discourse topic that coincidentally arose while I was readying this post, showcasing many other perspectives on the subject. A worthwhile read for anyone interested in reforms to the package management process.
Creating Packages — A Racket Wiki entry containing resources for writing and organizing packages.
Denxi White Paper — For a quite different approach to dependency management.
Infrastructure
Racket Makefiles — On using Makefiles to manage your development workflow.
Migrating Your Racket Project from Travis to GitHub Actions — My other post that covers setting up automated testing and coverage reporting for your project on the GitHub platform.
Continuous Integration (CI) Example for SourceHut — by Stefan Schwarzer, covers setting up CI infrastructure on the SourceHut platform.