feat: Use raw inline for inline code highlighting
treeman committed Jan 18, 2025
1 parent a063b84 commit b531057
Showing 21 changed files with 84 additions and 163 deletions.
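In short, the commit swaps the generic `lang` attribute on inline code spans for Djot's raw inline syntax. A schematic before/after (an illustrative snippet, not a line taken from any of the changed files):

```djot
`let x = 2;`{lang=rust}
`let x = 2;`{=rust}
```

Note that HTML snippets are switched to an `hl=html` attribute rather than `{=html}`, presumably because a raw inline marked `=html` would be passed through verbatim into the HTML output instead of being rendered as a highlighted code span.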
2 changes: 1 addition & 1 deletion drafts/rust_and_neovim_communication.dj
@@ -121,7 +121,7 @@ I couldn't find a channel to pass internal messages [like I use in Rust](#Channe

I figured I'll solve this by storing all replies (identified by a message id) and the `call` function can just wait until a message with a matching id has been received.

-The `call` function that allows the `"ListUrls"`{lang=lua} call above to work looks like this:
+The `call` function that allows the `"ListUrls"`{=lua} call above to work looks like this:

```lua
M.call = function(msg, cb)
2 changes: 1 addition & 1 deletion posts/2014-07-05-reinstalling_slackware.dj
@@ -19,7 +19,7 @@ Create
dd if=usbboot.img of=/dev/sdX bs=1M
```

-Be sure `/dev/sdX` is the USB drive; dd will wipe everything! A simple way to check is to `ls /dev`{lang=bash} before and after plugging in the device. Boot from BIOS (F2 or F10).
+Be sure `/dev/sdX` is the USB drive; dd will wipe everything! A simple way to check is to `ls /dev`{=bash} before and after plugging in the device. Boot from BIOS (F2 or F10).


## Make partitions
16 changes: 8 additions & 8 deletions posts/2020-05-03-how_i_wrote_my_book_using_pollen.dj
@@ -29,7 +29,7 @@ imap <C-L> λ
imap <C-E> ◊
```

-Pollen markup uses `◊` extensively so having an easy way to insert it is very important. And in Racket instead of writing lambdas `(lambda (x) ...)`{lang=racket} you can write `(λ (x) ...)`{lang=racket}. It's not necessary but I thought, why not?
+Pollen markup uses `◊` extensively so having an easy way to insert it is very important. And in Racket instead of writing lambdas `(lambda (x) ...)`{=racket} you can write `(λ (x) ...)`{=racket}. It's not necessary but I thought, why not?

# Configuring Pollen

@@ -92,7 +92,7 @@ You can also specify attributes such as classes, which is quite handy:
◊div[#:class "my-class"]{ ... }
```

-But the real power of the markup language is how easy it is to create your own tags. Just define regular Racket functions and provide them in `pollen.rkt` like so: `(provide (all-from-out "rkt/tags.rkt"))`{lang=racket}
+But the real power of the markup language is how easy it is to create your own tags. Just define regular Racket functions and provide them in `pollen.rkt` like so: `(provide (all-from-out "rkt/tags.rkt"))`{=racket}

Here are some examples of tags I've implemented:

@@ -212,9 +212,9 @@ Now it would be fairly easy (or at least it's possible to style it in such a way

I wanted to be able to customize the placement on the narrower screen as needed, while the floating sidenote should be as close to its reference as possible. I solved it, but it's not very pretty...

-My first thought was to insert two sidenotes, and set `display:none`{lang=css} to hide one of them. But this would break screen readers or simplified readers that remove much of the styling, such as the "reader view" in Firefox. So I opted for a more complex solution of manually modifying the top margin for each sidenote.
+My first thought was to insert two sidenotes, and set `display:none`{=css} to hide one of them. But this would break screen readers or simplified readers that remove much of the styling, such as the "reader view" in Firefox. So I opted for a more complex solution of manually modifying the top margin for each sidenote.

-In practice it means I insert a sidenote using `◊sn{my-ref}`{lang=pollen}, which by default inserts it below the current paragraph. If I want to manually place it somewhere else I use `◊note-pos[#:top -9]{my-ref}`{lang=pollen}. So for example:
+In practice it means I insert a sidenote using `◊sn{my-ref}`{=pollen}, which by default inserts it below the current paragraph. If I want to manually place it somewhere else I use `◊note-pos[#:top -9]{my-ref}`{=pollen}. So for example:

```pollen
First paragraph.◊sn{my-ref}
@@ -224,7 +224,7 @@ Second paragraph.
◊note-pos[#:top -9]{my-ref}
```

-The text for the sidenote is given by `◊ndef["my-ref"]{Sidenote text here}`{lang=pollen}, which can be placed anywhere in the source file.
+The text for the sidenote is given by `◊ndef["my-ref"]{Sidenote text here}`{=pollen}, which can be placed anywhere in the source file.

There's a bunch of sidenote specific styling, but the important parts are given by:

@@ -258,9 +258,9 @@ There's a bunch of sidenote specific styling, but the important parts are given
}
```

-And margins are overridden by `<div class="sidenote" style="margin-top:-9em;">`{lang=html}.
+And margins are overridden by `<div class="sidenote" style="margin-top:-9em;">`{hl=html}.

-The actual implementation of `◊sn`{lang=pollen} and `◊ndef`{lang=pollen} has grown surprisingly large and I went through the old version in the [previous post][sidenotes], so I'll skip it here. The implementation has changed a little but not in any major way. You can always find the [latest code on GitHub][sidenote-code] if you're interested.
+The actual implementation of `◊sn`{=pollen} and `◊ndef`{=pollen} has grown surprisingly large and I went through the old version in the [previous post][sidenotes], so I'll skip it here. The implementation has changed a little but not in any major way. You can always find the [latest code on GitHub][sidenote-code] if you're interested.


# Local markup
@@ -370,7 +370,7 @@ This can be accomplished by writing a bit of lisp code in the tag that splits th
,@(map make-row rows)))
```

-(Yes I know that the "..." row will generate a `<span class="time">...</span>`{lang=html} and a `<span class="txt"></span>`{lang=html} element, but it doesn't affect the appearance.)
+(Yes I know that the "..." row will generate a `<span class="time">...</span>`{hl=html} and a `<span class="txt"></span>`{hl=html} element, but it doesn't affect the appearance.)


# Table of contents
2 changes: 1 addition & 1 deletion posts/2021-06-03-the-t-34-keyboard-layout.dj
@@ -296,7 +296,7 @@ Here they are:

Yupp, I use numbers on home-row (and the low index, which is the next best key apart from the thumbs). They're laid out prioritizing lower digits, slightly de-emphasizing index fingers as they're responsible for two digits. Separating even from odd numbers made sense from an optimization aspect, but it also made it easier to learn.

-What makes this special is that the layer switch is smart, similar to CAPSWORD as the layer turns off on space (which I call NUMWORD). So if I want to write `if x == 3 do`{lang=elixir} then I type `if x == <NUMWORD>3 do` and the layer turns off after the space.
+What makes this special is that the layer switch is smart, similar to CAPSWORD as the layer turns off on space (which I call NUMWORD). So if I want to write `if x == 3 do`{=elixir} then I type `if x == <NUMWORD>3 do` and the layer turns off after the space.

What about `k`, `j` and `G`? Those are for easy navigation with Vim. So `13k` means "13 lines above" and `127G` means "line number 127". Naturally, the layer turns itself off, so it doesn't interfere with my next commands. I use it all the time and it's fantastic.

10 changes: 5 additions & 5 deletions posts/2022-08-29-rewriting_my_blog_in_rust_for_fun_and_profit.dj
@@ -70,7 +70,7 @@ These are the main annoyances I wanted to solve with this rewrite:
>>= deIndexUrls
```

-Even if you don't understand the `$`{lang=haskell} and `>>=`{lang=haskell}, I still think it's clear that we're finding files from the `static/` folder, sending them to `pandocCompiler` (to convert from markdown), to some templates and then de-indexing urls (to avoid links ending with `index.html`).
+Even if you don't understand the `$`{=haskell} and `>>=`{=haskell}, I still think it's clear that we're finding files from the `static/` folder, sending them to `pandocCompiler` (to convert from markdown), to some templates and then de-indexing urls (to avoid links ending with `index.html`).

Simple and clear!

@@ -285,9 +285,9 @@ At first my plan was to do this with the generalized preprocessing step, but the
{ :notice }
```

-That would call a `notice` parser, which in this case would create an `<aside>`{lang=html} tag instead of a `<blockquote>`{lang=html} tag, while preserving the parsed markdown.
+That would call a `notice` parser, which in this case would create an `<aside>`{hl=html} tag instead of a `<blockquote>`{hl=html} tag, while preserving the parsed markdown.

-While there are existing crates that add code highlighting using [syntect], I wrote my own that wraps it in a `<code>`{lang=html} tag and supports inline code highlighting. For example "Inside row: `let x = 2;`{lang=rust}" is produced by:
+While there are existing crates that add code highlighting using [syntect], I wrote my own that wraps it in a `<code>`{hl=html} tag and supports inline code highlighting. For example "Inside row: `let x = 2;`{=rust}" is produced by:

```md
Inside row: `let x = 2;`rust
@@ -297,7 +297,7 @@

I didn't spend that much time on improving the performance, but two things had a significant impact:

-The first thing is that if you use [syntect] and have custom syntaxes, you really should [compress `SyntaxSet`{lang=rust} to a binary format][syntect-compress].
+The first thing is that if you use [syntect] and have custom syntaxes, you really should [compress `SyntaxSet`{=rust} to a binary format][syntect-compress].

The other thing was to parallelize rendering using [rayon][]. Rendering is the markdown parsing, applying templates and creating the output file. Rayon is great as this task is limited by CPU, and it's very easy to use (if the code is structured correctly). This is for instance a simplified view of the rendering:

@@ -317,7 +317,7 @@
}
```

-To parallelize this all we need to do is change `iter()`{lang=rust} into `par_iter()`{lang=rust}:
+To parallelize this all we need to do is change `iter()`{=rust} into `par_iter()`{=rust}:

```rust
use rayon::iter::{IntoParallelRefIterator, ParallelIterator};
6 changes: 3 additions & 3 deletions posts/2023-10-01-rewriting_my_neovim_config_in_lua.dj
@@ -149,7 +149,7 @@ return {

Incredibly nice when you have lots of plugins, and some have large configurations (like [lspconfig][], [treesitter][nvim-treesitter] or [cmp][nvim-cmp]).

-One last big thing is I wanted to have all global keymaps in one single file. [lazy.nvim][] supports adding keymaps in the plugin specification using the `keys = { }`{lang=lua} option. I accomplished this by simply returning a "module" table from `config/keymaps.lua`:
+One last big thing is I wanted to have all global keymaps in one single file. [lazy.nvim][] supports adding keymaps in the plugin specification using the `keys = { }`{=lua} option. I accomplished this by simply returning a "module" table from `config/keymaps.lua`:

```lua
M = {}
@@ -193,7 +193,7 @@ local on_attach = function(client, buffer)
end
```

-One last thing: for the regular mappings you don't want to just remap them in `config/keymaps.lua` because multiple files will run `require("config.keymaps")`{lang=lua}, so I wrapped it in an init function:
+One last thing: for the regular mappings you don't want to just remap them in `config/keymaps.lua` because multiple files will run `require("config.keymaps")`{=lua}, so I wrapped it in an init function:

```lua
M.init = function()
@@ -280,7 +280,7 @@ I think that's very nice, but treesitter is more than that. And a great example
- `if` textobject for inner function. So `cif` would delete the function body and enter insert mode.
- `ax` textobject for outer comment, to easily delete/change comments.

-The beauty is that these work on treesitter nodes, so they work equally well across languages for functions like `fn myfun() { }`{lang=rust}, `function myfun() ... end`{lang=lua} or a `def myfun() do .. end`{lang=elixir}. (Given that the treesitter implementation supports these options. Markdown doesn't have the concept of a function for instance.)
+The beauty is that these work on treesitter nodes, so they work equally well across languages for functions like `fn myfun() { }`{=rust}, `function myfun() ... end`{=lua} or a `def myfun() do .. end`{=elixir}. (Given that the treesitter implementation supports these options. Markdown doesn't have the concept of a function for instance.)

## [Neogit][]: Git management

8 changes: 4 additions & 4 deletions posts/2024-03-19-lets_create_a_tree-sitter_grammar.dj
@@ -261,7 +261,7 @@ bool tree_sitter_sdjot_external_scanner_scan(void *payload, TSLexer *lexer,
To decide if we're going to close the paragraph early, we'll look ahead for any `:::`, and if so we'll close it without consuming any characters.
This might not be the most efficient solution because we'll have to parse the `:::` again, but it gets the job done.

-The matched token should be stored in `lexer->result_symbol`{lang=c}:
+The matched token should be stored in `lexer->result_symbol`{=c}:

```c
static bool parse_close_paragraph(TSLexer *lexer) {
@@ -284,7 +284,7 @@ So `:::` would be marked as `_close_paragraph` (which will be ignored by the out
To prevent this, we turn `_close_paragraph` into a zero-width token by marking the end before advancing the lexer.

How do we advance the lexer?
-We call `lexer->advance`{lang=c}:
+We call `lexer->advance`{=c}:

```c
static uint8_t consume_chars(TSLexer *lexer, char c) {
@@ -886,7 +886,7 @@ emphasis: ($) => prec.left(seq("_", $._inline, "_")),
_text: (_) => /[^\n]/,
```

-When we try to match a `_` then the grammar can match either `emphasis` or `_text` because `_` matches both `"_"`{lang=javascript} and `/[^\n]/`{lang=javascript}.
+When we try to match a `_` then the grammar can match either `emphasis` or `_text` because `_` matches both `"_"`{=javascript} and `/[^\n]/`{=javascript}.
The issue seems to be that Tree-sitter doesn't recognize this as a conflict.

If we instead add a fallback with a `_` string then Tree-sitter will treat it as a conflict:
@@ -1245,7 +1245,7 @@ What the callback does is inject the return value into the `span` element, like
<span CALLBACK_RESULT >highlight</span>
```

-So we'd like to return something like `"class=\"markup italic\""`{lang=rust}, using `attr` which is only a `usize` into `HIGHLIGHT_NAMES`:
+So we'd like to return something like `"class=\"markup italic\""`{=rust}, using `attr` which is only a `usize` into `HIGHLIGHT_NAMES`:

```rust
renderer.render(highlights, code.as_bytes(), &|attr| {
6 changes: 3 additions & 3 deletions posts/2024-05-02-customizing_neovim.dj
@@ -66,7 +66,7 @@ require("blog.commands")

The above files handle the initial registration; the other files are required when they're needed.

-For example, `autocmd.lua` registers a `{ "BufRead", "BufNewFile" }`{lang=lua} autocommand that establishes a connection to the backend, and registers buffer local keymaps:
+For example, `autocmd.lua` registers a `{ "BufRead", "BufNewFile" }`{=lua} autocommand that establishes a connection to the backend, and registers buffer local keymaps:

```lua
-- This way we include other blog related functionality
@@ -257,7 +257,7 @@ Instead of pasting a big chunk of code, let's go through the most important impl

1. I want to use the title from the metadata because I'll often change the title and I want the slug to be updated.

-We can extract the title from the post by shelling out to [ripgrep][] that matches against a `title = "My title"`{lang=toml} line.
+We can extract the title from the post by shelling out to [ripgrep][] that matches against a `title = "My title"`{=toml} line.

[nvim-nio][] provides `process.run` to run a shell command:

@@ -336,7 +336,7 @@ Instead of pasting a big chunk of code, let's go through the most important impl

The error to look out for is `nvim_exec2 must not be called in a lua loop callback`, which I assume means that `nio` uses the lower level lua loop API for its async system.

-What we need to do is yield to the Neovim scheduler before calling `vim.cmd` to rename the file, which can be done with `nio.scheduler()`{lang=lua}:
+What we need to do is yield to the Neovim scheduler before calling `vim.cmd` to rename the file, which can be done with `nio.scheduler()`{=lua}:

```lua
nio.scheduler()
20 changes: 10 additions & 10 deletions posts/2024-05-08-browse_posts_with_telescopenvim.dj
@@ -5,7 +5,7 @@ series = "extending_neovim_for_my_blog"
favorite = true
---

-I've used [telescope.nvim][]\'s find files with `require("telescope.builtin").find_files`{lang=lua} for quite some time.
+I've used [telescope.nvim][]\'s find files with `require("telescope.builtin").find_files`{=lua} for quite some time.
I use find files together with its cousin `.oldfiles` (find recently opened files) all the time for finding source code files, blog posts, and more.

But it's naturally restricted to operating on filenames only, and you can make telescope richer by operating on structured data.
@@ -178,10 +178,10 @@ First, the input arguments to `scoring_function`:
```

What `scoring_function` should do is return a single numeric value signifying how close `ordinal` is to `prompt`, where higher means a better match.
-Because we defined `discard = true`{lang=lua} if we return a value less than `0`{lang=lua}, the entry will get removed (filtered).
+Because we defined `discard = true`{=lua} if we return a value less than `0`{=lua}, the entry will get removed (filtered).

At first I thought this was a weird way of creating a sorting function.
-I had expected a comparison function like `cmp(left, right)`{lang=lua} but after having played with it a little it seems pretty clever.
+I had expected a comparison function like `cmp(left, right)`{=lua} but after having played with it a little it seems pretty clever.

## Sorter requirements

Expand Down Expand Up @@ -294,8 +294,8 @@ scoring_function = function(_, prompt, entry)
end,
```

-Remember that a return value of `< 0`{lang=lua} filters the entry, so we make sure to check that for each part.
-This is already how `fzy_sorter:scoring_function` works and it will either return `< 0`{lang=lua} for entries we should remove or a value in `0..1`{lang=lua} otherwise.
+Remember that a return value of `< 0`{=lua} filters the entry, so we make sure to check that for each part.
+This is already how `fzy_sorter:scoring_function` works and it will either return `< 0`{=lua} for entries we should remove or a value in `0..1`{=lua} otherwise.
Because the matching of tags and series is so similar, I introduced the `score_element` helper function:


@@ -344,19 +344,19 @@ end

With this in place we have a working sorter.

-- If we type regular text, then `series_score` and `tags_score` will be `0`{lang=lua}, practically ignoring them.
-- A `#series_filter` will set `series_score == -1`{lang=lua} unless the post has a matching series, removing the entry.
-- And typing `@tag1 @tag2` will require a match for every tag, otherwise `tags_score == -1`{lang=lua}, which again removes the entry.
+- If we type regular text, then `series_score` and `tags_score` will be `0`{=lua}, practically ignoring them.
+- A `#series_filter` will set `series_score == -1`{=lua} unless the post has a matching series, removing the entry.
+- And typing `@tag1 @tag2` will require a match for every tag, otherwise `tags_score == -1`{=lua}, which again removes the entry.
- Adding the fuzzy scores together gives us a pretty good fuzzy matching behavior I feel.

The only requirement left is to order posts by date.
It must be done carefully not to override the sort order we get from the fuzzy matching.

-One way to do this is to sort all posts and add a post counter to each entity (so the first post would get `1`{lang=lua} and the last `272`{lang=lua}) and then add it to the final score somehow.
+One way to do this is to sort all posts and add a post counter to each entity (so the first post would get `1`{=lua} and the last `272`{=lua}) and then add it to the final score somehow.
This felt a little cumbersome and I wanted to see if I could implement the scoring function without having to sort the entries first.
After all, we already have the date of each post...

-This is possible by placing the post on a timeline between the date of the very first post and today, and then clamping the range to `0..1`{lang=lua}.
+This is possible by placing the post on a timeline between the date of the very first post and today, and then clamping the range to `0..1`{=lua}.
Something like this:

```lua
2 changes: 1 addition & 1 deletion posts/2024-05-26-autocomplete_with_nvim-cmp.dj
@@ -361,7 +361,7 @@ callback(items)

::: note
As the completion logic grew more complex, I moved it out from Neovim to the backend.
-It was easier to set up completion details for individual items that way, and now I only rely on a single `"Complete"`{lang=lua} message that I pipe directly back to `cmp`:
+It was easier to set up completion details for individual items that way, and now I only rely on a single `"Complete"`{=lua} message that I pipe directly back to `cmp`:

```lua
function source:complete(params, callback)