Who would have thought an LLM could very well be run inside a TTF font file…
I was a bit confused too when I discovered this https://fuglede.github.io/llama.ttf/, an LLM and an inference engine for it, yet genuinely a TTF file.
Well… I tried loading the font in Firefox on a sample HTML input element and typed some text, but got no result, and upon searching for docs, I realized there were tweaks to be made to some shared libraries to make this thing work.
So about that..
libharfbuzz is a text-shaping engine used in Android, Chrome, ChromeOS, Firefox, GNOME, GTK+, KDE, Qt, LibreOffice, OpenJDK, XeTeX, PlayStation, Microsoft Edge, Adobe Photoshop, Illustrator, InDesign, Godot Engine, and other places.
The latest version of this library ships with a WebAssembly shaper: we can write our own text-shaping engine and embed it into a font file, and use this text-shaping feature through their wasm shaper interface. You can read more about the available API functions and documentation here
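To get a feel for what a wasm shaper looks like, here is roughly the "hello world" shown in the wasm-shaper material, written against the `harfbuzz-wasm` Rust crate. Treat this as a sketch from memory rather than a verified build (the crate name, `Font::from_ref`, and `GlyphBuffer::from_ref` are my recollection of that API): HarfBuzz calls the module's exported `shape` function, which maps each codepoint in the buffer to a glyph and assigns it an advance width.

```rust
use harfbuzz_wasm::{Font, GlyphBuffer};
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn shape(
    _shape_plan: u32,
    font_ref: u32,
    buf_ref: u32,
    _features: u32,
    _num_features: u32,
) -> i32 {
    let font = Font::from_ref(font_ref);
    let mut buffer = GlyphBuffer::from_ref(buf_ref);
    for item in buffer.glyphs.iter_mut() {
        // Map the Unicode codepoint to a glyph ID and give it a horizontal advance.
        item.codepoint = font.get_glyph(item.codepoint, 0);
        item.x_advance = font.get_glyph_h_advance(item.codepoint);
    }
    1 // success; the modified buffer is handed back to HarfBuzz
}
```

This gets compiled to a `.wasm` blob and embedded in the font, which is exactly the trick llama.ttf plays, except its `shape` function also runs an LLM.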
To make the libharfbuzz library work with the WebAssembly feature, I needed wasm-micro-runtime built from source.
I had the libharfbuzz library preinstalled on my Arch Linux system (I use Arch, BTW), but libharfbuzz by default comes with the WebAssembly feature disabled, obviously for security reasons, so I had to manually build libharfbuzz with the wasm feature enabled, along with the wasm-micro-runtime library. This was the hardest part, not gonna lie.
Additional dependencies you're going to need for building:
meson, pkg-config, ragel, gcc, freetype2, glib2, glib2-devel, cairo
Install these from your distro's package manager.
I cloned the [wasm-micro-runtime](https://github.com/bytecodealliance/wasm-micro-runtime/) repo, and HarfBuzz's wasm_shaper docs had build instructions for the wasm feature in the engine.
For building wasm-micro-runtime:
$ cmake -B build -DWAMR_BUILD_REF_TYPES=1 -DWAMR_BUILD_FAST_JIT=1
$ cmake --build build --parallel
If you want to install the libiwasm shared library globally, you'll have to run this too:
$ sudo cmake --build build --target install
Make sure to clone the latest version of HarfBuzz for an error-free build.
For building the library, run:
$ meson setup build -Dwasm=enabled
Make sure your output shows the WebAssembly option enabled.
Example output:
…
Additional shapers
Graphite2 : NO
WebAssembly (experimental): YES
…
Now run this command to build it:
$ meson compile -C build
Note: if you encounter an error where meson is unable to find the shared object libiwasm, you have to manually copy the libiwasm.so file built from source into the src directory inside HarfBuzz's build directory (build/src).
You should now have HarfBuzz built with wasm enabled inside harfbuzz/build/src/ (assuming build was the directory used).
There will be a lot of files inside that directory; the only important one is the `libharfbuzz.so.0.XXXXX.0` file. That is the shared library we need to run wasm inside the ttf file.
Now, to preload the libharfbuzz and libiwasm libraries (installing a wasm-enabled libharfbuzz system-wide is a bad idea from a security perspective), we will use the LD_PRELOAD environment variable.
$ export LD_PRELOAD=/path/to/libharfbuzz.so.0.XXXXX.0:/path/to/libiwasm.so
Now we can copy the llama.ttf file into the .local/share/fonts/ directory in our home directory and run the `$ fc-cache` command to be able to use the font.
Since llama.ttf uses Open Sans as the base font, you'll have to search for Open Sans in the font selection menu to see llama.ttf and use the font.
Any app that uses the libharfbuzz library can be used for this; I used Gedit.
With all that set up, I tried typing text into Gedit, but there was no text generation happening, except that the word 'Open' was being magically replaced with 'LLaMa' every time I typed it.
To understand what the LLM inference engine was doing, I scanned through the source code and found this Rust code:
let res_str = if str_buf.starts_with(
    "Once upon a time!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!",
) {
    let count = str_buf.chars().filter(|c| *c == '!').count() as usize;
    let s = format!("{}", next_n_words(&str_buf, count + 5 - 70));
    debug(&s);
    s
} else if str_buf.starts_with("Abracadabra") || str_buf.starts_with("Once upon") {
    format!("{}", str_buf).replace("ö", "ø")
} else {
    format!("{}", str_buf)
        .replace("Open", "LLaMa")
        .replace("ö", "ø")
        .replace("o", "ø")
};
This was defined inside a function that calls next_n_words(), an interface function for generating text using the embedded LLM, whenever the text starts with "Once upon a time!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" and for every "!" character pressed.
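To see the dispatch logic in isolation, here is a standalone sketch of it, my own simplification rather than the font's actual code: `next_n_words` is stubbed with placeholder words (in the real font it runs the embedded LLM inside the wasm shaper), and I assume the trigger string contains 70 '!' characters, which is what makes the `count + 5 - 70` arithmetic yield 5 words for the bare trigger and one extra word per additional '!'.

```rust
// Stub: the real next_n_words asks the embedded LLM for n more words.
fn next_n_words(_context: &str, n: usize) -> String {
    vec!["word"; n].join(" ")
}

// Assumption: the real trigger has 70 '!'s, matching the `- 70` in the code.
fn shape_text(str_buf: &str) -> String {
    let trigger = format!("Once upon a time{}", "!".repeat(70));
    if str_buf.starts_with(&trigger) {
        // Every '!' in the buffer counts toward the number of generated words.
        let count = str_buf.chars().filter(|c| *c == '!').count();
        next_n_words(str_buf, count + 5 - 70)
    } else if str_buf.starts_with("Abracadabra") || str_buf.starts_with("Once upon") {
        str_buf.replace('ö', "ø")
    } else {
        // The behaviour seen in Gedit before triggering generation:
        str_buf
            .replace("Open", "LLaMa")
            .replace('ö', "ø")
            .replace('o', "ø")
    }
}

fn main() {
    let trigger = format!("Once upon a time{}", "!".repeat(70));
    // Bare trigger: 70 + 5 - 70 = 5 generated words.
    assert_eq!(shape_text(&trigger).split_whitespace().count(), 5);
    // Two extra '!'s: 7 generated words.
    assert_eq!(shape_text(&format!("{}!!", trigger)).split_whitespace().count(), 7);
    // Anywhere else, "Open" is rewritten, which explains the Gedit behaviour.
    assert_eq!(shape_text("Open Sans"), "LLaMa Sans");
    println!("ok");
}
```

This also explains why nothing happened while typing ordinary text: only the exact trigger prefix reaches the generation branch.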
Here is the complete code
So I typed "Once upon a time!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!" and the font started generating text for me for every "!" character pressed afterwards. 🙂