"The overwhelming majority of successful innovations exploit change"
Peter Drucker
What is a run-time?
Deno is fairly new, so it will keep changing quickly, but the run-time itself and how it works are unlikely to change much at all. Learning the fundamentals now gives you a ton of skills to lean on later when troubleshooting or reviewing changes.
More importantly, understanding how Deno works will deepen your understanding of how Node works, so don't dismiss this article just yet. It's jam-packed with knowledge bombs about Node.
It would be hard to imagine you haven't heard the phrase "V8 engine", but it's easy to believe you don't really know much about what it is. Today, that changes.
Up until now in this series, we've only understood that Deno is a run-time that can read JavaScript and TypeScript (compiled to JavaScript at run time) and send it to the V8 engine based on the script you provide. What we don't know is how it all works under the hood.
A look under the hood of the V8 JavaScript Engine (terrible pun)
The Deno run-time is a system also called a runtime environment. Imagine it as a car that has an engine (V8), a transmission (rusty_v8), and a gas tank (the Tokio project). Below we'll break all the parts down into easily consumable questions and answers. I hope you enjoy!
How we interact with JavaScript
JavaScript is a single-threaded language. This means it runs one line at a time and will not move on to the next line until the current one is complete.
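To make that concrete, here's a tiny sketch of what "one line at a time" means in practice:

```ts
// Runs top to bottom on a single thread: each line must finish
// before the next one starts.
console.log("first");

// A long synchronous loop blocks everything else until it completes.
let total = 0;
for (let i = 0; i < 1_000_000; i++) {
  total += i;
}

console.log("second", total); // only prints once the loop is done
```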
When someone says JavaScript is an interpreted language, they mean that something has to translate the JavaScript into instructions the computer hardware can understand as the program runs. That's only partially true in modern engines, because the code also gets compiled. Enter the V8 JavaScript engine.
What is a JavaScript engine?
The first JavaScript engine was created by Brendan Eich while working at Netscape, in a project known as SpiderMonkey. He later co-founded the Mozilla project, and SpiderMonkey is still used by the Mozilla Firefox browser today. Read more on the history of Mozilla.
The V8 Engine
V8 was created by Google in 2008 for use in the open-source Chromium project (it wasn't open-sourced for purely altruistic reasons; it was largely about winning market share in the browser and search engine space).
V8 is written in C++, a lower-level programming language. Inside the engine, a JavaScript file goes through lexical analysis, which breaks the code down into tokens that a parser then consumes.
How does parsing work?
The parsed JavaScript is then turned into an Abstract Syntax Tree (AST), a format you may already recognize from the Chrome browser tools. Head over to https://astexplorer.net/ and paste some JavaScript in to see what the syntax tree looks like.
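To make that concrete, here's a tiny statement, roughly how it gets tokenized, and a simplified version of the tree astexplorer.net would show for it (the real ESTree output has many more fields, like source locations):

```ts
// Source
const answer = 42;

// Tokens (roughly): [const] [answer] [=] [42] [;]

// Simplified AST:
// Program
// └─ VariableDeclaration (kind: "const")
//    └─ VariableDeclarator
//       ├─ Identifier (name: "answer")
//       └─ Literal (value: 42)
```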
From there, an interpreter, profiler, and compiler spit out bytecode or optimized machine code that runs on your device. Alongside this pipeline the engine maintains a call stack and a memory heap (there is a gif explainer on the call stack and memory heap below).
What is an interpreter?
You cannot run JavaScript directly in your terminal because your computer doesn't natively understand that language. For that, we need an interpreter.
When your JavaScript runs, the interpreter reads and translates the file line by line, one line at a time. That becomes a problem when you need to run asynchronous code, which we'll get into momentarily.
Basically, the interpreter takes a set of instructions and runs them step by step, in order, to produce the desired outcome. Along the way, those instructions are translated into bytecode.
What is a compiler?
A compiler doesn't read and execute code on the fly, line by line, like an interpreter. Instead, it makes a full pass through the code and writes a new program in another language. A compiler that front-end developers may already be familiar with is Babel.
It's important to write predictable code, not just for the people reading it but for the compiler, so the engine can optimize what it produces.
The compiler takes JavaScript and converts it into optimized machine code that can run natively.
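Here's a rough illustration of what "predictable" means to the engine (the exact heuristics are V8 internals and vary between versions): keeping object shapes and argument types consistent gives the optimizing compiler an easier job.

```ts
// Predictable: every call sees the same object shape and number types,
// so the engine can specialize the compiled code for this case.
function area(rect: { width: number; height: number }): number {
  return rect.width * rect.height;
}
area({ width: 2, height: 3 });
area({ width: 10, height: 4 });

// Less predictable: mixing shapes and types forces the engine to keep
// slower, more general code paths (or throw away earlier optimizations).
function looseArea(rect: any): number {
  return rect.width * rect.height;
}
looseArea({ width: 2, height: 3 });
looseArea({ height: 4, width: 10, label: "door" }); // different property order/shape
looseArea({ width: "10", height: 4 });              // different types
```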
Compilers vs Interpreters?
In some respects, all languages on the web need some level of interpretation or compilation, so why would you use one over the other?
Interpreters are quick to start: there is no compilation step before your code runs. That's ideal when speed matters, and JavaScript was originally created for the browser, where load time is important. The problem with running everything through an interpreter is that if you run the same piece of code over and over, you repeat the same translation work every time and performance declines.
Compilers take a little more time up front, but they produce machine code that has been simplified and optimized, saving resources later. Compilers are what make optimizations possible.
The key takeaway is that we can get the best of both worlds by combining the two into something called a "JIT" compiler.
What is a JIT Compiler?
JIT stands for "Just In Time", and V8's JIT compiler is called TurboFan. You'll start to see a pattern in the car metaphors.
Remember how I said the code goes from the AST to the interpreter, profiler, and compiler? In V8, the code initially goes to the interpreter, which is called Ignition (see what I mean?). What it spits out is bytecode.
What is a Profiler?
The profiler, also known as a monitor, watches the bytecode the interpreter produces and how it runs, looking for ways to optimize it. Code the profiler identifies as "hot" is handed to the compiler, and the engine keeps refining what it emits so execution gets faster over time. This is how it aims for the fastest code possible.
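Here's a rough sketch of what "hot" code looks like from the engine's point of view (the thresholds and tiers are V8 internals and change between versions):

```ts
function add(a: number, b: number): number {
  return a + b;
}

// First few calls: Ignition interprets the bytecode for add().
// The profiler notices add() runs in a tight loop with the same number
// types every time, marks it as hot, and hands it to TurboFan, which
// emits optimized machine code specialized for the numeric case.
let sum = 0;
for (let i = 0; i < 1_000_000; i++) {
  sum = add(sum, i);
}
console.log(sum);

// If we later called add with strings, the optimized code's assumptions
// would no longer hold and the engine would "deoptimize" back to bytecode.
```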
Side note: in V8 there are actually two JIT compilers.
Why not just use machine code from the outset?
At the end of the day it comes down to language adoption, but on a more technical note, there's also WebAssembly (Wasm).
"Wasm is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for programming languages, enabling deployment on the web for client and server applications" and is adopted by all the big browsers.
To understand Deno under the hood, you first need to understand Node under the hood.
How does Node work under the hood?
How does Node use the call stack? If you haven't already, I highly recommend reading my article on event loops and callbacks, where I explain how Node handles calls. Everything with Deno is similar, but different.
In the above example you're seeing libuv in action: it's how the Node API handles asynchronous code, and bindings are used to communicate between libuv and the V8 engine. If you're confused, don't worry, I'll go into further detail.
libuv executes the work handed off from the call stack; once tasks finish in their respective queues, the results are passed back through the Node.js bindings and on to the V8 engine.
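As a small illustration using Node's built-in fs module (the file name is just a placeholder), the async work leaves the call stack, libuv handles the I/O, and the callback comes back around through the bindings:

```ts
import { readFile } from "node:fs";

console.log("start"); // runs on the main thread

// readFile is handed to libuv, which performs the I/O off the call stack.
readFile("example.txt", "utf8", (err, data) => {
  // This callback is queued once libuv finishes; V8 runs it when the
  // call stack is clear again.
  if (err) throw err;
  console.log("file contents:", data.length, "characters");
});

console.log("end"); // prints before the file callback fires
```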
How is Deno different?
What Deno does under the hood is essentially the same. It creates and starts a process, just like opening any app creates and runs a process. That process gives you a sandbox, with its own memory and boundaries, that your program runs inside.
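You can see that sandbox at work from the command line. A minimal sketch (the file name is just a placeholder):

```ts
// read.ts - tries to step outside the sandbox by reading a file
const text = await Deno.readTextFile("./example.txt");
console.log(text.length);
```

Run it with `deno run read.ts` and Deno will refuse (or prompt for) read access, depending on your Deno version; run it with `deno run --allow-read read.ts` and the call is allowed through to the backend.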
From a high level, these two run-times do essentially the same things. How they do them differs slightly, mostly in who does the work, how secure they are, and how fast they are, and those differences pave the way for a promising future for Deno.
What is rusty_v8?
Deno has something called rusty_v8, a layer inside the Deno process that provides Rust bindings to the V8 engine's C++ API, mapping it roughly 1:1 so Deno's Rust code can drive V8 and the JavaScript it runs. Remember how we said Node was created in C and C++ (an object-oriented language)? Deno is built on Rust, a multi-paradigm language with a lot of memory safety built in, and it's extremely performant.
In Deno, if your JavaScript needs to do anything outside of the JavaScript sandbox, rusty_v8 takes that request and hands it off to Rust, which can access files, check permissions, and perform similar operations. It's kind of like a backend for Deno.
How does rusty_v8 connect JavaScript to Rust?
Head over to the Deno repo and you can see some of the code that makes this all happen.
Around line 32 you can see Deno's bindings, which are available on start. These give access to window.Deno.core so code can be sent and received very quickly. This Deno.core is the main API that lets us communicate between JavaScript and Rust: you can use Deno.core.send to send commands from JavaScript to Rust and Deno.core.recv to pull messages back from Rust. This is what makes Deno fully featured; you can access 'all-the-things'. On the Rust side, the things we can do are called ops, or operations. Just like syscalls, they are the operations we need the computer to perform to run a task.
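Conceptually, the JavaScript side serializes a request, hands it across, and listens for the response. The sketch below is hypothetical and only shows the shape of that exchange based on the send/recv pair described above; the real deno_core internals use different (and frequently changing) signatures, and the op name and path are placeholders:

```ts
// Hypothetical sketch only - not the actual deno_core API surface.
const core = (Deno as any).core;

// Listen for buffers coming back from the Rust side.
core.recv((response: Uint8Array) => {
  console.log("rust replied with", response.byteLength, "bytes");
});

// Encode an op request and hand it across the boundary to Rust.
const request = new TextEncoder().encode(
  JSON.stringify({ op: "read_file", path: "./example.txt" }), // placeholder op/path
);
core.send(request);
```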
Let's pretend we make a request. To run multiple operations at the same time we need something called the event loop, which lets us handle events much like Node does. In Deno's case this comes from the Tokio project, a Rust library that provides a thread pool and workers to run commands for us, much like libuv does for Node.
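From the JavaScript side this just looks like ordinary async code; the event loop (backed by Tokio in Deno's case) is what lets the operations overlap (the URLs here are placeholders):

```ts
// Two network requests started back to back; neither blocks the other.
// While Rust/Tokio waits on the sockets, the JavaScript thread stays free.
const [a, b] = await Promise.all([
  fetch("https://example.com/a"),
  fetch("https://example.com/b"),
]);
console.log(a.status, b.status);
```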
Why did Deno use Tokio instead of libuv?
Tokio is a rust module, which works with the future abstraction. LIBUV is c and would necessitate building a bridge to run futures.
Ryan Dahl - https://github.com/denoland/deno/issues/2340
Tokio is an event-driven, non-blocking I/O platform for writing asynchronous applications with the Rust programming language. At a high level, it provides a few major components:
- A multithreaded, work-stealing based task scheduler.
- A reactor backed by the operating system's event queue (epoll, kqueue, IOCP, etc...).
- Asynchronous TCP and UDP sockets.
These components provide the runtime components necessary for building an asynchronous application.
Bringing it all together
If we use something that isn't plain JavaScript (anything from the Deno API docs that starts with Deno.[command], for example), we're going to be using the Rust backend. Once the task reaches the thread pool provided by the Tokio project, the pool queues those jobs up, processes them in Rust, and then sends the results back through rusty_v8 to be handled by the V8 engine.
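For example, a call like this (the file name is just a placeholder) never runs "inside" JavaScript: V8 hands it across rusty_v8, Tokio's workers do the actual work in Rust, and the result comes back as a resolved promise.

```ts
// Deno.* APIs are the doorway to the Rust backend.
const info = await Deno.stat("./example.txt"); // handled by a Rust op via Tokio
console.log(info.isFile, info.size);
```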
This is similar to the Node.js system (as described in the gif above) in that an application (i.e. your JavaScript file) is sent to the V8 engine.
Then the Node.js bindings (aka the Node API) make a call to libuv, which provides the event loop (exactly the role the Tokio project plays for Deno).
libuv executes the work handed off from the call stack and, once it's processed, kicks the results back to the Node.js bindings and on to the V8 engine.
Image source: v8.dev
Next...
If you gleaned anything from this article, it should be that Node and Deno are very similar but use different tools to accomplish many of the same tasks. You learned about the V8 JavaScript engine, Node, and Deno.
In my next article we'll talk about Deno's main benefits, specifically security.
If you found this article helpful, give me a shout on Twitter - I'd love to hear from you: @codingwithdrewk. As always, if you spot any errors, just highlight them and mash that "R" button on the right side of the screen and I'll get them fixed right up!
Drew is a seasoned DevOps Engineer with a rich background that spans multiple industries and technologies. With foundational training as a Nuclear Engineer in the US Navy, Drew brings a meticulous approach to operational efficiency and reliability. His expertise lies in cloud migration strategies, CI/CD automation, and Kubernetes orchestration. Known for a keen focus on facts and correctness, Drew is proficient in a range of programming languages including Bash and JavaScript. His diverse experiences, from serving in the military to working in the corporate world, have equipped him with a comprehensive worldview and a knack for creative problem-solving. Drew advocates for streamlined, fact-based approaches in both code and business, making him a reliable authority in the tech industry.