# Transporter
Typesafe distributed computing in TypeScript.
## Introduction
Transporter is an RPC library for typesafe distributed computing. The Transporter API was influenced by comlink and rxjs.
Message passing can quickly grow in complexity, cause race conditions, and make apps difficult to maintain. Transporter eliminates the cognitive overhead associated with message passing by enabling the use of functions as a means of communication between distributed systems.
For an introduction to Transporter check out my blog post!
## Features
- 👌 Typesafety without code generation.[^1]
- 😍 Support for generic functions.
- 🤩 The core API works in any JavaScript runtime.[^2][^3]
- 😎 Easily integrates into your existing codebase.
- 👍 No schema builders required, though you may still use them.
- 🥹 Dependency injection.
- 🫶 FP friendly.
- 🤘 Memoization of remote functions.
- 🫡 Small footprint with 0 dependencies.
- 🚀 Configurable subprotocols.
- 🚰 Flow control and protocol stacking using Observables.
- 🤯 Recursive RPC for select subprotocols.
- 🌶️ PubSub using Observables for select subprotocols.
- 👏 Resource management.
- 🥳 No globals.[^4]
- 🧪 Easy to test.
[^1]: Transporter is tested using the latest version of TypeScript with strict typechecking enabled.
[^2]: Transporter works in Node, Bun, Deno, Chrome, Safari, Firefox, Edge, and React Native.
[^3]: Hermes, a popular JavaScript runtime for React Native apps, does not support FinalizationRegistry. It also requires a polyfill for crypto.randomUUID.
[^4]: Transporter has a global AddressBook that is used to ensure every server has a unique address.
## Practical Usage
Transporter may be used to build typesafe APIs for fullstack TypeScript applications.
Transporter may be used in the browser to communicate with other browsing contexts (windows, tabs, iframes) or workers (dedicated workers, shared workers, service workers). The browser is ripe for distributed computing and parallel processing but not many developers take advantage of this because the postMessage API is very primitive.
Transporter may be used for inter-process communication in Electron applications.
Transporter may also be used in React Native apps to communicate with webviews. You could take this to the extreme and build your entire native app as a Web app that is wrapped in a React Native shell. The Web app could use Transporter to call out to the React Native app to access native APIs not available in the browser.
## Getting Started
To get started with Transporter, install the package from the npm registry using your preferred package manager.
```
npm add @daniel-nagy/transporter
```
As of beta 3, Transporter is nearing API stability, but there may still be some breaking changes. For API docs, see the README for each package.
## Packages
- core - Core APIs that are designed to work in any JavaScript runtime.
- browser - Wrappers around the browser's messaging APIs that provide normalized interfaces and additional semantics.
## The Basics
Let's get up and running with Transporter. We'll create a User module and use Transporter to expose that module.
Here's our User module.
```typescript
const users = [
  { id: 0, name: "Dan" },
  { id: 1, name: "Jessica" },
  { id: 2, name: "Mike" },
];

export async function list() {
  return users;
}

export async function findById(id: number) {
  return users.find((user) => user.id === id);
}
```
Notice that our User module is just a plain old JavaScript module. There's no tight coupling between Transporter and our functions, nor does Transporter impose any semantics on our API. You can use modules, plain objects, classes, or even arrays; it doesn't really matter. The only requirement is that our functions must return a Promise.

To expose our User module we need to create a ServerSession. At a minimum, when creating a session, we must provide a Subprotocol. A subprotocol is necessary to provide typesafety at the protocol level. Let's create a subprotocol and a session for our server.
```typescript
import * as Json from "@daniel-nagy/transporter/Json";
import * as Session from "@daniel-nagy/transporter/Session";
import * as Subprotocol from "@daniel-nagy/transporter/Subprotocol";

import * as User from "./User";

const Api = {
  User,
};

export type Api = typeof Api;

const protocol = Subprotocol.init({
  connectionMode: Subprotocol.ConnectionMode.Connectionless,
  dataType: Subprotocol.DataType<Json.t>(),
  operationMode: Subprotocol.OperationMode.Unicast,
  transmissionMode: Subprotocol.TransmissionMode.HalfDuplex,
});

const session = Session.server({ protocol, provide: Api });
```
For now don't worry about the different modes and just focus on the data type. In this case we are telling Transporter that our API only uses JSON data types. With strict type checking enabled we get a type error:

```
Type 'undefined' is not assignable to type 'Json'.
```
Can you spot the problem? If you can't, don't worry, because the compiler spotted it for you. We are telling Transporter that our API only uses JSON data types, but the return type of findById could be undefined. To fix this we could update findById to always return valid JSON, for example by returning null if the user is not found. But since our server and client are both JavaScript runtimes, it would be nice if we could allow undefined. Let's instead use the SuperJson type provided by Transporter. The SuperJson type is a superset of JSON that allows many built-in JavaScript types, such as undefined, Date, RegExp, Map, etc.
```diff
- import * as Json from "@daniel-nagy/transporter/Json";
+ import * as SuperJson from "@daniel-nagy/transporter/SuperJson";
```

```diff
- dataType: Subprotocol.DataType<Json.t>(),
+ dataType: Subprotocol.DataType<SuperJson.t>(),
```
With that change the error will go away.
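To build intuition for why a superset of JSON helps, here is a toy encoder that round-trips undefined and Date values through plain JSON text by tagging them. This is purely illustrative and is not Transporter's actual wire format; the names encode, decode, and roundTrip are hypothetical.

```typescript
// Toy encoding: tag values that plain JSON cannot represent so they
// survive JSON.stringify/JSON.parse. Not Transporter's real format.
type Encoded =
  | { $type: "undefined" }
  | { $type: "Date"; value: number }
  | { $type: "raw"; value: string | number | boolean | null };

function encode(value: unknown): Encoded {
  if (value === undefined) return { $type: "undefined" };
  if (value instanceof Date) return { $type: "Date", value: value.getTime() };
  return { $type: "raw", value: value as string | number | boolean | null };
}

function decode(encoded: Encoded): unknown {
  switch (encoded.$type) {
    case "undefined":
      return undefined;
    case "Date":
      return new Date(encoded.value);
    case "raw":
      return encoded.value;
  }
}

// Round-trip through real JSON text, as a text-based transport would.
const roundTrip = (v: unknown) => decode(JSON.parse(JSON.stringify(encode(v))));
```

The idea generalizes to RegExp, Map, and the other built-in types mentioned above: anything with a deterministic tag-and-revive pair can ride on top of JSON.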
We just learned that Transporter provides type safety at the protocol level; it will complain if our API and subprotocol are incompatible. We also learned that there is no tight coupling between Transporter and how we build our API. Notice, too, that Transporter does not use a router; instead, objects can be composed to create namespaces.
Let's move on now and create a client session.
```typescript
import * as Session from "@daniel-nagy/transporter/Session";
import * as Subprotocol from "@daniel-nagy/transporter/Subprotocol";
import * as SuperJson from "@daniel-nagy/transporter/SuperJson";

import type { Api } from "./Server";

const protocol = Subprotocol.init({
  connectionMode: Subprotocol.ConnectionMode.Connectionless,
  dataType: Subprotocol.DataType<SuperJson.t>(),
  operationMode: Subprotocol.OperationMode.Unicast,
  transmissionMode: Subprotocol.TransmissionMode.HalfDuplex,
});

const session = Session.client({
  protocol,
  resource: Session.Resource<Api>(),
});

const client = session.createProxy();
```
Creating a client session is almost identical to creating a server session. Generally the client and the server will use the same subprotocol. To get a client that acts as a proxy for our API we use the createProxy method on the ClientSession.
The last thing we need to do is get our server session and our client session talking to each other. A session is both a message source and a message sink. If our server session and our client session were in the same process, we could simply pipe the output of one into the input of the other to complete the circuit.
```typescript
serverSession.output.subscribe(clientSession.input);
clientSession.output.subscribe(serverSession.input);
```
Using Transporter in a single process is not very useful in itself, but it is worth understanding this example because it will allow you to easily adapt Transporter to just about any transport layer. As long as you can route the messages, you can get Transporter working. This makes the core Transporter API general purpose and, perhaps ironically, transport layer agnostic.
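To make "pipe the output of one into the input of the other" concrete, here is a minimal, self-contained sketch of the pattern. The Endpoint class below is illustrative and not Transporter's actual Session type; it only models the source/sink shape the wiring relies on.

```typescript
// A minimal message endpoint: `input` is a sink that receives messages,
// and `subscribeOutput` attaches a sink to this endpoint's output.
type Listener<T> = (msg: T) => void;

class Endpoint<T> {
  private listeners: Listener<T>[] = [];
  public received: T[] = [];

  // Sink: deliver an incoming message to this endpoint.
  input = (msg: T) => {
    this.received.push(msg);
  };

  // Source: subscribe a sink to this endpoint's outgoing messages.
  subscribeOutput(listener: Listener<T>) {
    this.listeners.push(listener);
  }

  // Emit an outgoing message to all subscribers.
  send(msg: T) {
    this.listeners.forEach((listener) => listener(msg));
  }
}

const server = new Endpoint<string>();
const client = new Endpoint<string>();

// Complete the circuit, mirroring the two subscribe calls above.
server.subscribeOutput(client.input);
client.subscribeOutput(server.input);

client.send("ping"); // delivered to server.input
server.send("pong"); // delivered to client.input
```

Adapting this to a real transport means replacing the direct function calls with whatever carries bytes between the two processes, which is exactly what the HTTP example below does.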
Let's finish off this example by using HTTP as our transport layer. HTTP is a text-based protocol, so we need to go from SuperJson to string in order to use it. Let's start on the server side. I'm going to use Bun's built-in server API for this example.
```typescript
import * as Message from "@daniel-nagy/transporter/Message";
import * as Observable from "@daniel-nagy/transporter/Observable";

Bun.serve({
  async fetch(req) {
    using session = Session.server({ protocol, provide: Api });
    const reply = Observable.firstValueFrom(session.output);
    const message = SuperJson.fromJson(await req.json());
    session.input.next(message as Message.t<SuperJson.t>);
    return Response.json(SuperJson.toJson(await reply));
  },
  port: 3000,
});
```
Notice I moved the creation of the session into the request handler. This is perfectly fine: each request creates a session, and the session is terminated at the end of the request handler. In this example that is accomplished using a new JavaScript feature called explicit resource management. That's the using syntax you may be wondering about. If you are unable to use explicit resource management, that's ok; you can call session.terminate() explicitly before returning the response.
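If you can't use the using syntax, the same guarantee can be expressed with try/finally. The sketch below uses a stand-in SessionLike type and a hypothetical handleRequest function, not Transporter's actual APIs; it only shows where the explicit terminate call goes.

```typescript
// Stand-in for a session: all we care about here is `terminate`.
type SessionLike = { terminate(): void };

async function handleRequest(session: SessionLike): Promise<string> {
  try {
    // ...feed the request into the session and await the reply...
    return "ok";
  } finally {
    // Without `using`, terminate the session explicitly. The finally
    // block runs whether the handler returns normally or throws.
    session.terminate();
  }
}
```

This mirrors what explicit resource management does automatically: the session is disposed when control leaves the scope, even on an error path.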
We get the request body as JSON and then decode the message using SuperJson.fromJson. We then feed that message into our session and wait for a reply. We encode the reply as text using the reverse process and send it back to the client.
Let's move on now to our client. For our client I am going to use JavaScript's built-in HTTP client fetch.
import * as Observabl
