<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Alec Brunelle's Blog]]></title><description><![CDATA[Words on Node.js, Deno, React, and GraphQL]]></description><link>https://blog.alec.coffee</link><generator>RSS for Node</generator><lastBuildDate>Mon, 13 Apr 2026 18:23:54 GMT</lastBuildDate><atom:link href="https://blog.alec.coffee/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Typical vs. Protobufs: Data serialization in TypeScript]]></title><description><![CDATA[Micro-services are prevalent in the software industry for their flexibility and scalability. Those micro-services will likely need to communicate with each other, not only that, clients like web apps and mobile apps also need to be sent information. ...]]></description><link>https://blog.alec.coffee/typical-vs-protobufs-data-serialization-in-typescript</link><guid isPermaLink="true">https://blog.alec.coffee/typical-vs-protobufs-data-serialization-in-typescript</guid><category><![CDATA[TypeScript]]></category><category><![CDATA[kafka]]></category><category><![CDATA[protobuf]]></category><category><![CDATA[typical]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Fri, 15 Sep 2023 04:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/8_NI1WTqCGY/upload/75344ce3e2bd48ce3f04378dab018203.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Micro-services are prevalent in the software industry for their flexibility and scalability. Those micro-services will likely need to communicate with each other, not only that, clients like web apps and mobile apps also need to be sent information. 
One of the first problems software teams may encounter is making sure the right payloads are sent to and from services and clients. Developers can either assume payload structure through informal agreement (docs, e-mail, etc.) or opt into a technology which handles this for them through the use of shared schemas. Buying into one serialization technology makes it easy for developers to share knowledge like schemas and build utilities for the common format, which can lead to shipping code that is less prone to validation errors. In a traditional REST API setting, OpenAPI is popular, but for other settings which use gRPC or messaging queues, technologies like <a target="_blank" href="https://github.com/stepchowfun/typical">Typical</a>, <a target="_blank" href="https://protobuf.dev/">Protobuf</a>, <a target="_blank" href="https://thrift.apache.org/">Thrift</a> or <a target="_blank" href="https://avro.apache.org/">Avro</a> are used. An example use-case could be two services within an organization which need to exchange employee data over a topic in Kafka.</p>
<blockquote>
<p>Here is a <a target="_blank" href="https://blog.logrocket.com/using-protobuf-typescript-data-serialization/">LogRocket article explaining Protobuf</a> in depth</p>
</blockquote>
<p>All of these technologies use something called a “schema”. The schema is used to serialize payloads into a binary format and to deserialize that binary back into a payload. They offer type-safety in that the process will fail if a payload does not match the schema structure. Typical is based on functional principles, is relatively new to the scene, having been around for about two years, and is maintained by a small set of core developers. In comparison, Protobuf has fifteen years behind it and is maintained by Google. Both have a lot of similarities, with the main differences accounting for the more modern features Typical has to offer. Typical has unique features like “asymmetric” fields for ensuring backward/forward compatibility and “choices” for supporting pattern-matching, while Protobuf supports plugins, and its long-standing reputation and widespread use make it a more battle-tested solution for many.</p>
<p>To showcase the two technologies, we will go through some common tasks a developer will encounter when working with schemas in TypeScript, a very popular superset of JavaScript. We will work through a detailed tutorial using employee data: how to serialize/deserialize data, how to safely make schema changes, and what options there are for optional fields.</p>
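<p>Before introducing schemas, it helps to see what can go wrong without one. Here is a minimal, self-contained TypeScript sketch (the payload and field names are hypothetical, not from the tutorial's repository) showing that a compile-time type alone gives no runtime guarantee about what a service actually received:</p>

```typescript
// Without a shared schema, the type annotation is only a promise, not a check.
interface Employee {
  id: string;
  name: string;
  hourly_rate: number;
}

// A writer sent a payload with a mistyped field name ("hourlyRate").
const wirePayload = '{"id":"1","name":"John Doe","hourlyRate":20}';

// The cast compiles fine, but the object does not match the type at runtime.
const employee = JSON.parse(wirePayload) as Employee;

console.log(employee.hourly_rate); // undefined at runtime, despite the type saying `number`
```

<p>A schema-based serializer would instead reject this payload at the point of serialization or deserialization.</p>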
<h2 id="heading-our-example-employee-service-and-schemas">Our Example Employee service and Schemas</h2>
<p>Let’s pretend we have a service which sends data about employees to other services within an organization. Let’s call this service the “writer” (also known as a producer, serializer, or publisher) when it comes to data interchange and schemas. We will also handle receiving this employee data; those services can be called “readers” (consumers, deserializers, subscribers). We could use Kafka, GCP Pub/Sub or a myriad of other technologies which support binary transports to send the data. For the sake of our tutorial, let’s assume our services are written in TypeScript and run on Node.js.</p>
<p>We will set our service up to use both Typical and Protobuf so we can demonstrate the differences. The great thing about both of these technologies is that each offers a CLI tool to generate code based on a schema: serializers, deserializers and TypeScript types. This means we can use the schema as the source of truth for what payloads should look like; if changes need to be made, we make them in the schema and run the generation again. This can save developers a lot of time and headache.</p>
<blockquote>
<p><a target="_blank" href="https://github.com/aleccool213/typical-vs-protobuf">All of the code for this blog post can be found in this GitHub repository</a></p>
</blockquote>
<h2 id="heading-setting-up-our-development-machine">Setting up our development machine</h2>
<p>To set everything up, we can run the script below. We will install the Typical and Protobuf runtimes via <a target="_blank" href="https://brew.sh/">brew</a>; I’ll be explicit about which steps are needed for each. The runtimes are needed for various functions like generating the types and code.</p>
<blockquote>
<p>It would be nice if these came packaged up on NPM but I couldn’t find suitable packages. Let me know if you find a better way.</p>
</blockquote>
<p>We will create a new folder for the project, create a <code>package.json</code> file with <code>npm init</code> and get TypeScript installed. </p>
<pre><code class="lang-bash">mkdir typical-vs-protobuf-example &amp; <span class="hljs-built_in">cd</span> typical-vs-protobuf-example
brew install typical
brew install protobuf
// -- fix mac os js protobuf compiler issue
brew install protobuf@3
brew link --overwrite protobuf@3
// --
npm init -Y
npx tsc --init
npm add -D typescript ts-node @types/node
// Needed <span class="hljs-keyword">for</span> TypeScript <span class="hljs-built_in">type</span> gen <span class="hljs-keyword">in</span> protobuf
npm add -D ts-protoc-gen
</code></pre>
<h2 id="heading-the-schema">The Schema</h2>
<p>Here is a sample payload structure for the data we will be sending from service to service:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"id"</span>: <span class="hljs-string">"1"</span>,
    <span class="hljs-attr">"name"</span>: <span class="hljs-string">"John Doe"</span>,
    <span class="hljs-attr">"hourly_rate"</span>: <span class="hljs-number">20</span>,
    <span class="hljs-attr">"department"</span>: <span class="hljs-string">"HR"</span>,
    <span class="hljs-attr">"email"</span>: <span class="hljs-string">"john@doe.com"</span>,
    <span class="hljs-attr">"active"</span>: <span class="hljs-literal">true</span>
}
</code></pre>
<p>The next step is to define a schema for the payload. The schema defines what the payload should look like; if a payload doesn’t follow it, serialization and deserialization will fail when running the service.</p>
<blockquote>
<p>This is common functionality between most data interchange technologies. The schema is what provides “safety” to services and clients that they will receive the correct data in a shared format.</p>
</blockquote>
<p>In both Typical and Protobuf, the schemas will look similar, with minor differences. We will define an <code>Employee</code> struct which contains basic information like <code>name</code> and <code>hourly_rate</code>, and we will use an enum instead of a string to offer a set of choices for <code>department</code>, as there are only two options it can be.</p>
<p>Typical:</p>
<pre><code><span class="hljs-comment">// types.t</span>

struct Employee {
    <span class="hljs-attr">id</span>: <span class="hljs-built_in">String</span> = <span class="hljs-number">1</span>
    <span class="hljs-attr">name</span>: <span class="hljs-built_in">String</span> = <span class="hljs-number">2</span>
    <span class="hljs-attr">hourly_rate</span>: F64 = <span class="hljs-number">3</span>
    <span class="hljs-attr">department</span>: Department = <span class="hljs-number">4</span>
    <span class="hljs-attr">email</span>: <span class="hljs-built_in">String</span> = <span class="hljs-number">5</span>
    <span class="hljs-attr">active</span>: Bool = <span class="hljs-number">6</span>
}

choice Department {
    HR = <span class="hljs-number">0</span>
    NOT_HR = <span class="hljs-number">1</span>
}
</code></pre><p>Protobuf:</p>
<pre><code class="lang-protobuf">// types.proto

syntax = "proto3";

package employee.v1;

message Employee {
    string id = 1;
    string name = 2;
    int32 hourly_rate = 3;
    Department department = 4;
    string email = 5;
    bool active = 6;

    enum Department {
        HR = 0;
        NOT_HR = 1;
    }
}
</code></pre>
<p>Once we have those defined, we can move onto the TypeScript code.</p>
<h2 id="heading-generating-serializersdeserializers">Generating Serializers/Deserializers</h2>
<p>Once we have the schema defined we can generate the TypeScript types and code using some NPM scripts. Add these scripts to your <code>package.json</code>:</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"scripts"</span>: {
        <span class="hljs-attr">"generate:typical:types:1"</span>: <span class="hljs-string">"typical generate typical-example/types-1.t --typescript typical-example/generated/types.ts"</span>,
        <span class="hljs-attr">"generate:protobuf:types:1"</span>: <span class="hljs-string">"protoc --plugin=\"protoc-gen-ts=./node_modules/.bin/protoc-gen-ts\" --ts_opt=esModuleInterop=true --js_out=\"./protobuf-example/generated\" --ts_out=\"./protobuf-example/generated\" ./protobuf-example/types-1.proto"</span>
    }
}
</code></pre>
<p>Run the <code>generate:typical:types:1</code> command and inspect the generated code in <code>typical-example/generated/types.ts</code>. Do the same for Protobuf; its output path is <code>protobuf-example/protobuf-example/types-1_pb.ts</code>.</p>
<p>Now we will write some code which uses the generated code to serialize our sample payload from above into a binary. First, in Typical:</p>
<pre><code class="lang-tsx">import { Types1 } from "./generated/types";

// Take our sample payload
const payload = {
  id: "1",
  name: "John Doe",
  hourlyRate: BigInt(20),
  department: { $field: "hr" as const },
  email: "john@doe.com",
  active: true,
};

// Serialize the Employee object to binary using the generated Serializer from Typical
const binary = Types1.Employee.serialize(payload);

// Log that it was successful
console.log("Successfully serialized Employee object to binary:", binary);

// Send the binary off using Kafka, etc
...
</code></pre>
<p>Then in Protobuf:</p>
<pre><code class="lang-tsx">import { Employee } from "./protobuf-example/types-1_pb";

// Take our sample payload
const payload = {
  id: "1",
  name: "John Doe",
  hourlyRate: 20,
  department: Employee.Department.HR,
  email: "john@doe.com",
  active: true,
};

// Create the Employee object based on our sample payload
const employee = new Employee();
employee.setId(payload.id);
employee.setName(payload.name);
employee.setHourlyRate(payload.hourlyRate);
employee.setDepartment(payload.department);
employee.setEmail(payload.email);
employee.setActive(payload.active);

// Serialize the Employee object to binary
const binary = employee.serializeBinary();

// Log that it was successful
console.log("Successfully serialized Employee object to binary:", binary);

// Send the binary off using Kafka, etc
...
</code></pre>
<p>As you can see, there still are not many differences. Things start to change when it comes to schema changes and optionals.</p>
<h2 id="heading-making-a-schema-change-or-schema-evolution">Making a schema change or schema evolution</h2>
<p>We realized we made a crucial error: we need to add an employee’s phone number to the schema. All employees have to input their phone number into the service, so it will always be present. After some thought, we decide to make this field required. Depending on the technology used and the quantity of readers and writers, this may be a breaking schema change, which means we need to be careful not to send incompatible payloads to services which can’t handle them.</p>
<p>For example, let’s say we have multiple writers using this schema. If we add a required field, some writers will not be updated at the same time (we are part of a big organization). If this is the case, readers can’t expect the field to be present on every single message until every writer has their code and schema updated.</p>
<blockquote>
<p>This is a big topic when it comes to messaging systems so we will only focus on the differences between Typical and Protobuf to go about this kind of change.</p>
</blockquote>
<p>The question is how we roll this change out safely and effectively; the same considerations apply to other changes we may want to make.</p>
<p>Some definitions:</p>
<ul>
<li>Backwards Compatible<ul>
<li>Writers can send messages using a new schema version, and readers still on an old schema version can process them.</li>
</ul>
</li>
<li>Forwards Compatible<ul>
<li>Readers updated to a new schema version can still process messages from writers using an old schema version.</li>
</ul>
</li>
</ul>
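<p>To make the two directions concrete, here is a toy model in plain TypeScript (deliberately using JSON rather than Typical or Protobuf; the types and field names are illustrative only):</p>

```typescript
// Two schema versions: v2 adds an optional phoneNumber field.
type EmployeeV1 = { id: string; name: string };
type EmployeeV2 = { id: string; name: string; phoneNumber?: string };

// Backward compatible: a reader on the old schema simply ignores the new field.
function readV1(raw: string): EmployeeV1 {
  const parsed = JSON.parse(raw);
  return { id: parsed.id, name: parsed.name };
}

// Forward compatible: a reader on the new schema tolerates the field's absence.
function readV2(raw: string): EmployeeV2 {
  const parsed = JSON.parse(raw);
  return { id: parsed.id, name: parsed.name, phoneNumber: parsed.phoneNumber };
}

const newMessage = '{"id":"1","name":"John Doe","phoneNumber":"555-0100"}';
const oldMessage = '{"id":"1","name":"John Doe"}';

console.log(readV1(newMessage)); // old reader, new writer: extra field dropped
console.log(readV2(oldMessage)); // new reader, old writer: phoneNumber is undefined
```

<p>Schema-based tools give you these guarantees systematically instead of relying on each reader being written defensively.</p>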
<p>First, let’s dig into how Typical does it. Every change is forwards and backwards compatible, which makes things easy to reason about. There is a feature called “asymmetric” fields which was made for this use-case.</p>
<blockquote>
<p>There are no optional fields in Typical, which is a big difference compared to other technologies.</p>
</blockquote>
<p>How this works is that you add the <code>asymmetric</code> keyword to a field. This says that the field is required for writers but optional for readers. When we are 100% sure all of the writers have been updated, we remove the keyword, which makes the field required everywhere; all fields in Typical are required by default.</p>
<p>Let’s see an example schema:</p>
<pre><code>struct Employee {
    <span class="hljs-attr">id</span>: <span class="hljs-built_in">String</span> = <span class="hljs-number">1</span>
    ...
    asymmetric phone_number: <span class="hljs-built_in">String</span> = <span class="hljs-number">7</span>
}
</code></pre><p>Now that we have the schema, run type generation again and inspect the outputted types:</p>
<pre><code class="lang-tsx">export type EmployeeOut = {
  id: string;
  ...
  phoneNumber: string;
};

export type EmployeeIn = {
  id: string;
  ...
  phoneNumber?: string;
};
</code></pre>
<p>You can see how it is now optional for the readers. Here is a <a target="_blank" href="https://github.com/stepchowfun/typical#summary-of-what-kinds-of-schema-changes-are-safe">summary of schema changes which are safe</a> in Typical.</p>
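<p>As a sketch of reader-side handling, assuming types shaped like Typical's generated <code>EmployeeIn</code> (hand-written here for illustration, with most fields elided), a reader must account for the field's absence while writers are still rolling out:</p>

```typescript
// A hand-written stand-in for the generated EmployeeIn type (illustrative).
type EmployeeIn = { id: string; phoneNumber?: string };

function describeEmployee(employee: EmployeeIn): string {
  // During the rollout, some writers have not yet been updated to send the field.
  return employee.phoneNumber !== undefined
    ? `Employee ${employee.id}: ${employee.phoneNumber}`
    : `Employee ${employee.id}: phone number not yet provided`;
}

console.log(describeEmployee({ id: "1" }));
console.log(describeEmployee({ id: "2", phoneNumber: "555-0100" }));
```

<p>Because the generated reader type marks the field optional, the compiler forces this check; forgetting it is a type error rather than a runtime surprise.</p>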
<p>Protobuf takes a different approach: in proto3 there is a traditional <code>optional</code> keyword you can use for explicit presence tracking. This makes the rules on how you can evolve a schema a bit more involved. Some changes are forwards compatible and some are backwards compatible; overall, it’s a more granular approach.</p>
<blockquote>
<p><a target="_blank" href="https://softwaremill.com/schema-evolution-protobuf-scalapb-fs2grpc/">Here is a good article which summarizes what kind of changes are forwards/backwards compatible</a></p>
</blockquote>
<p>Now our strategy goes as follows:</p>
<ol>
<li>Update the Protobuf schema to have <code>phoneNumber</code> as optional</li>
<li>Update all writers to use the new schema and update the code to have the value be present</li>
<li>Update all readers with the new schema</li>
<li>Wait until all writers have finished updating</li>
<li>Update the Protobuf schema to have <code>phoneNumber</code> as required</li>
<li>Repeat steps 2, 3, and 4</li>
</ol>
<p>You can see it’s more involved and you have to know what you are doing. This process is similar to Thrift and Avro.</p>
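<p>As a sketch, step 1 might look like this in the proto3 schema (the field number and placement here are illustrative, not from the tutorial's repository):</p>

```protobuf
message Employee {
    string id = 1;
    // ... existing fields ...

    // Required by convention on the writer side, but readers must
    // treat it as absent until every writer has been updated.
    optional string phone_number = 8;
}
```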
<h2 id="heading-flexible-payloads">Flexible Payloads</h2>
<p>You may have payloads where optional fields make sense. An example could be success and error fields inside of a payload: when a success message is sent, error would not be present, and vice versa.</p>
<p>Typical doesn’t support optional fields, but instead offers something called “choices”. This gives pattern-matching-like capabilities to readers and acts like an enum (we used one before for <code>department</code>). It’s much more flexible than a traditional enum, as fields inside of a choice can be strings or even new structs.</p>
<p>Here is an example of adding a <code>details</code> field to the Employee struct which explains to a reader if it was successful or an error occurred in the payload.</p>
<p>Here is the Typical schema:</p>
<pre><code>struct Employee {
    <span class="hljs-attr">id</span>: <span class="hljs-built_in">String</span> = <span class="hljs-number">1</span>
    ...

    details: Details = <span class="hljs-number">7</span>
}

choice Details {
    success = <span class="hljs-number">0</span>
    <span class="hljs-attr">error</span>: <span class="hljs-built_in">String</span> = <span class="hljs-number">1</span>
}
</code></pre><p>Here is what the writer may serialize:</p>
<pre><code class="lang-tsx">const payload: Types2.EmployeeOut = {
  id: "1",
  ...
  details: {
    $field: "success",
  },
};
</code></pre>
<p>Here is what the reader may deserialize:</p>
<pre><code class="lang-tsx">// Read the binary payload from a file
const fileContents = readFileSync(filePath);
// Deserialize using Typical generated code
const payloadDeserialized = Types2.Employee.deserialize(
  new DataView(
    fileContents.buffer,
    fileContents.byteOffset,
    fileContents.byteLength
  )
);
// Handle the details field
switch (payloadDeserialized.details.$field) {
  case "success":
    console.log("We have a success!");
    break;
  case "error":
    console.log("We have an error!");
    break;
  default:
    throw new Error("Unknown details field");
}
</code></pre>
<p>In Protobuf, you can use the <code>optional</code> keyword as described before. If you want to mimic the behaviour of a choice in Typical, <a target="_blank" href="https://protobuf.dev/programming-guides/proto3/#oneof">there is a keyword <code>oneof</code></a>. This codifies the different shapes a field can take and can express more than a traditional enum.</p>
<p>Here is an example of using the same schema as above using a <code>oneof</code> :</p>
<pre><code class="lang-protobuf">
message Employee {
    string id = 1;
    ...

    oneof details {
        bool success = 7;
        string error = 8;
    }
}
</code></pre>
<p>Here is what the reader may deserialize:</p>
<pre><code class="lang-tsx">...
// Read the binary payload from a file
const fileContents = readFileSync(filePath);
const payloadDeserialized = Employee.deserializeBinary(fileContents);

// Handle the details field
switch (payloadDeserialized.getDetailsCase()) {
  case Employee.DetailsCase.SUCCESS:
    console.log("We have a success!");
    break;
  case Employee.DetailsCase.ERROR:
    console.log("We have an error!");
    break;
  default:
    throw new Error("Unknown details field");
}
</code></pre>
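<p>The mutually-exclusive semantics that both choices and <code>oneof</code> provide can also be modeled in plain TypeScript as a discriminated union; the following self-contained sketch (not the generated API of either library) shows why this shape is pleasant to consume:</p>

```typescript
// Only one variant can be present at a time, mirroring choice/oneof semantics.
type Details =
  | { case: "success"; success: boolean }
  | { case: "error"; error: string };

function describeDetails(details: Details): string {
  switch (details.case) {
    case "success":
      return "We have a success!";
    case "error":
      // The compiler narrows the type, so `error` is known to be a string here.
      return `We have an error: ${details.error}`;
  }
}

console.log(describeDetails({ case: "success", success: true }));
```

<p>Both code generators effectively hand you this kind of union, so exhaustiveness is checked at compile time.</p>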
<h2 id="heading-conclusion">Conclusion</h2>
<p>While Typical offers some unique features like asymmetric fields for ensuring backward/forward compatibility and choices for pattern-matching, Protobuf's long-standing reputation and widespread use also make it an attractive choice. Being a relatively new technology, Typical can offer some advantages, like a single cohesive CLI tool which generates both types and code, instead of the separate packages Protobuf requires. On the other hand, Protobuf supports plugins and has a wider range of language support compared to Typical. It also offers traditional optional fields, making schema evolution more involved but also more granular compared to the opinionated approach of Typical.</p>
<p>In conclusion, both Typical and Protobuf have their own unique advantages and limitations. The choice between the two depends on the specific needs and preferences of the organization.</p>
]]></content:encoded></item><item><title><![CDATA[Using ECMAScript decorators in TypeScript 5.0]]></title><description><![CDATA[The State of Developer Ecosystem 2022 crowned TypeScript the fastest-growing programming language. It’s not hard to see why. This popular superset of JavaScript provides type-checking, enums, and other enhancements. But often, TypeScript introduces l...]]></description><link>https://blog.alec.coffee/using-emcascript-decorators-in-typescript-50</link><guid isPermaLink="true">https://blog.alec.coffee/using-emcascript-decorators-in-typescript-50</guid><category><![CDATA[TypeScript]]></category><category><![CDATA[decorators]]></category><category><![CDATA[news]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Fri, 28 Apr 2023 13:43:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/1seONCyPWfQ/upload/b62ef1f7e8e857bf4e880a842fc51b3a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a target="_blank" href="https://www.jetbrains.com/lp/devecosystem-2022/">The State of Developer Ecosystem 2022</a> crowned TypeScript the fastest-growing programming language. It’s not hard to see why. This popular superset of JavaScript provides type-checking, enums, and other enhancements. But often, TypeScript introduces long-awaited features that are not yet part of the ECMAScript standard that JavaScript relies on.</p>
<p>One example is the reintroduction of decorators in the <a target="_blank" href="https://devblogs.microsoft.com/typescript/announcing-typescript-5-0-rc/">soon-to-be-released TypeScript 5.0</a>; decorators are a meta-programming technique that can be found in other programming languages. If you’re an application developer or library author who is interested in using the new official TypeScript decorators, you’ll want to adopt the new syntax and understand the differences between the old and new feature sets. The API differences are extensive and it is unlikely that old decorators will work with the new syntax out of the box.</p>
<p>In this article, we’ll review the history of using decorators in TypeScript, discuss the benefits associated with decorators in TypeScript 5.0, walk through a demo using modern decorators, and explore how to refactor existing decorators.</p>
<p><strong><em>N.B.,</em></strong> <em>all the APIs have changed extensively in TypeScript 5.0; for this article, we’ll focus on class method decorators</em></p>
<p><em>Jump ahead:</em></p>
<ul>
<li><p>History of TypeScript Decorators</p>
</li>
<li><p>Decorators in TypeScript 5.0</p>
</li>
<li><p>Decorator factory demo</p>
</li>
<li><p>Refactoring existing decorators</p>
</li>
<li><p>Understanding the limitations of modern decorators</p>
</li>
</ul>
<h2 id="heading-history-of-typescript-decorators">History of TypeScript Decorators</h2>
<p><a target="_blank" href="https://www.typescriptlang.org/docs/handbook/decorators.html">Decorators</a> are a feature that enables developers to reduce boilerplate by quickly adding functionality to classes, class properties, and class methods. When TypeScript first introduced decorators, it did not follow the ECMAScript specification. This wasn’t great for developers, since ideally emitted code from any JavaScript compiler should comply with web standards!</p>
<p>Using decorators required setting the <code>--experimentalDecorators</code> compiler flag. Several popular TypeScript libraries, such as <a target="_blank" href="https://typegraphql.com/">type-graphql</a> and <a target="_blank" href="https://inversify.io/">inversify</a>, rely on this implementation.</p>
<p>Here’s an example of a simple class method decorator written with the legacy syntax:</p>
<pre><code class="lang-typescript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">debugMethod</span>(<span class="hljs-params">
  _target: unknown,
  memberName: <span class="hljs-built_in">string</span>,
  propertyDescriptor: PropertyDescriptor
</span>) </span>{
  <span class="hljs-keyword">return</span> {
    get() {
      <span class="hljs-keyword">const</span> wrapperFunction = <span class="hljs-function">(<span class="hljs-params">...arguments_: unknown[]</span>) =&gt;</span> {
        <span class="hljs-keyword">const</span> now = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(<span class="hljs-built_in">Date</span>.now());

        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"start time"</span>, now.toISOString());

        propertyDescriptor.value.apply(<span class="hljs-built_in">this</span>, arguments_);

        <span class="hljs-keyword">const</span> end = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(<span class="hljs-built_in">Date</span>.now());

        <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"end time"</span>, end.toISOString());
      };

      <span class="hljs-built_in">Object</span>.defineProperty(<span class="hljs-built_in">this</span>, memberName, {
        value: wrapperFunction,

        configurable: <span class="hljs-literal">true</span>,

        writable: <span class="hljs-literal">true</span>,
      });

      <span class="hljs-keyword">return</span> wrapperFunction;
    },
  };
}

<span class="hljs-keyword">class</span> ComplexClass {
  <span class="hljs-meta">@debugMethod</span>
  <span class="hljs-keyword">public</span> complexMethod(a: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">void</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"DOING COMPLEX STUFF!"</span>);
  }
}
</code></pre>
<p>In the above code, we can see that the <code>debugMethod</code> decorator overrides the class method property using <code>Object.defineProperty</code>, but in general, the code isn’t easy to follow. Also, the arguments are not type-safe, which limits our safety inside the <code>wrapperFunction</code>. Additionally, the compiler will not fail if this decorator is used on an invalid target, such as a class property.</p>
<p>We could <a target="_blank" href="https://blog.logrocket.com/using-typescript-generic-type-create-reusable-components/">use TypeScript generics</a> to try to achieve type safety, but TypeScript does not infer generic types and this makes them a pain to consume. Thus, writing complex decorators is difficult due to the unknown values users can input into them.</p>
<p>The modern version of decorators, which will be officially rolled out in TypeScript 5.0, no longer requires a compiler flag and follows <a target="_blank" href="https://github.com/tc39/proposal-decorators">the official ECMAScript Stage-3 proposal</a>. Alongside a stable implementation that follows ECMAScript standards, decorators now work seamlessly with the TypeScript type system, enabling more enhanced functionality than the original version.</p>
<p>With the new implementation of decorators in TypeScript 5.0, these aspects are greatly improved. Let’s take a look.</p>
<h2 id="heading-decorators-in-typescript-50">Decorators in TypeScript 5.0</h2>
<p>TypeScript 5.0 offers better ergonomics, improved type safety, and more. Here’s a similar example of a TypeScript 5.0 decorator that overrides a class method:</p>
<pre><code class="lang-typescript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">debugMethod</span>(<span class="hljs-params">originalMethod: <span class="hljs-built_in">any</span>, _context: <span class="hljs-built_in">any</span></span>) </span>{
  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">replacementMethod</span>(<span class="hljs-params"><span class="hljs-built_in">this</span>: <span class="hljs-built_in">any</span>, ...args: <span class="hljs-built_in">any</span>[]</span>) </span>{
    <span class="hljs-keyword">const</span> now = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(<span class="hljs-built_in">Date</span>.now());

    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"start time"</span>, now.toISOString());

    <span class="hljs-keyword">const</span> result = originalMethod.call(<span class="hljs-built_in">this</span>, ...args);

    <span class="hljs-keyword">const</span> end = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(<span class="hljs-built_in">Date</span>.now());

    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"end time"</span>, end.toISOString());

    <span class="hljs-keyword">return</span> result;
  }

  <span class="hljs-keyword">return</span> replacementMethod;
}

<span class="hljs-keyword">class</span> ComplexClass {
  <span class="hljs-meta">@debugMethod</span>
  complexMethod(a: <span class="hljs-built_in">number</span>): <span class="hljs-built_in">void</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"DOING STUFF!"</span>);
  }
}
</code></pre>
<p><strong><em>N.B.,</em></strong> <em>to</em> <a target="_blank" href="https://www.typescriptlang.org/play"><em>try out TypeScript in an online playground</em></a><em>, just switch the version to “nightly” or “&gt;5.0”</em></p>
<p>With the new implementation, simply returning a function can now replace the method; there’s no need for <code>Object.defineProperty</code>. This makes decorators easier to implement and understand. Alongside this improvement, let’s make it completely type-safe:</p>
<pre><code class="lang-typescript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">debugMethod</span>&lt;
  <span class="hljs-title">TThis</span>,
  <span class="hljs-title">TArgs</span> <span class="hljs-title">extends</span> [<span class="hljs-title">string</span>, <span class="hljs-title">number</span>],
  <span class="hljs-title">TReturn</span> <span class="hljs-title">extends</span> <span class="hljs-title">number</span>
&gt;(<span class="hljs-params">
  originalMethod: <span class="hljs-built_in">Function</span>,

  context: ClassMethodDecoratorContext&lt;
    TThis,
    (<span class="hljs-built_in">this</span>: TThis, ...args: TArgs) =&gt; TReturn
  &gt;
</span>) </span>{
  <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">replacementMethod</span>(<span class="hljs-params"><span class="hljs-built_in">this</span>: TThis, a: TArgs[0], b: TArgs[1]</span>): <span class="hljs-title">TReturn</span> </span>{
    <span class="hljs-keyword">const</span> now = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(<span class="hljs-built_in">Date</span>.now());

    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"start time"</span>, now.toISOString());

    <span class="hljs-keyword">const</span> result = originalMethod.call(<span class="hljs-built_in">this</span>, a, b);

    <span class="hljs-keyword">const</span> end = <span class="hljs-keyword">new</span> <span class="hljs-built_in">Date</span>(<span class="hljs-built_in">Date</span>.now());

    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"end time"</span>, end.toISOString());

    <span class="hljs-keyword">return</span> result;
  }

  <span class="hljs-keyword">return</span> replacementMethod;
}
</code></pre>
<p>Our decorator function in TypeScript 5.0 is greatly improved and now supports the following:</p>
<ul>
<li><p>Using generics to type a method’s arguments and return value; the method must accept a string and a number (<code>TArgs</code>) and return a number (<code>TReturn</code>)</p>
</li>
<li><p>Typing the <code>originalMethod</code> as a <code>Function</code></p>
</li>
<li><p>Using the built-in <code>ClassMethodDecoratorContext</code> helper type; a similar context type exists for every decorator kind</p>
</li>
</ul>
<p>We can test to see if our decorator is truly type-safe by inspecting errors when it is used incorrectly:</p>
<p><img src="https://paper-attachments.dropboxusercontent.com/s_19D4B78D1F1C72C5172016AA8E1797DE2FBDF440146335E743AA96678C0F41DC_1678709249215_Xnip2023-03-12_14-15-40.jpg" alt="Using the TypeScript 5.0 decorator with incorrectly typed arguments." /></p>
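<p>The same check in code form: a minimal sketch that repeats the decorator from above and applies it to a hypothetical <code>Order</code> class (the class and method names are illustrative only). The commented-out method shows a usage TypeScript rejects:</p>

```typescript
// The typed decorator from above, repeated so this sketch is self-contained.
function debugMethod<TThis, TArgs extends [string, number], TReturn extends number>(
  originalMethod: Function,
  context: ClassMethodDecoratorContext<TThis, (this: TThis, ...args: TArgs) => TReturn>
) {
  function replacementMethod(this: TThis, a: TArgs[0], b: TArgs[1]): TReturn {
    console.log("start time", new Date().toISOString());
    const result = originalMethod.call(this, a, b);
    console.log("end time", new Date().toISOString());
    return result;
  }
  return replacementMethod;
}

// Hypothetical class for illustration.
class Order {
  // OK: (string, number) => number satisfies TArgs and TReturn
  @debugMethod
  total(sku: string, quantity: number): number {
    return quantity * 10;
  }

  // Type error if uncommented: (flag: boolean) => number
  // does not satisfy TArgs extends [string, number]
  // @debugMethod
  // invalid(flag: boolean): number {
  //   return 0;
  // }
}

new Order().total("ABC-1", 3); // logs start/end times and returns 30
```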
<p>Now, let’s look at an actual use case for the new TypeScript 5.0 decorators.</p>
<h2 id="heading-decorator-factory-demo">Decorator factory demo</h2>
<p>We can use the type safety available in the TypeScript 5.0 decorators to create functions that return a decorator, otherwise known as a <a target="_blank" href="https://blog.logrocket.com/practical-guide-typescript-decorators/#use-cases-typescript-decorators">decorator factory</a>. Decorator factories allow us to customize the behavior of our decorators by passing some parameters in the factory.</p>
<p>For our demo, we’ll create a decorator factory that changes the class method argument based on its own arguments. This is possible with a TypeScript type ternary operator. Our example is inspired by REST API frameworks like NestJS.</p>
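<p>As a quick primer before the demo (the type names here are hypothetical and not part of the demo code), a conditional type resolves to one of two types based on a condition, much like a value-level ternary:</p>

```typescript
// A conditional type acts like a ternary at the type level.
interface AuthEnabled {
  auth: true;
}

// Resolves to [string] when TOptions has auth: true, otherwise to [].
type HandlerArgs<TOptions> = TOptions extends AuthEnabled ? [string] : [];

// Resolves to [string]: a single auth-header argument is required.
const withAuth: HandlerArgs<{ auth: true }> = ["Bearer token"];

// Resolves to []: no arguments are allowed.
const withoutAuth: HandlerArgs<{ auth: false }> = [];
```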
<p>We’ll call our module rest-framework. Let’s start by creating a blank TypeScript project using ts-node:</p>
<pre><code class="lang-bash">$ mkdir rest-framework

$ <span class="hljs-built_in">cd</span> rest-framework

$ npm init -y

$ npm install -D typescript@5.0.4 @types/node ts-node

$ touch index.ts

$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"console.log('Hello, world!');"</span> &gt; index.ts
</code></pre>
<p>Next, we’ll define the script to build and run the project in package.json:</p>
<pre><code class="lang-json">{
  <span class="hljs-comment">// ...</span>
  <span class="hljs-attr">"scripts"</span>: {
    <span class="hljs-attr">"build"</span>: <span class="hljs-string">"tsc"</span>,
    <span class="hljs-attr">"start"</span>: <span class="hljs-string">"ts-node index.ts"</span>
  }
}
</code></pre>
<p>Let’s run npm start to see it in action:</p>
<pre><code class="lang-bash">$ npm start


Hello, world!
</code></pre>
<p>Now, let’s define our types:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">interface</span> RouteOptionsAuthEnabled {
  auth: <span class="hljs-literal">true</span>;
}

<span class="hljs-keyword">interface</span> RouteOptionsAuthDisabled {
  auth: <span class="hljs-literal">false</span>;
}

<span class="hljs-keyword">type</span> RouteArguments = [<span class="hljs-built_in">string</span>] | [];

<span class="hljs-keyword">type</span> RouteDecorator&lt;TThis, TArgs <span class="hljs-keyword">extends</span> RouteArguments&gt; = (
  originalMethod: <span class="hljs-built_in">Function</span>,

  context: ClassMethodDecoratorContext&lt;
    TThis,
    <span class="hljs-function">(<span class="hljs-params"><span class="hljs-built_in">this</span>: TThis, ...args: TArgs</span>) =&gt;</span> <span class="hljs-built_in">string</span>
  &gt;
) =&gt; <span class="hljs-built_in">void</span>;

<span class="hljs-comment">// Next, let’s define the factory decorator:</span>

<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">Route</span>&lt;
  <span class="hljs-title">TThis</span>, // <span class="hljs-title">The</span> <span class="hljs-title">user</span> <span class="hljs-title">can</span> <span class="hljs-title">enable</span> <span class="hljs-title">or</span> <span class="hljs-title">disable</span> <span class="hljs-title">auth</span>
  <span class="hljs-title">TOptions</span> <span class="hljs-title">extends</span> <span class="hljs-title">RouteOptionsAuthEnabled</span> | <span class="hljs-title">RouteOptionsAuthDisabled</span>
&gt;(<span class="hljs-params">
  options: TOptions
</span>): <span class="hljs-title">RouteDecorator</span>&lt;
  <span class="hljs-title">TThis</span>, // <span class="hljs-title">Do</span> <span class="hljs-title">not</span> <span class="hljs-title">accept</span> <span class="hljs-title">a</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">that</span> <span class="hljs-title">uses</span> <span class="hljs-title">a</span> <span class="hljs-title">string</span> <span class="hljs-title">for</span> <span class="hljs-title">an</span> <span class="hljs-title">argument</span> <span class="hljs-title">if</span> <span class="hljs-title">auth</span> <span class="hljs-title">is</span> <span class="hljs-title">disabled</span>
  <span class="hljs-title">TOptions</span> <span class="hljs-title">extends</span> <span class="hljs-title">RouteOptionsAuthEnabled</span> ? [<span class="hljs-title">string</span>] : []
&gt; </span>{
  <span class="hljs-title">return</span> &lt;<span class="hljs-title">TThis</span>&gt;(<span class="hljs-params">
    target: (
      <span class="hljs-built_in">this</span>: TThis,

      ...args: TOptions <span class="hljs-keyword">extends</span> RouteOptionsAuthEnabled ? [<span class="hljs-built_in">string</span>] : []
    ) =&gt; <span class="hljs-built_in">string</span>,

    context: ClassMethodDecoratorContext&lt;
      TThis,
      (
        <span class="hljs-built_in">this</span>: TThis,

        ...args: TOptions <span class="hljs-keyword">extends</span> RouteOptionsAuthEnabled ? [<span class="hljs-built_in">string</span>] : []
      ) =&gt; <span class="hljs-built_in">string</span>
    &gt;
  </span>) =&gt; </span>{};
}
</code></pre>
<p>Now we have a route decorator that changes the class method parameter types depending on the user’s options.</p>
<p>Let’s create an example <code>Controller</code> class that uses the <code>Route</code> decorator to act as our test case:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> Controller {
  <span class="hljs-meta">@Route</span>({ auth: <span class="hljs-literal">true</span> })
  get(authHeaderValue: <span class="hljs-built_in">string</span>): <span class="hljs-built_in">string</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"get http method handled!"</span>);

    <span class="hljs-keyword">return</span> <span class="hljs-string">"response"</span>;
  }

  <span class="hljs-meta">@Route</span>({ auth: <span class="hljs-literal">false</span> })
  post(): <span class="hljs-built_in">string</span> {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"post http method handled!"</span>);

    <span class="hljs-keyword">return</span> <span class="hljs-string">"response"</span>;
  }
}
</code></pre>
<p>We can see that TypeScript fails to compile if we try to use <code>authHeaderValue</code> in the post route:</p>
<p><img src="https://paper-attachments.dropboxusercontent.com/s_19D4B78D1F1C72C5172016AA8E1797DE2FBDF440146335E743AA96678C0F41DC_1681851657521_image.png" alt /></p>
<p>The decorator factory use case is a simple example, but it demonstrates the power of what type-safe decorators can do.</p>
<h2 id="heading-refactoring-existing-decorators">Refactoring existing decorators</h2>
<p>If you’re using existing TypeScript decorators, you’ll want to refactor them to use the new API and take advantage of its many benefits. Basic decorators can be easily refactored, but the differences are substantial enough that advanced use cases will take some effort.</p>
<p>For best results, follow these steps to refactor existing decorators:</p>
<ul>
<li><p>Write unit tests for your decorators</p>
</li>
<li><p>Remove the <code>experimentalDecorators</code> TypeScript compiler flag or set it to <code>false</code></p>
</li>
<li><p>Read this extensive <a target="_blank" href="https://2ality.com/2022/10/javascript-decorators.html">summary of how the new proposal works</a></p>
</li>
<li><p>Understand the limitations of modern decorators (we’ll cover this in more detail later in this article)</p>
</li>
<li><p>Rewrite the decorators untyped, using <code>any</code> in place of all types</p>
</li>
<li><p>Make sure unit tests pass</p>
</li>
<li><p>Add types</p>
</li>
</ul>
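<p>As a sketch of the untyped rewrite step, here is a hypothetical logging decorator in both styles, with <code>any</code> everywhere before the types are added back:</p>

```typescript
// Before: legacy (experimentalDecorators) style, mutating the property descriptor.
function logLegacy(target: any, key: string, descriptor: PropertyDescriptor) {
  const original = descriptor.value;
  descriptor.value = function (...args: any[]) {
    console.log(`calling ${key}`);
    return original.apply(this, args);
  };
  return descriptor;
}

// After: TypeScript 5.0 style, returning the replacement method directly.
function logModern(originalMethod: any, context: ClassMethodDecoratorContext) {
  return function (this: any, ...args: any[]) {
    console.log(`calling ${String(context.name)}`);
    return originalMethod.apply(this, args);
  };
}

class Greeter {
  @logModern
  greet(name: string): string {
    return `Hello, ${name}!`;
  }
}

console.log(new Greeter().greet("world")); // logs "calling greet", then "Hello, world!"
```

With the unit tests passing against this untyped version, the final step is to reintroduce generics as shown earlier in the article.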
<h2 id="heading-understanding-the-limitations-of-modern-decorators">Understanding the limitations of modern decorators</h2>
<p>The modern decorator implementation is great news for TypeScript developers, but notable features are missing. First, there’s no support for decorating method parameters. Parameter decorators are out of scope for the current proposal, so hopefully they will be added in a future one. Their omission is notable because popular libraries, like type-graphql, rely on them in important ways, such as writing resolvers:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">class</span> Resolver {
  <span class="hljs-meta">@Query</span>(<span class="hljs-function">(<span class="hljs-params">returns</span>) =&gt;</span> Recipe)
  <span class="hljs-keyword">async</span> recipe(<span class="hljs-meta">@Arg</span>(<span class="hljs-string">"recipeId"</span>) recipeId: <span class="hljs-built_in">string</span>) {
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>.recipeRepository.findOneById(recipeId);
  }
}
</code></pre>
<p>Second, TypeScript 5.0 cannot emit decorator metadata. Subsequently, it doesn’t integrate with the <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Reflect">Reflect API</a> and won’t work with the <a target="_blank" href="https://github.com/rbuckton/reflect-metadata">reflect-metadata</a> npm package.</p>
<p>Third, the <code>--emitDecoratorMetadata</code> flag, which was previously used to access and modify metadata for given decorators, is no longer supported. Unfortunately, there’s no direct way to achieve the same functionality by getting the metadata at runtime, although some cases can be refactored. For example, let's define a decorator that validates a function’s parameter types at runtime:</p>
<pre><code class="lang-typescript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">validateParameterType</span>(<span class="hljs-params">
  target: <span class="hljs-built_in">any</span>,
  propertyKey: <span class="hljs-built_in">string</span> | symbol
</span>): <span class="hljs-title">void</span> </span>{
  <span class="hljs-keyword">const</span> methodParameterTypes: unknown[] =
    <span class="hljs-built_in">Reflect</span>.getMetadata(<span class="hljs-string">"design:paramtypes"</span>, target, propertyKey) ?? [];

  <span class="hljs-comment">// design:paramtypes stores parameter constructors, so compare with String itself</span>
  <span class="hljs-keyword">const</span> firstParameterType = methodParameterTypes[<span class="hljs-number">0</span>];

  <span class="hljs-keyword">if</span> (firstParameterType !== <span class="hljs-built_in">String</span>) {
    <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">TypeError</span>(<span class="hljs-string">"First parameter must be a string"</span>);
  }
}
</code></pre>
<p>We can achieve similar functionality with the improved type safety provided by TypeScript 5.0. We simply add the arguments of the method we are decorating, like so:</p>
<pre><code class="lang-typescript">function debugMethod&lt;TThis, TArgs extends [string], TReturn&gt;(
  // ...same originalMethod and context parameters as before
) {
  // ...
}
</code></pre>
<p>In theory, we could use this approach to refactor decorators that depend on getting types from Reflect for <code>design:type</code>, <code>design:paramtypes</code>, and <code>design:returntype</code>. This is a different way to write decorators; it is not a simple refactor because it requires using TypeScript type inference to refactor how types are acquired and validated.</p>
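<p>For instance (the decorator name here is hypothetical, not from any library), the runtime Reflect check above can become a type constraint, optionally keeping a value-level guard for untyped callers:</p>

```typescript
// Constrain the first parameter to string at the type level; the runtime
// guard remains useful for callers outside TypeScript's checks.
function firstArgMustBeString<TThis, TArgs extends [string, ...unknown[]], TReturn>(
  originalMethod: (this: TThis, ...args: TArgs) => TReturn,
  context: ClassMethodDecoratorContext<TThis, (this: TThis, ...args: TArgs) => TReturn>
) {
  return function (this: TThis, ...args: TArgs): TReturn {
    if (typeof args[0] !== "string") {
      throw new TypeError("First parameter must be a string");
    }
    return originalMethod.apply(this, args);
  };
}

// Hypothetical class for illustration.
class Repository {
  @firstArgMustBeString
  findById(id: string): string {
    return `found ${id}`;
  }
}

new Repository().findById("abc-123"); // returns "found abc-123"
```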
<h2 id="heading-conclusion">Conclusion</h2>
<p>The new decorator implementation in TypeScript 5.0 follows the official ECMAScript Stage-3 proposal and is now type-safe, making it easier to implement and understand. However, some notable features are missing, such as support for decorating method parameters and the ability to emit decorator metadata.</p>
<p>Basic decorators can be easily refactored to the TypeScript 5.0 version, but advanced use cases will require more effort. Developers can refactor existing decorators to use the new API and take advantage of the associated benefits, becoming less dependent on external libraries and less likely to need further refactors in the future. These changes to TypeScript's implementation of decorators are a benefit to the broader ecosystem, but community adoption could take some time.</p>
]]></content:encoded></item><item><title><![CDATA[Monorepo version management with the changesets NPM package]]></title><description><![CDATA[One of the most frustrating aspects of developing software can be upgrading packages. Vague release notes can bog down the process and make it hard to know how to upgrade. Package maintainers use a pattern called Semantic Versioning (semver) to descr...]]></description><link>https://blog.alec.coffee/monorepo-version-management-with-the-changesets-npm-package</link><guid isPermaLink="true">https://blog.alec.coffee/monorepo-version-management-with-the-changesets-npm-package</guid><category><![CDATA[monorepo]]></category><category><![CDATA[npm]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Tue, 27 Dec 2022 13:31:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/sJa0qmawWnM/upload/c0297949de7f7893406a1ea4f97ae784.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the most frustrating aspects of developing software can be upgrading packages. Vague release notes can bog down the process and make it hard to know how to upgrade. Package maintainers use a pattern called <a target="_blank" href="https://semver.org/">Semantic Versioning</a> (semver) to describe changes in new versions. It tells consuming applications how to handle updates. If a project is small, maintainers have an easy time choosing the semver type for a release. Performing other tasks like writing release notes also doesn't take much time. In contrast, when a project gets larger, maintainers find this to be a time-consuming task. For example, <a target="_blank" href="https://github.com/facebook/jest">Jest</a> is a large mono-repo with packages that depend on each other, are public, and are consumed individually. With pull requests merging frequently, maintainers have a tough time figuring out what gets shipped in a package release. 
Maintainers need to track merging changes, document how apps should upgrade, update packages within the repository and publish packages. There are tools to make this easier, with one of the prominent ones being <a target="_blank" href="https://github.com/changesets/changesets">changesets</a>.</p>
<h3 id="heading-changesets">changesets</h3>
<p>Without using a tool, maintainers can provide checklists in pull request descriptions to remind contributors to add details about their changes. The goal is to reduce the burden and the work a maintainer has to do. This strategy is decent, but reviewers miss things, and it still leaves the maintainer to merge everything into a single release document when multiple pull requests are merged. The maintainer could have little context, as the time between merges and a package release can be weeks. Maintainers want to encourage many contributions and to make them seamless and easy. changesets is a tool created to help with the work of version management inside mono-repos. It provides a CLI interface for contributors to describe their changes in a pull request alongside the semver bump type. These bits of information are called, aptly, “changesets”. The tool is then used to perform versioning, which includes consuming all of the “changesets” since the last release, finding the maximum semver bump type, updating the changelog, and updating the appropriate internal packages.</p>
<h3 id="heading-example">Example</h3>
<p>To demonstrate the capabilities and advantages of using changesets, we can see how different changes to a codebase are handled with and without it. Imagine an open-source e-commerce platform made specifically for pet stores. The team behind the application is having a hard time organizing the code and wants to make parts of the codebase reusable across different applications. They decide a mono-repo suits this well and split the pieces of code they see as generic into separate packages.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680147166126/9e493fa7-9a24-4aeb-ae12-cabdebbd7bde.png" alt class="image--center mx-auto" /></p>
<p>Packages like <code>ordering</code> become adopted outside the codebase and make the lives of other developers easier by handling the ordering of items in their stores. During the course of the next week, developers on the core team and outside contributors make changes to the package. An internal developer adds a new feature that makes it easy to update the stock of an item.</p>
<pre><code class="lang-jsx"><span class="hljs-comment">// Updates the stock of an item in the database</span>
<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">updateStock</span>(<span class="hljs-params">item, quantity</span>) </span>{}
</code></pre>
<p>This change requires a minor semver bump as apps or packages that get this update can safely upgrade with no breaking changes to any existing APIs.</p>
<p>This is not the only change that happens, an external contributor reports a critical bug where <code>sku</code> cannot be entered into the database when items are being inserted. <code>sku</code> is now a required field on the <code>addItem</code> function.</p>
<pre><code class="lang-jsx"><span class="hljs-comment">// Adds an item in the database</span>
<span class="hljs-keyword">export</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">addItem</span>(<span class="hljs-params">item, sku</span>) </span>{}
</code></pre>
<p>This change requires a major semver bump as apps or packages must make code changes to upgrade to this new version safely.</p>
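<p>The semver rules behind these two bump types can be sketched in code (a hypothetical helper for illustration, not part of the changesets API):</p>

```typescript
// Illustrates how each semver bump type changes a version string.
type BumpType = "major" | "minor" | "patch";

function bump(version: string, type: BumpType): string {
  const [major, minor, patch] = version.split(".").map(Number);
  if (type === "major") {
    // Breaking change: consumers must change code to upgrade safely
    return `${major + 1}.0.0`;
  }
  if (type === "minor") {
    // New feature: consumers can upgrade with no code changes
    return `${major}.${minor + 1}.0`;
  }
  // Patch: bug fix, always safe to upgrade
  return `${major}.${minor}.${patch + 1}`;
}

console.log(bump("1.0.0", "minor")); // the updateStock feature: "1.1.0"
console.log(bump("1.1.0", "major")); // the required sku argument: "2.0.0"
```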
<p>Let’s see how these two changes can be released with and without changesets.</p>
<h3 id="heading-without-changesets">Without changesets</h3>
<p>After a recent release, an external developer opens a pull request to the monorepo package <code>ordering</code> to add a new feature:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680147062559/7790b4f1-7dbb-4192-8fa7-8d583ef87e3b.png" alt class="image--center mx-auto" /></p>
<p>The maintainer of the repository is pleased but notices that the contributor didn’t update the release notes in the pull request. This is outlined in the guidelines but contributors sometimes miss this which isn’t to fault them, they are focused on making changes.</p>
<p>The maintainer asks for the contributor to add the notes:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680147029802/84f4006e-be1e-4cd6-9f1c-f0baaa91248c.png" alt class="image--center mx-auto" /></p>
<p>This change, along with others, gets merged, and the maintainer proceeds to release the package a few weeks later. As part of this process, they need to figure out the semver bump type. They need to look over every pull request and see if there were any breaking changes, feature additions, and/or bug fixes. They find the <code>sku</code> addition to <code>addItem</code> and decide a major version bump is needed. As part of the release, they then need to find every package which depends on the <code>ordering</code> package, like the <code>pet-store</code> app, and bump the dependency from 1.0.0 to 2.0.0. The maintainer is tasked with upgrading the app to comply with the changes. It takes a while, as the contributor didn’t write much documentation.</p>
<p>Afterwards they need to create a change-log entry, by again, going through all of the pull requests merged since the last release. This alongside possibly other tasks is making the maintainer's job tedious, prone to error, and cumbersome.</p>
<h3 id="heading-with-changesets">With changesets</h3>
<p>The contributor runs <code>yarn changeset</code> and creates a “changeset”. They describe the new <code>updateStock</code> function, how consuming packages can use it, which package was affected by the change, and the semver bump type. They push the change and open a pull request.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680147107847/79a0ea00-a768-455b-bcc7-29e409afde18.png" alt class="image--center mx-auto" /></p>
<p>The maintainer reviews the pull request and merges it, no questions asked! The “changeset” file contained everything that is needed to make the release process easy. After this, other pull requests come in and one includes a major version bump, this is due to the <code>sku</code> argument addition to <code>addItem</code>.</p>
<pre><code class="lang-markdown">---
"changesets-package-ordering": major
---

Add sku argument to addItem function

The <span class="hljs-code">`sku`</span> argument was added to the <span class="hljs-code">`addItem`</span> function. It is a required argument due to changing business needs.

Example usage:
...
</code></pre>
<p>Now the maintainer is ready to release the next version of the <code>ordering</code> package. To do this, all they must do is run the <code>yarn changeset version</code> command. This command removes all of the local “changeset” files, creates the change-log entry, and automatically bumps the version of the <code>ordering</code> package in the package itself and in the dependants like the <code>pet-store</code> app.</p>
<pre><code class="lang-markdown">## 2.0.0

### Major Changes

- ea13cc5: Add sku argument to addItem function

  The `sku` argument was added to the `addItem` function. It is a required argument due to changing business needs.

### Minor Changes

- 7034f8a: Add the updateStock utility function

  The `updateStock` utility function is used by applications to update the stock of an item.
</code></pre>
<p>Because the contributor needed to write detailed documentation for their breaking change, the maintainer has an easy time upgrading the <code>pet-store</code> app to comply with the changes.</p>
<p>With the help of changesets, the maintainer was easily able to create a release and update apps across the monorepo in a standardized and reproducible way.</p>
<h3 id="heading-automation">Automation</h3>
<p>We can make working with changesets even easier with automation. We always want to ensure that the correct steps are being followed by contributors and maintainers.</p>
<p>We can make sure contributors always create a “changeset” by installing <a target="_blank" href="https://github.com/apps/changeset-bot">the changesets Github Bot</a>. This bot will comment on pull requests when one is missing:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680147132946/b20f3ccd-c63f-4937-92b2-70f57da80590.png" alt class="image--center mx-auto" /></p>
<p>We will then also want to ensure that no pull requests can be merged without a “changeset” by running the <code>yarn changeset status</code> command inside of a Github Action. This check fails when contributors forget to create a changeset.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1680147152770/cfb82f48-8534-4a33-99fa-fe626376e23c.png" alt class="image--center mx-auto" /></p>
<p>When a new version is ready to be created, maintainers can further automate this step by utilizing the <code>changesets</code> Github Action. This action will run the <code>yarn changeset version</code> command mentioned earlier and open a pull request with all of the resulting file changes.</p>
<h3 id="heading-in-conclusion">In Conclusion</h3>
<p>Using the changesets package, many of the steps which make the release process a burden on maintainers can be lifted. Contributors can be reminded to make detailed notes about their changes and maintainers can easily create new versions of packages using a single CLI command. With high-quality and recurring package releases, consumers will have an easy time upgrading to new versions.</p>
]]></content:encoded></item><item><title><![CDATA[Building React Apps in Deno using Aleph.js and Ruck]]></title><description><![CDATA[Building a frontend in the modern era is tough because of the plethora of choices you must make. Developers will often reach for a popular framework, React, and find themselves needing more tools to get the job done. These could include a bundler, te...]]></description><link>https://blog.alec.coffee/building-react-apps-in-deno-using-alephjs-and-ruck</link><guid isPermaLink="true">https://blog.alec.coffee/building-react-apps-in-deno-using-alephjs-and-ruck</guid><category><![CDATA[React]]></category><category><![CDATA[Deno]]></category><category><![CDATA[import-maps]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Wed, 02 Nov 2022 03:22:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/Pihl8kTtX-s/upload/2c51b1fb82f1d369a9cd0e3eb48417b1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Building a frontend in the modern era is tough because of the plethora of choices you must make. Developers will often reach for a popular framework, React, and find themselves needing more tools to get the job done. These could include a bundler, a test runner, a linter, and more. Not only that, but they also need to consider SEO, styling assets, routing, data-fetching, and the list goes on. Developers should consider all of these when creating a production-ready, performant React app. Projects like <a target="_blank" href="https://create-react-app.dev/">create-react-app</a> and <a target="_blank" href="https://nextjs.org/">Next.js</a> have gained popularity for providing features that were tedious to put in place on their own. Deno is a new JavaScript runtime that is gaining support from the community. 
Deno aligns with web standards by supporting <a target="_blank" href="https://flaviocopes.com/es-modules/">ES Modules</a>, <a target="_blank" href="https://github.com/WICG/import-maps">import maps</a>, and <a target="_blank" href="https://deno.land/manual@v1.26.0/runtime/web_platform_apis#fetch-api">the fetch API</a>. Most React frameworks today run only on Node.js, but some are now released and built on Deno. Out of the box, Deno provides things these frameworks must implement on their own when running on Node.js. Because Deno supports ES Modules and TypeScript natively, frameworks can avoid build steps like transpilation. Deno also has a large standard library and developer tools for common tasks like linting, formatting, and testing. Some are wary of Deno because it does not support NPM and is not compatible with all Node.js third-party packages. In my experience, there are many workarounds for these limitations. Ruck and Aleph.js are Deno-native React web frameworks that support features like server-side rendering, data-fetching, routing, and modifying the HTTP server response. There are key similarities and differences between Ruck and Aleph.js that are important to understand when choosing which one to use.</p>
<h2 id="heading-ruck">Ruck</h2>
<p><a target="_blank" href="https://github.com/jaydenseric/ruck#installation">Ruck</a> is a minimal framework for building React apps with Deno. It leans into Deno-specific features like ES Modules and import maps, which makes it a great showcase for the new runtime. It doesn’t use a bundler, so it does not support writing React components in JSX, and all configuration is defined in code. Using <code>createElement</code> everywhere is not the best developer experience; I could see another framework adopting Ruck under the hood to smooth over these rough edges. Ruck is what you are looking for if you want control over what is going on and don’t like the “magic” of other frameworks.</p>
<p>An example component written with Ruck:</p>
<pre><code class="lang-tsx">import { createElement as h } from "react";

export const css = new Set([
  "/components/ExampleComponent.css",
]);

export default function ExampleComponent({ href, children }) {
  // JSX isn't supported, so elements are created with React's createElement (aliased as h)
  return h(
    "a",
    { className: "ExampleComponent__a", href, onClick: () =&gt; console.log("Hello World!") },
    children,
  );
}
</code></pre>
<h2 id="heading-alephjs">Aleph.js</h2>
<p>Aleph.js is a full-stack web framework for building React apps with Deno. Second only to <a target="_blank" href="https://github.com/denoland/fresh">Fresh</a>, it is the most popular Deno-native React framework. It leans into Deno for some of its features but also provides much more. Aleph.js is inspired by Next.js, even offering some of the same syntax for certain features. Aleph.js supports server-side rendering as well as static-site generation, standalone APIs, file-based routing, and React Hot Module Reloading. To support file types such as JSX and CSS, it doesn’t use webpack but instead uses <a target="_blank" href="https://github.com/evanw/esbuild">esbuild</a>.</p>
<p>An example component is written in Aleph.js:</p>
<pre><code class="lang-tsx">import React from 'react';
import Logo from '../components/logo.tsx'

export default function ExampleComponent() {
  return (
    &lt;div&gt;
      &lt;Logo /&gt;
      &lt;h1&gt;Hello World!&lt;/h1&gt;
    &lt;/div&gt;
  )
}
</code></pre>
<h2 id="heading-similarities">Similarities</h2>
<p>There are similarities between Ruck and Aleph.js. One of those similarities is the support for <a target="_blank" href="https://github.com/WICG/import-maps#the-basic-idea">import maps</a>. Without the usage of NPM or another package manager, Deno depends on <a target="_blank" href="https://deno.land/manual/linking_to_external_code">HTTP imports</a>. This means imports usually look like this:</p>
<pre><code class="lang-tsx">import React from "https://esm.sh/stable/react@18.2.0/es2021/react.js”;
</code></pre>
<p>Deno recommends putting all module imports into a single <code>deps.ts</code> file to be re-exported. The issue with this approach is that the imports are still not compatible with their Node.js/webpack counterparts. A better (and browser-compliant) way to do this is with import maps, a recent browser feature that tells the browser where a module’s dependencies are located.</p>
<p>An example import map:</p>
<pre><code class="lang-tsx">{
  "imports": {
    "react": "https://esm.sh/stable/react@18.2.0/es2021/react.js",
  }
}
</code></pre>
<p>A component that uses the import map:</p>
<pre><code class="lang-tsx">import React from "react";

export default function ExampleComponent() {
  return &lt;div /&gt;;
}
</code></pre>
<p>To use an import map in Aleph.js, define a file named <code>import_map.json</code> in the root directory. Using one in Ruck is also simple: define the file and pass it to Deno at runtime:</p>
<pre><code class="lang-tsx">deno run \
    --allow-env \
    --allow-net \
    --allow-read \
    --import-map=importMap.json \
    scripts/ruck-serve.mjs
</code></pre>
<p>The issue with import maps is that browser support is still poor, with Safari and Firefox not supporting them out of the box. The good news is that Ruck uses a shim to provide support for these browsers.</p>
<p>Another similarity is their focus on server-side rendering (SSR) React components. SSR can provide performance, SEO, and other benefits over client-side rendering. If a React component depends on fetched data, fetching it on the server means the component can be fully rendered before the response is sent to the client. This means no loading states to show to the user and generally better performance. Ruck supports data fetching on the server at the component level, whereas other frameworks usually only support this at the page level. Aleph.js lets you define <a target="_blank" href="https://alephjs.vercel.app/docs/basic-features/ssr-and-ssg">an ssr function</a> inside a page component file to achieve this. Aleph.js also supports a special hook, <code>useDeno</code>, for use in a component.</p>
<p>Example of using <code>useDeno</code> to fetch data on the server side in Aleph.js:</p>
<pre><code class="lang-tsx">import React from 'react'
import { useDeno, useRouter } from 'aleph'

export default function Post() {
  const { params } = useRouter()
  const post = useDeno(async () =&gt; {
    return await (await fetch(`https://.../post/${params.id}`)).json()
  })

  return (
    &lt;h1&gt;{post.title}&lt;/h1&gt;
  )
}
</code></pre>
<p>When it comes to styling your React app with CSS, both Ruck and Aleph.js support component-level CSS imports. This allows for sending CSS only to the browsers that request it (i.e., when a component renders). Ruck allows for this via an exported component variable named <code>css</code>. You can achieve the same behavior in <a target="_blank" href="https://alephjs.vercel.app/docs/basic-features/built-in-css-support">a variety of ways</a> with Aleph.js, but the recommended approach is to use <a target="_blank" href="https://github.com/css-modules/css-modules">CSS modules</a>.</p>
<p>Example of using the <code>css</code> export in Ruck:</p>
<pre><code class="lang-tsx">import React from 'react'
import Heading, { css as cssHeading } from "./Heading.mjs";
import Para, { css as cssParagraph } from "./Para.mjs";

export const css = new Set([
  ...cssHeading,
  ...cssParagraph,
  "/components/ExampleComponent.css",
]);

export default function ExampleComponent() {
   ...
}
</code></pre>
<p>Example of using a CSS module in Aleph.js:</p>
<pre><code class="lang-tsx">import React from 'react'
import styles from './exampleComponent.module.css'

export default function ExampleComponent() {
  return (
    &lt;&gt;
      &lt;h1 className={styles.title}&gt;Hi :)&lt;/h1&gt;
    &lt;/&gt;
  )
}
</code></pre>
<p>A perk of being a server-side rendered application is having access to the HTTP request during the rendering lifecycle. This can be helpful if you need to access headers or change the response. With Ruck, the HTTP response is available in a React context, <code>TransferContext</code>. In Aleph.js, we can access the request through the <code>useDeno</code> hook.</p>
<p>Example of modifying the HTTP response in Ruck:</p>
<pre><code class="lang-tsx">import React, { useContext } from 'react';
import TransferContext from "ruck/TransferContext.mjs";

export default function PageError({ errorStatusCode, title, description }) {
  const ruckTransfer = useContext(TransferContext);

  if (ruckTransfer) ruckTransfer.responseInit.status = errorStatusCode;

  ...
}
</code></pre>
<p>Example of modifying the HTTP response in Aleph.js:</p>
<pre><code class="lang-tsx">import React from 'react';
import { useDeno } from 'aleph';

export default function ExampleComponent() {
  const isLoggedIn = useDeno(req =&gt; {
    return req.headers.get('Auth') === 'XXX'
  }, { revalidate: true })

  return (
    &lt;p&gt;isLoggedIn: {isLoggedIn}&lt;/p&gt;
  )
}
</code></pre>
<h2 id="heading-differences">Differences</h2>
<p>There are notable differences between the two frameworks to be aware of. Popularity and developer experience are the two largest. Ruck is new, so it doesn’t have the community backing that a framework like Aleph.js has. Looking at the <a target="_blank" href="https://yoshixmk.github.io/deno-x-ranking/">Deno X Ranking</a>, Aleph.js is the second most popular React framework by GitHub star count with 4.8k stars, compared to Ruck’s 120. Star count isn’t the best metric, but it gives you a good idea of developer interest.</p>
<p>Ruck will favor developers who like a high level of control over exactly how their application functions. Ruck keeps configuration in code; routing, for example, is something you must define yourself, while Aleph.js handles it for you. Aleph.js can run with zero config, has project templates to get developers started, and lets you opt into features through configuration. In Ruck, you must spend time setting up the basics of the application yourself.</p>
<p>Static websites are desirable if your web application has all the data it needs at build time. This can simplify deployments, as no running Deno server is needed; place the built folder of HTML, CSS, and JS on a deployment target like GitHub Pages or Cloudflare. <a target="_blank" href="https://alephjs.vercel.app/docs/basic-features/ssr-and-ssg">Aleph.js supports static-site generation</a>, which is helpful for these situations, while Ruck does not. Like <code>getStaticPaths</code> in Next.js, you can define a <code>paths</code> key in the <code>ssr</code> function inside a component file to specify the paths this route can handle:</p>
<pre><code class="lang-tsx">import type { SSROptions } from 'aleph/types';

export const ssr: SSROptions = {
  paths: async () =&gt; {
    const posts = await (await fetch('https://.../api/posts')).json()
    return posts.map(({ id }) =&gt; `/post/${id}`)
  }
}
</code></pre>
<p>Then run <code>aleph build</code>, simple as that.</p>
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>With the popularity of Deno continuing to increase, Ruck and Aleph.js are two Deno-based React web frameworks catering to two different sets of developers. Ruck, being a newcomer, doesn’t have the same level of polish as Aleph.js, but it offers more control. Aleph.js offers a great developer experience with zero config needed and lots of powerful features. These minimal frameworks lean on modern built-in browser features, which can lead to a lean tech stack that contrasts with much of the complexity in today’s frontend ecosystem. Deno’s large set of built-in features means less work for third-party tools, so React frameworks can focus on developing innovative new features while developers rest easy knowing they made a great choice for their web application tech stack.</p>
]]></content:encoded></item><item><title><![CDATA[Using static site generation in Next.js, Gatsby.js, and Remix]]></title><description><![CDATA[If you are writing a web application in 2022, you are likely using modern frontend technologies like React, Vue, and Svelte. You are also likely using an API to get the data necessary to render pages. 
Using API network requests is easily one of the ...]]></description><link>https://blog.alec.coffee/using-static-site-generation-in-nextjs-gatsbyjs-and-remix</link><guid isPermaLink="true">https://blog.alec.coffee/using-static-site-generation-in-nextjs-gatsbyjs-and-remix</guid><category><![CDATA[React]]></category><category><![CDATA[Next.js]]></category><category><![CDATA[Remix]]></category><category><![CDATA[Gatsby]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Wed, 27 Apr 2022 12:03:04 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/ZNTPlG050tk/upload/v1658059287225/BgKvWQvGd.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you are writing a web application in 2022, you are likely using modern frontend technologies like <a target="_blank" href="https://reactjs.org/">React</a>, <a target="_blank" href="https://vuejs.org/">Vue</a>, and <a target="_blank" href="https://svelte.dev/">Svelte</a>. You are also likely using an API to get the data necessary to render pages. </p>
<p>Using API network requests is easily one of the slowest steps required to render pages, and a slow-running app can mean a poor user experience. <a target="_blank" href="https://blog.logrocket.com/optimizing-performance-react-application/">Having great performing pages</a> can also improve your <a target="_blank" href="https://en.wikipedia.org/wiki/Search_engine_optimization">search engine optimization</a> (SEO) dramatically. </p>
<p>If you own the API and know how to make it faster, great! But if not, you don’t have control over the performance and speed of your app. You may even be using <a target="_blank" href="https://vercel.com/blog/nextjs-server-side-rendering-vs-static-generation">server-side rendering</a>, which passes data fetching and page rendering to the server, and yet your app still may not be fast enough. This is where static site generation (SSG) comes into play. There are different ways to implement SSG, but first, an explanation of how it works.</p>
<h2 id="heading-what-is-static-site-generation">What is static site generation?</h2>
<p>One of the ways software developers can optimize their applications is by doing the work ahead of time or by using caching. </p>
<p>Static site generation is the process of building pages (pre-rendering) into static assets and serving them to users instead of doing it per request, especially when our data is static or doesn’t change often. We can also create as many builds as we want, and most of this work can be done on a hosted server, making the process easy.</p>
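<p>To make the idea concrete, here is a minimal sketch of pre-rendering in TypeScript. The types and function names (<code>Page</code>, <code>renderPage</code>, <code>buildSite</code>) are illustrative, not from any framework: at build time we turn every known page into static HTML that a CDN can serve with no per-request work.</p>

```typescript
// A sketch of static site generation: render every page ahead of time.
// All names here are illustrative assumptions, not a real framework API.
type Page = { slug: string; title: string; body: string };

// Render one page's data into an HTML string.
function renderPage(page: Page): string {
  return `<html><head><title>${page.title}</title></head>` +
    `<body>${page.body}</body></html>`;
}

// At build time, turn all known pages into a path-to-HTML map that a
// static host or CDN can serve without doing any work per request.
export function buildSite(pages: Page[]): Map<string, string> {
  const site = new Map<string, string>();
  for (const page of pages) {
    site.set(`/${page.slug}.html`, renderPage(page));
  }
  return site;
}
```

<p>A real SSG tool then writes this map out as files; re-running the build whenever the data changes plays the role of cache invalidation.</p>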
<p>Thankfully, there are many static site generation tools based on React.js, with some of the most interesting ones being <a target="_blank" href="https://www.gatsbyjs.com/">Gatsby</a>, <a target="_blank" href="https://nextjs.org/">Next.js</a>, and <a target="_blank" href="https://remix.run/">Remix.js</a>. All three achieve <a target="_blank" href="https://blog.logrocket.com/react-remix-vs-next-js-vs-sveltekit/">serving performant web applications in their own way</a>. </p>
<p>For example, Gatsby has a great ecosystem of plugins to get you started quickly. Next.js is flexible, allowing you to opt into SSG on a per-page basis. <a target="_blank" href="https://blog.logrocket.com/remix-guide-newly-open-sourced-react-framework/">Remix.js</a>, while it doesn’t currently support SSG in the traditional sense, strives for page performance through caching assets on edge servers.</p>
<p>After understanding how Gatsby, Next.js, and Remix achieve performance through caching and SSG, choosing the best one for the job becomes easier, and we can take the techniques used and apply them to other frameworks and tools.</p>
<h2 id="heading-using-gatsby-for-static-site-generation">Using Gatsby for static site generation</h2>
<p>Gatsby is a React-based framework made for building statically generated web applications. It is one of the most popular ways developers perform SSG on the web today and has a huge community and <a target="_blank" href="https://www.gatsbyjs.com/plugins/">plugin ecosystem</a>. </p>
<p>Here’s how it works. Developers write functions in Node.js to fetch data, create pages, and fill them with content. Matching these pages with React components (Gatsby calls these “templates”) is how Gatsby knows what to render and when. </p>
<p>Builds can be created on a developer’s local machine but also on many SaaS hosting solutions like Gatsby Cloud, Netlify, and others. Once a build is complete, it can then be deployed to a CDN for ultra-fast serving to users.</p>
<p><img src="https://paper-attachments.dropbox.com/s_C0546FE5A8F64775D8E87F12FEE36C2DA7164D789CC627F62DE935B22BA16C10_1648729126619_gatsby-deployment.drawio.png" alt="Example Gatsby architecture" /></p>
<p>One of the best aspects of Gatsby is that its developer ecosystem is extensive. It has data-source plugins that make it incredibly easy to fetch data from an external API (take Shopify, for example) and image optimization plugins, just to name a few. </p>
<p>Gatsby seems to have a plugin for everything — over 2,000 and counting! Not only does it have several plugins to choose from, but the documentation is extensive and there are many <a target="_blank" href="https://www.gatsbyjs.com/starters/">starter templates to take inspiration from</a>. </p>
<p>With Gatsby, you get a very quick website by default while you’re developing at a quick rate. If you are building a frontend for a CMS or creating a commerce backend, Gatsby is a great choice.</p>
<p>Here is an example of the code used to generate pages in Gatsby:</p>
<pre><code class="lang-tsx">exports.createPages = async function ({ graphql, actions }) {
  const { data } = await graphql(`
    query {
      allMarkdownRemark {
        nodes {
          fields {
            slug
          }
        }
      }
    }
  `)
  data.allMarkdownRemark.nodes.forEach((node) =&gt; {
    const slug = node.fields.slug
    actions.createPage({
      path: slug,
      component: require.resolve(`./src/templates/blog-post.js`),
      context: { slug: slug },
    })
  })
};
</code></pre>
<p>Of course, all frameworks have their downsides, and Gatsby is no exception to this rule. Here are some cons to using Gatsby.</p>
<h3 id="heading-cons-to-using-gatsby-for-static-site-generation">Cons to using Gatsby for static site generation</h3>
<ul>
<li>Server-side rendering (SSR) is relatively new to Gatsby, so the docs aren’t thorough</li>
<li>The build process is tough to debug</li>
<li>Devs must use GraphQL to get data into React components, which may make the learning curve steep for some</li>
<li>Many aspects of Gatsby, including data fetching, are Gatsby-framework specific, so it’s challenging to switch to a different framework in the future</li>
<li>Builds can become slow depending on the amount of data processing you need; like most SSG-based technologies, as your data grows, so do the number of pages to create and images to process</li>
</ul>
<p>Considering the trade-offs, however, Gatsby can still be a great choice for your project.</p>
<h2 id="heading-static-site-generation-with-nextjs">Static site generation with Next.js</h2>
<p><a target="_blank" href="https://blog.logrocket.com/creating-website-next-js-react/">Next.js is a React-based framework</a> for building both dynamically and statically rendered web applications. It’s very popular and has many developers using it daily. </p>
<p>Next.js relatively recently added SSG support, with the unique proposition that it can be done on a per-page basis. The build process is similar to traditional SSG frameworks like Gatsby, but you can choose which pages SSG occurs on.</p>
<p>Developers also have the choice of using incremental static regeneration (ISR), which is an advanced method that builds pages at runtime if they are not present or need to be rebuilt due to developer-set invalidation. </p>
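<p>ISR is opted into by returning a <code>revalidate</code> interval from <code>getStaticProps</code>. A minimal sketch, where <code>fetchPosts</code> is a hypothetical stand-in for a real API call:</p>

```typescript
// A sketch of Next.js Incremental Static Regeneration (ISR).
// `Post` and `fetchPosts` are hypothetical; in a real app, this code
// would live in a page file such as pages/blog.tsx.
type Post = { id: number; title: string };

async function fetchPosts(): Promise<Post[]> {
  // Stand-in for fetching from a CMS or API.
  return [{ id: 1, title: "Hello ISR" }];
}

export async function getStaticProps() {
  const posts = await fetchPosts();
  return {
    props: { posts },
    // Ask Next.js to re-build this page in the background at most
    // once every 60 seconds when requests come in.
    revalidate: 60,
  };
}
```

<p>Pages are served from the last successful build, so users never wait on the regeneration itself.</p>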
<p>If you have only chosen to use SSG for your pages, deployment is similar to Gatsby and can be done with a variety of vendors. It uses Node.js to fetch data and build your pages. If you use <a target="_blank" href="https://nextjs.org/docs/advanced-features/static-html-export#unsupported-features">any of the SSG unsupported features</a>, such as using a server to fetch data and build pages at runtime, you’ll have a more involved deployment process.</p>
<p><img src="https://paper-attachments.dropbox.com/s_C0546FE5A8F64775D8E87F12FEE36C2DA7164D789CC627F62DE935B22BA16C10_1648729160089_next-deployment.drawio.png" alt="Example Next.js architecture" /></p>
<p>One of the nicest aspects of Next.js is that it gives you the freedom to do what you want. As mentioned, you get the choice on a per-page basis to use SSG, SSR, or the new method, ISR. Better yet, it doesn’t prescribe anything like GraphQL when it comes to getting the data into your React components.</p>
<p>Here is an example of the code required for Next.js to statically generate a page:</p>
<pre><code class="lang-tsx">function Blog({ posts }) {
  return (
    &lt;ul&gt;
      {posts.map((post) =&gt; (
        &lt;li&gt;{post.title}&lt;/li&gt;
      ))}
    &lt;/ul&gt;
  )
}

export async function getStaticProps() {
  const res = await fetch('https://example.com/posts')
  const posts = await res.json()

  return {
    props: {
      posts,
    },
  }
}

export default Blog
</code></pre>
<p>While Next.js is a great option, it’s not for everyone.</p>
<h3 id="heading-the-cons-to-using-nextjs-for-static-site-generation">The cons to using Next.js for static site generation</h3>
<ul>
<li>The freedom it allows can be daunting for new developers who don’t have much experience writing web applications </li>
<li>Unlike Gatsby, there aren’t any out-of-the-box plugins, so there is boilerplate code you may need to write if the data you are fetching is from a popular vendor like Shopify or WordPress</li>
<li>The deployment and infrastructure can be cumbersome if you mix and match features and don’t use the deployment solution from Vercel, the company behind Next.js</li>
</ul>
<p>However, if you want the freedom Next.js provides and don’t mind using Vercel’s deployment platform, Next.js is a great choice for web applications that need SSG.</p>
<h2 id="heading-using-remixjs-with-modern-react-tools">Using Remix.js with modern React tools</h2>
<p>Remix.js is a React-based framework for building primarily dynamically rendered web applications. It’s a very new framework to many, as it was closed-source until recently.</p>
<p>Remix is interesting because it does not provide an option to pre-compile pages into a static asset bundle as Gatsby or Next.js do. There is no concept of SSG or “builds”: data fetching is on-demand by default, and HTML is compiled on the server per request (SSR). </p>
<p><a target="_blank" href="https://remix.run/docs/en/v1/guides/performance">Instead Remix depends on distributed computing to perform the same features one would get with SSG.</a> The Remix server is compatible with edge computing vendors, such as Cloudflare Workers and <a target="_blank" href="http://Fly.io">Fly.io</a>. </p>
<p>The server that fetches the data and compiles the HTML is deployed to these edge servers close to your users, resulting in much faster response times than a traditional web application setup. Our data, by default, is still fetched per request, which may still be too slow for your needs. The Remix team recommends writing custom caching code for this purpose. </p>
<p>There are edge server compatible databases that are usually in-memory, like SQLite, Redis, and LRU caches. When the data changes, you can evict data inside these caches manually or automatically, just like you would trigger a new build in a traditional SSG setup. </p>
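<p>The LRU idea mentioned above can be sketched in a few lines of TypeScript. This is a generic illustration, not Remix-specific code; it relies on the fact that a <code>Map</code> preserves insertion order, so the first key is always the least recently used.</p>

```typescript
// A minimal LRU (least-recently-used) cache sketch of the in-memory
// caching idea described above. Illustrative only, not a Remix API.
class LRUCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value === undefined) return undefined;
    // Re-insert to mark this key as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (first insertion-order key).
      this.map.delete(this.map.keys().next().value as K);
    }
    this.map.set(key, value);
  }
}
```

<p>Evicting a key by hand here is the in-memory equivalent of triggering a rebuild in a traditional SSG setup.</p>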
<p>It’s also recommended to use <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching">HTTP caching</a> and serve static content on a CDN to serve web pages where the data doesn’t change often. This means you’ll get similar performance to a framework that uses SSG without it being coupled to a specific framework or having to worry about builds.</p>
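<p>For the HTTP-caching side, a sketch of a handler that serves HTML with a <code>Cache-Control</code> header, assuming the web Fetch API <code>Response</code> (available in Node 18+, Deno, and edge runtimes); <code>renderPage</code> is a hypothetical function that produces the page HTML:</p>

```typescript
// A sketch of serving a page with HTTP caching headers so CDNs and
// shared caches can reuse the response. `renderPage` is hypothetical.
function renderPage(): string {
  return "<h1>Hello</h1>";
}

export function pageResponse(): Response {
  return new Response(renderPage(), {
    headers: {
      "Content-Type": "text/html; charset=utf-8",
      // Browsers may cache for 5 minutes; shared caches (CDNs) for 1 hour.
      "Cache-Control": "public, max-age=300, s-maxage=3600",
    },
  });
}
```

<p>Tuning <code>max-age</code> and <code>s-maxage</code> per route lets rarely changing pages behave much like statically generated ones.</p>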
<p><img src="https://paper-attachments.dropbox.com/s_C0546FE5A8F64775D8E87F12FEE36C2DA7164D789CC627F62DE935B22BA16C10_1648729207919_remix-deployment.drawio.png" alt="Example Remix.js architecture" /></p>
<p>Like Next.js, Remix gives the developer more freedom than prescribed frameworks like Gatsby when it comes to providing a fast web application experience. One of the great aspects of Remix.js is that it relies heavily on platforms like edge computing and browsers to accomplish most of its features. </p>
<p>This means when you learn Remix, you learn more about the web, not how to develop in a specific framework. There is a surprisingly large amount of deployment options for Remix despite being new, so you can shop around for the best one.</p>
<p>Here is an example of a React component that fetches data and compiles the HTML on an edge server and displays it on the page:</p>
<pre><code class="lang-tsx">export const loader: LoaderFunction = async ({ request }) =&gt; {
  const userId = await requireUserId(request);
  const noteListItems = await getNoteListItems({ userId });
  return json({ noteListItems });
};

export default function NotesPage() {
  const data = useLoaderData() as LoaderData;

  return data.noteListItems.map((note) =&gt; (
    &lt;li&gt;
      📝 {note.title}
    &lt;/li&gt;
  ));
}
</code></pre>
<h3 id="heading-the-cons-of-using-remix">The cons of using Remix</h3>
<ul>
<li>Like Next.js, the freedom Remix provides can be daunting for developers just starting, but the documentation is great so far, so you can learn what it has to offer</li>
<li>Creating custom caching mechanisms for your web content could result in boilerplate code, which isn’t great</li>
<li>The framework is still new and just now providing examples of how to perform data-caching on the server</li>
</ul>
<p>In general, Remix is a great option if you aren’t afraid to take on a relatively new option in the space.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>There is plenty to consider when choosing between Gatsby, Next.js, and Remix.js to build your application. Which one you should choose depends on your ideal setup, your experience level, the vendor you want to host on, and how much code you want to write. No matter what you choose, all three options provide ways to serve web pages fast to your users.</p>
]]></content:encoded></item><item><title><![CDATA[Develop, test, and deploy Cloudflare Workers with Denoflare]]></title><description><![CDATA[After spending most of my career working with Node.js, I was interested to hear about its counterpart, Deno. Deno is a different take on server-side JavaScript. Deno was designed to correct the wrongs of Node.js and be a runtime that is safe and secu...]]></description><link>https://blog.alec.coffee/develop-test-and-deploy-cloudflare-workers-with-denoflare</link><guid isPermaLink="true">https://blog.alec.coffee/develop-test-and-deploy-cloudflare-workers-with-denoflare</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[cloudflare]]></category><category><![CDATA[Service Workers]]></category><category><![CDATA[TypeScript]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Tue, 25 Jan 2022 12:36:14 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1646829548183/yV5BxKdtc.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After spending most of my career working with Node.js, I was interested to hear about its counterpart, Deno. <a target="_blank" href="https://deno.land/">Deno</a> is a different take on server-side JavaScript. Deno was designed <a target="_blank" href="https://www.youtube.com/watch?v=M3BM9TB-8yA">to correct the wrongs of Node.js</a> and be a runtime that is safe and secure by default. </p>
<h2 id="heading-what-is-deno">What is Deno?</h2>
<p>Deno incorporates a few other aspects, such as a built-in TypeScript compiler, linter, formatter, and package manager. It also supports <a target="_blank" href="https://blog.logrocket.com/how-to-use-ecmascript-modules-with-node-js/">ESM modules</a>, uses the web platform as a base for its standard library, and has a secure-by-default design around IO. </p>
<p>This results in a drastically simplified development experience when compared to Node.js. I started using Deno for a CLI tool I needed to write (using <a target="_blank" href="https://github.com/c4spar/deno-cliffy">deno-cliffy</a>) and was pleased with the experience. When I first heard about <a target="_blank" href="https://denoflare.dev/">Denoflare</a>, I thought it would be the ideal opportunity to experiment with <a target="_blank" href="https://workers.cloudflare.com/">Cloudflare Workers</a> while gaining more experience with Deno.</p>
<h2 id="heading-using-deno-with-cloudfare-workers">Using Deno with Cloudfare Workers</h2>
<p>Cloudflare Workers is an alternative to serverless infrastructure when compared to services like AWS Lambda. At its core, it acts like a serverless function, taking the request information, applying logic, and sending a response. </p>
<p>The expected use-case is different in that it’s meant to act as middleware, intercepting requests sent to an origin server and applying logic. Cloudflare Workers behave similarly to <a target="_blank" href="https://developers.google.com/web/fundamentals/primers/service-workers/">browser service workers</a>.</p>
<p><img src="https://paper-attachments.dropbox.com/s_BE214C9BDAEB2DFD8E7D323B5801786E774F2116262B2AC48FF5E2854FEFE528_1640978662490_Untitled+Diagram.drawio.png" alt /></p>
<p>Cloudflare Workers does not spin up isolated containers per request, providing the benefit of zero cold starts. Other nice features of Cloudflare Workers include global CDN deployments and paying on a per-request basis.</p>
<h2 id="heading-what-is-denoflare">What is Denoflare?</h2>
<p>Denoflare is a small framework that lets you publish Cloudflare Workers written in Deno. It’s a natural fit, as both Deno and Cloudflare Workers follow standardized web platform runtime APIs.</p>
<p>Denoflare lets you serve your worker locally to test your changes in an isolated environment that is similar to how Cloudflare would run them. It supports a great developer experience with hot-reloading, being able to publish the worker right to the Cloudflare platform, tailing logs in production, and more. </p>
<p>The experience is similar to using another worker framework like <a target="_blank" href="https://github.com/cloudflare/miniflare">Miniflare</a>, except it's much simpler because Deno does most of the work. For example, instead of depending on Jest as Miniflare does, you can write tests to run with the native Deno test runner.</p>
<h2 id="heading-working-with-cloudflare-and-denoflare">Working with Cloudflare and Denoflare</h2>
<p>The best way to experience Cloudflare and Denoflare is by going through a real-world example use case. Using minimal code, we will set up <a target="_blank" href="https://en.wikipedia.org/wiki/A/B_testing">A/B testing</a> for a blog. Half the time, the user will be put into the test group and will see a new header. The others will be put into the control group and will not see the new header. </p>
<p>Using Cloudflare Workers, this is as simple as intercepting requests to the blog origin server, placing the user into a group based on our split, and setting the response header <code>Set-Cookie</code> with the group name. After this is done, our blog can read the cookie to decide which header to show.</p>
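<p>The group-assignment step can be sketched as a small pure function. <code>assignGroup</code> and the <code>ab-group</code> cookie name are illustrative assumptions; the worker itself would forward the request to the origin and append the <code>Set-Cookie</code> header to the response.</p>

```typescript
// A sketch of the A/B-test cookie logic described above. `assignGroup`
// is a hypothetical helper; cookie name and format are assumptions.
export function assignGroup(cookieHeader: string | null): "test" | "control" {
  // Returning visitors keep the group they were already assigned.
  if (cookieHeader?.includes("ab-group=test")) return "test";
  if (cookieHeader?.includes("ab-group=control")) return "control";
  // New visitors: 50/50 split.
  return Math.random() < 0.5 ? "test" : "control";
}

// Inside the worker's fetch handler, roughly (sketch):
//   const group = assignGroup(request.headers.get("Cookie"));
//   const origin = await fetch(request);
//   const response = new Response(origin.body, origin);
//   response.headers.append("Set-Cookie", `ab-group=${group}; Path=/`);
//   return response;
```

<p>Keeping the assignment in a pure function also makes it trivial to unit test with the Deno test runner.</p>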
<blockquote>
<p>We are omitting the code needed to change the header in the blog as there are several different ways you can do so.</p>
</blockquote>
<h2 id="heading-setting-up-your-cloudflare-workers-account">Setting up your Cloudflare Workers account</h2>
<p>The first thing we must do is <a target="_blank" href="https://dash.cloudflare.com/sign-up/workers">sign up for a Cloudflare account with Cloudflare Workers set up</a>. Once you are done, create a free .dev subdomain for testing our worker on. We will choose the free plan.</p>
<p><img src="https://paper-attachments.dropbox.com/s_BE214C9BDAEB2DFD8E7D323B5801786E774F2116262B2AC48FF5E2854FEFE528_1640978721614_Xnip2021-12-31_09-33-30.jpg" alt="Creating a free .dev subdomain for testing." /></p>
<p>Creating a free .dev subdomain for testing.</p>
<blockquote>
<p>Note: You can change this later to be the domain you would use in production.</p>
</blockquote>
<p>Denoflare requires an API token to allow us to push our compiled worker to Cloudflare. Go to <a target="_blank" href="https://dash.cloudflare.com/profile/api-tokens">the tokens page</a> in the Cloudflare dashboard and select <strong>Create Token</strong>. Once there, you should choose <strong>Edit Cloudflare Workers</strong> as the template. Note this token for later.</p>
<p>From the overview screen, copy the account ID for use later.</p>
<h2 id="heading-developing-with-denoflare">Developing with Denoflare</h2>
<p>Deno provides a great development experience, which we will keep intact when using Denoflare.</p>
<blockquote>
<p>Note: You can find all the code in <a target="_blank" href="https://github.com/aleccool213/denoflare-blog-post">this GitHub repo</a>.</p>
</blockquote>
<p>First, set the Deno version and install it on your local machine using <a target="_blank" href="https://asdf-vm.com/">asdf</a>:</p>
<pre><code>echo "deno 1.16.0" &gt; .tool-versions &amp;&amp; brew install asdf &amp;&amp; asdf plugin add deno &amp;&amp; asdf install
</code></pre><p>Let’s configure our IDE. If you are using Visual Studio Code, you will have a great experience with Deno. Install the Deno extension, <a target="_blank" href="https://marketplace.visualstudio.com/items?itemName=denoland.vscode-deno">which can be found here</a>. It enables type-checking, linting, and more.</p>
<p>To enable it, you must turn it on in the workspace settings file at <code>.vscode/settings.json</code>:</p>
<pre><code>{
  <span class="hljs-attr">"deno.enable"</span>: <span class="hljs-literal">true</span>
}
</code></pre><p>Next, we will install <a target="_blank" href="https://github.com/jurassiscripts/velociraptor">Velociraptor</a>, a script manager for Deno. Velociraptor makes it easy for all developers to use the same scripts when performing common tasks (think npm scripts).</p>
<p>Run this command in your console:</p>
<pre><code><span class="hljs-attribute">deno</span> install -qAn vr https://deno.land/x/velociraptor@<span class="hljs-number">1</span>.<span class="hljs-number">3</span>.<span class="hljs-number">0</span>/cli.ts
</code></pre><p>We will define a Velociraptor script to invoke the Denoflare CLI, allowing us to run Denoflare commands such as <code>serve</code> and <code>push</code>.</p>
<pre><code>{
  <span class="hljs-attr">"scripts"</span>: {
    <span class="hljs-attr">"denoflare"</span>: <span class="hljs-string">"deno run --unstable --allow-read --allow-net --allow-env https://raw.githubusercontent.com/skymethod/denoflare/v0.3.3/cli/cli.ts"</span>
  }
}
</code></pre><p>Now that we have our IDE and runtime environment set up, we can move on to the code. With Deno, it’s common to declare all dependencies in a single file named <code>deps.ts</code>. This is because modules are referenced by URLs, which can become difficult to manage if they are scattered throughout a project.</p>
<p>We need one type from the Cloudflare Workers type definitions, which Denoflare provides:</p>
<pre><code><span class="hljs-keyword">export</span> <span class="hljs-keyword">type</span> { IncomingRequestCf } <span class="hljs-keyword">from</span> <span class="hljs-string">"https://raw.githubusercontent.com/skymethod/denoflare/v0.3.0/common/cloudflare_workers_types.d.ts"</span>;
</code></pre><p>Once we have this type, we can write our A/B tester logic in <code>index.ts</code>:</p>
<pre><code><span class="hljs-keyword">import</span> { <span class="hljs-title">IncomingRequestCf</span> } <span class="hljs-title"><span class="hljs-keyword">from</span></span> <span class="hljs-string">"./deps.ts"</span>;

<span class="hljs-comment">/**
 * Based on the A/B Testing Cloudflare Worker example.
 * Ref: https://developers.cloudflare.com/workers/examples/ab-testing
 */</span>
<span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">fetch</span>(<span class="hljs-params">request: IncomingRequestCf</span>): <span class="hljs-title">Response</span> </span>{
  const NAME <span class="hljs-operator">=</span> <span class="hljs-string">"experiment-0"</span>;

  const TEST_RESPONSE <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">"Test group"</span>);
  const CONTROL_RESPONSE <span class="hljs-operator">=</span> <span class="hljs-keyword">new</span> Response(<span class="hljs-string">"Control group"</span>);

  <span class="hljs-comment">// Determine which group this requester is in.</span>
  const cookie <span class="hljs-operator">=</span> request.headers.get(<span class="hljs-string">"cookie"</span>);
  <span class="hljs-keyword">if</span> (cookie <span class="hljs-operator">&amp;</span><span class="hljs-operator">&amp;</span> cookie.includes(`${NAME}<span class="hljs-operator">=</span>control`)) {
    <span class="hljs-keyword">return</span> CONTROL_RESPONSE;
  } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (cookie <span class="hljs-operator">&amp;</span><span class="hljs-operator">&amp;</span> cookie.includes(`${NAME}<span class="hljs-operator">=</span>test`)) {
    <span class="hljs-keyword">return</span> TEST_RESPONSE;
  } <span class="hljs-keyword">else</span> {
    <span class="hljs-comment">// If there is no cookie, this is a new client. Choose a group and set the cookie.</span>
    const group <span class="hljs-operator">=</span> Math.random() <span class="hljs-operator">&lt;</span> <span class="hljs-number">0</span><span class="hljs-number">.5</span> ? <span class="hljs-string">"test"</span> : <span class="hljs-string">"control"</span>; <span class="hljs-comment">// 50/50 split</span>
    const response <span class="hljs-operator">=</span> group <span class="hljs-operator">=</span><span class="hljs-operator">=</span><span class="hljs-operator">=</span> <span class="hljs-string">"control"</span> ? CONTROL_RESPONSE : TEST_RESPONSE;
    response.headers.append(<span class="hljs-string">"Set-Cookie"</span>, `${NAME}<span class="hljs-operator">=</span>${group}; path<span class="hljs-operator">=</span><span class="hljs-operator">/</span>`);

    <span class="hljs-keyword">return</span> response;
  }
}

export default {
  fetch,
};
</code></pre><p>The last step is to declare a <code>.denoflare</code> file, which Denoflare uses to run and publish your Cloudflare Worker:</p>
<pre><code>{
  <span class="hljs-attr">"$schema"</span>: <span class="hljs-string">"https://raw.githubusercontent.com/skymethod/denoflare/v0.3.3/common/config.schema.json"</span>,
  <span class="hljs-attr">"scripts"</span>: {
    <span class="hljs-attr">"a-b-test-local"</span>: {
      <span class="hljs-attr">"path"</span>: <span class="hljs-string">"index.ts"</span>,
      <span class="hljs-attr">"localPort"</span>: <span class="hljs-number">3030</span>
    }
  },
  <span class="hljs-attr">"profiles"</span>: {
    <span class="hljs-attr">"account1"</span>: {
      <span class="hljs-attr">"accountId"</span>: <span class="hljs-string">"INSERT_ACCOUNT_ID_FROM_PREVIOUS_STEP"</span>,
      <span class="hljs-attr">"apiToken"</span>: <span class="hljs-string">"INSERT_API_TOKEN_FROM_PREVIOUS_STEP"</span>
    }
  }
}
</code></pre><p>That’s it! Let’s serve the Cloudflare Workers locally to make sure everything is working properly:</p>
<pre><code>vr denoflare serve a<span class="hljs-operator">-</span>b<span class="hljs-operator">-</span>test<span class="hljs-operator">-</span>local
</code></pre><p>This should be the output:</p>
<pre><code><span class="hljs-attribute">Compiling</span> https://raw.githubusercontent.com/skymethod/denoflare/v<span class="hljs-number">0</span>.<span class="hljs-number">3</span>.<span class="hljs-number">3</span>/cli-webworker/worker.ts into worker contents...
<span class="hljs-attribute">Compiled</span> https://raw.githubusercontent.com/skymethod/denoflare/v<span class="hljs-number">0</span>.<span class="hljs-number">3</span>.<span class="hljs-number">3</span>/cli-webworker/worker.ts into worker contents in <span class="hljs-number">277</span>ms
<span class="hljs-attribute">runScript</span>: index.ts
<span class="hljs-attribute">Compiled</span> index.ts into module contents in <span class="hljs-number">142</span>ms
<span class="hljs-attribute">worker</span>: start
<span class="hljs-attribute">Started</span> in <span class="hljs-number">456</span>ms (isolation=isolate)
<span class="hljs-attribute">Local</span> server running <span class="hljs-literal">on</span> http://localhost:<span class="hljs-number">3030</span>
</code></pre><p>Now, open a browser and go to <code>localhost:3030</code>. You will see that the response is the group the user is put into:</p>
<p><img src="https://paper-attachments.dropbox.com/s_BE214C9BDAEB2DFD8E7D323B5801786E774F2116262B2AC48FF5E2854FEFE528_1640978898683_Xnip2021-12-30_14-52-38.jpg" alt="We were put in the test group!" /></p>
<h2 id="heading-testing-the-application">Testing the application</h2>
<p>Because our Cloudflare Worker is a Deno application, we can use Deno’s tools to validate and test our code. All of these run automatically as you are developing in Visual Studio Code, thanks to the extension you installed.</p>
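<p>For instance, the worker’s cookie-branching logic is easy to unit test if you pull it into a small helper. The following is a sketch; <code>pickGroup</code> is a hypothetical extraction, not part of the worker code above:</p>

```typescript
// Hypothetical helper extracted from the fetch handler so the cookie
// branching can be unit-tested without constructing Request objects.
export function pickGroup(
  cookie: string | null,
  name = "experiment-0"
): "control" | "test" | null {
  if (cookie && cookie.includes(`${name}=control`)) return "control";
  if (cookie && cookie.includes(`${name}=test`)) return "test";
  return null; // new client: the caller assigns a group at random
}
```

<p>With Deno, assertions against a helper like this would live in <code>Deno.test</code> blocks inside <code>index.test.ts</code>.</p>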
<p>Run the Deno linter with:</p>
<pre><code><span class="hljs-attribute">deno</span> lint
</code></pre><p>Make sure the Deno app compiles with:</p>
<pre><code>deno compile <span class="hljs-keyword">index</span>.ts
</code></pre><p>Write a few tests and run them with:</p>
<pre><code><span class="hljs-selector-tag">deno</span> <span class="hljs-selector-tag">test</span> <span class="hljs-selector-tag">index</span><span class="hljs-selector-class">.test</span><span class="hljs-selector-class">.ts</span>
</code></pre><h2 id="heading-deploying-cloudflare-workers">Deploying Cloudflare Workers</h2>
<p>You should have no trouble deploying Cloudflare Workers once you have the correct account ID and token in the Denoflare configuration (<code>.denoflare</code>). </p>
<p>By default, Denoflare pushes your worker to the .dev subdomain you have set up for your account.
Deploy it to Cloudflare by running:</p>
<pre><code>vr denoflare push a<span class="hljs-operator">-</span>b<span class="hljs-operator">-</span>test<span class="hljs-operator">-</span>local
</code></pre><p>You should see the worker appear instantly on your worker dashboard.</p>
<p><img src="https://paper-attachments.dropbox.com/s_BE214C9BDAEB2DFD8E7D323B5801786E774F2116262B2AC48FF5E2854FEFE528_1640978938130_Xnip2021-12-30_08-32-39.jpg" alt="The Cloudflare Workers dashboard." /></p>
<p>By default, its public access is disabled. Go to the <strong>service details</strong> page and enable the route.</p>
<p><img src="https://paper-attachments.dropbox.com/s_BE214C9BDAEB2DFD8E7D323B5801786E774F2116262B2AC48FF5E2854FEFE528_1640978952334_Xnip2021-12-30_08-34-27.jpg" alt="Enabling the route." /></p>
<p>When you visit this route, you should see a response indicating which group the request was put into.</p>
<p><img src="https://paper-attachments.dropbox.com/s_BE214C9BDAEB2DFD8E7D323B5801786E774F2116262B2AC48FF5E2854FEFE528_1640979005379_Untitled.png" alt="It works!" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Denoflare is a simple mini-framework built around Deno that allows us to easily publish Cloudflare Workers. </p>
<p>Because Deno implements web-standard APIs and shares a similar security model with Cloudflare Workers, it is a natural fit. Cloudflare Workers is a powerful way to deploy middleware logic close to the users of your applications, thanks to Cloudflare’s edge strategy of deploying workers around the globe.</p>
<p><a target="_blank" href="https://developers.cloudflare.com/workers/examples">Have a look at other examples the Cloudflare team put together</a> so you can get an even better idea of what else it can be used for.</p>
]]></content:encoded></item><item><title><![CDATA[Why I (finally) switched to urql from Apollo Client]]></title><description><![CDATA[Using GraphQL in your frontend application is like playing a different ball game than when using REST. Client libraries such as urql, Apollo Client, and Relay are able to offer different capabilities than REST libraries such as Axios or fetch.
How ...]]></description><link>https://blog.alec.coffee/why-switched-apollo-client-urql</link><guid isPermaLink="true">https://blog.alec.coffee/why-switched-apollo-client-urql</guid><category><![CDATA[React]]></category><category><![CDATA[GraphQL]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Wed, 09 Jun 2021 00:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1645305004588/ZkdGsR8WG.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Using GraphQL in your frontend application is like playing a different ball game than when using REST. Client libraries such as <a target="_blank" href="https://formidable.com/open-source/urql/">urql</a>, <a target="_blank" href="https://www.apollographql.com/">Apollo Client</a>, and <a target="_blank" href="https://relay.dev/">Relay</a> are able to offer different capabilities than REST libraries such as <a target="_blank" href="https://github.com/axios/axios">Axios</a> or <a target="_blank" href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch">fetch</a>.</p>
<p>How come? Because GraphQL is an opinionated API spec where both the server and client buy into <a target="_blank" href="https://graphql.org/learn/schema/">a schema format</a> and <a target="_blank" href="https://graphql.org/learn/queries/">querying format</a>. Based on this, they can provide multiple advanced features, such as utilities for caching data, auto-generation of React Hooks based on operations, and optimistic mutations.</p>
<p>Sometimes libraries can be too opinionated and offer too much "magic". I’ve been using Apollo Client for quite some time and have become frustrated with its caching and local state mechanisms.</p>
<p>This “bloat,” along with recently seeing how poorly its open-source community is managed, was the final straw for me. I realized that I needed to look elsewhere for a GraphQL client library.</p>
<h2 id="heading-what-is-urql">What is urql?</h2>
<p>Enter urql, which is a great alternative. It isn’t the new kid on the block — it’s been around since 2019 — but I’ve just made the switch and stand by my decision.</p>
<p><a target="_blank" href="https://blog.logrocket.com/exploring-urql-from-an-apollo-perspective/">Most of the lingo is the same as Apollo Client</a>, which made switching from Apollo to urql fairly straightforward. urql has most of the same features but also offers improvements, including better documentation, better configuration defaults, and first-party support for things like offline mode, file uploads, authentication flows, and a first-party Next.js plugin.</p>
<p>When you stack Apollo Client and urql against each other, you’ll start wondering why Apollo Client has been so popular in the first place.</p>
<h2 id="heading-bye-apollo-client-hello-urql">Bye Apollo Client 👋, hello urql</h2>
<p>As I'm writing this, the <a target="_blank" href="https://github.com/apollographql/apollo-client">Apollo Client GitHub repository</a> issue count stands at 795. In comparison, <a target="_blank" href="https://github.com/FormidableLabs/urql">urql has 16</a>. You may say, “But issue count doesn't correlate to code quality!” That’s true, but it gives you the same feeling as a code smell — you know something isn't right.</p>
<p>Looking deeper, you can see a large number of open issues, bugs that take months to fix, and pull requests from outside contributors that never seem to get merged. Apollo seems unfocused on building the great client package the community wants.</p>
<p>This sort of behaviour indicates to me that Apollo is using open source merely for marketing and not to make their product better. The company wants you to get familiar with Apollo Client and then buy into their products; in my opinion, that is not truly open-source software. This is one of the negatives of <a target="_blank" href="https://linuxinsider.com/story/open-core-debate-the-battle-for-a-business-model-66807.html">the open-core business model</a>.</p>
<p>I started to look elsewhere for a GraphQL client that had a happier, more cohesive community. When a tool is designed well and with features the community wants, fewer issues are created and there is less need for pull requests. <a target="_blank" href="https://formidable.com/">Formidable</a> is the agency behind urql, and they care about helping teams build applications in fast and maintainable ways rather than trying to funnel users into their products.</p>
<h2 id="heading-why-use-urql">Why use urql?</h2>
<p>For me, urql is a breath of fresh air after working with Apollo Client for so long. There are a lot of little things that add up to a much better developer experience, especially for newcomers. Here are just a few.</p>
<p><strong>Documentation in urql is thorough</strong>
Having great documentation is a key feature for any open-source library. Without great docs, there will be more confusion among the community over how to use it and how it works internally. I attribute <a target="_blank" href="https://formidable.com/open-source/urql/docs/">urql’s thorough docs</a> to why it has such a low issue count. It only took me a few hours to read the <em>entire</em> documentation.</p>
<p>This is impressive because it shows how focused the library is and how thought-out the structure is. Some of the highlights include <a target="_blank" href="https://formidable.com/open-source/urql/docs/architecture/">this one-pager on the architecture of how urql works</a> and <a target="_blank" href="https://formidable.com/open-source/urql/docs/comparison/">this table comparing itself to other GraphQL clients (like Apollo).</a></p>
<p><strong>Plugins and packages have first-party support in urql</strong>
urql really caught my attention when I heard it had first-class support for additional functionality such as <a target="_blank" href="https://formidable.com/open-source/urql/docs/graphcache/offline/">offline mode</a>, <a target="_blank" href="https://formidable.com/open-source/urql/docs/advanced/persistence-and-uploads/#file-uploads">file uploads</a>, <a target="_blank" href="https://formidable.com/open-source/urql/docs/api/auth-exchange/">authentication</a>, and <a target="_blank" href="https://formidable.com/open-source/urql/docs/advanced/server-side-rendering/#nextjs">Next.js</a>. These are all features that I've always thought of as basic for a GraphQL client, and it's great to see urql have first-party support for them.</p>
<p>For instance, <a target="_blank" href="https://formidable.com/open-source/urql/docs/api/auth-exchange/">the urql authentication exchange package</a> has you implementing only a few methods to have an entire authentication flow within your client, including token refresh logic. You can achieve all of these things in Apollo Client, but there are no official docs or packages. This means you spend more time to research community solutions, hacks, and code.</p>
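<p>To give a sense of the shape, the handlers you implement look roughly like the following. This is a paraphrased sketch of the exchange’s configuration, not a verbatim API reference; check the urql docs for the exact types:</p>

```typescript
// Paraphrased sketch of the handlers @urql/exchange-auth asks for (not the
// library's exact types). AuthState is whatever you store, e.g. tokens.
interface AuthConfigSketch<AuthState> {
  // Load tokens from storage on startup, or refresh them once they go stale.
  getAuth(params: { authState: AuthState | null }): Promise<AuthState | null>;
  // Attach the token to an outgoing operation, e.g. as an Authorization header.
  addAuthToOperation(params: {
    authState: AuthState | null;
    operation: unknown;
  }): unknown;
  // Decide whether a result was an auth error, which re-triggers getAuth.
  didAuthError?(params: { error: { message: string } }): boolean;
}

// A minimal instance that reads a token from an in-memory store.
const tokenStore = { token: "abc123" };
const authConfig: AuthConfigSketch<{ token: string }> = {
  getAuth: async ({ authState }) => (authState ? null : { ...tokenStore }),
  addAuthToOperation: ({ operation }) => operation,
};
```

<p>The exchange then orchestrates when each handler runs, including the token refresh loop.</p>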
<pre><code class="lang-javascript"><span class="hljs-comment">// All the code needed to support offline mode in urql</span>
<span class="hljs-keyword">import</span> { createClient } <span class="hljs-keyword">from</span> <span class="hljs-string">"urql"</span>;
<span class="hljs-keyword">import</span> { offlineExchange } <span class="hljs-keyword">from</span> <span class="hljs-string">"@urql/exchange-graphcache"</span>;
<span class="hljs-keyword">import</span> { makeDefaultStorage } <span class="hljs-keyword">from</span> <span class="hljs-string">"@urql/exchange-graphcache/default-storage"</span>;

<span class="hljs-keyword">const</span> storage = makeDefaultStorage({
  <span class="hljs-attr">idbName</span>: <span class="hljs-string">"apiCache"</span>,
  <span class="hljs-attr">maxAge</span>: <span class="hljs-number">7</span>, <span class="hljs-comment">// The maximum age of the persisted data in days</span>
});

<span class="hljs-keyword">const</span> cache = offlineExchange({
  schema,
  storage,
  <span class="hljs-attr">updates</span>: {
    <span class="hljs-comment">/* ... */</span>
  },
  <span class="hljs-attr">optimistic</span>: {
    <span class="hljs-comment">/* ... */</span>
  },
});

<span class="hljs-keyword">const</span> client = createClient({
  <span class="hljs-attr">url</span>: <span class="hljs-string">"http://localhost:3000/graphql"</span>,
  <span class="hljs-attr">exchanges</span>: [cache],
});
</code></pre>
<p>It's also great that I haven't had to give up things I loved when working with Apollo Client, such as the dev tools and React hooks generation because urql has a <a target="_blank" href="https://formidable.com/open-source/urql/docs/advanced/debugging/#devtools">dev tools browser extension</a> and <a target="_blank" href="https://www.graphql-code-generator.com/docs/plugins/typescript-urql">a plugin for graphql-code-generator</a>.</p>
<p><strong>Caching in urql is easy and effective</strong>
There is a common saying among developers that cache invalidation is one of the hardest problems in programming. After many hours spent debugging Apollo Client’s normalized cache, I believe it. urql's caching defaults are sensible for newcomers and can be extended to become more advanced.</p>
<p>I appreciate that it doesn't force you to use a normalized cache by default, but <a target="_blank" href="https://formidable.com/open-source/urql/docs/basics/document-caching/">comes with a document cache</a> instead. This works by just hashing the query and its variables — it’s simple and effective!</p>
<p>Learning how a complex, fully normalized caching store works just to get started with a client library seems heavy-handed. Only offering normalized caching is something I felt Apollo Client got wrong.</p>
<p>There is a steep learning curve to managing a normalized cache, and it's unnecessary for many applications. It's fantastic that urql offers this as <a target="_blank" href="https://formidable.com/open-source/urql/docs/graphcache/normalized-caching/">a separate package</a> that you can opt into at a later time. I’ve seen this trend in other packages as well, such as <a target="_blank" href="https://react-query.tanstack.com/">React Query.</a></p>
<blockquote>
<p>While a vast majority of users do not actually need a normalized cache or even benefit from it as much as they believe they do. <a target="_blank" href="https://react-query.tanstack.com/graphql#_top">- React Query Docs</a></p>
</blockquote>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { ApolloClient, InMemoryCache } <span class="hljs-keyword">from</span> <span class="hljs-string">"@apollo/client"</span>;

<span class="hljs-keyword">const</span> client = <span class="hljs-keyword">new</span> ApolloClient({
  <span class="hljs-attr">uri</span>: <span class="hljs-string">"http://localhost:4000/graphql"</span>,
  <span class="hljs-comment">// Normalized cache is required</span>
  <span class="hljs-attr">cache</span>: <span class="hljs-keyword">new</span> InMemoryCache(),
});

<span class="hljs-keyword">import</span> { createClient } <span class="hljs-keyword">from</span> <span class="hljs-string">"urql"</span>;

<span class="hljs-comment">// Document cache enabled by default</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> client = createClient({
  <span class="hljs-attr">url</span>: <span class="hljs-string">"http://localhost:4000/graphql"</span>,
});
</code></pre>
<p><strong>Local state is simplified in urql</strong>
urql stays true to server data and doesn't provide functions to manage local state like Apollo Client does. In my opinion, this is perfectly fine, as dedicated local state management libraries in React are becoming less necessary. Mixing server-side state and local state seems ideal at first (one place for all state) but can lead to problems when you need to figure out which data is fresh, which is stale, and when to update each.</p>
<p><a target="_blank" href="https://reactjs.org/docs/context.html">React Context</a> is a great solution for situations where you have lots of prop drilling going on, which is sometimes the main reason people reach for a local state management library. I would also recommend <a target="_blank" href="https://github.com/davidkpiano/xstate">XState</a> if you are looking for a way to manage stateful workflows, which sometimes people use <a target="_blank" href="https://redux.js.org/recipes/reducing-boilerplate#reducers">Redux reducers</a> for.</p>
<p><strong>Understandable default behavior with Exchanges</strong>
<a target="_blank" href="https://formidable.com/open-source/urql/docs/architecture/#the-exchanges">Exchanges</a> are similar to links in Apollo Client and offer ways to extend the functionality of the client by intercepting requests. The difference with urql is that you can opt into even the basic ones, allowing you more control and understanding over the behaviour of the client.</p>
<p><a target="_blank" href="https://formidable.com/open-source/urql/docs/basics/react-preact/#setting-up-the-client">When getting started</a>, the client has no required exchanges and uses a default list. In my experience, starting off with just a few exchanges and adding more as time went on or when I needed them made debugging easier. urql shows that it takes extensibility seriously in supporting many different use-cases.</p>
<p>Here is an example of the exchanges you might use after you get used to urql:</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> {
  createClient,
  dedupExchange,
  cacheExchange,
  fetchExchange,
} <span class="hljs-keyword">from</span> <span class="hljs-string">"urql"</span>;

<span class="hljs-keyword">const</span> client = createClient({
  <span class="hljs-attr">url</span>: <span class="hljs-string">"http://localhost:4000/graphql"</span>,
  <span class="hljs-attr">exchanges</span>: [
    <span class="hljs-comment">// deduplicates requests if we send the same queries twice</span>
    dedupExchange,
    <span class="hljs-comment">// from prior example</span>
    cacheExchange,
    <span class="hljs-comment">// responsible for sending our requests to our GraphQL API</span>
    fetchExchange,
  ],
});
</code></pre>
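<p>Conceptually, each exchange wraps the one after it, like middleware. Here is a simplified mental model using plain functions; real urql exchanges operate on wonka streams of operations rather than single values:</p>

```typescript
// Simplified mental model of exchanges: each receives `forward` (the rest
// of the chain) and returns a handler. Real exchanges work on streams.
type Operation = { key: string };
type Result = { key: string; data?: unknown };
type Handler = (op: Operation) => Result;
type Exchange = (forward: Handler) => Handler;

// Stand-in for fetchExchange: "sends" the operation to the API.
const fetchLike: Handler = (op) => ({ key: op.key, data: `fetched:${op.key}` });

// Stand-in for cacheExchange: only forwards on a cache miss.
const cacheLike: Exchange = (forward) => {
  const cache = new Map<string, Result>();
  return (op) => {
    if (!cache.has(op.key)) cache.set(op.key, forward(op));
    return cache.get(op.key)!;
  };
};

// Compose so the cache sits in front of the fetch.
const handle = cacheLike(fetchLike);
```

<p>This ordering is why the exchange list matters: an operation flows through the list until some exchange produces a result.</p>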
<p><strong>urql offers a Next.js plugin</strong>
Next.js is one of the most popular ways to use React these days. Integrating Apollo Client with Next.js SSR has always been a huge pain. With every upgrade, <a target="_blank" href="https://github.com/vercel/next.js/blob/canary/examples/with-apollo/lib/apolloClient.js#L30">you have to look for examples</a> and will likely need to change how the integration works.</p>
<p>With no official plugin from Apollo, you have to keep maintaining this integration yourself. As mentioned previously, urql has an official plugin for Next.js, which makes integration easy.</p>
<pre><code class="lang-javascript"><span class="hljs-comment">// Simple React component integrating with Next.js using the plugin</span>
<span class="hljs-keyword">import</span> React <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;
<span class="hljs-keyword">import</span> Head <span class="hljs-keyword">from</span> <span class="hljs-string">"next/head"</span>;
<span class="hljs-keyword">import</span> { withUrqlClient } <span class="hljs-keyword">from</span> <span class="hljs-string">"next-urql"</span>;

<span class="hljs-keyword">import</span> PokemonList <span class="hljs-keyword">from</span> <span class="hljs-string">"../components/pokemon_list"</span>;
<span class="hljs-keyword">import</span> PokemonTypes <span class="hljs-keyword">from</span> <span class="hljs-string">"../components/pokemon_types"</span>;

<span class="hljs-keyword">const</span> Root = <span class="hljs-function">() =&gt;</span> (
  <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">Head</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">title</span>&gt;</span>Root<span class="hljs-tag">&lt;/<span class="hljs-name">title</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">link</span> <span class="hljs-attr">rel</span>=<span class="hljs-string">"icon"</span> <span class="hljs-attr">href</span>=<span class="hljs-string">"/static/favicon.ico"</span> /&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">Head</span>&gt;</span>

    <span class="hljs-tag">&lt;<span class="hljs-name">PokemonList</span> /&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">PokemonTypes</span> /&gt;</span>
  <span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>
);

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> withUrqlClient(<span class="hljs-function">() =&gt;</span> ({
  <span class="hljs-attr">url</span>: <span class="hljs-string">"https://graphql-pokemon.now.sh"</span>,
}))(Root);
</code></pre>
<p><strong>Conclusion</strong>
urql has advantages over Apollo Client when it comes to its unified community, great documentation, and first-party plugins and caching system. I especially like how they seem to be working and engaging with the community instead of against it.</p>
<p>I’ve been trying a lot of GraphQL clients lately to see how they compare to Apollo, and it's been refreshing to see how great urql is. I foresee myself using it for all my GraphQL apps going forward. I hope this prompts you to try urql for yourself and see what you think. Thanks for reading!</p>
]]></content:encoded></item><item><title><![CDATA[Using Storybook to Develop React Components Faster]]></title><description><![CDATA[When your goal as a product developer is to ship things faster, it’s a constant process of adding things that work and removing things that don’t. You need to try new processes that enable you to complete your work faster.
So what tools can you add t...]]></description><link>https://blog.alec.coffee/storybook-develop-react-components-faster</link><guid isPermaLink="true">https://blog.alec.coffee/storybook-develop-react-components-faster</guid><category><![CDATA[React]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Sat, 13 Mar 2021 00:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/IojCPQ2rWe8/upload/v1645305147559/rVmBQIMan.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When your goal as a product developer is to ship things faster, it’s a constant process of adding things that work and removing things that don’t. You need to try new processes that enable you to complete your work faster.</p>
<p>So what tools can you add to your workflow to supercharge development in React? Storybook.</p>
<h2 id="heading-what-is-storybook">What is Storybook?</h2>
<p>Storybook is, <a target="_blank" href="https://storybook.js.org/">according to its website</a>, an “open-source platform that allows you to document, view, and test many permutations of your JavaScript components within an isolated environment.”</p>
<p>Before I start to create a <a target="_blank" href="https://www.componentdriven.org/">component</a>, I first create stories for it in Storybook, then I start integrating it into my React app. This means writing more code up front, but it reduces my churn later.</p>
<p>It also forces me to think about edge cases, how the API of a component should be defined, and to decouple it from my main application.</p>
<p>It's similar to <a target="_blank" href="https://en.wikipedia.org/wiki/Test-driven_development">Test-driven Development</a>: write out the test cases before the code is written, but in this case, the tests are stories of all the states a component can be in. Because Storybook is its own app, iterating through component designs is quick. Storybook has led to me ironing out edge cases, catching more bugs, and ultimately, finishing features faster.</p>
<p>Integrating Storybook with visual testing services like <a target="_blank" href="https://percy.io/">Percy</a> can help your team move fast with each pull request showing you diffs of new component changes. You can also further test components that are API-driven by mocking out query responses with <a target="_blank" href="https://mswjs.io/">Mock Service Worker</a>.</p>
<h2 id="heading-setting-up-components-in-storybook">Setting up components in Storybook</h2>
<p>Let's go through an example of how to get features done quicker using Storybook. Stories are just components rendered with a particular set of props. You can have as many or as few as you like.</p>
<p>Let's pretend we are building a blog and want to have a list of entries on the index page. Let's make a story for each state of the component which will render each entry.</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> React <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;

<span class="hljs-keyword">import</span> { BlogEntryListItem } <span class="hljs-keyword">from</span> <span class="hljs-string">"./BlogEntryListItem"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> {
  <span class="hljs-attr">title</span>: <span class="hljs-string">"BlogEntryListItem"</span>,
  <span class="hljs-attr">component</span>: BlogEntryListItem,
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> BlogEntryListItemLoaded = <span class="hljs-function">() =&gt;</span> (
  <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">BlogEntryListItem</span>
    <span class="hljs-attr">title</span>=<span class="hljs-string">{</span>"<span class="hljs-attr">A</span> <span class="hljs-attr">Fake</span> <span class="hljs-attr">Blog</span> <span class="hljs-attr">Post</span> <span class="hljs-attr">Title</span>"}
    <span class="hljs-attr">excerpt</span>=<span class="hljs-string">{</span>"<span class="hljs-attr">Lorem</span> <span class="hljs-attr">Khaled</span> <span class="hljs-attr">Ipsum</span> <span class="hljs-attr">is</span> <span class="hljs-attr">a</span> <span class="hljs-attr">major</span> <span class="hljs-attr">key</span> <span class="hljs-attr">to</span> <span class="hljs-attr">success.</span>"}
    <span class="hljs-attr">date</span>=<span class="hljs-string">{</span>"<span class="hljs-attr">2019-01-01</span>"}
    <span class="hljs-attr">lastUpdatedAt</span>=<span class="hljs-string">{</span>"<span class="hljs-attr">2020-01-02</span>"}
    <span class="hljs-attr">slug</span>=<span class="hljs-string">{</span>"/<span class="hljs-attr">a-fake-blog-post-title</span>"}
  /&gt;</span></span>
);

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> BlogEntryListItemLongExcerpt = <span class="hljs-function">() =&gt;</span> (
  <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">BlogEntryListItem</span>
    <span class="hljs-attr">title</span>=<span class="hljs-string">{</span>"<span class="hljs-attr">A</span> <span class="hljs-attr">Fake</span> <span class="hljs-attr">Blog</span> <span class="hljs-attr">Post</span> <span class="hljs-attr">Title</span>"}
    <span class="hljs-attr">excerpt</span>=<span class="hljs-string">{</span>
      "<span class="hljs-attr">Lorem</span> <span class="hljs-attr">Khaled</span> <span class="hljs-attr">Ipsum</span> <span class="hljs-attr">is</span> <span class="hljs-attr">a</span> <span class="hljs-attr">major</span> <span class="hljs-attr">key</span> <span class="hljs-attr">to</span> <span class="hljs-attr">success.</span> <span class="hljs-attr">You</span> <span class="hljs-attr">should</span> <span class="hljs-attr">never</span> <span class="hljs-attr">complain</span>, <span class="hljs-attr">complaining</span> <span class="hljs-attr">is</span> <span class="hljs-attr">a</span> <span class="hljs-attr">weak</span> <span class="hljs-attr">emotion</span>, <span class="hljs-attr">you</span> <span class="hljs-attr">got</span> <span class="hljs-attr">life</span>, <span class="hljs-attr">we</span> <span class="hljs-attr">breathing</span>, <span class="hljs-attr">we</span> <span class="hljs-attr">blessed.</span> <span class="hljs-attr">The</span> <span class="hljs-attr">key</span> <span class="hljs-attr">to</span> <span class="hljs-attr">success</span> <span class="hljs-attr">is</span> <span class="hljs-attr">to</span> <span class="hljs-attr">be</span> <span class="hljs-attr">yourself.</span>"
    }
    <span class="hljs-attr">date</span>=<span class="hljs-string">{</span>"<span class="hljs-attr">2019-01-01</span>"}
    <span class="hljs-attr">lastUpdatedAt</span>=<span class="hljs-string">{</span>"<span class="hljs-attr">2020-01-02</span>"}
    <span class="hljs-attr">slug</span>=<span class="hljs-string">{</span>"/<span class="hljs-attr">a-fake-blog-post-title</span>"}
  /&gt;</span></span>
);

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> BlogEntryListItemLoading = <span class="hljs-function">() =&gt;</span> <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">BlogEntryListItem</span> <span class="hljs-attr">loading</span> /&gt;</span></span>;
</code></pre>
<p>We focus on the API of the component before coding the real thing. I like to mirror product requirements here.</p>
<p>In this example, I knew some blog entry excerpts would be long, so I created a story for it. I also needed to have a loading state because I planned to use <a target="_blank" href="https://github.com/dvtng/react-loading-skeleton">react-loading-skeleton</a>.</p>
<p>The next step is creating the basic code for the component:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> React <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> BlogEntryListItem = <span class="hljs-function">(<span class="hljs-params">props</span>) =&gt;</span> {
  <span class="hljs-keyword">if</span> (props.loading) {
    <span class="hljs-keyword">return</span> <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>Loading...<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>;
  }
  <span class="hljs-keyword">return</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">NoColorLink</span> <span class="hljs-attr">to</span>=<span class="hljs-string">{props.slug}</span>&gt;</span>
      <span class="hljs-tag">&lt;<span class="hljs-name">BlogListItemWrapper</span>&gt;</span>
        <span class="hljs-tag">&lt;<span class="hljs-name">Description</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">Title</span>&gt;</span>{props.title}<span class="hljs-tag">&lt;/<span class="hljs-name">Title</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">span</span>&gt;</span>
            Published: {props.date}
            <span class="hljs-tag">&lt;<span class="hljs-name">br</span> /&gt;</span>
            Last Updated: {props.lastUpdatedAt}
          <span class="hljs-tag">&lt;/<span class="hljs-name">span</span>&gt;</span>
          <span class="hljs-tag">&lt;<span class="hljs-name">Excerpt</span>&gt;</span>{props.excerpt}<span class="hljs-tag">&lt;/<span class="hljs-name">Excerpt</span>&gt;</span>
        <span class="hljs-tag">&lt;/<span class="hljs-name">Description</span>&gt;</span>
      <span class="hljs-tag">&lt;/<span class="hljs-name">BlogListItemWrapper</span>&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">NoColorLink</span>&gt;</span></span>
  );
};
</code></pre>
<p>Here’s what it looks like in Storybook:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645303040568/fslfpD5XW.jpeg" alt="I rendered all the states in one story to make it fit the screenshot 🙃." /></p>
<p>The great thing about this is that we haven't touched our main application at all. We didn't have to muck around with production configuration, environment variables, or running local API services.</p>
<h2 id="heading-improving-components-with-storybook">Improving components with Storybook</h2>
<p>Defining all the states the component needs and writing a simple implementation has us looking great so far!</p>
<blockquote>
<p>Without adding our <code>BlogEntryListItem</code> component to the main application, we can start making improvements right away. As you probably noticed, the long excerpt overflows its <code>&lt;div&gt;</code>, so let's fix that using <code>overflow: hidden</code>.</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645303041930/SPbUAQ4CV.jpeg" alt="Some overflow: hidden action 🤘." /></p>
<p>Look! We improved our component without even stepping foot into our main app. We can go even further using some add-ons that ensure our component is even more capable.</p>
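<p>The CSS fix clips the excerpt visually; as a complementary approach, you could also clamp long excerpts in JavaScript before they ever reach the component. This is a hypothetical sketch, not part of the original fix — the helper name and the 140-character limit are assumptions:</p>

```javascript
// Hypothetical helper: clamp an excerpt to a maximum length before
// rendering, cutting at a word boundary. Name and limit are illustrative.
function truncateExcerpt(text, maxLength = 140) {
  if (text.length <= maxLength) return text;
  const cut = text.slice(0, maxLength);
  const lastSpace = cut.lastIndexOf(" ");
  // Fall back to a hard cut when the excerpt has no spaces.
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + "...";
}

module.exports = { truncateExcerpt };
```

<p>The component would then render <code>truncateExcerpt(props.excerpt)</code> instead of the raw excerpt.</p>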
<p>One of the add-ons that Storybook comes with by default is <a target="_blank" href="https://github.com/storybookjs/storybook/tree/next/addons/viewport">Storybook Viewport Addon</a>, which allows you to see what your components look like on various screen sizes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645303043301/jfC2uEU91.jpeg" alt="We can't read excerpts on mobile!" /></p>
<p>Using this add-on in this example shows us that we can’t read excerpts on mobile.</p>
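<p>With the older <code>story</code> export format used above, a story can also opt into a phone-sized viewport by default so the mobile layout is the first thing you see. This is a hedged sketch: <code>"iphone6"</code> is one of the addon's built-in preset names, and the story body is stubbed so the snippet stands alone:</p>

```javascript
// Sketch: defaulting a story to a mobile viewport via the
// addon-viewport "viewport" parameter. The story body is a stub
// standing in for the real BlogEntryListItem story.
const BlogEntryListItemMobile = () => null;

BlogEntryListItemMobile.story = {
  parameters: {
    viewport: { defaultViewport: "iphone6" }, // built-in preset name
  },
};

module.exports = { BlogEntryListItemMobile };
```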
<p>You can see how using Storybook can improve our components without ever needing to run our main React application. This is the true power of working with components.</p>
<h2 id="heading-improving-speed-with-storybook">Improving speed with Storybook</h2>
<p>When iterating through components, many visual changes are bound to happen. Having a coworker pull your code changes and run Storybook locally to see them is slow; we can certainly work faster.</p>
<p><a target="_blank" href="https://www.learnstorybook.com/visual-testing-handbook/">Visual Testing</a> tools give you screenshots of the visual diff between components as you iterate. For example, a tool can generate a screenshot of a fix for our component to properly render entries on mobile.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645303044671/cw2hqbN7h.jpeg" alt="Visual Testing example of a mobile text fix in Percy." /></p>
<p>This works through a Continuous Integration service like <a target="_blank" href="https://circleci.com/">CircleCI</a> or <a target="_blank" href="https://github.com/features/actions">GitHub Actions</a>: you build Storybook and use the <a target="_blank" href="https://github.com/percy/percy-storybook">Percy Storybook plugin</a> to snapshot all of your stories. It renders every story in a consistent browser environment and sends the HTML over to Percy for rendering. Percy then compares these rendered stories to previous builds and marks the differences, like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645303045984/7TL32QaMC.jpeg" alt="What the Percy app looks like in a pull request" /></p>
<blockquote>
<p><a target="_blank" href="https://github.com/percy/storybook-action">Percy provides a great GitHub Action which does all of this automatically</a>. Here is an example pull request which implements this.</p>
</blockquote>
<p>In my experience, using visual testing with Storybook has caught many regressions by spotting changes which we didn't catch in code review.</p>
<h2 id="heading-mocking-out-api-queries-with-storybook">Mocking out API queries with Storybook</h2>
<p>Not only can Storybook provide us with a way to test components’ look and feel, but it can also help us test behavior. Some components in your application most likely query data from a remote API. These are most often called "container components" or "page components".</p>
<p>Providing fake data for your components is great, but we can get closer to reality by mocking the API requests that the components perform.</p>
<p>This example uses a REST API but the libraries used are compatible with GraphQL.</p>
<p>Thinking back to our blog entries, typically a parent component would query for a bunch of entries:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> React <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;
<span class="hljs-keyword">import</span> { useQuery } <span class="hljs-keyword">from</span> <span class="hljs-string">"react-query"</span>;

<span class="hljs-keyword">import</span> { BlogEntryListItem } <span class="hljs-keyword">from</span> <span class="hljs-string">"./BlogEntryListItem"</span>;

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">fetchBlogEntries</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">const</span> res = <span class="hljs-keyword">await</span> fetch(<span class="hljs-string">"https://fake-blog-entries-url.com"</span>);
  <span class="hljs-keyword">if</span> (!res.ok) {
    <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(res.statusText);
  }
  <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> res.json();
  <span class="hljs-keyword">return</span> data.results;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> BlogEntries = <span class="hljs-function">(<span class="hljs-params">props</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> { status, data, error } = useQuery(<span class="hljs-string">"blog-entries"</span>, fetchBlogEntries);
  <span class="hljs-comment">// Guard the loading and error states before touching `data`,</span>
  <span class="hljs-comment">// which is undefined until the query resolves.</span>
  <span class="hljs-keyword">if</span> (status === <span class="hljs-string">"loading"</span>) {
    <span class="hljs-keyword">return</span> <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>Loading...<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>;
  }
  <span class="hljs-keyword">if</span> (status === <span class="hljs-string">"error"</span>) {
    <span class="hljs-keyword">return</span> <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">div</span>&gt;</span>{error.message}<span class="hljs-tag">&lt;/<span class="hljs-name">div</span>&gt;</span></span>;
  }
  <span class="hljs-keyword">return</span> data.map(<span class="hljs-function">(<span class="hljs-params">datum, index</span>) =&gt;</span> {
    <span class="hljs-keyword">return</span> <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">BlogEntryListItem</span> <span class="hljs-attr">key</span>=<span class="hljs-string">{index.toString()}</span> {<span class="hljs-attr">...datum</span>} /&gt;</span></span>;
  });
};
</code></pre>
<p>It would be nice if we could mock a response from the server in Storybook to see how the component behaves in different scenarios. There is a great library called <a target="_blank" href="https://mswjs.io">Mock Service Worker</a> that will intercept browser network queries and provide mock responses. Coupled with the <a target="_blank" href="https://github.com/itaditya/msw-storybook-addon">Storybook add-on for this module</a>, we can provide mock data:</p>
<pre><code class="lang-js"><span class="hljs-keyword">import</span> React <span class="hljs-keyword">from</span> <span class="hljs-string">"react"</span>;
<span class="hljs-keyword">import</span> { QueryClient, QueryClientProvider } <span class="hljs-keyword">from</span> <span class="hljs-string">"react-query"</span>;
<span class="hljs-keyword">import</span> { rest } <span class="hljs-keyword">from</span> <span class="hljs-string">"msw"</span>;

<span class="hljs-keyword">import</span> { BlogEntries } <span class="hljs-keyword">from</span> <span class="hljs-string">"./BlogEntries"</span>;

<span class="hljs-keyword">const</span> mockedQueryClient = <span class="hljs-keyword">new</span> QueryClient({
    <span class="hljs-attr">defaultOptions</span>: {
    <span class="hljs-attr">queries</span>: {
        <span class="hljs-attr">retry</span>: <span class="hljs-literal">false</span>,
    },
    },
});

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> {
    <span class="hljs-attr">title</span>: <span class="hljs-string">"BlogEntries"</span>,
    <span class="hljs-attr">component</span>: BlogEntries,
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> BlogEntriesStates = <span class="hljs-function">() =&gt;</span> (
    <span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">QueryClientProvider</span> <span class="hljs-attr">client</span>=<span class="hljs-string">{mockedQueryClient}</span>&gt;</span>
    <span class="hljs-tag">&lt;<span class="hljs-name">BlogEntries</span> /&gt;</span>
    <span class="hljs-tag">&lt;/<span class="hljs-name">QueryClientProvider</span>&gt;</span></span>
);

BlogEntriesStates.story = {
    <span class="hljs-attr">parameters</span>: {
    <span class="hljs-attr">msw</span>: [
        rest.get(<span class="hljs-string">"https://fake-blog-entries-url.com"</span>, <span class="hljs-function">(<span class="hljs-params">req, res, ctx</span>) =&gt;</span> {
        <span class="hljs-keyword">return</span> res(
            ctx.json({
            <span class="hljs-attr">results</span>: [
                ...
            ],
            })
        );
        }),
    ],
    },
};
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645303047316/rpgc8miFQ.jpeg" alt="It works!" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>I covered a lot here, so I'll summarize my Storybook workflow:</p>
<ol>
<li>Receive requirements from product</li>
<li>Think about my component hierarchy</li>
<li>For each component, write a story for each significant state</li>
<li>For each "page" or feature, write a story and add API mocks</li>
<li>Write the code for each component which satisfy each state</li>
<li>Use Visual Testing in CI to test my changes against the main branch</li>
</ol>
<p>Notice that there are many steps, and it can take some time to adapt to this new flow. But after some practice, this workflow feels natural to me, and I would never go back to writing React code without Storybook alongside me.</p>
<p>Storybook is a perfect way to prototype components and make sure visual components get the love they deserve.</p>
]]></content:encoded></item><item><title><![CDATA[Running SQL Migrations Before Booting Docker Compose Services]]></title><description><![CDATA[Having a great local development experience is critical to happy engineers. Developers are happy to come into a codebase and start hacking away at problems, as opposed to dreading picking up their laptops. As services become more and more separated i...]]></description><link>https://blog.alec.coffee/running-sql-migrations-before-booting-docker-compose-services</link><guid isPermaLink="true">https://blog.alec.coffee/running-sql-migrations-before-booting-docker-compose-services</guid><category><![CDATA[Docker]]></category><category><![CDATA[SQL]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Tue, 29 Sep 2020 18:45:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/1cqIcrWFQBI/upload/v1645303566553/a6p9bOzRE.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Having a great local development experience is critical to happy engineers. Developers are happy to come into a codebase and start hacking away at problems, as opposed to dreading picking up their laptops. As services become more and more separated into separate "micro-services", new problems arise. When I look back on my short career, one problem that has induced me a lot of pain are SQL Migrations and how they should work in Docker Compose. Ideally, they should run before your web apps boot so they have access to a well-structured database. Your strategy to make this happen may be different for each environment, in this post I'll cover your local experience which easily extends to CI and I'll also touch how to do this in production.</p>
<h2 id="heading-a-bit-of-background">A Bit of Background</h2>
<p>I ran into this problem when we started to adopt <a target="_blank" href="https://www.apollographql.com/blog/apollo-federation-f260cf525d21/">Apollo Federation</a> into our stack. One gateway service lives in front of our other GraphQL services (which have their own databases). This gateway is what the client-side apps ("Web Browser" in diagram) queries to get its data. For clients to start using the gateway API every dependent service must be up and healthy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300243051/ycyzevgLS.png" alt="Service diagram" /></p>
<p>My goal is to start the entire set of services in a deterministic manner, have each service wait until its dependent service is running and healthy. I'll focus on the database and migrations in this post but the concepts here extend to any service having dependencies on another service.</p>
<p>A typical Docker Compose file which runs the products service with its migrations will look like this:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">"3.7"</span>

<span class="hljs-attr">postgres:</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">yourorg/postgres</span>
  <span class="hljs-attr">command:</span> <span class="hljs-string">postgres</span> <span class="hljs-string">-c</span> <span class="hljs-string">'max_connections=1024'</span>
  <span class="hljs-attr">expose:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">"5432"</span>
  <span class="hljs-attr">environment:</span>
    <span class="hljs-attr">POSTGRES_USER:</span> <span class="hljs-string">${DATABASE_USERNAME}</span>
    <span class="hljs-attr">POSTGRES_PASSWORD:</span> <span class="hljs-string">${DATABASE_PASSWORD}</span>
  <span class="hljs-attr">volumes:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">${YOUR_ORG_INSTALL_PATH}/volumes/services/postgres-data:/var/lib/postgresql/data:delegated</span>

<span class="hljs-attr">products-run-migrations:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">context:</span> <span class="hljs-string">./apps/products/migrations/.</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">yourorg/products-run-migrations:${IMAGE_TAG:-latest}</span>
  <span class="hljs-attr">env_file:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">./apps/products/products.env</span>
  <span class="hljs-attr">depends_on:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">postgres</span>

<span class="hljs-attr">products:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">context:</span> <span class="hljs-string">./apps/products/.</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">yourorg/products:${IMAGE_TAG:-latest}</span>
  <span class="hljs-attr">env_file:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">./apps/products/products.env</span>
  <span class="hljs-attr">depends_on:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">postgres</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">products-run-migrations</span>
</code></pre>
<p>A few issues with this setup, focusing on <code>depends_on</code>:</p>
<ol>
<li>The <code>products-run-migrations</code> script will run right when <code>postgres</code> starts to run, not when it's healthy and ready to accept connections. There is a 99% chance that the <code>postgres</code> container will not be healthy when the migrations begin to run, causing them to fail.</li>
<li>The situation is similar with the <code>products</code> service: it will start right when <code>products-run-migrations</code> starts to run. If you query the <code>products</code> service when it's healthy, there is a good chance the migrations have not yet completed.</li>
</ol>
<h2 id="heading-what-we-want">What We Want</h2>
<p>This is currently not what we want. Ideally, developers should be able to start the products service and, when it's healthy, know that the migrations ran successfully and they can query its API.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300244825/MM9TLLexM.gif" alt /></p>
<p>This required me to do some research.</p>
<p>I found out there is a <code>condition</code> argument to <code>depends_on</code> I could use to achieve the behavior we wanted. But the next thing I found out is where things went haywire.</p>
<p><strong>We needed to downgrade our docker-compose version.</strong></p>
<p><a target="_blank" href="https://github.com/peter-evans/docker-compose-healthcheck/issues/3">Turns out, Docker Compose 3.x versions are meant to be used for Docker Swarm and Kubernetes environments where services are not strictly dependent on each other.</a> This promotes a more fault-tolerant environment where services run independently.</p>
<p><a target="_blank" href="https://peterevans.dev/posts/how-to-wait-for-container-x-before-starting-y/">What I saw people suggesting</a> was to switch to Docker Compose 2.4 and use a port waiting script like <a target="_blank" href="https://github.com/vishnubob/wait-for-it">wait-for-it</a>.</p>
<p>Our new Docker Compose 2.4 file:</p>
<pre><code class="lang-yaml"><span class="hljs-attr">version:</span> <span class="hljs-string">'2.4'</span>

<span class="hljs-attr">postgres:</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">yourorg/postgres</span>
  <span class="hljs-attr">command:</span> <span class="hljs-string">postgres</span> <span class="hljs-string">-c</span> <span class="hljs-string">'max_connections=1024'</span>
  <span class="hljs-attr">expose:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">'5432'</span>
  <span class="hljs-attr">environment:</span>
    <span class="hljs-attr">POSTGRES_USER:</span> <span class="hljs-string">${DATABASE_USERNAME}</span>
    <span class="hljs-attr">POSTGRES_PASSWORD:</span> <span class="hljs-string">${DATABASE_PASSWORD}</span>
  <span class="hljs-attr">volumes:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">${YOUR_ORG_INSTALL_PATH}/volumes/services/postgres-data:/var/lib/postgresql/data:delegated</span>
  <span class="hljs-attr">healthcheck:</span>
    <span class="hljs-attr">test:</span> [<span class="hljs-string">'CMD-SHELL'</span>, <span class="hljs-string">'pg_isready -U root'</span>]
    <span class="hljs-attr">interval:</span> <span class="hljs-string">60s</span>

<span class="hljs-attr">products-run-migrations:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">context:</span> <span class="hljs-string">./apps/products/migrations/.</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">yourorg/products-run-migrations:${IMAGE_TAG:-latest}</span>
  <span class="hljs-attr">entrypoint:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">/bin/bash</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">-c</span>
      <span class="hljs-bullet">-</span> <span class="hljs-string">'wait-for-it $$DATABASE_HOSTNAME:5432 -s -t 60 -- npm run db-migrate:products'</span>
  <span class="hljs-attr">restart:</span> <span class="hljs-string">on-failure:5</span>
  <span class="hljs-attr">env_file:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">./apps/products/products.env</span>
  <span class="hljs-attr">depends_on:</span>
      <span class="hljs-attr">postgres:</span>
        <span class="hljs-attr">condition:</span> <span class="hljs-string">service_healthy</span>

<span class="hljs-attr">products:</span>
  <span class="hljs-attr">build:</span>
    <span class="hljs-attr">context:</span> <span class="hljs-string">./apps/products/.</span>
  <span class="hljs-attr">image:</span> <span class="hljs-string">yourorg/products:${IMAGE_TAG:-latest}</span>
  <span class="hljs-attr">env_file:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-string">./apps/products/products.env</span>
  <span class="hljs-attr">restart:</span> <span class="hljs-string">on-failure:5</span>
  <span class="hljs-attr">depends_on:</span>
      <span class="hljs-attr">postgres:</span>
        <span class="hljs-attr">condition:</span> <span class="hljs-string">service_healthy</span>
      <span class="hljs-attr">products-run-migrations:</span>
        <span class="hljs-attr">condition:</span> <span class="hljs-string">service_started</span>
</code></pre>
<p>A few new additions:</p>
<ol>
<li>Added a <code>healthcheck</code> to the <code>postgres</code> container. This made it so migration scripts like <code>products-run-migrations</code> can start to run only when the database is ready to accept connections.</li>
<li>We added an <code>entrypoint</code> along with the <code>condition</code> arg to <code>depends_on</code> to our migration images. This made sure not only that the database container is running, but that the port is actually accepting TCP connections.</li>
<li>Added <code>condition</code> args to both <code>depends_on</code> in the <code>products</code> service definition.</li>
</ol>
<p>It's not perfect, but it works much better than before. The issue still remains that the app does not wait for the migrations to fully finish before booting up. If the migrations run relatively quickly, local developers may not run into problems. In CI, we have full control over the environment, so we can add an extra command to skirt around this issue.</p>
<pre><code class="lang-yaml"><span class="hljs-comment"># .circleci/config.yml</span>
<span class="hljs-attr">test-products:</span>
  <span class="hljs-attr">executor:</span> <span class="hljs-string">node-docker</span>
  <span class="hljs-attr">steps:</span>
    <span class="hljs-bullet">-</span> <span class="hljs-attr">test-project:</span>
        <span class="hljs-attr">project-name:</span> <span class="hljs-string">products</span>
        <span class="hljs-attr">pre-test-command:</span> <span class="hljs-string">docker-compose</span> <span class="hljs-string">run</span> <span class="hljs-string">-T</span> <span class="hljs-string">products-run-migrations</span>
        <span class="hljs-attr">tests:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">run-project-test:</span>
              <span class="hljs-attr">project-name:</span> <span class="hljs-string">products</span>
</code></pre>
<h2 id="heading-the-issues-that-still-remain">The Issues That Still Remain</h2>
<p>What I didn't touch on is what our production environment looks like in terms of Docker Compose. We decided to maintain two versions of our Docker Compose files, one in 2.4 and one in 3.7, because we want to be able to adopt Kubernetes easily in the future. You may stick with one or the other, but for a great local development experience, we decided to always keep a 2.4 file going.</p>
]]></content:encoded></item><item><title><![CDATA[Build a Next.js Blog with Cosmic’s GraphQL API]]></title><description><![CDATA[Want to follow along with the build? Click here to grab the app or fork the project.

With so many choices for which technology to use when building a website, it can get overwhelming. You need to consider who is going to use it, what content to disp...]]></description><link>https://blog.alec.coffee/build-next-js-blog-with-cosmics-graphql-api</link><guid isPermaLink="true">https://blog.alec.coffee/build-next-js-blog-with-cosmics-graphql-api</guid><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Wed, 23 Sep 2020 14:06:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1645328869085/K1IjFikAs.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Want to follow along with the build? <a href="https://www.cosmicjs.com/apps/nextjs-static-blog" target="_blank">Click here to grab the app or fork the project.</a></strong></p>
<hr />
<p>With so many choices for which technology to use when building a website, it can get overwhelming. You need to consider who is going to use it, what content to display and who will maintain it. A static website is a great choice when creating a blog, band website or e-commerce store. Static websites are an ode to the past, when websites were just plain ol' files on a server you accessed via URL. They provide benefits like being fast, having great SEO and not being dependent on a certain runtime like PHP. This is in comparison to a server-rendered website like what you would have with WordPress, Drupal or Ruby on Rails.</p>
<p>Static websites are built using static assets. The next question becomes where to store (and manage) this content. If you are a solo webmaster, the content can be files in a Git repo. If you have clients or other developers who will want to manage the content, a CMS (Content Management System) is what you need. A CMS is a service which stores your website content, for example blog posts and concert dates.</p>
<div class="Image__Medium">
  <img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1600443356/next-js-cosmic-post/CleanShot_2020-09-18_at_10.15.26_2x.png" alt="Screenshot of the Cosmic CMS Dashboard" />
  Cosmic CMS!
</div>

<p>With <a target="_blank" href="https://nextjs.org/docs/basic-features/pages#static-generation-recommended">Next.js SSG</a>, we are using the CMS in a <a target="_blank" href="https://www.cosmicjs.com/headless-cms">"headless" fashion</a>. After trying a bunch of Headless CMS offerings, one I've stuck with is Cosmic. <a target="_blank" href="https://www.cosmicjs.com">Cosmic</a> is an intuitive, powerful, and simple-to-use service which lets us get up and running quickly. They provide <a target="_blank" href="https://www.cosmicjs.com/apps">many starter apps</a> that you can preview or fork. For example, I chose the Next.js Static Blog and had a production version of the website running in under <strong>5 minutes</strong>.</p>
<h3 id="heading-choosing-the-tech">Choosing the Tech</h3>
<p><a target="_blank" href="https://nextjs.org/">Next.js</a> with GraphQL is my personal choice when it comes to static site development. Next.js is a hybrid React framework which supports building static websites. It also lets you build <a target="_blank" href="https://nextjs.org/docs/basic-features/pages#server-side-rendering">server-side rendered pages</a> when the need arises. It handles routing and has a large community supporting it, making it one of the best ways to build a React app in 2020. Another technology you may have heard of that does this is <a target="_blank" href="https://www.gatsbyjs.com/">Gatsby.js</a>. Gatsby is more user-friendly but more opinionated with its technology choices (it forces the use of GraphQL rather than leaving it as a choice).</p>
<p>We are choosing to use GraphQL over <a target="_blank" href="https://www.npmjs.com/package/cosmicjs">the Cosmic NPM module</a>. <a target="_blank" href="https://www.cosmicjs.com/blog/what-is-graphql">GraphQL</a> is a standardized way to get data from services and is a great choice when needing to get data from a CMS. As you create custom data types in Cosmic, you are able to query for them in the GraphQL API. One of the benefits of using GraphQL is that you are less dependent on a specific SDK.</p>
<h2 id="heading-tutorial">Tutorial</h2>
<blockquote>
<p>For reference, I forked the example Cosmic Next.js project <a target="_blank" href="https://github.com/vercel/next.js/tree/canary/examples/cms-cosmic">here</a>.</p>
</blockquote>
<h3 id="heading-creating-the-cosmic-project">Creating the Cosmic Project</h3>
<p>After creating an account on Cosmic and going through the product tour, you will be shown the “Create New Bucket” screen.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300648015/Adm9Tz95Q.png" alt="Screenshot of the Cosmic CMS App Search Page" /></p>
<p>Click "Start with an App" then search and select "<a target="_blank" href="https://www.cosmicjs.com/apps/nextjs-static-blog">Next.js Static Blog</a>" from the list of apps presented. This will do a number of things:</p>
<ol>
<li>Create a Cosmic bucket</li>
<li>Create sane data-types inside the bucket for use with a blog</li>
<li>Fill the bucket with demo content</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300650119/HZZLE2_1S.png" alt="Screenshot of the Cosmic CMS Dashboard after creating a bucket" /></p>
<p>Here is what the created bucket looks like on your Cosmic dashboard.</p>
<h3 id="heading-nextjs-local-development">Next.js local development</h3>
<p>The next thing we need to do is clone the Next.js code to our local environment. This will enable us to run the Next.js app locally and pull content from the demo Cosmic bucket we created.</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> git@github.com:aleccool213/nextjs-cosmic-graphql-app.git
</code></pre>
<p>You can also choose to create a GitHub repository for yourself using <a target="_blank" href="https://github.com/aleccool213/nextjs-cosmic-graphql-app/generate">the template</a>.</p>
<p>Once inside the new directory, make sure you are using the correct Node.js version by using <a target="_blank" href="https://github.com/nvm-sh/nvm">NVM</a>.</p>
<pre><code class="lang-bash">nvm use v12.18.3
</code></pre>
<p>Install Yarn and install the project dependencies.</p>
<pre><code class="lang-bash">brew install yarn &amp;&amp; yarn
</code></pre>
<p>Run the app!</p>
<pre><code class="lang-bash">yarn dev
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300651922/eUgEYjopZ.png" alt="Screenshot of the app running locally but encountering an error due to no environment variables being set" /></p>
<p>Almost there!</p>
<h3 id="heading-cosmic-environment-variables">Cosmic Environment Variables</h3>
<p>Before we are able to query the Cosmic GraphQL API, our app needs to know where it lives. Environment variables are deployment-specific values which contain sensitive things like API keys.</p>
<p>There are three env vars we need to define to have the app work locally. Create a file named <code>.env.local</code> (don't worry it's ignored by Git), it should look like this:</p>
<pre><code class="lang-bash">COSMIC_BUCKET_SLUG=demo-nextjs-static-blog
COSMIC_READ_KEY=77H1zN7bTktdsgekxyB9FTpOrlVNE3KUP0UTptn5EqA7T0J8Qt
<span class="hljs-comment"># Preview secret can be anything you choose</span>
COSMIC_PREVIEW_SECRET=iwvrzpakhaavqbihwlrv
</code></pre>
<p>To get these values, head over to the Settings sidebar menu in your bucket, and click "Basic Settings".</p>
<p>Run the app again with <code>yarn dev</code>.</p>
<p><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1599829500/next-js-cosmic-post/CleanShot_2020-09-11_at_09.04.40_2x.png" alt="Screenshot of the example blog running on a local machine" /></p>
<p>And we are up!</p>
<h3 id="heading-looking-inside-the-box">Looking inside the box</h3>
<p>There are two things we need to understand when it comes to statically-generated Next.js apps: pages and routes. <a target="_blank" href="https://nextjs.org/docs/basic-features/pages#scenario-1-your-page-content-depends-on-external-data">Pages whose content depends on external data</a> use <code>getStaticProps</code>, while <a target="_blank" href="https://nextjs.org/docs/basic-features/pages#scenario-2-your-page-paths-depend-on-external-data">pages whose URL routes depend on external data</a> use <code>getStaticPaths</code>. Both are special Next.js-specific functions that you define and export from a page file.</p>
<p>The file which contains the logic for generating page content based on the Cosmic GraphQL API is located at <a target="_blank" href="https://github.com/aleccool213/nextjs-cosmic-graphql-app/blob/661144a8eddebff19c709ec18ad8e1765f7600ec/pages/posts/%5Bslug%5D.js#L57">pages/posts/[slug].js</a>.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getStaticProps</span>(<span class="hljs-params">{ params, preview = null }</span>) </span>{
  <span class="hljs-comment">// Get the data from the API</span>
  <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> getPostAndMorePosts(params.slug, preview);
  <span class="hljs-comment">// Convert markdown content to HTML content</span>
  <span class="hljs-keyword">const</span> content = <span class="hljs-keyword">await</span> markdownToHtml(data.post?.metadata?.content || <span class="hljs-string">""</span>);
  <span class="hljs-keyword">return</span> {
    <span class="hljs-attr">props</span>: {
      preview,
      <span class="hljs-attr">post</span>: {
        ...data.post,
        content,
      },
      <span class="hljs-attr">morePosts</span>: data.morePosts || [],
    },
  };
}
</code></pre>
<pre><code class="lang-javascript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getPostAndMorePosts</span>(<span class="hljs-params">slug, preview</span>) </span>{
  <span class="hljs-comment">// Query for the data through the Cosmic GraphQL API using Apollo Client</span>
  ...
  const moreObjectsResults = <span class="hljs-keyword">await</span> client.query({
    <span class="hljs-attr">query</span>: gql<span class="hljs-string">`
      query getPostQuery(
        $bucketSlug: String!
        $readKey: String!
        $status: status
      ) {
        getObjects(
          bucket_slug: $bucketSlug
          input: {
            read_key: $readKey
            type: "posts"
            status: $status
            limit: 3
          }
        ) {
          objects {
            _id
            slug
            title
            metadata
            created_at
          }
        }
      }
    `</span>,
    <span class="hljs-attr">variables</span>: {
      <span class="hljs-attr">bucketSlug</span>: BUCKET_SLUG,
      <span class="hljs-attr">readKey</span>: READ_KEY,
      status,
    },
  });
  <span class="hljs-comment">// ...</span>
}
</code></pre>
<p>This is one example of a page using <code>getStaticProps</code>. It is <a target="_blank" href="https://github.com/aleccool213/nextjs-cosmic-graphql-app/blob/661144a8eddebff19c709ec18ad8e1765f7600ec/pages/index.js#L40">also used in the Index page</a> for getting all the blog post titles and excerpts.</p>
<p><code>pages/posts/[slug].js</code> <a target="_blank" href="https://github.com/aleccool213/nextjs-cosmic-graphql-app/blob/661144a8eddebff19c709ec18ad8e1765f7600ec/pages/posts/%5Bslug%5D.js#L73">also contains <code>getStaticPaths</code></a> which tells our Next.js app which routes to generate.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">getStaticPaths</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-comment">// Get all post data (including content)</span>
  <span class="hljs-keyword">const</span> allPosts = (<span class="hljs-keyword">await</span> getAllPostsWithSlug()) || [];
  <span class="hljs-keyword">return</span> {
    <span class="hljs-comment">// Tell Next.js all of the potential URL routes based on slugs</span>
    <span class="hljs-attr">paths</span>: allPosts.map(<span class="hljs-function">(<span class="hljs-params">post</span>) =&gt;</span> <span class="hljs-string">`/posts/<span class="hljs-subst">${post.slug}</span>`</span>),
    <span class="hljs-attr">fallback</span>: <span class="hljs-literal">true</span>,
  };
}
</code></pre>
<p>After understanding these two aspects, the blog is just a regular React app.</p>
<h2 id="heading-deploying">Deploying</h2>
<p>Now that we have our website working locally, let's deploy it to <a target="_blank" href="https://vercel.com/">Vercel</a>. The first step is making sure you have the code in a Git repository.</p>
<p>I recommend you have the code in GitHub. You can use the <a target="_blank" href="https://cli.github.com/">GitHub CLI</a> to create a repo in your current directory with <code>gh repo create</code>.</p>
<p>We now need to sign up for Vercel and have it use the code from the GitHub repo. You can sign up for Vercel with your GitHub account <a target="_blank" href="https://vercel.com/signup">here</a>. You can import the code from GitHub using the "Import Project" feature.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300655924/2sJ9M7Shf_.png" alt="Screenshot of the Vercel project view with the Import Project button highlighted" /></p>
<p>When importing the project, make sure you define the environment variables, <code>COSMIC_BUCKET_SLUG</code>, <code>COSMIC_READ_KEY</code>, and <code>COSMIC_PREVIEW_SECRET</code>.</p>
<p>When deployed, all pushes to your default Git branch will have Vercel deploy a new version of your website!</p>
<h2 id="heading-bonus">Bonus</h2>
<h3 id="heading-previewing">Previewing</h3>
<blockquote>
<p>The Next.js docs on preview mode are right <a target="_blank" href="https://nextjs.org/docs/advanced-features/preview-mode">here</a>.</p>
</blockquote>
<p>Local development and deploying the website to production will cover most of your use-cases. Another common workflow is saving a draft of changes on your CMS and then previewing those changes on your local machine. To do so, we will enable "Preview" mode both on Cosmic and our Next.js app.</p>
<p>The first thing we need to do is let Cosmic know that the Posts object type will be preview-able. On the Posts settings page, add the preview link.</p>
<pre><code class="lang-bash">http://localhost:3000/api/preview?secret=iwvrzpakhaavqbihwlrv&amp;slug=[object_slug]
</code></pre>
<p>When finished, click "Save Object Type".</p>
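<p>Behind that URL sits the app's <code>/api/preview</code> route, which validates the secret and slug before enabling preview mode. As a rough sketch (the function name and shapes here are assumptions for illustration; the starter's actual <code>pages/api/preview.js</code> may differ), the validation logic looks something like this:</p>

```typescript
// Hypothetical sketch of the validation logic behind the /api/preview route.
// Names and shapes here are assumptions; the starter's actual
// pages/api/preview.js may differ in detail.

// Returns the path to redirect to when the request is valid, or null.
function resolvePreviewRedirect(
  query: { secret?: string; slug?: string },
  previewSecret: string
): string | null {
  // Reject requests that don't carry the shared secret or a post slug
  if (query.secret !== previewSecret || !query.slug) {
    return null;
  }
  // Redirect to the draft post; in the real handler, Next.js preview
  // cookies are set (via res.setPreviewData({})) before redirecting.
  return `/posts/${query.slug}`;
}
```

<p>In the actual handler, a <code>null</code> result would translate to a 401 response, while a valid request would call <code>res.setPreviewData({})</code> and redirect to the returned path.</p>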
<p>Let's try editing a post and see it show up on our local machine. Try changing the title of "Learn How to Pre-render Pages Using Static Generation with Next.js" and click "Save Draft" instead of "Publish".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300657314/PFjpavNJS.png" alt="Screenshot of the Cosmic CMS Post data type editing page with Save Draft button highlighted" /></p>
<p>The <code>Save Draft</code> button</p>
<p>We now have unpublished changes. Run the app locally with <code>yarn dev</code> and then click "Preview" right under "Save Draft".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300659202/DjrH77EQF.png" alt="Screenshot of the example blog running on a local machine with the preview edits being shown" /></p>
<p>Our preview mode!</p>
<h3 id="heading-webhooks">Webhooks</h3>
<blockquote>
<p>Note this feature requires a Cosmic paid plan</p>
</blockquote>
<p>The only way to deploy new content to our blog is to have a developer push to the default Git branch. This action triggers Vercel to take the new code and push a new version of our website. Ideally, we want our content creators to have the same control. Webhooks are a way we can do this.</p>
<p>Let's set up a webhook which lets Vercel know when our posts in Cosmic have new updates. This will let us deploy new versions of the website without developers needing to do anything.</p>
<p>Go to the Git integration settings page (https://vercel.com/[project]/settings/git-integration) in your Vercel project and create a new Deploy Hook named "Cosmic Hook".</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300661288/K16hWZv7W.png" alt="Screenshot of the Vercel webhook settings" /></p>
<p>What your settings should look like when the webhook is created</p>
<p>Now over in the Cosmic settings, we can add this webhook. Let's have Cosmic notify Vercel when changes get published. You can see how we could do the same for previews as well if we wanted to in the future.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300663318/3VFdUb7fNy.png" alt="Screenshot of the Cosmic CMS webhooks settings" /></p>
<p>Edited/Created and Published!</p>
<p>To test this, go to the same post we tested Previews with, add some content to the end of the article and publish. You should see a deploy happen on Vercel with the new content deployed to the live version of your website!</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Want to see what the final website looks like? <a target="_blank" href="https://nextjs-cosmic-graphql-app.vercel.app/">Click here to check it out.</a></p>
<blockquote>
<p>Like this post? <a target="_blank" href="https://mailchi.mp/f91826b80eb3/alecbrunelleemailsignup">Subscribe to my email newsletter</a> for more like it in the future.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[HEY Email Review]]></title><description><![CDATA[You might have seen Basecamp making headlines when they took on Apple to fight for fair app-store monetization policies. HEY is a new email service which, you guessed it, sends and receives email. Going up against competitors with huge head starts, H...]]></description><link>https://blog.alec.coffee/hey-app-review-changing-how-you-feel-about-email</link><guid isPermaLink="true">https://blog.alec.coffee/hey-app-review-changing-how-you-feel-about-email</guid><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Wed, 29 Jul 2020 12:41:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/LPZy4da9aRo/upload/v1645328907591/G1TGgAxL1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You might have seen Basecamp making headlines when <a target="_blank" href="https://www.theverge.com/2020/6/18/21296180/apple-hey-email-app-basecamp-rejection-response-controversy-antitrust-regulation">they took on Apple to fight for fair app-store monetization policies</a>. HEY is a new email service which, you guessed it, sends and receives email. Going up against competitors with huge head starts, HEY sets out to change the flavour of email as opposed to adding a little spice. After trying it for two weeks, HEY made my email experience a more calm, private and healthy experience with little to complain about.</p>
<p>HEY targets those who are tired of existing solutions, the users who see email as a chore and not what it should be: a delightful, curated experience. HEY gives you complete control over where email should go, who is allowed to give you a push notification and so on. Shifting away from automated filters and into a more hands-on experience is tough at first but is worth it in the end.</p>
<p>I switched from Gmail to <a target="_blank" href="https://www.fastmail.com">Fastmail</a> years ago for simplicity and enhanced privacy. There is something to be said about spending money on a small business versus one which <a target="_blank" href="https://techcrunch.com/2020/01/23/squint-and-youll-click-it/">cares about its ad revenue more than its users</a>; that is what lures me to a service. HEY is a big step forward in the same direction as Fastmail.</p>
<h2 id="heading-email-hygiene">Email Hygiene</h2>
<p>Basecamp is looking at email from the ground up. Jason Fried, founder and CEO of Basecamp, says it well <a target="_blank" href="https://youtu.be/UCeYTysLyGI?t=44">in his walk-through of the product:</a></p>
<blockquote>
<p>One of the problems with email is that everybody can email you, which is also one of the great things about email</p>
</blockquote>
<p>I see myself as a hygienic email user, using filters and labels as much as I can, and I try my best to combat the fact that anyone has access to my inbox: a few custom rules to put emails into certain folders, making my inbox feel clearer and less cluttered. For example, I have a newsletter rule where emails I don't care about go (sorry to the marketers who are reading this) and a rule for receipts. This happened to map 1-1 with <a target="_blank" href="https://hey.com/how-it-works">HEY's "The Feed" and "The Paper Trail"</a>. This meant that using HEY was already familiar to me. It made my workflow for creating rules much easier with <a target="_blank" href="https://hey.com/how-it-works">"The Screener"</a>, which buckets senders into these categories with unmatched speed. I can say that these buckets were enough. Non-categorized senders turned out to be important and kept going into my <a target="_blank" href="https://hey.com/features/the-imbox/">"Imbox"</a>. If they abused the fact they had this access, I would screen them out, banished to the shadow-realm, my spam folder.</p>
<p>When all the filters are at full steam, checking email is a calming and less stressful experience. This is something I never thought email would make me feel. Another reason for this is that no notifications are sent to you for new emails by default; you need to choose who is able to notify you. This falls in line with how much control HEY gives you: it lets you decide what is important.</p>
<div class="Image__Medium">
  <img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1595858961/hey-email-post/preview-imbox-508814e250e89a00b534371089a2310ff7d89796fbaa17d199bf8ae1f44ab114.jpg" alt="HEY email app screenshot" />
  The Imbox
</div>

<h2 id="heading-ultimate-privacy">Ultimate privacy</h2>
<p>Okay, get this: every email that you receive could have a tracker inside it which can tell the sender <strong>when you opened it, where you opened it and how many times you opened it</strong>. Well, not every single one, but any email you get might have one, because you wouldn't be able to tell the difference. Senders put a 1x1 HTML image element inside an email (hidden from view); when you open the email, this image gets loaded from the sender's server. This is how the information I described is sent back to the email sender. Blocking this behaviour is as easy as not loading images when you open an email, which no email client does by default; it needs to be set by the user. The other problem is that if you turn that on, you can hardly read emails, because the content usually depends on images loading. <a target="_blank" href="https://hey.com/features/spy-pixel-blocker/">HEY brings a feature</a> that surprises me no one else is doing yet: filtering out those pixel trackers.</p>
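<p>To make the mechanism concrete, here is a hypothetical sketch (not taken from any real mailer) of how a sender could build such a pixel: the image URL carries a token identifying the recipient, so the sender's server learns exactly whose client fetched it:</p>

```typescript
// Hypothetical illustration of a tracking pixel; not taken from any real mailer.
// The image is 1x1 and invisible, but fetching it tells the sender's server
// which recipient opened the email (plus when and from where).
function buildTrackingPixel(baseUrl: string, recipientToken: string): string {
  const src = `${baseUrl}/open.gif?r=${encodeURIComponent(recipientToken)}`;
  return `<img src="${src}" width="1" height="1" style="display:none" alt="" />`;
}
```

<p>A proxy like HEY's defeats this by fetching the image from its own servers, so the request your client would have made never reaches the sender.</p>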
<p>Filtering out pixel trackers is akin to a DNS server that has ad-blocking enabled. I already use one of these called <a target="_blank" href="https://nextdns.io/">NextDNS</a>. It works well and means I don't have to do ad-blocking client-side, for example with a browser extension. The way HEY does this is by loading images within your email on a server inside their network. This makes the sender think the email has been opened by you (when it was really opened by HEY). HEY then shows you these pre-loaded images from their server, along with the email they were placed in. Being the privacy-oriented person I am, the price tag for HEY may be worth it just for this.</p>
<h2 id="heading-the-horror-vendor-lock-in">The horror, vendor lock-in</h2>
<p>To do this magic, Basecamp decided not to let users integrate using email protocols such as <a target="_blank" href="https://www.emailaddressmanager.com/tips/protocol.html">IMAP, POP3 and SMTP</a>. This means you can't use HEY with any client other than their own. Mail for macOS, Thunderbird, yup, all out of the question. This is a con of the service; other email clients have been in development for eons and come with useful features, too many to mention here. HEY comes with native apps for every device you use, but it still feels wrong not to mention the limitation. I get why they don't want to support it: most of HEY's features, for example screening senders and merging/renaming threads, wouldn't make sense in a traditional email client.</p>
<p>If you decide to switch off of HEY, exporting all your emails is possible so you don't have to worry about your history being lost. HEY promises to forward your emails sent to your old HEY address to a new address if you do decide to switch (that is if you paid for HEY in the first place).</p>
<h2 id="heading-yet-another-thing-to-learn">Yet another thing to "learn"</h2>
<p>The problem with new software products is that the learning curve is usually steep. New-product fatigue is hard to deal with, and this is why new products try to steal features from existing solutions for easy adoption. HEY suffers from this: you will need to learn how to use it or you will get lost and won't feel comfortable switching. An on-boarding experience and tutorials covering HEY's features help, but I felt these needed improvement. I felt sad when the tutorials stopped coming; I wasn't comfortable navigating my email alone yet. There are nuances like moving emails to different folders and not knowing if future emails will go there. Little things to criticize, but they shouldn't stop anyone from enjoying what Basecamp has built.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Getting to this point took some strong-willed people willing to take a risk, going up against established players in the game. HEY isn't a perfect email app, but it pushes the boundaries of how email <strong>should</strong> work. This alone is commendable; most companies don't have the gusto to do this. I decided not to pay for a full-year subscription as they don't support custom domains, and switching email addresses was painful the last time I did it. I only have the energy, time and motivation to do this once a decade. <a target="_blank" href="https://hey.com/custom-domains/">They have a form to stay up to date</a> and get notified when this feature ships, as they know it's the reason a lot of people are waiting to pay.</p>
]]></content:encoded></item><item><title><![CDATA[A Better Way to use GraphQL Fragments in React]]></title><description><![CDATA[One of the great reasons to use a component-based framework (React, Vue) is that it allows for more isolated component design, which helps with decoupling and unit-testing. Another benefit is using showcase apps such as Storybook, these continue the ...]]></description><link>https://blog.alec.coffee/better-way-to-use-graphql-in-react</link><guid isPermaLink="true">https://blog.alec.coffee/better-way-to-use-graphql-in-react</guid><category><![CDATA[React]]></category><category><![CDATA[GraphQL]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Tue, 12 May 2020 13:27:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/pHw08h_EvO4/upload/v1645300819870/nJNmHOXEd.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the great reasons to use a component-based framework (React, Vue) is that it allows for more isolated component design, which helps with decoupling and unit-testing. Another benefit is using showcase apps such as <a target="_blank" href="https://storybook.js.org/">Storybook</a>, these continue the philosophy of isolation and allow for design and prototyping outside the main application. When component count starts to grow and we start to fetch data, we need a new pattern, <a target="_blank" href="https://learn.co/lessons/react-container-components">the Container Component pattern</a>. If using GraphQL for your data transport, we want to keep using this pattern but with a new twist. When creating isolated components, they should define the data they need to render. This can be better achieved by each component, even presentational ones, defining the data they need to render with their own GraphQL fragment.</p>
<h2 id="heading-show-time">Show Time</h2>
<p>Let's say we have a component which renders a list of Github issues showing their title. In the Container Component pattern, we would have a "container" component, <code>GithubIssueListContainer</code>, which handles running the query. After this, it passes down the data to its presentational components which need it to render, <code>GithubIssueInfoCard</code>.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> GITHUB_ISSUES_LIST_QUERY = gql<span class="hljs-string">`
  query GithubIssuesListContainerQuery {
    organization {
      id
      name
    }
    issues {
      totalCount
      pageInfo {
        endCursor
        hasNextPage
      }
      edges {
        node {
          id
          title
          description
        }
      }
    }
  }
`</span>;

<span class="hljs-keyword">const</span> GithubIssueListContainer = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-keyword">const</span> { loading, error, data } = useQuery(GITHUB_ISSUES_LIST_QUERY);
  <span class="hljs-keyword">if</span> (loading || error) <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;
  <span class="hljs-keyword">return</span> (
    &lt;&gt;
      {data.issues.edges.map(<span class="hljs-function">(<span class="hljs-params">edge</span>) =&gt;</span> (
        &lt;span key={edge.node.id}&gt;
          &lt;GithubIssueInfoCard issueDetails={edge.node} /&gt;
        &lt;/span&gt;
      ))}
    &lt;/&gt;
  );
}

<span class="hljs-keyword">interface</span> GithubIssueInfoCardProps {
  issueDetails: {
    id: <span class="hljs-built_in">string</span>;
    title: <span class="hljs-built_in">string</span>;
    description: <span class="hljs-built_in">string</span>;
  }
}

<span class="hljs-keyword">const</span> GithubIssueInfoCard = <span class="hljs-function">(<span class="hljs-params">{ issueDetails }</span>) =&gt;</span> {
  <span class="hljs-keyword">return</span> (
    &lt;&gt;
      {issueDetails.id} {issueDetails.title} {issueDetails.description}
    &lt;/&gt;
  )
}
</code></pre>
<p>The issue here is that <code>GithubIssueInfoCard</code> depends on its parent component to know where its data comes from in the GraphQL graph.</p>
<p>If we want to render a new field from the graph, e.g. <code>labels</code>, we will need to add that to the query in <code>GithubIssueListContainer</code> and pass it down to <code>GithubIssueInfoCard</code> via props. This requires changes to both the query in <code>GithubIssueListContainer</code> and the props in <code>GithubIssueInfoCard</code>.</p>
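<p>As a hypothetical sketch (the <code>labels</code> field and its shape are assumed for illustration), adding that field in this pattern means making two edits that have to stay in sync by hand:</p>

```typescript
// Hypothetical sketch: both of these edits must happen together, by hand.

// 1. In GithubIssueListContainer, the query grows a new field:
const GITHUB_ISSUES_LIST_QUERY = /* GraphQL */ `
  query GithubIssuesListContainerQuery {
    issues {
      edges {
        node {
          id
          title
          description
          labels        # <-- new field added here...
        }
      }
    }
  }
`;

// 2. ...and in GithubIssueInfoCard, the props must be mirrored manually
// (the string[] shape is assumed for illustration):
interface GithubIssueInfoCardProps {
  issueDetails: {
    id: string;
    title: string;
    description: string;
    labels: string[]; // <-- ...and here
  };
}
```

<p>Nothing forces these two edits to agree; forgetting one of them only shows up at runtime.</p>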
<h2 id="heading-this-is-the-way">This is the Way</h2>
<p>Following along with our mantra of isolation, what if <code>GithubIssueInfoCard</code> defined the data it needs to render from the GraphQL graph? That way, when we change what data this component renders, only this component needs to change.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> GITHUB_ISSUES_LIST_QUERY = gql<span class="hljs-string">`
  <span class="hljs-subst">${GITHUB_ISSUE_INFO_CARD_FRAGMENT}</span>
  query GithubIssuesListContainerQuery {
    organization {
      id
      name
    }
    issues {
      totalCount
      pageInfo {
        endCursor
        hasNextPage
      }
      edges {
        node {
          ...GithubIssueInfoCardFragment
        }
      }
    }
  }
`</span>;

<span class="hljs-keyword">const</span> GithubIssueListContainer = <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-keyword">const</span> { data } = useQuery(GITHUB_ISSUES_LIST_QUERY);
  <span class="hljs-keyword">return</span> (
    &lt;&gt;
      {data.issues.edges.map(<span class="hljs-function">(<span class="hljs-params">edge</span>) =&gt;</span> (
        &lt;span key={edge.node.id}&gt;
          &lt;GithubIssueInfoCard issueDetails={edge.node} /&gt;
        &lt;/span&gt;
      ))}
    &lt;/&gt;
  );
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> GITHUB_ISSUE_INFO_CARD_FRAGMENT = gql<span class="hljs-string">`
  fragment GithubIssueInfoCardFragment on Issue {
    id
    title
    description
  }
`</span>;

<span class="hljs-keyword">interface</span> GithubIssueInfoCardProps {
  issueDetails: {
    id: <span class="hljs-built_in">string</span>;
    title: <span class="hljs-built_in">string</span>;
    description: <span class="hljs-built_in">string</span>;
  }
}

<span class="hljs-keyword">const</span> GithubIssueInfoCard = <span class="hljs-function">(<span class="hljs-params">{ issueDetails }</span>) =&gt;</span> {
  <span class="hljs-keyword">return</span> (
    &lt;&gt;
      {issueDetails.id} {issueDetails.title} {issueDetails.description}
    &lt;/&gt;
  )
}
</code></pre>
<p>This might seem odd at first, but the benefits are worth it. As with anything in programming, it doesn't come without trade-offs.</p>
<h2 id="heading-benefits">Benefits</h2>
<h3 id="heading-less-parent-component-coupling">Less parent component coupling</h3>
<p>When a component defines the data it needs to render, it is de-coupled from its parent. If, for example, you wanted to show <code>GithubIssueInfoCard</code> on another page, you would import the fragment into that page's container component to get the right data fetched, e.g.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> {
  GITHUB_ISSUE_INFO_CARD_FRAGMENT,
  GithubIssueInfoCard,
} <span class="hljs-keyword">from</span> <span class="hljs-string">"./GithubIssueInfoCard"</span>;

<span class="hljs-keyword">const</span> NOTIFICATIONS_LIST_QUERY = gql<span class="hljs-string">`
  <span class="hljs-subst">${GITHUB_ISSUE_INFO_CARD_FRAGMENT}</span>
  query NotificationsContainerQuery {
    notifications {
      totalCount
      pageInfo {
        endCursor
        hasNextPage
      }
      edges {
        node {
          id
          eventText
          eventAssignee {
            id
            avatar
            username
          }
          relatedIssue {
            ...GithubIssueInfoCardFragment
          }
        }
      }
    }
  }
`</span>;
</code></pre>
<h3 id="heading-types-become-easier-to-maintain">Types become easier to maintain</h3>
<p>If using TypeScript, you are likely generating types from your GraphQL queries. A large benefit of our new pattern comes when defining props in components: you can define the data a component needs to render as a type from the generated types file.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { GithubIssueInfoCardFragment } <span class="hljs-keyword">from</span> <span class="hljs-string">"../../graphql-types"</span>;

<span class="hljs-keyword">interface</span> GithubIssueInfoCardProps {
  issueDetails: GithubIssueInfoCardFragment;
}
</code></pre>
<p>When the fragment changes, regenerate the types and no prop changes are needed!</p>
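<p>To illustrate, here is a self-contained sketch of how the generated type flows into the props (the fields here are assumptions; in a real project <code>GithubIssueInfoCardFragment</code> comes from the generated type file rather than being declared by hand):</p>

```typescript
// Stand-in for the generated fragment type; normally imported
// from "../../graphql-types" (the codegen output).
interface GithubIssueInfoCardFragment {
  id: string;
  title: string;
  description: string;
}

interface GithubIssueInfoCardProps {
  issueDetails: GithubIssueInfoCardFragment;
}

// The render logic only ever sees fields the fragment declares, so
// regenerating types after a fragment change updates the props for free.
function describeIssue({ issueDetails }: GithubIssueInfoCardProps): string {
  return `${issueDetails.id} ${issueDetails.title} ${issueDetails.description}`;
}
```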
<h3 id="heading-less-chance-of-changes-when-developing-component-first">Less chance of changes when developing component first</h3>
<p>With Storybook becoming popular, many developers are starting to develop components in Storybook first and then integrating them into the app at a later time. What may happen during that integration is that props turn out to be defined incorrectly.</p>
<p>By defining the fragment of the GraphQL graph this component needs to render, there is less chance of code changes when integration happens, because the developer is forced to know the exact shape of the data the component renders. This is of course only possible when the API is defined in advance, which isn't always the case.</p>
<h2 id="heading-trade-offs">Trade-offs</h2>
<p>Of course, like everything in programming, there are trade-offs in this approach. It's up to you to see if it's worth it.</p>
<h3 id="heading-presentational-components-are-not-generic">Presentational components are not generic</h3>
<p>The crummy thing is that our presentational components become more coupled to the application and API data model. If we want to migrate over to a component library for others to use, these components will need to be refactored to have their fragments removed. It's not too much work, but it is more work than the alternative.</p>
<h3 id="heading-fragments-sometimes-become-difficult-to-manage">Fragments sometimes become difficult to manage</h3>
<p>Importing many fragments into a single GraphQL query isn't the best experience. If we have many presentational components within a container component, importing them all can get hairy. Sometimes you may forget to interpolate a fragment, and Apollo can return some unhelpful error messages.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> GITHUB_ISSUES_LIST_QUERY = gql<span class="hljs-string">`
  <span class="hljs-subst">${GITHUB_ORG_INFO_CARD_FRAGMENT}</span>
  <span class="hljs-subst">${GITHUB_ISSUE_COUNT_CARD_FRAGMENT}</span>
  <span class="hljs-subst">${GITHUB_ISSUE_INFO_CARD_FRAGMENT}</span>
  query GithubIssuesListContainerQuery {
    ...GithubOrgInfoCardFragment
    issues {
      ...GithubIssueCountCardFragment
      pageInfo {
        endCursor
        hasNextPage
      }
      edges {
        node {
          ...GithubIssueInfoCardFragment
        }
      }
    }
  }
`</span>;
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>We have been using this pattern at Yolk for a while now and it has grown on everyone. We develop our components first in Storybook, and the pattern forces the developer to understand where the data is coming from and to ask questions about the data model and its usage.</p>
]]></content:encoded></item><item><title><![CDATA[Publishing a JavaScript Package to NPM automatically with Github Actions]]></title><description><![CDATA[Maintaining an open-source package can be a time-consuming task. Issues to be triaged, pull requests to be reviewed and changelogs to write. Publishing new versions of the code is usually done manually and making it automated is often on the back-bur...]]></description><link>https://blog.alec.coffee/publishing-javascript-package-automatically-with-github-actions</link><guid isPermaLink="true">https://blog.alec.coffee/publishing-javascript-package-automatically-with-github-actions</guid><category><![CDATA[GitHub]]></category><category><![CDATA[npm]]></category><category><![CDATA[github-actions]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Wed, 25 Mar 2020 15:03:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/wX2L8L-fGeA/upload/v1645301057771/o5w8yJRmGG.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<hr />
<p>Maintaining an open-source package can be a time-consuming task. Issues need to be triaged, pull requests reviewed and changelogs written. Publishing new versions of the code is usually done manually, and automating it is often on the back-burner of the maintainers' to-do list. A rock-solid release process has a few key pieces: the <a target="_blank" href="https://www.techopedia.com/definition/13934/changelog">changelog</a>, <a target="_blank" href="https://git-scm.com/book/en/v2/Git-Basics-Tagging">Git tags</a>, <a target="_blank" href="https://stackoverflow.com/questions/10972176/find-the-version-of-an-installed-npm-package">NPM versions</a>, and enforcing <a target="_blank" href="https://semver.org/">Semantic Versioning</a>. Keeping all of these in sync helps users understand what changed in a release and how to keep up-to-date. Maintainers who skip these steps will have a hard time triaging issues, which leads to more time spent debugging and less time spent coding. I recently came across a combo of tools, <a target="_blank" href="https://github.com/semantic-release/semantic-release">semantic-release</a> and <a target="_blank" href="https://github.com/features/actions">Github Actions</a>, which made the entire release process automated, transparent, and simple to understand.</p>
<h2 id="heading-how-it-works">How It Works</h2>
<p>Before we talk about implementation, it's important to understand what work our tools will perform. That way, if there are problems or modifications needed, we know where to look. semantic-release is going to do the majority of the work here; they say it best in their README.</p>
<blockquote>
<p>It automates the whole package release workflow including determining the next version number, generating the release notes and publishing the package.</p>
</blockquote>
<h3 id="heading-the-next-version-number">The Next Version Number</h3>
<p>During a release, to determine the next version number, the tool reads the commits since the last release. It knows your last release by looking at your Git tags. Based on the type of each commit, it can determine how to bump up the version of the package. For this to work, commits need to be written in a certain way. By default, semantic-release uses the <a target="_blank" href="https://github.com/angular/angular.js/blob/master/DEVELOPERS.md#-git-commit-guidelines">Angular Commit Message Conventions</a>. This is critical because consumers of the package need to know if a new version releases a new feature, introduces breaking changes or both. For example, if someone commits <code>fix(pencil): stop graphite breaking when too much pressure applied</code>, semantic-release knows this contains a fix and will create a patch release. This increases the version in the patch range (0.0.x).</p>
<blockquote>
<p>Never seen this type of versioning before? <a target="_blank" href="https://semver.org/">Check out Semantic Versioning</a>.</p>
</blockquote>
<p>After analyzing all the commits, it takes the highest-priority type of change and applies that one. For example, if two commits were introduced since the last release, one breaking (x.0.0) and one patch (0.0.x), it would know to bump only the major version.</p>
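<p>The priority rule can be sketched in a few lines of TypeScript (a simplified illustration of the idea, not semantic-release's actual implementation):</p>

```typescript
type ReleaseType = "patch" | "minor" | "major";

// Simplified mapping from Angular-style commit headers to release types.
function releaseTypeOf(commit: string): ReleaseType | null {
  if (commit.includes("BREAKING CHANGE")) return "major";
  if (commit.startsWith("feat")) return "minor";
  if (commit.startsWith("fix") || commit.startsWith("perf")) return "patch";
  return null; // e.g. docs/chore commits trigger no release
}

// The highest-priority type among all commits since the last release wins.
function nextReleaseType(commits: string[]): ReleaseType | null {
  const order: ReleaseType[] = ["patch", "minor", "major"];
  return commits.map(releaseTypeOf).reduce<ReleaseType | null>(
    (acc, t) =>
      t !== null && (acc === null || order.indexOf(t) > order.indexOf(acc))
        ? t
        : acc,
    null
  );
}
```

So a release containing both a <code>fix</code> and a <code>feat</code> commit comes out as a minor release, and any commit mentioning <code>BREAKING CHANGE</code> forces a major one.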
<h3 id="heading-generating-release-notes">Generating Release Notes</h3>
<p>Once it has determined what type of release the next version is, changelog notes are generated from the commits. semantic-release doesn't use a conventional CHANGELOG.md file to notify users of what has changed; it does so with a <a target="_blank" href="https://help.github.com/en/github/administering-a-repository/about-releases">Github Release</a> attached to a Git tag.</p>
<blockquote>
<p><a target="_blank" href="https://github.com/Yolk-HQ/next-utils/releases/tag/v1.0.3">An example of a Github Release that semantic-release generates and pushes on builds.</a></p>
</blockquote>
<h2 id="heading-automating-with-github-actions">Automating With Github Actions</h2>
<p>So semantic-release will be the tool to perform most of the work, but we still need a service to run the tool on. That is where <a target="_blank" href="https://github.com/features/actions">Github Actions</a> comes into play. When pull-requests are merged into master (or any base branch you configure), Github Actions will run a job that simply runs semantic-release with your configuration. All of the work we described previously will be performed.</p>
<blockquote>
<p><a target="_blank" href="https://github.com/Yolk-HQ/next-utils/runs/463521573?check_suite_focus=true">An example of a Github Actions run using semantic-release to publish a new release.</a></p>
</blockquote>
<h2 id="heading-steps-to-take">Steps to Take</h2>
<p>We will be using as many defaults as possible to make configuration dead simple. semantic-release uses a plugins system to enhance functionality. <a target="_blank" href="https://github.com/semantic-release/semantic-release/blob/master/docs/usage/plugins.md#default-plugins">Here are the default plugins semantic-release uses.</a></p>
<p>Let's go over the steps which will make this all run smoothly.</p>
<ol>
<li>Add a dummy version property to the package's package.json. Released code will have the proper version written to this field by semantic-release.</li>
</ol>
<pre><code class="lang-json">        <span class="hljs-string">"version"</span>: <span class="hljs-string">"0.0.0-development"</span>,
</code></pre>
<ol start="2">
<li>Add a new property to the package.json, <code>publishConfig</code>. This will be the home of our semantic-release configuration.</li>
</ol>
<pre><code class="lang-json">        <span class="hljs-string">"publishConfig"</span>: { <span class="hljs-attr">"access"</span>: <span class="hljs-string">"public"</span>, <span class="hljs-attr">"branches"</span>: [<span class="hljs-string">"master"</span>] }
</code></pre>
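<p>Since the release step later runs <code>npm run semantic-release</code>, the package.json also needs semantic-release installed as a dev dependency (<code>npm install --save-dev semantic-release</code>) and a matching script entry, along these lines:</p>

```json
"scripts": {
  "semantic-release": "semantic-release"
}
```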
<ol start="3">
<li>The last step is to create a Github Action YAML file. This will tell Github Actions what to do when a commit is made to the repository.</li>
</ol>
<pre><code class="lang-yml"># .github/workflows/test-and-release.yaml

name: Test and Release
on: [push]

jobs:
  test-and-release:
    name: Run tests and release
    runs-on: ubuntu-18.04
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Setup Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 12
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
      - name: Release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
        run: npm run semantic-release
</code></pre>
<ol start="4">
<li><p>Add <code>NPM_TOKEN</code> to the secrets in the Github repo's settings page. You can generate one of these from your NPM account at https://www.npmjs.com/settings//tokens</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301025702/Z7SaYZwFa.png" alt="screenshot of github repo settings screen" /></p>
</li>
</ol>
<p>And that's it! You have a fully automated package release process 🎉</p>
<h2 id="heading-bonus">Bonus</h2>
<p>I implemented this on a repo we recently open-sourced at Yolk AI. It's named next-utils and everything described here can be found there.</p>
<p>{% github Yolk-HQ/next-utils %}</p>
<p>Another great thing about using semantic-release with Github Actions is that it has out-of-the-box support for bot comments. It will go into every issue and pull-request closed since the last release and comment to make sure everyone is aware. Here is an example:</p>
<p>{% github https://github.com/Yolk-HQ/next-utils/issues/12#issuecomment-581484992 %}</p>
<blockquote>
<p>If you liked this post, check out more at <a target="_blank" href="https://blog.alec.coffee">https://blog.alec.coffee</a> and <a target="_blank" href="https://mailchi.mp/f91826b80eb3/alecbrunelleemailsignup">signup for my newsletter</a></p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Quit Google Analytics and Self-host Your Gatsby Statistics with Ackee]]></title><description><![CDATA[There are many different goals one can have when it comes to hosting your own website or blog. For myself, it means just having a place where I own the content of my words and can customize it to my liking. When it comes to analytics, my needs aren’t...]]></description><link>https://blog.alec.coffee/quit-google-analytics-self-hosted-gatsby-statistics-with-ackee</link><guid isPermaLink="true">https://blog.alec.coffee/quit-google-analytics-self-hosted-gatsby-statistics-with-ackee</guid><category><![CDATA[Google]]></category><category><![CDATA[privacy]]></category><category><![CDATA[Gatsby]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Wed, 12 Feb 2020 20:26:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/DYLsNF8hNho/upload/v1645300932131/0AyfzGhk1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are many different goals one can have when it comes to hosting your own website or blog. For myself, it means just having a place where I own the content of my words and can customize it to my liking. When it comes to analytics, my needs aren’t many, as most of my audience reads my content via platforms like <a target="_blank" href="http://dev.to">dev.to</a> or <a target="_blank" href="http://medium.com">Medium</a>. All I need to know is how many people visit my site, which posts are doing well and where users come from (referral links). Given my recent obsessive elimination of all things tracking and advertising in my life, I chose to stop supporting Google and move from Google Analytics to something self-hosted. It wasn't an easy product to use and most of the features were useless to me as I don't sell anything on my blog. This way I own the data and am not contributing it to a company that could use it in potentially malicious ways.</p>
<p>I set out to search for a new tracking tool for my blog. My criteria for choosing a new product were:</p>
<ul>
<li>Be simple</li>
<li>Have features I will use</li>
<li>Put a focus on privacy</li>
<li>Built with a programming language I know so making changes is easy</li>
<li>Be able to easily host on a Platform-as-a-Service like Heroku</li>
<li>Have the ability to be easily added to a Gatsby blog</li>
<li>Have an option to not collect unique user data such as OS, Browser Info, Device &amp; ScreenSize</li>
</ul>
<h2 id="heading-meet-ackee">Meet Ackee</h2>
<div class="Image__Medium">
  <img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1581282052/ackee-post/Screenshot_at_Feb_09_16-00-43.png" alt="ackee website homepage" />
  Beautiful, isn't it
</div>

<p>I came across <a target="_blank" href="https://ackee.electerious.com/">Ackee 🔮</a>, a self-hosted analytics tool. This tool fit my requirements almost perfectly. It is built using Node.js which I have experience in and it focuses on anonymizing data that it collects. More information on how Ackee anonymizes data <a target="_blank" href="https://github.com/electerious/Ackee/blob/master/docs/Anonymization.md">here</a>.</p>
<p>The steps to start collecting statistics with Ackee are: run it on a server (Heroku in my case), add the JavaScript tracker to your Gatsby site, and test that the data is flowing correctly.</p>
<blockquote>
<p>This a detailed guide on how I went about deploying it to Heroku. Afterwards, <a target="_blank" href="https://github.com/electerious/Ackee/pull/77">I contributed back a Deploy-to-Heroku</a> button which deploys it in one-click. <a target="_blank" href="https://github.com/electerious/Ackee/blob/master/docs/Get%20started.md#with-heroku">Find the button here</a>.</p>
</blockquote>
<h2 id="heading-up-and-running-on-heroku">Up and running on Heroku</h2>
<p>The first thing to do is start running the server which will receive the tracking data from your website.</p>
<ol>
<li><p>Create a new Heroku app instance</p>
<p><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1581282566/ackee-post/Screenshot_at_Feb_09_16-09-18.png" alt="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300887207/q9WCH3Kfj.png" /></p>
</li>
<li><p>Use the <a target="_blank" href="https://devcenter.heroku.com/articles/heroku-cli">heroku-cli</a> to upload the code</p>
<pre><code><span class="hljs-comment"># clone the code</span>
git clone git@github.com:electerious/Ackee.git

<span class="hljs-comment"># login to heroku</span>
heroku login

<span class="hljs-comment"># add the heroku remote</span>
heroku git:remote -a ackee-server

<span class="hljs-comment"># push the code</span>
git <span class="hljs-keyword">push</span> heroku master
</code></pre></li>
<li><p>Configure a MongoDB add-on, this is where the data will be stored</p>
<p><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1581282745/ackee-post/Screenshot_at_Feb_09_16-12-18.png" alt="https://cdn.hashnode.com/res/hashnode/image/upload/v1645300888836/MrQuu7ZSF.png" /></p>
</li>
<li><p><a target="_blank" href="https://devcenter.heroku.com/articles/config-vars#using-the-heroku-cli">Configure the environment variables</a></p>
<pre><code>heroku config:set ACKEE_PASSWORD<span class="hljs-operator">=</span><span class="hljs-operator">&lt;</span>your password<span class="hljs-operator">&gt;</span>
heroku config:set ACKEE_USERNAME<span class="hljs-operator">=</span><span class="hljs-operator">&lt;</span>your username<span class="hljs-operator">&gt;</span>
</code></pre></li>
</ol>
<p>And voila! You are finished; that was easy, wasn't it? Open the webpage Heroku automatically configures for you (it should be <a target="_blank" href="https://ackee-instance.herokuapp.com/"><code>https://ackee-server.herokuapp.com/</code></a>) and you should see this:</p>
<div class="Image__Small">
  <img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1581283089/ackee-post/Screenshot_at_Feb_09_16-18-00.png" alt="ackee login page" />
    The log in page!
</div>

<h2 id="heading-adding-the-tracker">Adding the tracker</h2>
<p>Now we need to send data over from the website to the server we have running on Heroku. If you are using Gatsby, this is incredibly easy with the <code>gatsby-plugin-ackee-tracker</code> plugin.</p>
<ol>
<li><p>Install the tracker</p>
<pre><code>npm install gatsby<span class="hljs-operator">-</span>plugin<span class="hljs-operator">-</span>ackee<span class="hljs-operator">-</span>tracker
</code></pre></li>
<li><p>Create a domain on Ackee and get the domain id. Find this option in the settings tab of your Ackee instance.</p>
</li>
<li>Add it to your Gatsby config</li>
</ol>
<pre><code class="lang-javascript">{
    <span class="hljs-attr">resolve</span>: <span class="hljs-string">"gatsby-plugin-ackee-tracker"</span>,
    <span class="hljs-attr">options</span>: {
        <span class="hljs-comment">// Domain ID found when adding a domain in the admin panel.</span>
        <span class="hljs-attr">domainId</span>: <span class="hljs-string">"&lt;your domain id&gt;"</span>,
        <span class="hljs-comment">// URL to Server eg: "https://analytics.test.com".</span>
        <span class="hljs-attr">server</span>: <span class="hljs-string">"https://ackee-server.herokuapp.com"</span>,
        <span class="hljs-comment">// Disabled analytic tracking when running locally</span>
        <span class="hljs-comment">// IMPORTANT: Set this back to false when you are done testing</span>
        <span class="hljs-attr">ignoreLocalhost</span>: <span class="hljs-literal">true</span>,
        <span class="hljs-comment">// If enabled it will collect info on OS, BrowserInfo, Device  &amp; ScreenSize</span>
        <span class="hljs-comment">// False due to detailed information being personalized:</span>
        <span class="hljs-comment">// https://github.com/electerious/Ackee/blob/master/docs/Anonymization.md#personal-data</span>
        <span class="hljs-attr">detailed</span>: <span class="hljs-literal">false</span>
    }
},
</code></pre>
<ol start="4">
<li><p>Run the site locally</p>
<pre><code><span class="hljs-attribute">gatsby</span> develop
</code></pre></li>
</ol>
<h2 id="heading-testing-to-make-sure-it-worked">Testing to make sure it worked</h2>
<p>Open up your site at <code>http://localhost:8000</code> and navigate to a new URL.</p>
<p>Observe the network requests your site is sending. You will notice it now sends requests to your Heroku instance.</p>
<div class="Image__Small">
  <img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1581283787/ackee-post/Screenshot_at_Feb_09_16-29-09.png" alt="using the brave browser dev tools" />
    Using the dev tools
</div>

<p>And with that, we now have the server running Ackee and our Gatsby site sending analytics!</p>
<h2 id="heading-what-you-get">What you get</h2>
<p>Let’s explore Ackee, shall we?</p>
<div class="Image__Small">
  <img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1581518650/ackee-post/Screenshot_at_Feb_12_09-32-59.png" alt="ackee home page screenshot" />
    Home page with total site views
</div>

<div class="Image__Small">
  <img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1581518650/ackee-post/Screenshot_at_Feb_12_09-33-47.png" alt="ackee list of referrers screenshot" />
    List of referrers
</div>

<div class="Image__Small">
  <img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1581518650/ackee-post/Screenshot_at_Feb_12_09-31-43.png" alt="ackee per page view count screenshot" />
    Per page view count
</div>

<h2 id="heading-alternatives">Alternatives</h2>
<p>Here are some alternative methods I considered when thinking about analytics for my blog.</p>
<h3 id="heading-no-tracking">No tracking</h3>
<p>Combined with the fact that more and more people are blocking trackers altogether (Firefox, Brave and Chrome ad-blocking extensions), JavaScript-based tracking is becoming less and less valuable over time. Analytics can easily become a way to be vain about your blog, and you can fall into a bad habit of constantly checking them (time wasted compared to producing actual content). Deciding not to track any analytics at all is not a bad decision these days.</p>
<h3 id="heading-server-side-analytics">Server-side analytics</h3>
<p>The most private and fast way of collecting analytics on your website may be to collect them at the server level. What this means is that instead of using a JavaScript tracker (which may be blocked by the browser), stats are collected when the HTML is served. Integration with your static host provider or DNS provider is needed here. The main cons of this method are that data is collected by a third-party service and that it usually isn't free. <a target="_blank" href="https://www.cloudflare.com/en-ca/analytics/">Cloudflare</a> offers these types of analytics, as does <a target="_blank" href="https://www.netlify.com/products/analytics/">Netlify</a>. A huge benefit is the ease of setup: usually the provider just turns it on with a switch on their side, with no setup needed from you.</p>
]]></content:encoded></item><item><title><![CDATA[1 year with Cypress: The Guide to End-To-End Testing 🚀]]></title><description><![CDATA[In software development, the faster you move, the more things break. As a codebase grows larger and larger, its pieces become more and more complex, every line adding a potential bug. The best organizations keep a handle on this through rigorous amou...]]></description><link>https://blog.alec.coffee/the-hitchhikers-guide-to-cypress-end-to-end-testing</link><guid isPermaLink="true">https://blog.alec.coffee/the-hitchhikers-guide-to-cypress-end-to-end-testing</guid><category><![CDATA[Cypress]]></category><category><![CDATA[Testing]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Fri, 20 Dec 2019 13:35:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/cbEvoHbJnIE/upload/v1645301183069/inHTxr8TP.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In software development, the faster you move, the more things break. As a codebase grows larger and larger, its pieces become more and more complex, every line adding a potential bug. The best organizations keep a handle on this through rigorous amounts of testing. Manual testing requires a lot of effort, that's where automated testing comes in. One of the hot frameworks on the scene is <a target="_blank" href="https://www.cypress.io/">Cypress</a>, a complete end-to-end testing solution.</p>
<p>In the past, web-app end-to-end testing has been a tricky beast. <a target="_blank" href="https://www.npmjs.com/package/selenium-webdriver">Selenium</a> has been the main solution for quite some time and has a huge history. It has great browser compatibility, but keeping your tests consistent is difficult because it wasn't designed for modern app testing. That's why I got so excited when I heard about Cypress, promising to fix all of the old and broken ways of past frameworks. After writing and reviewing close to 200 test scenarios in the past year (that's with a small team), I wanted to write about what I wish I had known when I started and share my thoughts on my journey with Cypress thus far.</p>
<h2 id="heading-whats-in-the-box">What's in the box</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301113258/OKN_EPlBE.png" alt="cypress features from website" /></p>
So many features packed in 😃

<p>End-to-end testing has always been a fragmented experience. You need to bring a lot of your own tools: for example, a test runner, an assertion library, and maybe other things like mocks. Cypress packages all of those things together, which makes setup and configuration dead simple. Not only that, the documentation is some of the best I have ever read in my career, with <a target="_blank" href="https://docs.cypress.io/guides/guides/command-line.html#Installation">guides on everything</a> you are likely to encounter. Not only do they do a great job of telling you how to use the product, they also have in-depth explanations of the <a target="_blank" href="https://docs.cypress.io/guides/overview/key-differences.html#Architecture">architecture</a>, <a target="_blank" href="https://docs.cypress.io/guides/overview/key-differences.html#Flake-resistant">flaky tests</a> and <a target="_blank" href="https://docs.cypress.io/guides/references/best-practices.html">best practices</a>.</p>
<h2 id="heading-prototyping">Prototyping</h2>
<p>{% vimeo 379033499 %}</p>
<p>If you have the chance, before adopting anything of this scale, I always think it's a good idea to test it on a small project first, just to get a feel. Before advocating for it, I added it to my personal blog, just to see how the experience was.</p>
<p>A very simple scenario:</p>
<ul>
<li>Load up the app</li>
<li>Go to the index page</li>
<li>Click the first blog post link</li>
<li>Assert content shows up</li>
</ul>
<p>I was blown away by how little time it took: under an hour. It was really as simple as <a target="_blank" href="https://github.com/aleccool213/blog/blob/eb20d81a1bd10ba6037d5ac26ee5142ce951d7df/cypress/integration/home_page_spec.js#L8">writing a few lines of JavaScript for the test itself</a>, adding <a target="_blank" href="https://github.com/aleccool213/blog/blob/60aa908e27aa83a539e563498b34f71a93167ec6/package.json#L70">the npm script to the package.json</a>, and <a target="_blank" href="https://github.com/aleccool213/blog/blob/8fa71b2486660e0175ee49cea1559112a224f146/circle.yml#L39">running it in CircleCI</a>. Not only did Cypress perform the assertions, it was also recording videos! The setup could have been even faster if I had used the <a target="_blank" href="https://github.com/cypress-io/circleci-orb">CircleCI Cypress Orb</a>.</p>
<p>This got me a huge amount of test coverage in very little time. This proof of concept was more than enough to convince my team that Cypress was the right choice when it came time to start writing end-to-end automated tests.</p>
<h2 id="heading-decisions-and-tradeoffs">Decisions and Tradeoffs</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301114702/9cToh_sme.png" alt /></p>
<p>The browser-based products we have at <a target="_blank" href="https://www.yolk.ai/">Yolk</a> are completely separate from the server-side APIs they fetch data from; they are built and served separately. This presents a few ways forward when deciding how to write end-to-end tests. You can either deploy your backend alongside your frontend and test as if the app were in production, or completely mock out API responses. Using a real backend means spinning up potentially memory-intensive processes on CI, but you get the assurance that the apps under test are near-production. By mocking your API responses, you test less of your stack, risk stubbing out unrealistic responses, and take on the extra maintenance of keeping them up-to-date.</p>
<p>We decided on deploying live instances of the backends related to the app we were testing. This decision was easy for us to make because we already had a CLI tool to do much of the hard work. This tool (aptly named yolk-cli) downloads the latest Docker images for apps and knows how to spin up products with minimal configuration. This made getting the real APIs running on CI a manageable task.</p>
<blockquote>
<p>Turns out, running two or three large python apps and a few <a target="_blank" href="https://nextjs.org/">Next.js</a> servers on CircleCI does crap out the memory limit pretty fast. We reached out to CircleCI and they gave us access to their large resource classes (up to 16gb of RAM), score!</p>
</blockquote>
<h2 id="heading-seeding-data">Seeding Data</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301115959/JYNuDhy03.png" alt /></p>
<p>The next challenge we faced was seeding data. Your test scenarios must share as little state as possible with each other. This is a testing fundamental, and <a target="_blank" href="https://docs.cypress.io/guides/references/best-practices.html#Having-tests-rely-on-the-state-of-previous-tests">Cypress addresses it</a> in their guides. Keeping test scenarios data-independent goes a long way when debugging why things are going wrong. On the flip side, having all of your data created through the UI makes for slow tests; there is a balance. This will be highly customized to how your app works, but I will go into what worked for us.</p>
<p>Going back to our cli tool once again, it had a few commands which seeded some basic data. The commands looked like this:</p>
<p><code>yolk seed-articles</code></p>
<p><code>yolk seed-bots</code></p>
<p>Start with data that is basic to your app (static data or very high-level entities, for example); seeding it will speed up this process and is easy to run on each CI build.</p>
<p>The next part is seeding data for entities that are more specific to individual tests. This is where things get contested; there is no silver bullet. We decided to call the APIs directly for these situations and use <a target="_blank" href="https://docs.cypress.io/api/cypress-api/custom-commands.html#Syntax">Cypress custom commands</a> to initiate these requests. This was a good fit because we use GraphQL; the custom commands that call the API were easy to write and document.</p>
<p>{% gist https://gist.github.com/aleccool213/5df73bab0eff93f4be583178e32a6554 %}</p>
<p>Writing custom commands for actions your tests perform over and over is a great way to consolidate code, not just data seeders!</p>
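<p>Outside of Cypress itself, the heart of such a seeder is just a GraphQL request body that a custom command would pass to <code>cy.request()</code>. A minimal sketch in plain JavaScript, assuming a hypothetical <code>createArticle</code> mutation (your real schema will differ):</p>

```javascript
// Build the POST body for a hypothetical createArticle mutation.
// In a Cypress custom command, this body would be sent with cy.request().
function buildSeedArticleRequest(title, body) {
  return {
    query: `
      mutation CreateArticle($title: String!, $body: String!) {
        createArticle(title: $title, body: $body) { id }
      }
    `,
    variables: { title, body },
  };
}

const request = buildSeedArticleRequest("Hello", "First article body");
console.log(request.variables.title); // "Hello"
```

<p>Keeping the request-building separate from Cypress also makes these helpers easy to unit test on their own.</p>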
<h2 id="heading-writing-scenarios-with-gherkin">Writing Scenarios with Gherkin</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301117236/SRrviBYUV.png" alt /></p>
<p>If you have written end-to-end tests before, you may be familiar with Gherkin syntax, used by Cucumber. This is an expressive, English-like way to write test scenarios. It can help with documenting your features and non-developers can contribute to writing test cases. We found <a target="_blank" href="https://github.com/TheBrainFamily/cypress-cucumber-preprocessor">a way to integrate this file syntax into Cypress using a plugin</a>.</p>
<p>{% gist https://gist.github.com/aleccool213/2747de9793028d16c8cad3e3f6fb3b85 %}</p>
<p>After writing these scenarios, the plugin hands off to Cypress to run the actual step implementations:</p>
<p>{% gist https://gist.github.com/aleccool213/8b5af73f64791b6d551787e06bab05d5 %}</p>
<h2 id="heading-asserting-elements-and-best-practices">Asserting Elements and Best Practices</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301118859/PyRf4Ux1Z.png" alt /></p>
<p>When it comes down to it, end-to-end testing is just making sure elements on the page have the correct content. When writing Cypress tests, 90% of the time you will be selecting elements and peering inside them. Cypress has a standard <a target="_blank" href="https://docs.cypress.io/api/commands/get.html#Syntax">get()</a> command which exposes a jQuery-like selector; this should be familiar to those who have worked with Selenium. The problem with this selector is that it can be used incorrectly and you can't enforce (with code) its usage. Welcome <a target="_blank" href="https://github.com/testing-library/cypress-testing-library">cypress-testing-library</a>, a wonderful tool maintained by a great testing advocate in the community, <a target="_blank" href="https://kentcdodds.com/">Kent C. Dodds</a>.</p>
<p>This plugin exposes a myriad of commands prefixed with <code>find</code> which work similarly to how <code>get()</code> does in native Cypress. All of these commands make for selectors <a target="_blank" href="https://kentcdodds.com/blog/making-your-ui-tests-resilient-to-change">that are resilient to change</a>. This can have a dramatic effect on how your tests stay consistent as your application progresses.</p>
<h2 id="heading-debugging">Debugging</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301120360/Z1haAPPAA.png" alt /></p>
<p>If you have ever worked with Selenium before, you know that debugging end-to-end tests can be somewhat of a nightmare. With Cypress, this pain is at an all-time low. Debugging is a focus of the core product and is one of the more pleasant experiences in your Cypress journey. As with most things, <a target="_blank" href="https://on.cypress.io/plugins-guide">they have a great guide to get you started</a>.</p>
<p>Most of what they mention is great, but the case you will run into most often is an incorrect selector. For this type of scenario, the GUI is a great way to find out what is going wrong. <a target="_blank" href="https://vimeo.com/237115455">There is a nice video explaining how to write your first test</a> and it shows the GUI in action.</p>
<h2 id="heading-visual-testing-and-catching-regressions">Visual Testing and Catching Regressions</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301122100/uIpGvlsH_.png" alt /></p>
<p>Another critical part of end-to-end testing is how things look. HTML and CSS play a huge part in how your application looks in different scenarios. Cypress can give you a lot of coverage in terms of how your app works but starts to break down when you want to assert its looks. Especially when it comes to browser compatibility and the different screen sizes your application is used at, visual regressions are hard to catch without a proper implementation of <a target="_blank" href="https://blog.hichroma.com/visual-testing-the-pragmatic-way-to-test-uis-18c8da617ecf">Visual Snapshot Testing</a>.</p>
<p>The solution we ended up with was <a target="_blank" href="https://percy.io/">Percy</a>, as it integrates nicely with Cypress and <a target="_blank" href="https://storybook.js.org/">Storybook</a>. It takes the HTML and CSS being rendered in your Cypress test scenario and sends it over to Percy's servers. Percy then renders the markup in its own internal browsers, with Chrome and Firefox as options. Percy knows which feature branch your Cypress test runs in and compares the result against your configured base branch. This gives you great confidence in pull requests when you don't know whether a change affects the look of a certain component. It can also be a big time-saver if your Cypress tests contain a lot of code asserting CSS values or how things should look.</p>
<p>Hot tip: you can have Cypress take snapshots locally, and send them to Percy only when it's enabled, by creating a new <code>takeSnapshot</code> custom command:</p>
<p>{% gist https://gist.github.com/aleccool213/e0445d07bdab874cea06eb96284d2661 %}</p>
<h2 id="heading-parallel-builds-and-the-cypress-dashboard">Parallel Builds and the Cypress Dashboard</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301123795/SyHEc8xwA.png" alt /></p>
<p>Once test runs become long enough, you will start looking for strategies to speed them up. Parallelization is possible because Cypress runs each feature scenario file with a clean state. You can decide on your own balancing strategy for how your tests are broken up, but the hosted <a target="_blank" href="https://docs.cypress.io/faq/questions/dashboard-faq.html#What-is-the-Dashboard">Cypress Dashboard</a> provides a way to do this automatically.</p>
<p>Let's say I can afford three CircleCI containers to run my Cypress tests. First, I define <code>parallelism: 3</code> in <a target="_blank" href="https://circleci.com/docs/2.0/parallelism-faster-jobs/">my CircleCI job config</a>. This spins up three instances of the job, each with a different job id. Pass those ids off to Cypress and you are off to the races: if Cypress Dashboard is set up correctly, the service tells each container which tests to run. Here is an example of the config:</p>
<p>{% gist https://gist.github.com/aleccool213/6e7ba6f9f3d975509a207e3c6c95923f %}</p>
<blockquote>
<p>The super neat thing about this is that Cypress Dashboard knows your past test history and their speeds. It will use this knowledge to optimize your parallel builds by making sure the containers get a balanced load of tests to run!</p>
</blockquote>
<p>Don't worry if this doesn't make much sense to you, <a target="_blank" href="https://docs.cypress.io/faq/questions/dashboard-faq.html#My-CI-setup-is-based-on-Docker-but-is-very-custom-How-can-I-load-balance-my-test-runs">Cypress has answered how to do this</a>.</p>
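<p>That balancing idea is easy to picture with a toy greedy scheduler: hand the longest specs out first, always to the least-loaded container. This is purely illustrative and not the Dashboard's actual algorithm; the spec names and durations are made up:</p>

```javascript
// Toy greedy balancer: assign each spec (longest first) to the container
// with the least accumulated runtime so far.
function balanceSpecs(specDurations, containerCount) {
  const containers = Array.from({ length: containerCount }, () => ({
    total: 0,
    specs: [],
  }));
  const sorted = Object.entries(specDurations).sort((a, b) => b[1] - a[1]);
  for (const [spec, duration] of sorted) {
    // Pick the container with the smallest total runtime.
    const target = containers.reduce((min, c) => (c.total < min.total ? c : min));
    target.total += duration;
    target.specs.push(spec);
  }
  return containers;
}

const plan = balanceSpecs(
  { "login.feature": 90, "search.feature": 60, "checkout.feature": 50 },
  2
);
console.log(plan[0].specs); // [ 'login.feature' ]
```

<p>The Dashboard's advantage over a static split like this is that it keeps the duration data up to date from your real test history.</p>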
<h2 id="heading-browser-support">Browser Support</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301125250/gy-QegzOm.png" alt /></p>
<p>Unfortunately, if your organization needs to support IE11, you are out of luck. <a target="_blank" href="https://github.com/cypress-io/cypress/issues/310#issuecomment-337349727">The Cypress team has explicitly said they won't be supporting it</a>. <a target="_blank" href="https://github.com/cypress-io/cypress/issues/310">There is an incredible thread on GitHub</a> that I really hope you read through. It goes into why they are rolling out browser support slowly, why they didn't choose WebDriver from the beginning, and why they wrote their own custom driver.</p>
<p>For us at Yolk, we needed IE11 support for a couple of our applications. We kept getting regressions in IE11 and needed more comprehensive test coverage. We decided to use <a target="_blank" href="https://www.browserstack.com/automate">Browserstack Automate</a> and Selenium to cover these apps. For CI, we already had the app built and running for Cypress; we just needed to add a new build step that ran these tests through the <a target="_blank" href="https://github.com/browserstack/browserstack-local-nodejs">Browserstack Local proxy</a>.</p>
<p>For the tests themselves, we decided to integrate Selenium with <a target="_blank" href="https://github.com/cucumber/cucumber-js">Cucumber</a>, a common pairing. To make this process easier, we copied our Gherkin <code>.feature</code> files over to a new folder and wrote specific Selenium-based step implementations.</p>
<blockquote>
<p>A cool concept I had an idea for was to re-use the same <code>.feature</code> files across both Cypress and Selenium. If anyone has ideas on this, please comment below with your suggestion 😃</p>
</blockquote>
<p>How far you take this strategy depends on whether duplicate test coverage is worth it to you. For us, having at least happy-path end-to-end coverage in IE11 gave us a huge amount of confidence when deploying, so the cost was worth it. In my opinion, it isn't as bad as it seems: our Cypress tests cover Chromium-based browsers (with Firefox support coming soon) and our Selenium tests cover IE11. With IE11 being phased out more and more, even in the enterprise, the need for Selenium will shrink and the case for Cypress will only get stronger.</p>
<h2 id="heading-bonus-typescript-support-and-code-coverage">Bonus: TypeScript Support and Code Coverage</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301126610/ykPTJ8Xnx.png" alt /></p>
<p>All of the libraries and modules I have mentioned come with TypeScript support. Getting TypeScript to work with Cypress doesn't require much configuration and is totally worth it in the long run. All you need are the Webpack config, TypeScript config, and plugin files which integrate with Cypress. A good guide provided by Cypress is <a target="_blank" href="https://docs.cypress.io/guides/tooling/typescript-support.html">here</a>.</p>
<p>I know a lot of people wonder about code coverage and generating reports; Cypress can do that as well! Again, there <a target="_blank" href="https://github.com/cypress-io/code-coverage">is a nice plugin</a> that lets you do it. The one caveat is that it attaches coverage counters to your code, so running your tests will be slower and may not mimic production. A good strategy is to generate reports locally once in a while to see how you are doing.</p>
<p>If your backend and frontend are both in TypeScript, a cool idea is to collect code coverage in both apps while Cypress runs. You can then see coverage across your entire stack!</p>
]]></content:encoded></item><item><title><![CDATA[Why I use Fish Shell over Bash and Zsh 🐟]]></title><description><![CDATA[One of the main allurements of Apple is that things "just work". Most of the people who use their products are covered with the features they release and Apple spends little time on anything else. The features they do ship, are polished, have sensibl...]]></description><link>https://blog.alec.coffee/beautiful-dev-tools-fish-shell</link><guid isPermaLink="true">https://blog.alec.coffee/beautiful-dev-tools-fish-shell</guid><category><![CDATA[shell]]></category><category><![CDATA[zsh]]></category><category><![CDATA[Bash]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Mon, 08 Jul 2019 12:56:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/crUGdn1j-RE/upload/v1645301331826/3_xuwwt7E.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One of the main allurements of Apple is that things "just work". Most of the people who use their products are covered with the features they release and Apple spends little time on anything else. The features they do ship, are <strong>polished</strong>, have <strong>sensible defaults</strong>, and are <strong>intentional</strong>. This is what I believe the <a target="_blank" href="https://fishshell.com">Fish shell</a> has become. No wasted time scouring the web for config files others have shared, the best plugins to use, or how to get integrations working with your particular setup.</p>
<p>This shell is meant for <strong>most</strong> people (Fish stands for <strong>Friendly Interactive Shell</strong>), which is why I can recommend it to anyone I work with. They have a very detailed <a target="_blank" href="https://fishshell.com/docs/current/design.html">design document</a>. It's not meant for the likes of system admins who are constantly logging into multiple servers a day, and it will never be the default installed shell on most operating systems.</p>
<p>Once you install it, <code>brew install fish</code>, you are off to the races. You have a shell where you can become super productive and your favourite tools work as intended. It doesn't try to be the best at everything, but nails the essential core features which make the user experience extremely enjoyable.</p>
<h2 id="heading-tldr">TLDR</h2>
<ul>
<li>Syntax highlighting</li>
<li>Inline auto-suggestions based on history</li>
<li>Tab completion using man page data</li>
<li>Intuitive wildcard support</li>
<li>Web based configuration</li>
</ul>
<table>
  <tr><td><img src="https://media.giphy.com/media/QC7Pr3M4gN0yuEDGgj/giphy.gif" alt /></td></tr>
</table>

<h3 id="heading-lets-break-it-down">Let's break it down</h3>
<h4 id="heading-syntax-highlighting">Syntax highlighting</h4>
<p>My worst memories of bash come from the absence of this feature, <a target="_blank" href="https://fishshell.com/docs/current/tutorial.html#tut_syntax_highlighting">syntax highlighting</a>. A simple thing which makes you think, "wow, now I am using a shell from the 90's"! You can notice it working in the below gif when I try to go to <code>folder_that_doesnt_exist</code>, the text turns red. The text then turns blue when it's a valid command.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301303053/saTOd3_Mj.gif" alt /></p>
<h4 id="heading-inline-auto-suggestions-based-on-history">Inline auto-suggestions based on history</h4>
<p>Smart <a target="_blank" href="https://fishshell.com/docs/current/index.html#autosuggestions">auto-suggestions</a> are seldom seen, let alone built-in. Instead of just beating the competition, the Fish team thought to demolish it. Using your command history, it suggests commands which you can complete with the <code>right-arrow key</code>. You can also, as I do in this gif, auto-complete one word or folder at a time with <code>option + right-arrow key</code>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301304885/Uj7mlY9A2.gif" alt /></p>
<blockquote>
<p>Fun fact, if search results are huge, Fish shell will paginate!</p>
</blockquote>
<h4 id="heading-tab-completion-using-man-page-data">Tab completion using man page data</h4>
<p>Fish can do this because <a target="_blank" href="https://fishshell.com/docs/current/index.html#completion">it knows how to parse CLI tool man pages</a> in many different formats. Git, the Docker CLI, package.json scripts: most commands you try will have auto-completions.</p>
<p>You can use <code>tab</code> to get all the options.</p>
<table>
  <caption>All npm scripts for this package, with values of what they actually run, IN THE TERMINAL WUT
</caption>
  <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1565390833/fish-post/2019-08-09_18.46.53.gif" alt="example-of-fish-shell-tab-complete" /></td></tr>
</table>

<h4 id="heading-intuitive-wildcard-support">Intuitive wildcard support</h4>
<p>In bash, I never liked having to use different flags for selecting files or contents of a folder.</p>
<p>Regularly, this would be done with:</p>
<pre><code class="lang-bash">rm -r folder_1
</code></pre>
<p>I have always been a fan of familiarity, and <a target="_blank" href="https://fishshell.com/docs/current/tutorial.html#tut_wildcards">wildcards</a> are just that. You can use them in any command to filter down to the exact files you need with ease.</p>
<p>e.g.</p>
<pre><code><span class="hljs-attribute">ls</span> <span class="hljs-regexp">*.jpg</span>
</code></pre><table>
  <caption>How I feel while using Fish
</caption>
  <tr><td><img src="https://media.giphy.com/media/26tPplGWjN0xLybiU/giphy.gif" alt /></td></tr>
</table>

<h4 id="heading-web-based-configuration">Web based configuration</h4>
<p>Type in:</p>
<pre><code>fish_config
</code></pre><p>and you get an entire website dedicated to messing around with any config you do need to touch.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301306692/Z8d4l2ENz.png" alt /></p>
<h3 id="heading-a-tiny-customization-needed-to-go-the-extra-mile">A tiny customization needed to go the extra mile</h3>
<p>There aren't a lot of extra packages needed for Fish. Personally, I only use two, which is wild because at one point my Oh-My-Zsh plugin count was past ten.</p>
<h4 id="heading-oh-my-fish">Oh My Fish</h4>
<p>A homage to the great Oh My Zsh, <code>omf</code> is the most popular package manager for Fish. I use this to install just two packages, one for <a target="_blank" href="https://github.com/derekstavis/plugin-nvm">nvm</a> and one for <a target="_blank" href="https://github.com/matchai/spacefish/">spacefish</a>.</p>
<h4 id="heading-spacefish">SpaceFish</h4>
<p>Special mention to <a target="_blank" href="https://github.com/matchai/spacefish/">Spacefish</a> for being the best shell prompt I have ever used. Support for showing:</p>
<ul>
<li>Current Git branch and rich repo status</li>
<li>Current Node.js version, through nvm</li>
<li>Package version, if there is a package in the current directory (package.json for example)</li>
</ul>
<table>
  <caption>Spacefish example
</caption>
  <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1565391692/fish-post/spacefish_example.png" alt="spacefish-shell-prompt-example" /></td></tr>
</table>

<h4 id="heading-config-file">Config file</h4>
<p>You also have access to a config file at <code>~/.config/fish/config.fish</code>. This is where you can set up aliases or add extra path entries.</p>
<h3 id="heading-caveats">Caveats</h3>
<p>Not being POSIX compliant can scare some developers away. But really, in my three years of usage (mostly Node.js, JavaScript, Ruby, etc.), I have not encountered any issues. For the odd Bash-specific command I get from the internet, I'll just <code>exit</code> and come back to Fish when I finish. <a target="_blank" href="https://stackoverflow.com/questions/48732986/why-how-fish-does-not-support-posix">This Stack Overflow post</a> goes into it more if you are so inclined.</p>
<h4 id="heading-but-its-easy-to-be-compatible">But it's easy to be compatible...</h4>
<p>Say you have a Bash script to run; with Fish you still can:</p>
<pre><code class="lang-bash">bash script.sh
</code></pre>
<p>Another tip is that you can put this at the top of the file:</p>
<pre><code class="lang-bash"><span class="hljs-meta">#!/usr/bin/env bash</span>
</code></pre>
<p>and then make sure it's executable:</p>
<pre><code class="lang-bash">chmod +x script.sh
</code></pre>
<p>and voila, you can run it as a regular script:</p>
<pre><code class="lang-bash">./script.sh
</code></pre>
<h2 id="heading-resources">Resources:</h2>
<ul>
<li><a target="_blank" href="https://fishshell.com/">Fish Shell Website</a></li>
<li><a target="_blank" href="https://fishshell.com/docs/current/tutorial.html#tut_syntax_highlighting">Fish Shell Syntax Highlighting</a></li>
<li><a target="_blank" href="https://fishshell.com/docs/current/index.html#autosuggestions">Fish Shell Autosuggestions</a></li>
<li><a target="_blank" href="https://rootnroll.com/d/fish-shell/">Try out the Fish Shell tutorial online</a></li>
<li><a target="_blank" href="https://github.com/oh-my-fish/oh-my-fish">Oh My Fish Package Manager</a></li>
<li><a target="_blank" href="https://github.com/derekstavis/plugin-nvm">NVM wrapper plugin</a></li>
<li><a target="_blank" href="https://github.com/matchai/spacefish/">Spacefish Fish Shell Theme</a></li>
<li><a target="_blank" href="https://github.com/jorgebucaran/awesome-fish">List of awesome Fish related software</a></li>
<li><a target="_blank" href="https://github.com/jorgebucaran/fisher">Fisher, another package manager with a file-based extension config</a><ul>
<li><a target="_blank" href="https://github.com/elliottsj/dotfiles/blob/master/common/.config/fish/fishfile">A friend's fishfile for Fisher</a></li>
</ul>
</li>
<li><a target="_blank" href="https://github.com/edc/bass">Support Bash scripts in Fish</a></li>
</ul>
<blockquote>
<p>Like this post? Consider <a target="_blank" href="https://www.buymeacoffee.com/yourboybigal">buying me a coffee</a> to support me writing more. </p>
<p>Want to receive quarterly emails with new posts? <a target="_blank" href="https://mailchi.mp/f91826b80eb3/alecbrunelleemailsignup">Signup for my newsletter</a> </p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Hacking my Honeymoon with JavaScript 🦒]]></title><description><![CDATA[When my wife saw this post on Instagram, she was immediately hooked:
{% instagram BvmFY7DgPyg %}
With our honeymoon in Kenya on the horizon, we set out to book a room. Consulting my aunt who had been to Kenya years ago, she stayed here and had no dif...]]></description><link>https://blog.alec.coffee/hacking-my-honeymoon-with-javascript</link><guid isPermaLink="true">https://blog.alec.coffee/hacking-my-honeymoon-with-javascript</guid><category><![CDATA[JavaScript]]></category><category><![CDATA[twilio]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Thu, 23 May 2019 14:12:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/xdSDchtfZHI/upload/v1645301532062/wAc1v7AjZ.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When my wife saw this post on Instagram, she was immediately hooked:</p>
<p>{% instagram BvmFY7DgPyg %}</p>
<p>With our honeymoon in Kenya on the horizon, we set out to book a room. My aunt, who had been to Kenya years ago, stayed here and had no difficulty booking. It came as a surprise when we heard that this place was now fully booked a <strong>year or two in advance</strong>.</p>
<p>The sudden popularity had to stem from something. A little research showed this place had recently been <em>Ellen'ed</em>.</p>
<p>{% instagram BjQFBIDhsH_ %}</p>
<p>Damn it, Ellen.</p>
<p>Initially, we checked their website to see if the dates we would be in Kenya were available; no luck. We then emailed the manor and again, no bueno: we were told we were put on their "waitlist". Likely competing with other people on the waitlist, and with our trip only a few months away, our hopes drew thin.</p>
<h3 id="heading-the-search-for-solutions">The search for solutions</h3>
<p>The website they were using to show availability was read-only, no functionality to book rooms.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301463349/plR3xePVe.gif" alt="trying-my-hardest-to-click-on-readonly-elements" /></p>
<p>Calling and emailing were the only ways to reach them, a slow and arduous process. I assumed that when a date became free, their website would update <em>first</em> and <em>then</em> they would start contacting waitlist members. This way, they would still get bookings if waitlisted people fell through.</p>
<h3 id="heading-assumptions">Assumptions</h3>
<p>What I assumed next was that if we contacted them the day a room became available, we would likely bypass the waitlist. But checking the website every hour was not going to be fun.</p>
<p>I put my programmer pants on and thought that this would be a good use case for a good-ol' web scraper, <em>jazz hands</em>. Hit the site every 30 minutes and SMS both my phone and my wife's so that we could give them a call. It was unlikely that this 1990s-era Kenyan website had protection against bots.</p>
<p>What looked like a simple table turned out to be a simple table:</p>
<pre><code class="lang-html">// Example of a unbooked day HTML node

<span class="hljs-tag">&lt;<span class="hljs-name">td</span>
  <span class="hljs-attr">width</span>=<span class="hljs-string">"25"</span>
  <span class="hljs-attr">unselectable</span>=<span class="hljs-string">"on"</span>
  <span class="hljs-attr">ab</span>=<span class="hljs-string">"0"</span>
  <span class="hljs-attr">style</span>=<span class="hljs-string">"border-top: none; "</span>
  <span class="hljs-attr">name</span>=<span class="hljs-string">"WB15:Salas Camp:Keekorok Honeymoon
  Tent-Tent 1:0*:1:11e8485f8b9898cc8de0ac1f6b165406:0"</span>
  <span class="hljs-attr">id</span>=<span class="hljs-string">"WB15:07:28:2019"</span>
  <span class="hljs-attr">darkness</span>=<span class="hljs-string">"0"</span>
  <span class="hljs-attr">onmousedown</span>=<span class="hljs-string">"mouseDownFunction(arguments[0]);"</span>
  <span class="hljs-attr">onmouseup</span>=<span class="hljs-string">"cMouseUp(arguments[0]);"</span>
  <span class="hljs-attr">onmouseover</span>=<span class="hljs-string">"mouseOverFunction(arguments[0]);"</span>
  <span class="hljs-attr">class</span>=<span class="hljs-string">"overbooking calIndicator0"</span>
&gt;</span>
  1
<span class="hljs-tag">&lt;/<span class="hljs-name">td</span>&gt;</span>
</code></pre>
<p>This is what I needed to find: if the node text was <code>1</code>, the day was available.</p>
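<p>Distilled out of the scraper, the whole availability check amounts to this (my paraphrase of the logic, not code from the site):</p>

```javascript
// A calendar cell whose text is "1" marks an unbooked day.
function isDayAvailable(cellText) {
  return parseInt(cellText, 10) === 1;
}

console.log(isDayAvailable("1")); // true
console.log(isDayAvailable("0")); // false
```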
<p>After investigating the simple HTML structure, I started writing the Node.js service to scrape it. I stumbled upon an NPM module, <a href="https://www.npmjs.com/package/crawler" target="_blank">crawler</a>, which gave me all I needed out of the box.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> Crawler = <span class="hljs-built_in">require</span>(<span class="hljs-string">"crawler"</span>);

<span class="hljs-keyword">const</span> startCrawler = <span class="hljs-keyword">async</span> () =&gt; {
  <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Promise</span>(<span class="hljs-function"><span class="hljs-params">resolve</span> =&gt;</span> {
    <span class="hljs-keyword">const</span> c = <span class="hljs-keyword">new</span> Crawler({
      <span class="hljs-attr">maxConnections</span>: <span class="hljs-number">10</span>,
      <span class="hljs-attr">callback</span>: <span class="hljs-function">(<span class="hljs-params">error, res, done</span>) =&gt;</span> {
        <span class="hljs-keyword">if</span> (error) {
          <span class="hljs-built_in">console</span>.log(error);
          <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> <span class="hljs-built_in">Error</span>(
            <span class="hljs-string">`Error with sending request to website! <span class="hljs-subst">${<span class="hljs-built_in">JSON</span>.stringify(error)}</span>`</span>
          );
        }
        <span class="hljs-keyword">const</span> $ = res.$;
        <span class="hljs-comment">// get the table of bookings</span>
        <span class="hljs-keyword">const</span> results = $(<span class="hljs-string">"#tblCalendar tbody tr"</span>).slice(<span class="hljs-number">12</span>, <span class="hljs-number">17</span>);
        done();
        <span class="hljs-comment">// return the results</span>
        resolve(results);
      }
    });
    <span class="hljs-comment">// hit giraffe manors website</span>
    c.queue(
      <span class="hljs-string">"http://thesafaricollection.resrequest.com/reservation.php?20+2019-02-08"</span> +
        <span class="hljs-string">"+RS12:RS14:RS16:WB656:RS2274+15:20:30:25++WB5++n/a++true+true+0+0"</span>
    );
  });
};
</code></pre>
<p>This took a bit of debugging, but now I had the HTML from Giraffe Manor's website to play around with.</p>
<p>Next up, I went searching through the results with an NPM package called <a href="https://www.npmjs.com/package/cheerio" target="_blank">cheerio</a>.</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> parseResults = <span class="hljs-keyword">async</span> () =&gt; {
  <span class="hljs-keyword">let</span> availability = <span class="hljs-literal">false</span>;

  <span class="hljs-comment">// get HMTL</span>
  <span class="hljs-keyword">const</span> results = <span class="hljs-keyword">await</span> startCrawler();

  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> x = <span class="hljs-number">0</span>; x &lt; results.length; x++) {
    <span class="hljs-comment">// Feb 13th - Feb 20th</span>
    <span class="hljs-keyword">const</span> validDates = cheerio(results[x]).find(<span class="hljs-string">"td"</span>).slice(<span class="hljs-number">7</span>, <span class="hljs-number">14</span>);
    <span class="hljs-comment">// See if any of the dates are not booked</span>
    <span class="hljs-keyword">for</span> (<span class="hljs-keyword">let</span> y = <span class="hljs-number">0</span>; y &lt; validDates.length; y++) {
      <span class="hljs-keyword">if</span> (<span class="hljs-built_in">parseInt</span>(validDates[y].children[<span class="hljs-number">0</span>].data, <span class="hljs-number">10</span>) === <span class="hljs-number">1</span>) {
        availability = <span class="hljs-literal">true</span>;
      }
    }
  }
  ...
</code></pre>
<p>Now comes the interesting part: SMS my wife when the room shows as available. I used <a href="https://www.twilio.com/" target="_blank">Twilio</a> for this, but many other services exist. This required setting up a free account; I knew I wouldn't be sending more than a few SMS messages.</p>
<pre><code class="lang-javascript">  ...
  <span class="hljs-comment">// send text message if availability</span>
  <span class="hljs-keyword">if</span> (availability) {
    <span class="hljs-comment">// Your Account Sid and Auth Token from twilio.com/console</span>
<span class="hljs-keyword">const</span> accountSid = process.env.ACCOUNT_SID;
    <span class="hljs-keyword">const</span> authToken = process.env.AUTH_TOKEN;
    <span class="hljs-keyword">const</span> twilio = <span class="hljs-built_in">require</span>(<span class="hljs-string">"twilio"</span>);
    <span class="hljs-keyword">const</span> client = twilio(accountSid, authToken);

    client.messages
      .create({
        <span class="hljs-attr">body</span>: <span class="hljs-string">"Giraffe manor is available for our dates!"</span>,
        <span class="hljs-attr">from</span>: process.env.SMS_FROM,
        <span class="hljs-attr">to</span>: process.env.SMS_TO
      })
      .then(<span class="hljs-function"><span class="hljs-params">message</span> =&gt;</span> <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Sent a text! <span class="hljs-subst">${message.sid}</span>`</span>))
      .done();
    <span class="hljs-keyword">return</span>;
  }
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"No availability!"</span>);
}
</code></pre>
<p>After testing with a few dates that were unbooked, it worked! Now to schedule it to run every 5 minutes (because why not?).</p>
<pre><code class="lang-javascript"><span class="hljs-keyword">const</span> schedule = <span class="hljs-built_in">require</span>(<span class="hljs-string">"node-schedule"</span>);

schedule.scheduleJob(<span class="hljs-string">"*/5 * * * *"</span>, <span class="hljs-function">() =&gt;</span> {
  <span class="hljs-built_in">console</span>.log(<span class="hljs-string">"Running availability checker!"</span>);
  <span class="hljs-keyword">try</span> {
    main();
  } <span class="hljs-keyword">catch</span> (e) {
    <span class="hljs-built_in">console</span>.log(<span class="hljs-string">`Error! <span class="hljs-subst">${<span class="hljs-built_in">JSON</span>.stringify(e)}</span>`</span>);
  }
});
</code></pre>
<p>To host and run the code, I chose <a href="https://www.heroku.com" target="_blank">Heroku</a> as I have experience with it and knew the free tier would work for what I needed. I'm still not sure how their free tier supports background service jobs, but it does.</p>
<p>A couple of weeks later (I had actually forgotten it was running), my wife received the text on her phone! We called them immediately and got the room, seemingly bypassing the waitlist, just like we had hoped. She also got a barrage of texts which used up my free tier on Twilio, as I didn't write a stop method for when it found an available room 🤣</p>
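In hindsight, the missing piece was a guard that stops notifying after the first hit. Here's a minimal sketch of what that could have looked like; `makeNotifier` and `sendText` are hypothetical names, with `sendText` standing in for the Twilio call:

```javascript
// Wrap the notifier so it fires at most once, no matter how many times
// the scheduled checker finds availability afterwards.
function makeNotifier(sendText) {
  let alreadySent = false;
  return function notifyIfAvailable(available) {
    if (!available || alreadySent) return false;
    alreadySent = true;
    sendText("Giraffe manor is available for our dates!");
    return true;
  };
}

// Simulate three scheduled runs: only the first availability sends a text.
const sent = [];
const notify = makeNotifier((msg) => sent.push(msg));
notify(true);  // sends the SMS
notify(true);  // suppressed
notify(false); // no availability, nothing to do
console.log(sent.length); // 1
```

With node-schedule, the job returned by `scheduleJob` could also be cancelled outright via `job.cancel()` after the first successful notification.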
<p>I particularly liked doing this because it's not often I code to solve a problem in my own life, but it was worth it for pictures like this:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1645301465343/5XrPyT23v.jpeg" alt="me-and-zeena-giraffe-manor-pic" /></p>
<p>This was one example of how I used my programming skills for a "real" world problem. I would love to hear about a problem you may have solved; leave a comment here.</p>
<p><a href="https://github.com/aleccool213/giraffe-manor-ping" target="_blank">The code</a></p>
<blockquote>
<p>Like this post? Consider <a target="_blank" href="https://www.buymeacoffee.com/yourboybigal">buying me a coffee</a> to support me writing more. </p>
<p>Want to receive quarterly emails with new posts? <a target="_blank" href="https://mailchi.mp/f91826b80eb3/alecbrunelleemailsignup">Signup for my newsletter</a> </p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Wrestling with Apollo Local State and Winning]]></title><description><![CDATA[Recently we bought into GraphQL and use it in every one of our web apps, both on the client and server level. It’s been helpful reducing unnecessary communication needed between our different teams when it comes to knowing what our many, many differe...]]></description><link>https://blog.alec.coffee/apollo-local-state-pains</link><guid isPermaLink="true">https://blog.alec.coffee/apollo-local-state-pains</guid><category><![CDATA[Apollo GraphQL]]></category><category><![CDATA[GraphQL]]></category><category><![CDATA[React]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Thu, 16 May 2019 00:00:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1645328967173/QcNd2lxrJ.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Recently we bought into <a target="_blank" href="https://graphql.org/">GraphQL</a> and use it in every one of our web apps, both on the client and server level. It’s been helpful reducing unnecessary communication needed between our different teams when it comes to knowing what our many, many different API’s do. This contributes to our async work strategy and keeps developers moving and focusing on difficult problems versus organizational bloat.</p>
<p>For use with our frontend applications, we opted for <a target="_blank" href="https://github.com/apollographql/apollo-client">Apollo Client</a> with <a target="_blank" href="https://github.com/apollographql/react-apollo">React</a>, which seems to be the one true GraphQL client at this point. As the library is fairly new (the JavaScript ecosystem moves fast, who knew?), we have experienced our fair share of pains and troubleshooting, including:</p>
<ul>
<li>Using <a target="_blank" href="https://www.apollographql.com/docs/react/essentials/local-state">Apollo Local State</a> on a large codebase and running in production</li>
<li>Integration with Server Side Rendering (<a target="_blank" href="https://nextjs.org/">Next.js</a>)</li>
<li><a target="_blank" href="https://github.com/apollographql/apollo-tooling#apollo-clientcodegen-output">Generating Typescript types</a> on a <a target="_blank" href="https://www.apollographql.com/docs/graphql-tools/schema-stitching">stitched schema</a></li>
</ul>
<p>This article focuses on the first point: managing frontend local state. When looking for state management solutions, Apollo Local State (<a target="_blank" href="https://github.com/apollographql/apollo-link-state/blob/master/README.md#L5">formerly apollo-link-state</a>) popped up. A couple of reasons led us to use this library:</p>
<ul>
<li>The data models and store structure can be shared between the data fetching cache and the local state management cache<ul>
<li>This leads to sharing of Typescript types as well 😉</li>
</ul>
</li>
<li>Actions are performed with mutations, something that previous GraphQL users already understand.</li>
<li>Staying within the Apollo ecosystem meant smooth integration with existing tools, meaning less overhead for developers.</li>
<li>Backed by Apollo meant that support would be there for a significant amount of time.</li>
</ul>
<p>Another good sign was <a target="_blank" href="https://blog.apollographql.com/reducing-our-redux-code-with-react-apollo-5091b9de9c2a">this very attractive article</a> which explains the advantages of keeping your local state close to the GraphQL schema vs using something like Redux.</p>
<h2 id="heading-here-comes-the-pain">Here comes the pain</h2>
<p>A pattern established by <a target="_blank" href="https://facebook.github.io/flux/docs/in-depth-overview.html#content">Flux</a> (the paradigm, not the library) has you splitting up actions for every event which happens in your app: the user clicks a button, an action is triggered; the user scrolls down a certain length, an action is triggered. Your app can observe these actions and manipulate the state accordingly. With Apollo Local State mutations, this becomes much more intentional. No observability is given; each action is directly related to a resolver.</p>
<p>For example, say you want to update the name on an issue (think GitHub, just for example’s sake) in the cache, which is Apollo’s term for state:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> IssueContainer: React.FC&lt;{ issue: GithubIssue }&gt; = <span class="hljs-function">(<span class="hljs-params">{ issue }</span>) =&gt;</span> (
  &lt;UpdateIssueNameMutation mutation={UPDATE_ISSUE_NAME}&gt;
    {<span class="hljs-function">(<span class="hljs-params">updateIssuename</span>) =&gt;</span> (
      &lt;IssueFields
        issue={issue}
        onChange={<span class="hljs-function">(<span class="hljs-params">name</span>) =&gt;</span> {
          updateIssuename({
            variables: {
              input: {
                id: issue.id,
                name,
              },
            },
          });
        }}
      /&gt;
    )}
  &lt;/UpdateIssueNameMutation&gt;
);
</code></pre>
<p>For the above mutation, here is an example of what we would need to write in TypeScript; I break down what’s going on in the comments:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> gql <span class="hljs-keyword">from</span> <span class="hljs-string">"graphql-tag"</span>;
<span class="hljs-keyword">import</span> { IFieldResolver } <span class="hljs-keyword">from</span> <span class="hljs-string">"graphql-tools"</span>;

<span class="hljs-keyword">import</span> { ISSUE_PARTS } <span class="hljs-keyword">from</span> <span class="hljs-string">"../issues"</span>;
<span class="hljs-keyword">import</span> { IssueParts, UpdateIssueNameVariables } <span class="hljs-keyword">from</span> <span class="hljs-string">"../../graphql-types"</span>;
<span class="hljs-keyword">import</span> { ResolverContext } <span class="hljs-keyword">from</span> <span class="hljs-string">"."</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> UPDATE_ISSUE_NAME = gql<span class="hljs-string">`
  mutation UpdateIssueName($input: UpdateIssueNameInput!) {
    updateIssueName(input: $input) @client
  }
`</span>;

<span class="hljs-keyword">const</span> ISSUE_FRAGMENT = gql<span class="hljs-string">`
  <span class="hljs-subst">${ISSUE_PARTS}</span>

  fragment IssueParts on Issue {
    id @client
    name @client
  }
`</span>;

<span class="hljs-comment">/**
 * Updates the name of a Github issue.
 **/</span>
<span class="hljs-keyword">const</span> updateIssuename: IFieldResolver&lt;<span class="hljs-built_in">void</span>, ResolverContext, <span class="hljs-built_in">any</span>&gt; = <span class="hljs-function">(<span class="hljs-params">
  _obj,
  args: UpdateIssueNameVariables,
  context
</span>) =&gt;</span> {
  <span class="hljs-keyword">const</span> { input } = args;
  <span class="hljs-keyword">const</span> { cache, getCacheKey } = context;

  <span class="hljs-comment">// 1. Get the id of the object in the cache using the actual issue id</span>
  <span class="hljs-keyword">const</span> id = getCacheKey({
    __typename: <span class="hljs-string">"Issue"</span>,
    id: input.id,
  });

  <span class="hljs-comment">// 2. Get the data from the cache</span>
  <span class="hljs-keyword">const</span> issue: IssueParts | <span class="hljs-literal">null</span> = cache.readFragment({
    fragment: ISSUE_FRAGMENT,
    fragmentName: <span class="hljs-string">"IssueParts"</span>,
    id,
  });
  <span class="hljs-keyword">if</span> (!issue) {
    <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;
  }

  <span class="hljs-comment">// 3. Update the data locally</span>
  <span class="hljs-keyword">const</span> updatedIssue = {
    ...issue,
    name: input.name,
  };

  <span class="hljs-comment">// 4. Write the data back to the cache</span>
  cache.writeFragment({
    fragment: ISSUE_FRAGMENT,
    fragmentName: <span class="hljs-string">"IssueParts"</span>,
    id,
    data: updatedIssue,
  });
  <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;
};

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> updateIssuename;
</code></pre>
<p>That’s 60 lines to update a single attribute on one data entity in the cache. After doing a couple of these, you will start to pull your hair out. Mutations as-is don’t do a whole lot for you, which results in <strong>a lot of boilerplate</strong>. All that boilerplate code is not ideal: it leads to more bugs, and thus more tests need to be written to avoid those bugs.</p>
<h2 id="heading-looking-for-patterns">Looking for patterns</h2>
<p>After writing about ten of these with plans to write a lot more, we wrote a small <a target="_blank" href="https://yeoman.io/">Yeoman</a> generator to speed up the process. This made writing them a lot faster but didn’t solve the bloat in our codebase. Every mutation ended up doing the same thing as described in the comments above:</p>
<ol>
<li>Get the id of the object in the cache using the actual entity id</li>
<li>Get the data from the cache</li>
<li>Update the data locally</li>
<li>Write the data back to the cache</li>
</ol>
<h2 id="heading-the-solution">The solution</h2>
<p>Naturally, we wrote a helper which would help us refactor our resolvers.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">import</span> { IFieldResolver } <span class="hljs-keyword">from</span> <span class="hljs-string">"graphql-tools"</span>;
<span class="hljs-keyword">import</span> { DocumentNode } <span class="hljs-keyword">from</span> <span class="hljs-string">"graphql"</span>;

<span class="hljs-keyword">import</span> { ResolverContext } <span class="hljs-keyword">from</span> <span class="hljs-string">"."</span>;

<span class="hljs-keyword">interface</span> InputVariablesShape&lt;TInput&gt; {
  input: TInput;
}

<span class="hljs-comment">/**
 * Creates a client-side mutation resolver which reads one item from the cache,
 * mutates it using the given mutation, then writes it back to the cache.
 * @param reducer Func which mutates data and returns it. Must be a pure function.
 * @param fragment Used to read/write data to apollo-link-state.
 * @param fragmentName Name of the fragment to use inside the fragment document.
 * @param getId A func which returns the id of the entity to be used in the `reducer` func.
 */</span>
<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> createResolver = &lt;InputShape, EntityType&gt;(
  reducer: <span class="hljs-function">(<span class="hljs-params">entity: EntityType, input: InputShape</span>) =&gt;</span> EntityType,
  fragment: DocumentNode,
  fragmentName: <span class="hljs-built_in">string</span>,
  getId: <span class="hljs-function">(<span class="hljs-params">input: InputShape</span>) =&gt;</span> <span class="hljs-built_in">string</span>
) =&gt; {
  <span class="hljs-keyword">const</span> resolver: IFieldResolver&lt;<span class="hljs-built_in">void</span>, ResolverContext, <span class="hljs-built_in">any</span>&gt; = <span class="hljs-function">(<span class="hljs-params">
    _obj,
    args: InputVariablesShape&lt;InputShape&gt;,
    { cache, getCacheKey }
  </span>) =&gt;</span> {
    <span class="hljs-keyword">const</span> { input } = args;

    <span class="hljs-comment">// 1. Get the id of the object in the cache using `getId`</span>
    <span class="hljs-keyword">const</span> id = getCacheKey({ id: getId(input) });

    <span class="hljs-comment">// 2. Get the data from the cache</span>
    <span class="hljs-keyword">const</span> entity: EntityType | <span class="hljs-literal">null</span> = cache.readFragment({
      fragment,
      fragmentName,
      id,
    });

    <span class="hljs-keyword">if</span> (!entity) {
      <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;
    }

    <span class="hljs-comment">// 3. Update the data locally</span>
    <span class="hljs-keyword">const</span> newEntity: EntityType = reducer(entity, input);

    <span class="hljs-comment">// 4. Write the data back to the cache</span>
    cache.writeFragment({
      fragment,
      fragmentName,
      id,
      data: newEntity,
    });
    <span class="hljs-keyword">return</span> <span class="hljs-literal">null</span>;
  };

  <span class="hljs-keyword">return</span> resolver;
};
</code></pre>
<p>This resulted in the resolver code becoming a small <a target="_blank" href="https://en.wikipedia.org/wiki/Pure_function">pure</a> func:</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> fragment = gql<span class="hljs-string">`
  <span class="hljs-subst">${ISSUE_PARTS}</span>

  fragment BasicIssueParts on BasicIssue {
    ... on Node {
      id
    }
    parameters {
      id
      value
    }
  }
`</span>;

<span class="hljs-keyword">export</span> <span class="hljs-keyword">const</span> reducer = (
  issue: IssueParts,
  input: UpdateIssueNameInput
): <span class="hljs-function"><span class="hljs-params">IssueParts</span> =&gt;</span> ({
  ...issue,
  name: input.name,
});

<span class="hljs-keyword">export</span> <span class="hljs-keyword">default</span> createResolver(
  reducer,
  fragment,
  <span class="hljs-string">"IssueParts"</span>,
  <span class="hljs-function">(<span class="hljs-params">input: UpdateIssueNameInput</span>) =&gt;</span> {
    <span class="hljs-keyword">return</span> input.id;
  }
);
</code></pre>
<p>A two-line resolver which does the same as before. This covered about <strong>90%</strong> <strong>of our use-cases</strong>!</p>
<h2 id="heading-the-road-ahead">The road ahead</h2>
<p>If you found this blog post helpful, don’t hesitate to steal the <a target="_blank" href="https://gist.github.com/aleccool213/2c05dda3e017d7c2af699da435f5e895">gist</a> of this code.</p>
]]></content:encoded></item><item><title><![CDATA[Using Bull.js to manage job queues in a Node.js micro-service stack]]></title><description><![CDATA[When switching to a micro-service oriented stack versus the ol' single monolith, new problems arise. The simple job processor of the past doesn't fit in this new architecture. We found Bull, a Node.js package, to cover our needs, but needed tweaks to...]]></description><link>https://blog.alec.coffee/cross-service-job-processor-for-the-rest-of-us</link><guid isPermaLink="true">https://blog.alec.coffee/cross-service-job-processor-for-the-rest-of-us</guid><category><![CDATA[Node.js]]></category><category><![CDATA[message queue]]></category><category><![CDATA[kafka]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Sat, 02 Feb 2019 15:32:10 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/EQUaOV_1DCM/upload/v1645301865392/UNpeZXYLJ.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When switching to a micro-service oriented stack versus the ol' single monolith, new problems arise. The simple job processor of the past doesn't fit in this new architecture. We found Bull, a Node.js package, to cover our needs, but needed tweaks to work in this new world. Due to this module being open-source, I knew the improvements we made to it could be easily integrated into the main remote repo.</p>
<h2 id="heading-goals">Goals</h2>
<p>Let's say we want to do some specialized work, scanning an image to extract text for instance. This is a situation where a job queue comes in handy: the work is done in the background, away from any user-facing interface.</p>
<ul>
<li>Get image from user</li>
<li>Queue job with image attached</li>
<li>Job gets worked on</li>
<li>Job results are sent back to app database</li>
</ul>
<p>Two popular packages in the wild which could help you do the aforementioned work are DelayedJob and Celery. These allow you to manage jobs with a fast key-value store like Redis. They assume <strong>the processing of the job and the job queue live in the same service</strong>. If you have one service performing a task, e.g. the image processor, and another service which acts as the job queue, you cannot use these traditional constructs.</p>
<table>
  <caption>This (Diagram 1)</caption>
  <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1549120910/bull-post/bull-1.png" alt="single-service-with-jobs" /></td></tr>
</table>

<p>versus</p>
<table>
  <caption>This (Diagram 2)</caption>
  <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1549120910/bull-post/bull-post2.png" alt="multiple-services-job-queue" /></td></tr>
</table>

<h2 id="heading-a-solution">A Solution</h2>
<p>My coworkers and I found ourselves in this situation, and when searching for answers, we found that Bull might suffice. Keeping it 2018, this Node.js package is lightning fast, built to work with Redis, and has an active community. It didn't quite fit our needs at first, as it processes jobs in the same app as the queue mechanism, see diagram 1. This is fine for traditional apps, but for our setup we needed to manage jobs across systems (see diagram 2), in an async fashion where the worker may not be in the same repo or service as the one running Bull itself.</p>
<p>We need to think about how we want to manage a job's life-cycle. Good thing someone recently contributed a diagram to the project's GitHub.</p>
<table>
  <caption>Bull's Job Lifecycle <a href="https://github.com/OptimalBits/bull/blob/develop/docs/job-lifecycle.png" target="_blank">Diagram</a></caption>
  <tr><td><img src="https://raw.githubusercontent.com/OptimalBits/bull/develop/docs/job-lifecycle.png" alt="bull-lifecycle-diagram" /></td></tr>
</table>

<p>Bull has a simple way to define the processing logic (refer to diagram 1): what a job does while in the <code>active</code> queue:</p>
<pre><code class="lang-javascript">queue.process(<span class="hljs-keyword">async</span> () =&gt; {
  doWork()
})
</code></pre>
<p>This way, whenever a job came into the <code>waiting</code> queue, Bull knew how to process it and move it to the <code>completed</code> queue. As it stands, Bull manages all the state transitions on its own; we needed to switch to manual. You may be thinking, "to work in this new fashion, how about we just don't define this <code>process</code> method?" We tried this, and it <em>worked!</em> Forward into the weeds we go.</p>
<blockquote>
<p>but for our setup we needed to manage jobs across systems</p>
</blockquote>
<p>After digging into the code more, we saw that Bull defines state transition methods on two simple objects, <code>Job</code> and <code>Queue</code>.</p>
<p>After researching, we found that the methods to do manual state transitions were private, meaning the authors didn't write these methods to be used publicly. This makes sense, as Bull was never designed to do what we wanted to do with it. What would we need to do to make these public? After some more digging, we found someone else trying to do the same thing as us.</p>
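To make "manual state transitions" concrete, here is a toy in-memory sketch (not Bull's actual API; `ManualQueue` is a hypothetical name) of the idea: the queue service only moves jobs between states, while the real processing happens in another service that reports back:

```javascript
// A toy in-memory queue illustrating manual state transitions:
// waiting -> active -> completed, driven by an external worker
// instead of a process() callback living in the same service.
class ManualQueue {
  constructor() {
    this.states = { waiting: [], active: [], completed: [] };
  }
  add(job) {
    this.states.waiting.push(job);
  }
  // A remote worker asks for work: waiting -> active.
  getNextJob() {
    const job = this.states.waiting.shift();
    if (job) this.states.active.push(job);
    return job || null;
  }
  // The remote worker reports a result: active -> completed.
  moveToCompleted(job, result) {
    this.states.active = this.states.active.filter((j) => j !== job);
    this.states.completed.push({ ...job, result });
  }
}

const queue = new ManualQueue();
queue.add({ id: 1, image: "receipt.png" });
const job = queue.getNextJob();          // picked up by the image processor
queue.moveToCompleted(job, "extracted"); // processor reports back
console.log(queue.states.completed.length); // 1
```

In Bull itself, the private methods we ended up documenting correspond roughly to `getNextJob` and `moveToCompleted` here, with Redis holding the state instead of in-memory arrays.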
<table>
  <caption>The issue can be found <a href="https://github.com/OptimalBits/bull/issues/790" target="_blank">here.</a></caption>
  <tr><td>
    <img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1549120910/bull-post/github-1.png" alt="profession-hacker-gif" />
  </td></tr>
</table>

<p>Just using the private functions as is would have been fine but we are <a href="https://media.giphy.com/media/HoffxyN8ghVuw/giphy.gif" target="_blank"><strong>professional developers.</strong></a></p>
<blockquote>
<p>I would recommend that you write a few unit tests specifically for testing the code using the private functions... - @manast</p>
</blockquote>
<p>The maintainer had a great suggestion: write unit tests for the private functions. The next best thing would be to at least write documentation for the functions, so that they are understood by the community, and to strengthen their viability for public use. And <a href="https://github.com/OptimalBits/bull/pull/1017/files#diff-d823dceb04482ab55e5004eebb53fc1cR182" target="_blank">that's what we did</a>.</p>
<h2 id="heading-open-source-bonus">Open Source Bonus</h2>
<p>For the actual pattern we described at the beginning (diagram 2), an addition to the reference docs <a href="https://github.com/OptimalBits/bull/pull/1017/files#diff-375fc823554b090375d9c47199cb5ee2R201" target="_blank">were added</a> to make this a viable pattern. Making this a known pattern encourages usage of the feature and possibly leads to other users finding issues when using in production. Typescript types were also available so we updated <a href="https://github.com/DefinitelyTyped/DefinitelyTyped/pull/27816" target="_blank">those</a> as well. After using it for some time (processing approx. 500k jobs), we found a bug and <a href="https://github.com/OptimalBits/bull/pull/1096" target="_blank">were able to easily fix it</a> using our extended knowledge of the package. Talk about bringing a third class feature to first class!</p>
<p>I am very happy with the outcome of the project, as we not only satisfied our requirements but also made open source contributions. This led to us understanding the package's internals, which in turn let us easily add features for our use case. Having an active maintainer on the project who knew the ins and outs also made the entire process run smoothly.</p>
]]></content:encoded></item><item><title><![CDATA[How Learning Elixir Made Me a Better Programmer 🥃]]></title><description><![CDATA[After getting comfortable with a couple programming technologies, developers usually stop there; your job and the systems you maintain may all be in one or two languages. You start using similar patterns again and again to solve the same problems. El...]]></description><link>https://blog.alec.coffee/elixir-better-programmer</link><guid isPermaLink="true">https://blog.alec.coffee/elixir-better-programmer</guid><category><![CDATA[Elixir]]></category><dc:creator><![CDATA[Alec Brunelle]]></dc:creator><pubDate>Sun, 09 Dec 2018 13:48:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/unsplash/mmWqrsjZ4Lw/upload/v1645302630745/PD_FookYL.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After getting comfortable with a couple programming technologies, developers usually stop there; your job and the systems you maintain may all be in one or two languages. You start using similar patterns again and again to solve the same problems. Elixir, a relatively new programming language, opened my eyes to new techniques which broke this stagnant thinking. Learning a new programming language can introduce you to techniques you never would've come across using your existing technologies. It expands your toolbox when it comes to designing new systems. Imagine a carpenter being stuck to a certain set of tools for years, they would be limited in what they could build. After learning programming languages for years (school, contract-work, co-ops, etc), it was refreshing to step away from a mindset focused on getting it done as fast as I could. No timelines telling you what velocity to learn at and no peers depending on you to finish what you were working on. I find that in this relaxed setting, it's easier to digest larger cognitive loads.</p>
<table>
   <caption>E.g. of pattern matching. This and many other features of the language make it expressive and easy to read.</caption>
   <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1544362994/elixir-post/pattern.png" alt="pattern-matching-example" /></td></tr>
</table>

<h3 id="heading-quick-facts-for-the-tldr-in-you">Quick Facts for the T.L.D.R. in you</h3>
<ul>
<li><p>Elixir is simply syntax on top of Erlang, the battle-tested language built on top of the BEAM VM</p>
</li>
<li><p>The syntax is similar to Ruby so learning the syntax is easy and quick, especially for developers familiar with it</p>
</li>
<li><p>Did I mention it's FUNCTIONAL! (Pure, functional programming IMO is worth the investment cognitively, <a href="https://medium.com/making-internets/functional-programming-elixir-pt-1-the-basics-bd3ce8d68f1b" target="_blank">hit this link</a> for how Elixir utilizes it)</p>
</li>
</ul>
<p>One of the benefits of learning a recently-created programming language is that it's built on top of existing best practices. This happens when the creators spend time thinking about what problems other developers face regularly. "State management is hard", "it's hard to have zero-downtime deployments of new code", "it's hard to maintain my systems": thoughts every developer has. Elixir wants to make these problems less hairy, and does so using functional methodologies wrapped around a VM which treats distributed/concurrent programming as a first-class citizen.
Elixir, for example, was built by developers who saw the productivity of Ruby's syntax, the maintainability of functional programming and the scalability of Erlang. These features make it a compelling showcase of what a recently built language can be, as the pattern matching example above demonstrates.</p>
<blockquote>
<p>Elixir for example was built by developers who saw the productivity of the Ruby syntax, the maintainability of functional programming and the scalability of Erlang.</p>
</blockquote>
<h3 id="heading-wires-connecting-to-wires">Wires connecting to wires</h3>
<table>
   <caption>OTP in the anime-flesh</caption>
   <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1544362986/elixir-post/telephone_pole.jpg" alt="otp-wires" /></td></tr>
</table>

<p>The rock-solid foundation of Elixir is built on top of a library named <a href="https://en.wikipedia.org/wiki/Open_Telecom_Platform" target="_blank">OTP</a>. OTP is an elegant way to handle all of the problems that arise in distributed programming: work across nodes, handling async messages, and so on. It is not only a library of functions but also a paradigm to work within. This keeps things consistent across systems and large teams. Instead of a single process handling your entire app (think Node.js), many isolated processes make up an Elixir app. These processes communicate with each other using messages. This unlocks a lot of cool features: processes can live across machines, because messages must be immutable, with no pointers allowed.</p>
<p>The critic inside you will say the potential downfall of using such a new language is that it isn't battle-tested. Usually this is a valid criticism, but such is not the case for Elixir. The VM Elixir is built on top of is hella old. The initial open-source release of Erlang was in 1998, and Ericsson had been using it in-house for a long time before that. It was used in telecom networks for critical services which could not afford downtime. That's how the very cool <a href="https://github.com/edeliver/edeliver" target="_blank">hot-code-release</a> feature came to be, which enables developers to release new Erlang/Elixir code without taking down servers.</p>
<h3 id="heading-my-experience">My Experience</h3>
<table>
   <caption>A candid photo of me reading Elixir in Action</caption>
   <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1544362980/elixir-post/bill_reading.jpg" alt="pattern-matching-example" /></td></tr>
</table>

<p>Last year, a coworker invited me to join his book club. "Let's learn this new language." I had heard it was the new hotness, so I said, "sure!" We would take a couple of hours every month to go over a chapter in the book, <a href="https://www.amazon.ca/gp/product/161729201X/ref=as_li_tl?ie=UTF8&amp;camp=15121&amp;creative=330641&amp;creativeASIN=161729201X&amp;linkCode=as2&amp;tag=coffeedrive09-20&amp;linkId=97d40dff77b7869475d6ee283c6501d2" target="_blank">Elixir in Action</a>. Initially, it was intimidating to join as I was vastly junior compared to the other members of the group, but I gave it a shot. What followed was lots of great discussion and insight into topics I hadn't dived into before. I appreciate my former self for agreeing to join: not only did I learn a lot, I connected with coworkers in the company I would never have connected with otherwise. It helped me through Flipp's adoption of event-driven systems (think Kafka) by exposing me to good practices for managing state between processes. Keeping processes small, pure and functional is sound engineering practice, and these are the pillars of how Elixir works. I didn't need anything to build immediately or an assignment to finish; I learned for the joy of learning and got a lot out of it.</p>
<h3 id="heading-common-comments-and-questions">Common comments and questions</h3>
<table>
   <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1544362988/elixir-post/road_forward.jpg" alt="pattern-matching-example" /></td></tr>
</table>

<blockquote>
<p>My team is not going to be happy that after learning 3 Javascript frameworks in the past week, they have to learn this.</p>
</blockquote>
<p>Once you start building things that have to scale or need to handle millions of requests, your on-call tickets increase. The reason is usually that you can't predict traffic at that scale: push notifications go out for a new feature and everyone starts hitting your API. How do you handle this currently, with something like Node or Ruby? You just increase your box numbers and then decrease them after the load is done. This gets expensive, and developers should not just be throwing money at a problem. Erlang VM processes (different from traditional OS processes) start at a small, fixed size; this is <strong>mega</strong>. To a degree, this essentially solves the problem: knowing how much memory processes use gives you god-like abilities. The VM can tell the server precisely how much memory it may potentially use. Instead of the box falling over and restarting, you could respond to the client with HTTP status code 429, for example. No more unexpected memory loads at 1AM waking up developers!</p>
<blockquote>
<p>Okay, this is dope, how are errors handled?</p>
</blockquote>
<p>Errors are a first-class citizen in Elixir. Processes are small and isolated, so when an error is thrown the whole application doesn't have to dump its stack, just the one process. When errors do happen, they are easier to debug because the process code is small (by Elixir convention). Any process can be monitored (another OTP blessing), so another process gets notified when it dies and can run some code in response; a supervisor, for example, restarts the dead process so it can accept more messages.</p>
<table>
   <caption>Everyone gets a monitor</caption>
   <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1544362991/elixir-post/everyone_gets.gif" alt="pattern-matching-example" /></td></tr>
</table>
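<p>A minimal sketch of what monitoring looks like (a toy, not production code): <code>spawn_monitor/1</code> starts a process and monitors it in one step, and the parent receives a <code>:DOWN</code> message when it dies.</p>

```elixir
# Toy demo of process monitoring: the monitored process exits,
# and the monitoring process receives a :DOWN message telling it why.
defmodule Watcher do
  def watch do
    # spawn_monitor/1 spawns the process and monitors it atomically.
    {pid, ref} = spawn_monitor(fn -> exit(:boom) end)

    receive do
      {:DOWN, ^ref, :process, ^pid, reason} ->
        # A supervisor would restart the process here; we just
        # return the reason it died.
        reason
    end
  end
end

IO.inspect(Watcher.watch())
```

<p>This notify-and-react loop is exactly what OTP supervisors automate for you.</p>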

<p>Also, it's very neat that there is a proposal for pattern matching in JavaScript. Obvious proof that everyone is drinking the ... wait for it ... <em>Elixir</em>.</p>
<table>
   <caption>🚒</caption>
   <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1544362983/elixir-post/javascript_pattern_matching.png" alt="pattern-matching-example" /></td></tr>
</table>
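<p>For comparison, here's what everyday pattern matching looks like in Elixir (a toy module of my own): the match happens right in the function heads, no <code>if</code>/<code>else</code> needed.</p>

```elixir
# Toy example of pattern matching in function heads: each clause
# matches a differently shaped tuple.
defmodule Shape do
  def area({:circle, r}), do: 3.14159 * r * r
  def area({:rect, w, h}), do: w * h
end

Shape.area({:rect, 3, 4}) # => 12
```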

<h4 id="heading-the-road-forward">The road forward</h4>
<p>I hope this introduction shows you some of the powers of Elixir and encourages you to learn more. I've only scratched the surface of what is possible with the BEAM VM. I leave you with this graph showing Elixir's popularity on Stack Overflow compared to other popular languages:</p>
<table>
   <caption>Perspective</caption>
   <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1544362996/elixir-post/trends.png" alt="pattern-matching-example" /></td></tr>
</table>

<p>The trend is upwards, but it still has a long way to go before becoming somewhat mainstream.</p>
<p>Moving forward, my plan is simply to write more and more Elixir code and get more comfortable with it. HackerRank supports Elixir as an environment, so it has been a great resource for practicing the syntax. One of the next things I want to do is build something in <a target="_blank" href="https://github.com/phoenixframework/phoenix">Phoenix</a>.</p>
<p>Another resource I used in my learning journey was the <a href="https://www.meetup.com/TorontoElixir/" target="_blank">Elixir Toronto Meetup Group on Meetup</a>.</p>
<h2 id="heading-reading-resources">Reading resources</h2>
<p>The book we read during the book club was Elixir in Action, a very good book which goes through the entire language and its features in detail. The beginning is quite slow, but once you start to wrap your brain around the syntax, it soon becomes super interesting.</p>
<table>
   <caption>Elixir in Action</caption>
   <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1544362993/elixir-post/elixir_in_action.jpg" alt="pattern-matching-example" /></td></tr>
</table>

<p>This is another book I started which is much more approachable. It's a fun book that goes over the main reasons Elixir is a compelling language, a quick read that really just skims the surface.</p>
<table>
     <caption>The Little Elixir &amp; OTP Guidebook</caption>
     <tr><td><img src="https://res.cloudinary.com/dscgr6mcw/image/upload/v1544362985/elixir-post/opt_guidebook.jpg" alt /></td></tr>
 </table>


<blockquote>
<p>Originally posted on <a href="https://blog.alec.coffee/elixir-better-programmer/">my blog</a>.</p>
<p>Like this post? Consider <a target="_blank" href="https://www.buymeacoffee.com/yourboybigal">buying me a coffee</a> to support me writing more. </p>
<p>Want to receive quarterly emails with new posts? <a target="_blank" href="https://mailchi.mp/f91826b80eb3/alecbrunelleemailsignup">Signup for my newsletter</a> </p>
</blockquote>
]]></content:encoded></item></channel></rss>