In the table editor, create a new table messages, and add columns for id, created_at, and content.
id should be a uuid
created_at should default now() and never be null
content is text and should never be null
Setting Up a Remix Project
Create a new remix project
Choose “Just the basics”
Choose Vercel as the service
npx create-remix chatter
For the Remix project, you can find the main route file at app/routes/index.tsx.
Query Supabase Data with Remix Loaders
npm i @supabase/supabase-js
Add the Supabase env vars to .env; they can be found in Project Settings > API.
SUPABASE_URL={url}
SUPABASE_ANON_KEY={anon_key}
Create a utils/supabase.ts file. Create createClient function
A "!" (non-null assertion) can be appended to a variable so TypeScript doesn't give us errors when we know the value will be available at runtime, like env vars.
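A minimal sketch of what utils/supabase.ts might look like (the exact file isn't shown in these notes):

import { createClient } from "@supabase/supabase-js";

// The "!" asserts these env vars will exist at runtime.
export const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);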
Supabase has row-level security enabled, meaning you have to write policies before users can perform CRUD operations (a policy targets SELECT, INSERT, UPDATE, DELETE, or ALL).
We added a policy to allow all users to read.
Create the loader in the index page, which queries Supabase using the utils, and consume it with import { useLoaderData } from "@remix-run/react";.
supabase.from("messages").select() reminds me a lot of MongoDB's client.
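Put together, the loader might look roughly like this (a sketch; json comes from @remix-run/node in a standard Remix app):

import { json } from "@remix-run/node";
import { useLoaderData } from "@remix-run/react";
import { supabase } from "~/utils/supabase";

export const loader = async () => {
  const { data } = await supabase.from("messages").select();
  return json({ messages: data });
};

export default function Index() {
  const { messages } = useLoaderData<typeof loader>();
  return <pre>{JSON.stringify(messages, null, 2)}</pre>;
}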
Generate TypeScript Type Definitions with the Supabase CLI
supabase gen types typescript --project-id akhdfxiwrelzhhhofvly > db_types.ts
We have to re-run this command every time we have DB updates
Now we wire db_types.ts into our supabase.ts file by adding a type parameter to the createClient function.
You can infer types by using typeof in TypeScript. This is useful for typing the loader data consumed in the Index functional component.
To make sure the data is always present, as an empty array rather than null, we use the nullish coalescing operator on the original data: return { messages: data ?? [] };
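With the generated types wired in, the client sketch from earlier becomes (assuming db_types.ts sits alongside it):

import { createClient } from "@supabase/supabase-js";
import type { Database } from "./db_types";

export const supabase = createClient<Database>(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);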
Implement Authentication for Supabase with OAuth and Github
Enable Github OAuth using Supabase
In the supabase project, go to Authentication > Providers
Choose Github
In Github, go to Settings, Developer Settings > OAuth Apps
Create “Chatter”. Copy the Authorization callback URL
In supabase, enter the Client ID, Client Secret, and the Redirect URL.
The generated secret in Github goes away after a few minutes, so be quick
Create the login component in components/login and then add two buttons for logging in and out.
The handlers should be supabase.auth.signInWithOAuth and supabase.auth.signOut
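A sketch of the component (receiving the client as a prop is an assumption; later these notes pass it via Outlet context instead):

import type { SupabaseClient } from "@supabase/supabase-js";

export default function Login({ supabase }: { supabase: SupabaseClient }) {
  const handleLogin = () =>
    supabase.auth.signInWithOAuth({ provider: "github" });
  const handleLogout = () => supabase.auth.signOut();

  return (
    <>
      <button onClick={handleLogin}>Login</button>
      <button onClick={handleLogout}>Logout</button>
    </>
  );
}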
Add the login component back into the index component.
You'll notice a ReferenceError that process is not defined, because that code should only run on the server.
Rename the supabase.ts file to supabase.server.ts. The .server suffix signals that the supabase file should only run on the server.
The root.tsx component has an Outlet that renders the matched route based on the files in routes/ (file-based routing).
In the root component, we add the supabase instance to Outlet's context.
This can now be used in the login file using useOutletContext.
Types can be added by exporting it from root.
export type TypedSupabaseClient = SupabaseClient<Database>;
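Consumed inside the login component roughly like so (a sketch; the context shape is an assumption):

import { useOutletContext } from "@remix-run/react";
import type { TypedSupabaseClient } from "~/root";

// Inside the Login component:
const { supabase } = useOutletContext<{ supabase: TypedSupabaseClient }>();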
supabase uses Local Storage to store the OAuth tokens.
You can also check the users in the supabase project
Restrict Access to the Messages Table in a Database with Row Level Security (RLS) Policies
Add a column to our database called user_id, with a foreign key referencing the id column of the users table.
Before disabling Allow Nullable, backfill the first two messages with the logged-in user's id, which can be found in the users table.
Re-run the db_types script
supabase gen types typescript --project-id akhdfxiwrelzhhhofvly > db_types.ts
Update the policy by changing the target roles to be authenticated.
Now only signed in users will be able to view the data.
Make Cookies the User Session Single Source of Truth with Supabase Auth Helpers
Auth tokens by default are stored in the client’s session, not on the server.
Remix is loading from the server’s session, which is null
npm i @supabase/auth-helpers-remix
We need to change the mechanism for the token to use cookies
The auth helpers give us createServerClient and createBrowserClient to create the supabase instance correctly, depending on whether it's running on the server or the client.
createServerClient needs the request and response passed in within supabase.server.ts.
We need to do the same thing in the loader in root and index
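With the auth helpers, supabase.server.ts might look roughly like this (a sketch based on the @supabase/auth-helpers-remix API):

import { createServerClient } from "@supabase/auth-helpers-remix";
import type { Database } from "./db_types";

export const createSupabaseServerClient = ({
  request,
  response,
}: {
  request: Request;
  response: Response;
}) =>
  createServerClient<Database>(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_ANON_KEY!,
    { request, response }
  );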
Keep Data in Sync with Mutations Using Active Remix Loader Functions
Nothing updates when pressing the button, because the client doesn't refetch the information after the initial load.
Remix has a revalidation hook.
Supabase has an auth state change hook.
Combining the two: whenever the server and client tokens diverge (a new token, or the token is gone), refetch data from the loaders.
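A sketch of that combination inside the root component (useRevalidator is Remix's revalidation hook; supabase and serverAccessToken are assumed to be in scope from the root loader):

// import { useRevalidator } from "@remix-run/react";
const revalidator = useRevalidator();

useEffect(() => {
  const {
    data: { subscription },
  } = supabase.auth.onAuthStateChange((_event, session) => {
    if (session?.access_token !== serverAccessToken) {
      // The client token no longer matches what the server rendered with, so refetch.
      revalidator.revalidate();
    }
  });

  return () => subscription.unsubscribe();
}, [supabase, serverAccessToken, revalidator]);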
Securely Mutate Supabase Data with Remix Actions
To create a new message, we add Form from remix, which has a method post.
This is reminiscent of how plain HTML forms worked before client-side JavaScript took over.
An action is created to insert the message, including the response headers from before (passing along the cookie).
The message won’t send yet until the supabase policy is set, so we add a policy for INSERT and make sure the user is authenticated and their user_id matches the one in supabase.
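A sketch of the action (createSupabaseServerClient is the helper sketched earlier; the content field name is an assumption):

import { json, type ActionArgs } from "@remix-run/node";
import { createSupabaseServerClient } from "~/utils/supabase.server";

export const action = async ({ request }: ActionArgs) => {
  const response = new Response();
  const supabase = createSupabaseServerClient({ request, response });

  const formData = await request.formData();
  const { error } = await supabase
    .from("messages")
    .insert({ content: String(formData.get("content")) });

  // Pass the auth cookie headers back so the session stays in sync.
  return json({ error }, { headers: response.headers });
};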
Subscribe to Database Changes with Supabase Realtime
Supabase sends out updates via websockets when there is a change to the database
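A sketch of subscribing on the client with the supabase-js v2 channel API (merging the new row into state is left as a comment):

useEffect(() => {
  const channel = supabase
    .channel("messages")
    .on(
      "postgres_changes",
      { event: "INSERT", schema: "public", table: "messages" },
      (payload) => {
        // payload.new is the inserted row; merge it into local state here.
      }
    )
    .subscribe();

  return () => {
    supabase.removeChannel(channel);
  };
}, [supabase]);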
I got a new MacBook at work, 13" with an M2 chip, and I thought it would be great to spend the day setting it up. The following is a document I kept writing as I went through every step of the way. It might inspire you to do the same the next time you start a MacBook from fresh.
Brave Sync Code (Beware, this code should be private)
beach cheap hidden retire giggle gorilla tone pass length what spread march illegal episode fruit enjoy exact drive humble endless razor today follow treat boy
Brave shortcuts in URL bar
Brave Extensions
React Dev Tools
Redux Dev Tools
Apollo Dev Tools
Reader
iTerm2
Install pip3 through XCode Command Line Tools (CLT)
Create workspaces directory
Check if XCode is already installed
xcode-select -p
An M2 chip issue on brew is solved with the following:
# Warning: /opt/homebrew/bin is not in your PATH.
# - Run these three commands in your terminal to add Homebrew to your PATH:
echo '# Set PATH, MANPATH, etc., for Homebrew.' >> /Users/jeremywong/.zprofile
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> /Users/jeremywong/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
Modify Fonts
NerdFonts - Iconic font aggregator, collection, & patcher. 3,600+ icons, 50+ patched fonts: Hack, Source Code Pro, more. Glyph collections: Font Awesome, Material Design Icons, Octicons, & more
fnm install 16
npm login
npm config set loglevel="warn"
# sudo npm install netlify-cli -g
# netlify login
# npm i -g sign-bunny fortune-node parrotsay # fun little cli utilities to use
# npm i -g undollar # for removing $
# npm install -g npm-check-updates # for updating deps
# sudo npm install -g trash-cli # to add a `trash` command so you don't permanently delete files
A feature flag is a decision point in your code that can change the behavior of your application. Feature flags can either be temporary or permanent.
Temporary flags are often used to safely deploy changes to your application or to test new behaviors against old ones. After a new behavior is being used by 100% of your users, the flag is intended to be removed.
Permanent flags give you a way to control the behavior of your application at any time. You might use a permanent flag to create a kill-switch or to reveal functionality only to specific users.
Feature flags are context sensitive. The code path taken can change based on the context provided; for example, the user's identity, the plan they've paid for, or any other data.
Feature flags can be used to control which users can see each change. This decouples the act of deploying from the act of releasing.
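As a sketch of the idea (entirely illustrative; not any particular platform's API):

type FlagContext = { userId: string; plan: "free" | "pro" };

// A decision point: the code path taken depends on the evaluation context.
function isEnabled(flag: string, ctx: FlagContext): boolean {
  // In practice this consults a flag store or management platform.
  return flag === "new-checkout" && ctx.plan === "pro";
}

if (isEnabled("new-checkout", { userId: "u_123", plan: "pro" })) {
  // New behavior.
} else {
  // Old behavior, kept until the flag is retired.
}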
What do we do today
We have the ability to switch temp flags as kill switches for all customers (e.g. the new field in result analytics). Permanent flags are controlled per customer (e.g. env mapping). A sub-group of those permanent flags controls company integrations with third-party services (e.g. Terra).
What are we lacking
What we don't have is fine-grained control of flags for rollout: using context to switch flags on or off by user, company, or other groups. When introducing new flags, we don't have a standardized place to store them. See companyFeatureToggles vs. companyIntegrations vs. featureFlags. We don't highlight flag dependencies. Lastly, permanent flags are limited to a per-company basis.
Definitions
Safety valves are permanent feature flags that you can use to quickly disable or limit nonessential parts of your application to keep the rest of the application healthy.
Kill switches are permanent feature flags that allow you to quickly turn off a feature if it's performing poorly.
Circuit Breakers have the ability to switch off feature flags if they meet certain monitoring criteria.
Operational feature flags are flags around features invisible to customers, such as a new backend improvement or infrastructure change. Operational flags give DevOps teams powerful controls that they can use to improve availability and mitigate risk.
Feature Flag Management Platforms
LaunchDarkly
Split
CloudBees
Deployments
Types of Deployments
There are different types of deployments:
Canary Releases - User groups who would like to opt in
Ring Deployments - Different user segments at a time - e.g. beta or power users
Percentage-based Deployments - Start with low percentage, then move to higher. For operational changes
Each of these can be implemented using feature flags.
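For instance, a percentage-based deployment reduces to deterministic bucketing (an illustrative sketch, not how any specific platform implements it):

import { createHash } from "crypto";

// Hash user + flag into [0, 100) so each user's assignment is sticky.
function bucket(userId: string, flag: string): number {
  const hash = createHash("sha256").update(`${flag}:${userId}`).digest();
  return hash.readUInt32BE(0) % 100;
}

function isEnabledFor(userId: string, flag: string, rolloutPercent: number): boolean {
  return bucket(userId, flag) < rolloutPercent;
}

// Start at 5%, then raise toward 100 as confidence grows.
console.log(isEnabledFor("user-42", "new-pipeline", 5));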
Feature flags and blue/green deploys are complementary techniques. Although there are areas of overlap, each approach has distinct benefits, and the best strategy is to use both.
Testing
It isn’t necessary (or even possible) to test every combination of feature flags. Testing each variation of a flag in isolation (using default values for the other flags) is usually enough, unless there’s some known interaction between certain flags.
Library Code
Another decision that affects testing is whether you should use feature flags in reusable library code. I think the answer is no—flags are an application-level concern, not a library concern.
Feature Flag Clean-up
Cleaning up flags aggressively is the key to preventing technical debt from building up. There’s no royal road to flag cleanup, but there are some processes that make it manageable.
A stale flag is a temporary flag that is no longer in use and has not been cleaned up. Too many stale flags are a form of technical debt and an antipattern that you should avoid.
Documentation
Document changes
It's good practice to maintain a log of flag updates. It's even more helpful to leave a comment with every change. When something is going unexpectedly wrong, being able to quickly see if anything has changed recently (and why it did) is an invaluable asset.
Name your flags well
It's also important to help your team understand what flags are for as easily as possible. So, adopt a naming convention that makes it clear at first glance what a flag is for, what part of the system it affects, and what it does.
Configuration Management
Feature management platforms solve many of these change management problems, but I still do not recommend moving configuration data into feature flags.
Configuration parameters are typically stored in files, environment variables, or services like Consul or Redis. As services become more complex, configuration management becomes a real concern. Tasks like versioning configuration data, rolling back changes, and promoting configuration changes across environments become cumbersome and error prone.
Rather than migrate all configuration data into feature flags, I recommend introducing feature flags selectively on top of whatever configuration management mechanism is in place (files, environment variables, etc.). These flags should be introduced only on an as-needed basis. For example, imagine that you’re trying to manage a database migration via feature flags.
If you had managed your migration by moving the entire database configuration into a feature flag, perhaps by creating a multivariate database-configuration flag, you’d need to keep the flag in place permanently.
Design for Failure
Design multiple layers of redundancy. When you write code you must consider what should happen if the feature flag system fails to respond. Most feature flag APIs include the ability to specify a default option—what is to be served if no other information is available. Ensure that you have a default option and that your defaults are safe and sane.
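A sketch of the safe-default pattern (the endpoint and helper are hypothetical):

async function getFlag(name: string, defaultValue: boolean): Promise<boolean> {
  try {
    // Hypothetical flag-service endpoint.
    const res = await fetch(`https://flags.example.internal/api/${name}`);
    if (!res.ok) return defaultValue;
    const body = (await res.json()) as { enabled?: boolean };
    return body.enabled ?? defaultValue;
  } catch {
    // The flag system failed to respond: serve the safe, sane default.
    return defaultValue;
  }
}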
Flag Distribution via a Networked System
In any networked system there are two methods to distribute information. Polling is the method by which the endpoints (clients or servers) periodically ask for updates. Streaming, the second method, is when the central authority pushes the new values to all the endpoints as they change.
Technique: Polling
Pros: Simple, easily cached.
Cons: Inefficient. All clients need to connect momentarily, regardless of whether there is a change. Changes require roughly twice the polling interval to propagate to all clients. Because of long polling intervals, the system could create a "split brain" situation, in which both new flag and old flag states exist at the same time.

Technique: Streaming
Pros: Efficient at scale. Each client receives messages only when necessary. Fast propagation: changes can be pushed out to clients in real time.
Cons: Requires the central service to maintain connections for every client. Assumes a reliable network.
Relay Proxy
For those customers that have the need for another layer of redundancy on top of the four layers provided by our core service (multiple AWS availability zones, the Fastly CDN, local caching, and default values), we also offer the LaunchDarkly relay proxy (formerly known as LD-relay). LD-relay is a small application in a Docker container that can be deployed in your own environment, either in the cloud of your choice or on premises in your datacenter(s).
The Relay Proxy is a small Go application that connects to the LaunchDarkly streaming API and proxies that connection to clients within an organization’s network.
We recommend that customers use the Relay Proxy if they are on an Enterprise plan and primarily use feature flagging in server-side applications. The Relay Proxy adds an additional layer of redundancy in the case of a LaunchDarkly outage.
The following was written for interns starting out with Javascript at Clear Labs.
Base Foundation
Whether this is your first time with JavaScript or you're a seasoned developer, you should have some base knowledge prior to working with React. While you can learn a framework, it's more beneficial to understand the language it is written in. For example, what are promises, and how does JavaScript handle asynchronous actions? What is the event loop? And how does JavaScript fit in?
Here are some resources to get you started
Freecodecamp - if you have no foundational knowledge of Javascript or need a refresher for the Javascript syntax, start here
MDN Javascript - Mozilla’s documentation on where to get started with Javascript
MDN Promises - Mozilla’s documentation on promises
Async functions - Mozilla’s documentation on handling promises using async functions
Error Handling - Mozilla’s documentation about browser javascript errors
Going Deeper
Many developers find JavaScript hard because it started as a scripting language, the syntax looks ugly, and you get TypeErrors if you're not careful. That said, with some major changes to the language since Node.js and Google's V8 engine, JavaScript has become a more seasoned programming language. You can define classes, write generator functions, handle asynchronous events, and enumerate over lists much more easily.
Once you’ve started with the basics above, feel free to continue to hone your skills with a deeper understanding of Javascript.
ES2015+ - a new set of functionality in Javascript that allows you to write more effective code. See the Ecmascript section below for more information.
You Don’t Know JS - A series of books written by Kyle Simpson that talks about diving deep into the core mechanisms of Javascript
Ecmascript
JavaScript is a dialect of ECMAScript: ECMAScript at its core, building on top of it. Languages such as ActionScript, JavaScript, and JScript all use ECMAScript as their core. As a comparison, AS/JS/JScript are 3 different cars, but they all use the same engine; each of their exteriors is different, though, and there have been several modifications done to each to make it unique.
The history: Brendan Eich created Mocha, which became LiveScript, and later JavaScript. Netscape presented JavaScript to Ecma International, which develops standards, and it was standardized as ECMA-262, aka ECMAScript.
It’s important to note that Brendan Eich’s “JavaScript” is not the same JavaScript that is a dialect of ECMAScript. He built the core language which was renamed to ECMAScript, which differs from the JavaScript which browser-vendors implement nowadays.
If your base understanding of Javascript is prior to ES6, you’ll want to read up on the basics. To start, arrow functions, classes, let and const statements are used throughout the app.
Arrow Functions
Often times we have nested functions in which we would like to preserve the context of this from its lexical scope. An example is shown below:
function Person(name) {
  this.name = name;
}

Person.prototype.prefixName = function (arr) {
  return arr.map(function (character) {
    return this.name + character; // Cannot read property 'name' of undefined
  });
};
One common solution to this problem is to store the context of this using a variable:
function Person(name) {
  this.name = name;
}

Person.prototype.prefixName = function (arr) {
  var that = this; // Store the context of this
  return arr.map(function (character) {
    return that.name + character;
  });
};
We can also pass in the proper context of this:
function Person(name) {
  this.name = name;
}

Person.prototype.prefixName = function (arr) {
  return arr.map(function (character) {
    return this.name + character;
  }, this);
};
As well as bind the context:
function Person(name) {
  this.name = name;
}

Person.prototype.prefixName = function (arr) {
  return arr.map(
    function (character) {
      return this.name + character;
    }.bind(this)
  );
};
Using Arrow Functions, the lexical value of this isn't shadowed, and we can re-write the above as shown:
function Person(name) {
  this.name = name;
}

Person.prototype.prefixName = function (arr) {
  return arr.map((character) => this.name + character);
};
Best Practice: Use Arrow Functions whenever you need to preserve the lexical value of this.
Arrow Functions are also more concise when used in function expressions which simply return a value:
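For example:

// ES5
var squares = arr.map(function (x) { return x * x; });

// ES6
const squares = arr.map((x) => x * x);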
Best Practice: Use Arrow Functions in place of function expressions when possible.
Template Literals
Using Template Literals, we can now construct strings that have special characters in them without needing to escape them explicitly.
var text = "This string contains \"double quotes\" which are escaped.";
let text = `This string contains "double quotes" which don't need to be escaped anymore.`;
Template Literals also support interpolation, which makes the task of concatenating strings and values:
var name = "Tiger";
var age = 13;
console.log("My cat is named " + name + " and is " + age + " years old.");
Much simpler:
const name = "Tiger";
const age = 13;
console.log(`My cat is named ${name} and is ${age} years old.`);
In ES5, we handled new lines as follows:
var text = "cat\n" + "dog\n" + "nickelodeon";
Or:
var text = ["cat", "dog", "nickelodeon"].join("\n");
Template Literals will preserve new lines for us without having to explicitly place them in:
let text = `cat
dog
nickelodeon`;
Template Literals can accept expressions, as well:
let today = new Date();
let text = `The time and date is ${today.toLocaleString()}`;
Classes
Prior to ES6, we implemented classes by creating a constructor function and adding properties by extending the prototype. ES6 gives us class syntax, including inheritance with extends and super:
class Personal extends Person {
  constructor(name, age, gender, occupation, hobby) {
    super(name, age, gender);
    this.occupation = occupation;
    this.hobby = hobby;
  }

  incrementAge() {
    super.incrementAge();
    this.age += 20;
    console.log(this.age);
  }
}
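The snippet above assumes a base Person class along these lines (a sketch, since the base class isn't shown in these notes):

class Person {
  constructor(name, age, gender) {
    this.name = name;
    this.age = age;
    this.gender = gender;
  }

  incrementAge() {
    this.age += 1;
  }
}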
Best Practice: While the syntax for creating classes in ES6 obscures how implementation and prototypes work under the hood, it is a good feature for beginners and allows us to write cleaner code.
Let / Const
Besides var, we now have access to two new identifiers for storing values: let and const. Unlike var, let and const statements are not hoisted to the top of their enclosing scope.
An example of using var:
var snack = "Meow Mix";

function getFood(food) {
  if (food) {
    var snack = "Friskies";
    return snack;
  }
  return snack;
}

getFood(false); // undefined
However, observe what happens when we replace var using let:
let snack = "Meow Mix";

function getFood(food) {
  if (food) {
    let snack = "Friskies";
    return snack;
  }
  return snack;
}

getFood(false); // 'Meow Mix'
This change in behavior highlights that we need to be careful when refactoring legacy code which uses var. Blindly replacing instances of var with let may lead to unexpected behavior.
Note: let and const are block scoped. Therefore, referencing block-scoped identifiers before they are defined will produce a ReferenceError.
console.log(x); // ReferenceError: x is not defined
let x = "hi";
Best Practice: Leave var declarations inside of legacy code to denote that it needs to be carefully refactored. When working on a new codebase, use let for variables that will change their value over time, and const for variables which cannot be reassigned.
Destructuring allows us to extract values from arrays and objects (even deeply nested) and store them in variables with a more convenient syntax.
Destructuring Arrays
var arr = [1, 2, 3, 4];
var a = arr[0];
var b = arr[1];
var c = arr[2];
var d = arr[3];

let [a, b, c, d] = [1, 2, 3, 4];
console.log(a); // 1
console.log(b); // 2
Destructuring Objects
var luke = { occupation: "jedi", father: "anakin" };
var occupation = luke.occupation; // 'jedi'
var father = luke.father; // 'anakin'

let luke = { occupation: "jedi", father: "anakin" };
let { occupation, father } = luke;
console.log(occupation); // 'jedi'
console.log(father); // 'anakin'
Parameters
In ES5, we had varying ways to handle functions which needed default values, indefinite arguments, and named parameters. With ES6, we can accomplish all of this and more using more concise syntax.
Default Parameters
function addTwoNumbers(x, y) {
  x = x || 0;
  y = y || 0;
  return x + y;
}
In ES6, we can simply supply default values for parameters in a function:
function addTwoNumbers(x = 0, y = 0) {
  return x + y;
}

addTwoNumbers(2, 4); // 6
addTwoNumbers(2); // 2
addTwoNumbers(); // 0
Symbols
Symbols have existed prior to ES6, but now we have a public interface to using them directly. Symbols are immutable and unique and can be used as keys in any hash.
Symbol();
Calling Symbol() or Symbol(description) will create a unique symbol that cannot be looked up globally. A use case for Symbol() is to patch objects or namespaces from third parties with your own logic, while being confident that you won't collide with updates to that library. For example, if you wanted to add a method refreshComponent to the React.Component class and be certain that you didn't trample a method they add in a later update:
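// Sketch: a symbol-keyed method can't collide with React's own string-named methods.
const refreshComponent = Symbol("refreshComponent");

React.Component.prototype[refreshComponent] = function () {
  // Custom logic, safely namespaced behind the symbol.
};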
Symbol.for(key) will create a Symbol that is still immutable and unique, but can be looked up globally. Two identical calls to Symbol.for(key) will return the same Symbol instance. NOTE: This is not true for Symbol(description):
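Symbol("foo") === Symbol("foo"); // false
Symbol.for("foo") === Symbol.for("foo"); // true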
A common use case for Symbols, and in particular with Symbol.for(key) is for interoperability. This can be achieved by having your code look for a Symbol member on object arguments from third parties that contain some known interface. For example:
function reader(obj) {
  const specialRead = Symbol.for("specialRead");
  if (obj[specialRead]) {
    const reader = obj[specialRead]();
    // do something with reader
  } else {
    throw new TypeError("object cannot be read");
  }
}
A notable example of Symbol use for interoperability is Symbol.iterator which exists on all iterable types in ES6: Arrays, strings, generators, etc. When called as a method it returns an object with an Iterator interface.
Maps are a much-needed data structure in JavaScript. Prior to ES6, we created hash maps through objects:
var map = new Object();
map[key1] = "value1";
map[key2] = "value2";
However, this does not protect us from accidentally overriding functions with specific property names:
> getOwnProperty({ hasOwnProperty: 'Hah, overwritten' }, 'Pwned');
> TypeError: Property 'hasOwnProperty' is not a function
Actual Maps allow us to set, get and search for values (and much more).
let map = new Map();
> map.set('name', 'david');
> map.get('name'); // david
> map.has('name'); // true
The most amazing part of Maps is that we are no longer limited to just using strings. We can now use any type as a key, and it will not be type-cast to a string.
Note: Maps compare keys by reference, so non-primitive keys such as functions or objects won't match in map.get() unless you hold the original reference. As such, stick to primitive values such as Strings, Booleans and Numbers.
We can also iterate over maps using .entries():
for (let [key, value] of map.entries()) {
  console.log(key, value);
}
Promises
Promises allow us to turn our horizontal code (callback hell):
func1(function (value1) {
  func2(value1, function (value2) {
    func3(value2, function (value3) {
      func4(value3, function (value4) {
        func5(value4, function (value5) {
          // Do something with value 5
        });
      });
    });
  });
});
Into vertical code:
func1(value1)
  .then(func2)
  .then(func3)
  .then(func4)
  .then(func5, (value5) => {
    // Do something with value 5
  });
Prior to ES6, we used bluebird or Q. Now we have Promises natively:
new Promise((resolve, reject) =>
  reject(new Error("Failed to fulfill Promise"))
).catch((reason) => console.log(reason));
Where we have two handlers, resolve (a function called when the Promise is fulfilled) and reject (a function called when the Promise is rejected).
Benefits of Promises: Error Handling using a bunch of nested callbacks can get chaotic. Using Promises, we have a clear path to bubbling errors up and handling them appropriately. Moreover, the value of a Promise after it has been resolved/rejected is immutable - it will never change.
Here is a practical example of using Promises:
var request = require("request");

return new Promise((resolve, reject) => {
  request.get(url, (error, response, body) => {
    if (body) {
      resolve(JSON.parse(body));
    } else {
      resolve({});
    }
  });
});
We can also parallelize Promises to handle an array of asynchronous operations by using Promise.all():
let urls = [
  "/api/commits",
  "/api/issues/opened",
  "/api/issues/assigned",
  "/api/issues/completed",
  "/api/issues/comments",
  "/api/pullrequests",
];

let promises = urls.map((url) => {
  return new Promise((resolve, reject) => {
    $.ajax({ url: url }).done((data) => {
      resolve(data);
    });
  });
});

Promise.all(promises).then((results) => {
  // Do something with results of all our promises
});
Generators
Similar to how Promises allow us to avoid callback hell, Generators allow us to flatten our code, giving our asynchronous code a synchronous feel. Generators are essentially functions whose execution we can pause and which subsequently return the value of an expression.
A simple example of using generators is shown below:
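function* sillyGenerator() {
  yield 1;
  yield 2;
}

var generator = sillyGenerator();
console.log(generator.next()); // { value: 1, done: false }
console.log(generator.next()); // { value: 2, done: false }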
Where next will allow us to push our generator forward and evaluate a new expression. While the above example is extremely contrived, we can utilize Generators to write asynchronous code in a synchronous manner:
// Hiding asynchronicity with Generators
function request(url) {
  getJSON(url, function (response) {
    generator.next(response);
  });
}
And here we write a generator function that will return our data:
function* getData() {
  var entry1 = yield request("https://some_api/item1");
  var data1 = JSON.parse(entry1);
  var entry2 = yield request("https://some_api/item2");
  var data2 = JSON.parse(entry2);
}
By the power of yield, we are guaranteed that entry1 will have the data needed to be parsed and stored in data1.
While generators allow us to write asynchronous code in a synchronous manner, there is no clear and easy path for error propagation. As such, we can augment our generator with Promises:
function request(url) {
  return new Promise((resolve, reject) => {
    getJSON(url, resolve);
  });
}
And we write a function which will step through our generator using next which in turn will utilize our request method above to yield a Promise:
function iterateGenerator(gen) {
  var generator = gen();
  (function iterate(val) {
    // Pass the resolved value back into the generator so the yield receives it
    var ret = generator.next(val);
    if (!ret.done) {
      ret.value.then(iterate);
    }
  })();
}
By augmenting our Generator with Promises, we have a clear way of propagating errors through the use of our Promise .catch and reject. To use our newly augmented Generator, it is as simple as before:
iterateGenerator(function* getData() {
  var entry1 = yield request("https://some_api/item1");
  var data1 = JSON.parse(entry1);
  var entry2 = yield request("https://some_api/item2");
  var data2 = JSON.parse(entry2);
});
We were able to reuse our implementation to use our Generator as before, which shows their power. While Generators and Promises allow us to write asynchronous code in a synchronous manner while retaining the ability to propagate errors in a nice way, we can actually begin to utilize a simpler construction that provides the same benefits: async-await.
Async Await
While this is actually an ES2017 feature, async await allows us to perform the same thing we accomplished using Generators and Promises with less effort:
var request = require("request");

function getJSON(url) {
  return new Promise(function (resolve, reject) {
    request(url, function (error, response, body) {
      resolve(body);
    });
  });
}

async function main() {
  var data = await getJSON("https://some_api/items"); // URL is illustrative
  console.log(data); // NOT undefined!
}

main();
Under the hood, it performs similarly to Generators. I highly recommend using them over Generators + Promises. A great resource for getting up and running with ES7 and Babel can be found here.
Getter and setter functions
ES6 has started supporting getter and setter functions within classes. Consider the following example:
class Employee {
  constructor(name) {
    this._name = name;
  }

  get name() {
    if (this._name) {
      return "Mr. " + this._name.toUpperCase();
    } else {
      return undefined;
    }
  }

  set name(newName) {
    if (newName == this._name) {
      console.log("I already have this name.");
    } else if (newName) {
      this._name = newName;
    } else {
      return false;
    }
  }
}

var emp = new Employee("James Bond");

// uses the get method in the background
if (emp.name) {
  console.log(emp.name); // Mr. JAMES BOND
}

// uses the setter in the background
emp.name = "Bond 007";
console.log(emp.name); // Mr. BOND 007
Modern browsers also support getter/setter functions in objects, and we can use them for computed properties, adding listeners, and preprocessing before setting/getting:
var person = {
  firstName: "James",
  lastName: "Bond",
  get fullName() {
    console.log("Getting FullName");
    return this.firstName + " " + this.lastName;
  },
  set fullName(name) {
    console.log("Setting FullName");
    var words = name.toString().split(" ");
    this.firstName = words[0] || "";
    this.lastName = words[1] || "";
  },
};

person.fullName; // James Bond
person.fullName = "Bond 007";
person.fullName; // Bond 007
ES6 Modules
Prior to ES6, we used libraries such as Browserify to create modules on the client-side, and require in Node.js. With ES6, we can now directly use modules of all types (AMD and CommonJS).
Exporting in ES6
With ES6, we have various flavors of exporting. We can perform Named Exports:
export let name = 'David';
export let age = 25;
As well as exporting a list of objects:
function sumTwo(a, b) {
  return a + b;
}

function sumThree(a, b, c) {
  return a + b + c;
}

export { sumTwo, sumThree };
We can also export functions, objects and values (etc.) simply by using the export keyword:
export function sumTwo(a, b) {
  return a + b;
}

export function sumThree(a, b, c) {
  return a + b + c;
}
And lastly, we can export default bindings:
function sumTwo(a, b) {
  return a + b;
}

function sumThree(a, b, c) {
  return a + b + c;
}

let api = {
  sumTwo,
  sumThree,
};

export default api;

/* Which is the same as
 * export { api as default };
 */
Best Practices: Always use the export default method at the end of the module. It makes it clear what is being exported and saves time otherwise spent figuring out what name a value was exported as. Moreover, the common practice in CommonJS modules is to export a single value or object. By sticking to this paradigm, we make our code easily readable and allow ourselves to interoperate between CommonJS and ES6 modules.
Importing in ES6
ES6 provides us with various flavors of importing. We can import an entire file:
import "underscore";
It is important to note that simply importing an entire file will execute all code at the top level of that file.
Similar to Python, we have named imports:
import { sumTwo, sumThree } from "math/addition";
We can also rename the named imports:
import {
  sumTwo as addTwoNumbers,
  sumThree as sumThreeNumbers,
} from "math/addition";
In addition, we can import all the things (also called namespace import):
import * as util from "math/addition";
Lastly, we can import a list of values from a module:
import * as additionUtil from "math/addition";

const { sumTwo, sumThree } = additionUtil;
Importing from the default binding looks like this:
import api from "math/addition";
// Same as: import { default as api } from 'math/addition';
While it is better to keep exports simple, we can sometimes mix default and named imports if needed. When we are exporting like this:
// foos.js
export { foo as default, foo1, foo2 };
We can import them like the following:
import foo, { foo1, foo2 } from "foos";
When importing a module exported using CommonJS syntax (such as React), we can do:
import React from "react";
const { Component, PropTypes } = React;
This can also be simplified further, using:
import React, { Component, PropTypes } from "react";
Note: Values that are exported are bindings, not references. Therefore, changing the binding of a variable in one module will affect the value within the exported module. Avoid changing the public interface of these exported values.
Additional Resources
In addition to those features of ES6+, you’ll notice other features that you can incrementally learn as you go along. Here’s an incomplete list.
The following was written for interns starting out with browsers at Clear Labs.
JavaScript was initially developed as a scripting language for the browser. The language has since expanded into servers, IoT devices, and serverless functions. But let's take a step back and talk more about its initial use case: browsers.
Back in the early days of the Web, developers wanted to handle more than reading documents. Forms were introduced to start this interactivity, and soon developers wanted more APIs. This set of APIs for browsers, known as the DOM APIs, became the way a developer could interact with the browser using JavaScript. Over the years, it has matured into a large set of APIs.
You can find a separate wiki page for the DOM APIs that we use for our app.
Performance
The DOM, or Document Object Model, is a representation of the HTML on the page. The browser parses the HTML and puts it into this representation. In addition, the browser also parses the CSS and places it in a similar representation known as the CSSOM. When these two are complete, a paint event can occur, which shows the page to the user.
JavaScript's execution is slightly different from HTML and CSS. If JavaScript is encountered prior to CSSOM completion, it can block the browser's paint until that JavaScript executes. This phenomenon, known as blocking, has real effects on performance.
For a deeper dive into browser performance, here are two (paid) books.
High Performance Web Sites - Written in 2007, still holds value in how browsers run. Some syntax has been updated, but the general advice is sound. It is highly likely you can find this book for free
Even Faster Websites - Written in 2009, a good follow-up to “High Performance Web Sites” that tackles additional topics about Javascript, the Browser, and the Network
To understand blocking, you have to understand the event loop. The following resource is a great primer on the event loop.
What the heck is the event loop anyway? - A Youtube video conference talk on how the event loop works. It also goes over some special topics of multi-threading with Javascript.
Event Handling
One of Javascript’s purposes is to handle events from the user. You could write some code like this:
var input = document.getElementById("input-text-username");

input.onchange = (event) => {
  // Do something with the event
  console.log(event);
};
The onchange property is assigned a callback: a function that gets triggered when the event fires. Any event that takes place on the DOM can have a callback, for example focusing on the element (focus) or hovering over it (mouseover).
The first number is the MAJOR version. The next is the MINOR version. The last is the PATCH version.
Patch Update
In our example table above, react-dates has a patch version update.
21.5.0 -> 21.5.1
The last digit changed from 0 to 1. That means the version is backwards compatible.
Usually this means the package has bug fixes.
You can safely update the package.json with this package without doing any checks.
Minor Update
In our example table above, normalizr has a minor version update.
3.4.1 -> 3.5.0
The second digit changed from 4 to 5. That means the version should be backwards compatible.
Usually this means the package has features added.
You can sometimes safely update the package.json with this package.
Use your intuition to decide whether you need to check the package in the app.
For example, if the package type is a dev dependency, most likely you don’t have to make changes.
The example package normalizr would fall under this case, and you can safely upgrade.
If there’s a new API or function worth exploring, make some changes and see how they work, if they apply to our application.
Major Update
In our example table above, babel-jest has a major version update.
24.9.0 -> 25.1.0
The first digit changed from 24 to 25. That means the version is not backwards compatible.
Usually this means the package API has changed.
In some cases, it may be because they have dropped support for an old version of Node. YMMV
You can never safely update the package.json with this package.
Do the following:
Check the CHANGELOG.md or releases Github page. Figure out what the change is
If there are API changes, read up on what the changes are. If they are fundamental and big, do not add. Make a task ticket to upgrade.
Sometimes the library might be popular. They may have a blog post on this. (e.g. Storybook, Apollo, React, and Styled-Components)
If it’s for dropped support for an older version of Node, go ahead and upgrade
For all other changes, upgrade locally, then see if anything in the App breaks. Also check Storybook and tests to see if anything breaks.
Be wary of major changes. When in doubt, ask a teammate.
This article was written as part of our initial docs. I have many more articles about React, and I’m debating whether I should cover them in a single article, or multiple. Stay tuned.
At Clear Labs, the web app is a front-end application built on top of React. React is a javascript library that, when paired with other libraries, creates a front-end framework. In our project, we have React on the front-end and nginx serving the assets on the back-end.
If you are starting React with no previous knowledge, please start with the official docs.
Once you have familiarized yourself with the library, play around with it on Codesandbox or on your local system using Create React App. If you can build yourself a basic UI, continue reading this wiki.
Base Foundation
To build with React, each developer should hone their vanilla Javascript knowledge. Please refer to the Javascript wiki to see if you have any missing gaps in your knowledge base.
A must for each developer onboarding is a clear understanding of how React works. This includes the following:
What are React’s lifecycle functions? And how are they supplemented with React hooks?
Why would I use a React class component over a functional component? And when?
This post tries to address these questions and many more.
Newer React Functionality (React v15+)
The application uses many techniques that are worth highlighting because we've developed our own set of best practices to follow.
React Context
React Hooks
React Performance APIs (useMemo, useCallback)
Supporting Libraries
Many supporting libraries help support the development of the app. Most of these supporting libraries are open source and have dedicated wiki pages. Here are the highlights:
React-Final-Form
Downshift
d3
i18next
Luxon (migrating from moment)
Components
Our project includes Storybook, an interactive UI tool to develop and document components. In each component, an extra js file is created with the stories suffix. E.g. index.stories.js. This helps with developing components on their own and reduces overhead with creating component properties.
Refactoring class components to functional components
Lifecycle functions can be replaced with useEffect. But be careful: as we mentioned in useEffect vs useLayoutEffect, useEffect is asynchronous, and lifecycle functions aren't a 1:1 match.
componentDidMount() {
  // do something
}

// now becomes

useEffect(() => {
  // do something
}, []);
Building Components
While the previous section introduced us to components, this section expands on how we write components.
Class or Function?
When creating a new component, start off with a function component. What is a function component?
const FunctionComponent = (props) => <div>Here's the JSX</div>;
A function component is easier markup to read. To React, a function component and a class component are largely indistinguishable from the outside. As developers, we aim for clean code. Ask yourself the following questions to decide whether you might need a class.
Do we need lifecycle functions? If this is yes, evaluate whether you can use Hooks instead. If not, use a class.
Do we need a constructor? Rarely. If you need one, ask what special setup you're doing to state, or what the justification is for other constructor needs.
Does the component need private or public methods? On a rare occasion, we may want to expose a public class method. Use a class.
Maybe there are private methods a class should have. Use a class.
In general, most components are function components. With the introduction of hooks, function components can also have state. We have our own section about hooks, too.
Component or PureComponent?
If using a class, we can further ask whether to extend PureComponent vs. Component. PureComponent implements shouldComponentUpdate with a shallow prop and state comparison, skipping re-renders when nothing shallow has changed.
Compound Components
Compound components allow you to create and use components which share state implicitly.
Other Related Articles
I’ve written a few other React articles, as shown below:
The following guide is a modified version that we use at Clear Labs dev team. It’s a starting point for team dev work and contribution.
When contributing to this repository, please first make sure a ticket is filed for the change, whichever ticketing system is used.
At Clear Labs, we use JIRA, but the same can be done for Github issues, or any other ticketing system.
Please note we have a code of conduct. Please follow it in all your interactions with the project.
How To Contribute
When beginning development, you will need to create a git branch. See Git Branches
for more information about naming your git branch.
Git Branches
The app has three main branches.
develop ➡ Maps to the Development environment
main ➡ Maps to the Production environment
release ➡ Maps to the released versions on the Production environment (we have slow release cycles)
In development, a developer will create a feature branch, named after a ticket number, e.g. ENG-2120.
When the ticket is ready to test, the developer will create a pull request (PR) against the develop branch.
When a set of features is completed, a PR will be created between the develop branch and the main branch.
Before the PR is merged, the developer needs to tag the develop branch with the proper version tag.
QA will approve this PR when they are ready to upgrade the QA environment with the developer’s latest changes.
When a set of features is tested, a developer needs to create a PR between the main and release branches.
When QA approves this PR, the developer will tag and merge this PR.
Naming Scheme for CI
Name your branches with these prefixes. This will test and build the application in our CI.
ENG-*
hotfix-*
feature-*
Commits
All commits need to contain a ticket number. If a commit does not contain a ticket number, the push to Bitbucket will not be allowed.
Example:
git commit -m "ENG-2120 resolve breaking change from GraphQL API for test runs"
In case a commit does not contain a ticket number, you have a few strategies to resolve this:
rebase against develop. git rebase develop -i
if it is the latest commit, you can amend it. git commit --amend
Pull Request Process
Ensure any install or build dependencies are removed before the end of the layer when doing a build. Please use the .gitignore file for ignoring unnecessary files. Make sure all commit messages have a JIRA ticket tag. e.g. git commit -m "ENG-100 commit message"
Update the README.md with details of changes to the interface; this includes new environment variables, exposed ports, useful file locations, and container parameters. If there are changes to development, please update the development guide.
If creating a PR to the main branch, tag the develop branch with a bump in the version. The same goes for a PR to release by tagging main. For a develop ➡ main PR, take the base version, add a hyphen, and concat the date (mm/dd) plus an incrementor, e.g. v1.6.0.0.1-Feb.01.1. For a main ➡ release PR, give the version, e.g. v1.6.1. For additional information about versioning, please refer to the next section.
JIRA should add a list of commits going into this PR. If not, please add them with the JIRA ticket tag.
You may merge the Pull Request in once you have the sign-off of one other developer, or if you do not have permission to do that, you may request the second reviewer to merge it for you.
Prereleases are used for git tagging between the develop and main branches. This is denoted by an alpha-{number}, e.g. v0.9.13.alpha-1
Releases are versioned without prerelease words, e.g. v0.9.13
For hotfixes, bump the patch version. e.g. v0.9.13 -> v0.9.14
Upon later inspection, we no longer use prereleases.
🚨 Deprecation Notice
Moving forward, release-candidate will be deprecated in favor of using main without release.
Code of Conduct
Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
nationality, personal appearance, race, religion, or sexual identity and
orientation.
Our Standards
Examples of behavior that contributes to creating a positive environment
include:
Using welcoming and inclusive language
Being respectful of differing viewpoints and experiences
Gracefully accepting constructive criticism
Focusing on what is best for the community
Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
The use of sexualized language or imagery and unwelcome sexual attention or advances
Trolling, insulting/derogatory comments, and personal or political attacks
Public or private harassment
Publishing others’ private information, such as a physical or electronic address, without explicit permission
Other conduct which could reasonably be considered inappropriate in a professional setting
Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the Engineering Manager. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership.
These notes are a guide I wrote while coding the initial part of the application. They start with fundamentals and continue into specific testing edge cases.
Philosophy
Write tests. Not too many. Mostly integration.
Guillermo Rauch
The more your tests resemble the way your software is used, the more confidence they can give you.
Kent C. Dodds
This project focuses mainly on integration tests. Why? We shouldn’t mock too much as the tests themselves become unmaintainable.
When you make changes to code whose tests have a lot of mocking, the tests also have to be updated, mostly manually, and we end up creating more work for the developer than it's actually worth.
Code coverage also isn’t the best factor to aim for. Yes, we should have tests to cover our code. No, we shouldn’t aim for 100% coverage.
Pareto's law applies here: in most cases, we expect a few tests to cover most use cases. At some point, there are diminishing returns.
Out of the box, the testing framework and its tools are installed with dependencies.
For more information, check out the installation section of the README.
Unit tests are run before building the Docker container.
Tests are run with Jest, which ships with the Expect assertion library.
As mentioned in the testing philosophy, we try not to focus on mocking. Sometimes this is inevitable, and we have included Enzyme for shallow rendering.
Use shallow sparingly. For more, read this article.
yarn test
Additional Commands
If there are any jest flags you want to add to your tests, like watch mode or coverage, you can add those flags to the command.
Watch
# Run tests in watch mode
yarn test --watch
Coverage
# Run a coverage report
yarn test --coverage

# This will build a `coverage` folder that can be viewed for a full coverage report
Single file or folder
# Run tests over a single file
yarn test src/path/to/file

# Run tests over a folder
yarn test src/path/to/folder
State Management Testing
Test all actions, sagas, and reducers.
Action tests are ensuring the action creators create the proper actions
Reducer tests are ensuring the state has been changed properly
Saga tests are more for E2E testing, making sure all side-effects are accounted for
Move data fetching code or side effects to componentDidUpdate.
If you’re updating state whenever props change, refactor your code to use memoization techniques or move it to static getDerivedStateFromProps. Learn more at: https://fb.me/react-derived-state
Rename componentWillReceiveProps to UNSAFE_componentWillReceiveProps to suppress this warning in non-strict mode. In React 17.x, only the UNSAFE_ name will work. To rename all deprecated lifecycles to their new names, you can run npx react-codemod rename-unsafe-lifecycles in your project source folder.
Please update the following components: *
With a move to React v16.8 -> v16.9, componentWillMount, componentWillReceiveProps, and componentWillUpdate lifecycle methods have been renamed.
They will be deemed unsafe to use. Our library has updated already, but some libraries may still use this.
Known libraries with issues:
react-dates
react-outside-click-handler (dev dependency to react-dates)
Invariant Violation: Could not find “store” in the context of “Connect(Form(Form))”.
Either wrap the root component in a “Provider”, or pass a custom React context provider
to “Provider” and the corresponding React context consumer to Connect(Form(Form))
in connect options.
Solution
Add imports
import { Provider } from "react-redux";
import configureStore from "redux-mock-store";
Create the mock store. Wrap renderer with provider.
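A sketch of that wrapping (the component name is illustrative):

it("renders a connected component", () => {
  const mockStore = configureStore();
  const store = mockStore({});
  const tree = renderer
    .create(
      <Provider store={store}>
        <TestedComponent />
      </Provider>
    )
    .toJSON();
  expect(tree).toMatchSnapshot();
});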
You’ve included redux in your test, but you might get the following message.
[redux-saga-thunk] There is no thunk state on reducer
If this is the case, go back to your mock store and include thunk as a key.
it("renders a component that needs to thunk", () => { const mockStore = configureStore(); const store = mockStore({ thunk: {} }); // Be sure to include this line with the thunking const tree = renderer .create( <Provider store={store}> <TestedComponent /> </Provider> ) .toJSON(); expect(tree).toMatchSnapshot();});
i18n Error
Sometimes, an i18n provider isn’t given. The error doesn’t appear to be useful.
TypeError: Cannot read property ‘ready’ of null
Check if the component or a child component uses the Translation component. If so, Translation requires a context Provider wrapped around it.
Solution
Add imports
import { I18nextProvider } from "react-i18next";
import i18n from "../../../test-utils/i18n-test";
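Then wrap the rendered tree (the component name is illustrative):

const tree = renderer
  .create(
    <I18nextProvider i18n={i18n}>
      <TestedComponent />
    </I18nextProvider>
  )
  .toJSON();

expect(tree).toMatchSnapshot();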
Rerun the test and check the snapshot. If the snapshot looks good, add the -u flag to update the snapshot.
Apollo Error
If the component requires an apollo component, you will want to pass in a mock provider.
Invariant Violation: Could not find “client” in the context or passed in as a prop.
Wrap the root component in an “ApolloProvider”, or pass an ApolloClient instance in via props.
Add imports
import { MockedProvider } from "@apollo/client/testing";
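Then wrap the component under test (mocks would hold your query fixtures; the component name is illustrative):

const tree = renderer
  .create(
    <MockedProvider mocks={[]} addTypename={false}>
      <TestedComponent />
    </MockedProvider>
  )
  .toJSON();

expect(tree).toMatchSnapshot();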
TypeError: Cannot read property ‘createLTR’ of undefined
Solution
Solve by adding the following to the top of the test file
import "react-dates/initialize";
As of v13.0.0 of react-dates, this project relies on react-with-styles. If you want to continue using CSS stylesheets and classes, there is a little bit of extra set-up required to get things going. As such, you need to import react-dates/initialize to set up class names on our components. This import should go at the top of your application as you won’t be able to import any react-dates components without it.
Final Form
Warning: Field must be used inside of a ReactFinalForm component
When you use the test renderer, this won’t work.
For an exhaustive way of triggering events, check out this post.
The preliminary solution is to run act from the react-test-renderer library.
Currently, there is no documentation for this, so it's best to read the code.
Here’s how we use act.
it("creates component with useEffect", () => { // Create your tree const tree = renderer.create( <TestComponentWithEffect>My Effect</TestComponentWithEffect> ); // Tell the renderer to act, pushing the effect through renderer.act(() => {}); expect(tree.toJSON()).toMatchSnapshot();});// Drawbacks:// - Can't handle flushing (yet)
This will be revisited as the API matures.
Dealing with Time
If you need to mock time, you could use this implementation.
const constantDate = new Date("2019-05-16T04:00:00");

/* eslint no-global-assign:off */
Date = class extends Date {
  constructor() {
    super();
    return constantDate;
  }
};
I’ve been a bit fascinated by an episode of the Cortex podcast about Yearly Themes.
In the episode, Myke and Grey discuss what the overarching theme of the year is. If I were to make up a theme for 2018, it would be reinvention.
I'm at an inflection point in my life, one where I'm ready to let go of my past and look forward to things to come.
Recap
January-February
I was working at Inform in our new office - a co-working space that felt more like a downgrade. I wanted to line up another job so I could quit.
As a co-worker said to me later, “What are you still doing here?”. Touché.
Traveled to Las Vegas to visit friends and get my TSA pre-check.
Around mid-February, I interviewed and accepted my new place of work. Clear Labs. Hired on as a web developer.
March-May
I started my new gig. Forgot how much work it is being at a start-up, but quickly landed on my feet as we built the software from the ground up.
Traveled to Portland for a quick visit to see friends.
Ended April with a vacation to Norway and Sweden with my friend, Teagan. We saw the fjords, viking ships, and some questionable art.
The Questionable Art Pose
Visited the Color Factory & Ice Cream Museum. Watched my sister graduate college. Found out my co-workers at Inform got laid off.
June-July
Participated in my first Kubb competition. Was dating someone I thought could be good for me…
Got Life Insurance. Really didn’t think I’d need this, but after the financial incentives, I had to get it.
Reeling back from slow heart-ache.
September
The busiest month this year. Married my friends. Watched different friends get married in Colorado Springs. Started dating someone substantial. Went to St. Louis for the 2nd time for the Strangeloop Conference.
October-November
Participated in my very first triathlon. It was a relay, and I did the biking portion. We got third!