HashiCorp Learn

The HashiCorp learning platform, found at https://learn.hashicorp.com

Local Development

You can work on the website locally with hot-reloading content and styles. There are two options for local development:

  • Node.js: If you have Node.js installed, you can get started by installing dependencies with npm install, then run the site in local development mode with npm run dev. This is much faster both to start up and to reload, and is recommended for core developers and frequent contributors. You can run npm run build to generate the files for the static site without starting the local server. For troubleshooting, you can run npm run static on your local machine to reproduce build errors locally.

  • Docker: You can run this website with only Docker if you do not have Node.js installed on your machine. You must, however, have Docker installed (Docker Desktop for Mac, Windows, etc.). The benefit is that you ONLY need Docker and no other dependencies. The downside is that this approach is a bit slower: boot-up takes a minute or two, but after that you get hot reloading and development will be fast. Simply running make will pull down a pre-built Docker image and run the website within the image, exposing it at port 3000. If you change the node dependencies at all, you'll need to regenerate the image with make build-image, then run the website using the locally built container with make website-local. A quick-reference summary of both options follows this list.
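
For quick reference, here is a sketch of both workflows using only the commands described above:

# Option 1: Node.js
npm install          # install dependencies
npm run dev          # start the site in local development mode with hot reloading
npm run build        # generate the static site without starting the local server

# Option 2: Docker
make                 # pull the pre-built image and serve the site on port 3000
make build-image     # rebuild the image after changing node dependencies
make website-local   # run the site from the locally built image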

In either scenario, you can visit the local website at http://localhost:3000. When you modify content, the website should automatically reload; you do not have to stop and restart the development environment.

NOTE: When you navigate between pages for the first time in local dev mode, the styling will not appear. Reloading the page resolves this issue. We are aware this is not an ideal experience, and it will be resolved as part of this Asana ticket: https://app.asana.com/0/1109567977155474/1123572242688266

Understanding Terminology

This project is organized around a set of terminology that is important to understand in order to work with it. Each of these terms is defined below.

  • Topic: An individual page containing a single learn guide. For example, /consul/getting-started/install.
  • Track: A collection of topics in a group. Tracks do not have their own view, but are visually grouped together on curriculum pages. For example, consul's "getting started" guides.
  • Curriculum: All of the tracks and topics for one specific product. For example, /consul.

In the filesystem, curriculum-level content lives under pages/{product} and track-level content under pages/{product}/{trackName}. Tracks can be multiple levels deep, and anything past the first level is part of the name of the track. So, for example, pages/consul/advanced/advanced-operations/autopilot.mdx breaks down as follows:

[consul/]   [advanced/advanced-operations/]   [autopilot.mdx]
^           ^                                 ^
curriculum  track                             topic

This is a bit confusing when you look at the data/{product}.yml file. You will see that the "curriculum" level is cleanly separated: each curriculum has its own data file. The "tracks" level is where things get a little difficult. It is clear that each track is an item under tracks, but beyond that things become a bit mixed up when looking at level, id, and the id on the topics array. We may take another look at this in the future, but for now, go with your gut when it comes to organizing data in that file; when it comes to the filesystem, follow the guide above.
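
As a rough illustration, a hypothetical excerpt of a data/{product}.yml file (the track and topic ids below are made up, not copied from a real data file) maps onto the terms above roughly like this:

# data/consul.yml -- one data file per curriculum
tracks:
  - name: 'Getting Started'     # a track
    id: getting-started
    topics:                     # each entry is a topic's id (from its frontmatter)
      - install
      - agent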

Deployment

At the moment, this website is configured to deploy through Netlify, directly from the master branch. It can be seen at https://learn-platform-upgrade.netlify.com. If you need access to the Netlify instance for any reason, please reach out to the web platform team.

Creating Content

There are a few different types of content that can be created, but the majority of contributions to content will be via markdown files in the pages/ directory.

Working with Markdown

Markdown Style Guide

Learn content should follow the HashiCorp Engineering Markdown Style Guide. Refer to this guide for information on inline styles and best practices.

Authoring Pages

To create a new page with Markdown, create a file ending in .mdx in the pages/ directory. The files should have only one extension: .mdx. Any additional extensions will cause errors, and are not necessary. The path in the pages directory will be the direct URL route. For example, pages/hello/world.mdx will be served from the /hello/world URL.

NOTE: If you are running the local dev watcher and add a new page, you will see an error that looks like "Error: Unable to find page: '{name of your page}'". If you restart the watcher (ctrl+c, then npm run dev again), your new page will be present. We're aware this is not ideal and are working on a solution, which can be tracked in this Asana task: https://app.asana.com/0/1126463990065400/1128564052830051

This file can be standard Markdown and also supports YAML frontmatter. The required keys in the YAML frontmatter are:

  • name (string) - The title of the page, which will be set in the HTML title.
  • content_length (number) - The estimated number of minutes it takes to get through the lesson. This can be auto-generated based on the amount of text using npm run estimate-reading-times.
  • id (string) - Used to identify the page when adding it to a curriculum.
  • products_used (array) - Each HashiCorp product used in the guide. This can be one or many.
  • description (string) - A short description of the article, used as metadata and where the article appears in the curriculum list.
  • level (string) - At the moment, there are two "levels" an article can be in: getting-started and operations-and-development. Choose one of these to appropriately categorize your content.
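
For example, a minimal frontmatter block containing all of the required keys might look like the following (the values are placeholders, not taken from an existing guide):

---
name: Install Consul
content_length: 5
id: install
products_used:
  - Consul
description: Install a local Consul agent and verify it is running.
level: getting-started
---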

It is important to be accurate when authoring YAML content. If you are not familiar with YAML syntax, please take a moment to familiarize yourself via this basic guide. Additionally, there are several spots in which it's useful to understand YAML's multiline syntax, which this guide is very helpful for. In the future, we plan to add a YAML linting GitHub check that will warn about any syntax errors, but in the meantime, a syntax error will result in either an error preventing the site from building or a mistake in your article's data or categorization.

Understanding Markdown syntax is also very important. We adhere to the CommonMark spec, and this is an excellent resource for learning how to author clean, functional Markdown. To be more specific, our Markdown is parsed by a library called MDX, which you can play with and test here. MDX also allows React components to be rendered within your document, which is a capability we look forward to taking advantage of, and we will fill out this README further once we have started making inroads.

There are currently only a couple of caveats outside of the standard CommonMark spec to take note of:

  • When you create a fenced code block, you may add a language after the three backticks to get syntax highlighting, much like on GitHub. Here's a list of valid languages you can use for highlighting. Using an invalid language name will result in your code not being highlighted.
  • We have a custom Markdown extension that allows the use of custom alert boxes; docs are here. Note that we do plan to deprecate this extension and replace it with React elements in the future.
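
For example, a fenced code block tagged with a language would be written like this in your .mdx source ("shell" is used here as an illustration; check the list of valid languages linked above before relying on any particular name):

```shell
$ consul agent -dev
```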

If you plan to add HTML directly to your Markdown file, please consult with the web platform team first. HTML within Markdown files causes a lot of issues, and there is almost always a better way. We do plan on adding a Markdown linter, much like the YAML one, that will catch common Markdown errors and will error in the presence of any HTML elements. However, until that happens, it's important to be extra careful that you are writing high-quality Markdown!

Adding To A Curriculum

Once you have started writing an article, the next step is to add that article to a curriculum so that you can preview how it looks. Any article that has not been added to a curriculum cannot be previewed, and an article that is not in a curriculum should never be merged into master. We also plan to add a GitHub check that will fail any pull request that includes an "orphan" article, so please be mindful of this. If you are working on a draft, keep the work in a pull request until the draft is complete and it has been added to a curriculum.

To add an article to a curriculum, head over to the data folder and select the product whose curriculum the article belongs in. Then scroll down to the tracks, select the track you would like the article to appear within, and add that article's id to the topics array. If you have done this correctly, you should see the article appear in that track and be able to click into it to see the article fully rendered. If this is not happening, double-check that you don't have a typo in the id, either in the article's frontmatter or in the topics list.

Using Components

React components can be imported into an .mdx file to provide additional content features. See the components/ directory for a full list.

To use a component within an .mdx file, insert an import statement just after the frontmatter. Components only need to be imported once per file. You can then call the component inline in your file, wherever you would like to use it.

For example:

---
content_length: 5
id: example-demo
level: Implementation
products_used:
  - Vault
name: Example Demo
description: Example
---
import ComponentName from '../../../components/component-name'

<!-- Guide content -->

<ComponentName take-some-action />

<!-- Guide content -->

Content-related components include:

Pages

If you need to create a new page that is not a Markdown file, please consult with the web platform team in #team-web-platform first. That being said, to create a page, create a TypeScript (.tsx or .ts) or JavaScript (.jsx or .js) file in the pages/ directory. The path to the file will also be the URL of the page.

TypeScript and JavaScript pages enable more complex behavior, data querying, and more. These should be used for layout files, dynamic pages, etc. For TypeScript or JavaScript files, the default ES6 export should be a React component, which will be rendered for the page. For more docs on page creation and our standards around data fetching, etc., please refer to the next-hashicorp docs.
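
As a rough sketch (a hypothetical file, not taken from this repository), a minimal TypeScript page served at /hello/world would live at pages/hello/world.tsx and export a React component as its default export:

import React from 'react'

// The default export is rendered as the page content.
export default function HelloWorldPage() {
  return <h1>Hello, world!</h1>
}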

Sharing Topics Across Tracks

Support for sharing topics across multiple tracks within a specific product is built in. You can see this utilized within /pages/vault/.

Reusable topics must be placed within a corresponding __shared__ folder residing under the product's pages directory (e.g. terraform => /pages/terraform/__shared__). All of a product's shared content lives in that __shared__ directory.

In order to create and use a shared topic across different tracks, follow these steps:

  1. Create __shared__ directory if it does NOT exist.
$ mkdir pages/terraform/__shared__
  2. Author the topic source file (.mdx) in the __shared__ folder.
---
id: reference-architecture
level: Implementation
products_used:
  ...
---

The topic id (in this case, reference-architecture) is used to reference this topic across multiple tracks.

NOTE: As a best practice, name your source file as <id>.mdx (e.g. reference-architecture.mdx).

  3. Reference the shared content in the appropriate .yml file's track data.
tracks:
  - name: 'Track 1'
    id: day-one
    ...
    topics:
      - topic_id_A
      - topic_id_B
      - reference-architecture

  - name: 'Track 2'
    id: operations
    ...
    topics:
      - topic_id_X
      - topic_id_Y
      - reference-architecture
  4. Create a symbolic link that targets the shared content.

IMPORTANT: The reference path of the source file should be relative within the repository, NOT an absolute path on the machine.

Mac & Linux (bash)

# First change the working directory to the target location
cd pages/terraform/day-one

# Now, create a symbolic link
# ln -s <source_file> <target_file>
ln -s ../__shared__/reference-architecture.mdx reference-architecture.mdx

# Repeat the steps for all tracks
cd ../operations && ln -s ../__shared__/reference-architecture.mdx reference-architecture.mdx

Windows (PowerShell)

New-Item -ItemType SymbolicLink -Path "~\day-one\reference-architecture.mdx" -Target "..\__shared__\reference-architecture.mdx"

New-Item -ItemType SymbolicLink -Path "~\operations\reference-architecture.mdx" -Target "..\__shared__\reference-architecture.mdx"

Performing the steps above produces two URLs, terraform/day-one/reference-architecture and terraform/operations/reference-architecture, that both resolve to the contents of terraform/__shared__/reference-architecture.mdx.

Notes for Windows users

In order to resolve symbolic links in this repo correctly on Windows 10, Git for Windows will need symbolic links enabled. The proper flags should be set upon cloning the repo:

git clone -c core.symlinks=true <URL>

More pertinent details on this process can be found here.

If the above cloning does not work properly, you can check your git config, and if all else fails you should be able to enable symbolic links on your Git for Windows install.
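
For example, you can check whether symlink support is enabled and, if necessary, turn it on with standard git config commands (shown here for the current repository; use --global to change the setting for all repositories):

# Check the current setting (empty or "false" means symlinks are disabled)
git config --get core.symlinks

# Enable symlink support for this repository
git config core.symlinks true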

When cloning (to get symlink support) or creating a symlink on Windows (via PowerShell or mklink), you may need to use an elevated (admin) shell. As of a recent update though, creating symlinks should be supported if Developer Mode is enabled on your machine.

Optional Visual Diff Testing

Percy is a visual diff testing tool that is used across a number of projects. Typically it runs on every Pull Request and shows visual changes between the primary branch (usually master) and the Pull Request branch.

Note: If you need access to Percy, please reach out on #team-mktg-webdev

Percy is particularly useful when making design changes to a project, as it highlights all visual changes. This is helpful for verifying expected changes and catching unintended changes.

Because of the volume of pages on Learn, a Percy run can add 10-12 minutes to the overall GitHub checks run time. Coupled with the fact that most PRs are content changes or updates, it doesn't make sense to slow down GitHub checks with a Percy run on every pull request.

To make Percy runs optional, a CircleCI branch filter is in place. Percy will only run a diff check for branches whose names start with run-percy (e.g. run-percy.mw.update-feature or run-percy-change-something). See the PR that configured this for some additional detail.

Percy Workflow

Because the period of time between Percy runs may be large, each sequential Percy run may contain a large volume of changes. It's recommended that you first spin up a "dummy" branch with Percy enabled (like run-percy-catchup) so that Percy will run and highlight all the changes that have occurred since its last run. You can then approve those changes and delete the dummy branch. Now, go ahead and open your branch as normal (like run-percy.mw.thing-to-work-on) and you'll get nice, noise-free diffs for what you're working on.
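
In git terms, that workflow looks roughly like this (the branch names are just the examples from this section):

# 1. Throwaway branch so Percy catches up on accumulated changes
git checkout master && git pull
git checkout -b run-percy-catchup
git push -u origin run-percy-catchup    # open a PR, approve the Percy diffs, then delete the branch

# 2. Your real branch, prefixed so Percy runs on it too
git checkout -b run-percy.mw.thing-to-work-on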
