5. Custom pre- and post-processing

Since all packages are different, and may have different demands on how to create nice examples for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is given the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: for markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could, of course, update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
#' # Example
#' This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
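A minimal sketch of such a function, assuming the source file above is saved as example.jl; the function name update_date and the output directory are our own choices:

```julia
using Literate, Dates

# Replace the DATEOFTODAY placeholder with the current date before the file is parsed.
function update_date(content)
    return replace(content, "DATEOFTODAY" => string(Dates.today()))
end

# Pass the function to a generator via the `preprocess` keyword argument.
Literate.markdown("example.jl", "generated"; preprocess = update_date)
```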
6. Interaction with Documenter.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
For the markdown output, the following @meta block will be added to the top of the page, which redirects the "Edit on GitHub" link to the source file rather than the generated .md file:
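It looks something like the following; the EditURL value here is the one for this package's own Output Formats page (shown again in that section below) and will of course point to your repository and source file instead:

```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```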
For the notebook output, Documenter style @refs and @ids will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook. Documenter style markdown math (```math ... ``` blocks) is rewritten to math syntax that the notebook can display correctly (\begin{equation} ... \end{equation}).
For the script output, Documenter style @refs and @ids will likewise be removed, so that you can use @ref and @id in the source file without them leaking to the script.
2. File Format
The source file format for Literate is a regular, commented Julia (.jl) script. The idea is that the script also serves as documentation on its own, and that it is simple to include it in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
The reason for using #' instead of # is that we want to be able to use # for regular comments, just as in a regular script. Let's look at a simple example:
#' # Rational numbers
#'
#' In julia rational numbers can be constructed with the `//` operator.
#' Lets define two rational numbers, `x` and `y`:

x = 1//3
y = 2//5

#' When adding `x` and `y` together we obtain a new rational number:

z = x + y
In the lines starting with #' we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines.
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
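For instance, a block along the following lines could be used (a sketch; the docstrings listed here are just the package's own generator functions):

```julia
#md #' ```@docs
#md #' Literate.markdown
#md #' Literate.notebook
#md #' Literate.script
#md #' ```
```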
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
The preprocessing step also expands a number of convenience "macros" (see Default Replacements), for example:
@__REPO__ROOT_URL__
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
7. Example
This is an example generated with Literate based on this source file: example.jl. You are seeing the html output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with #' are interpreted as markdown, and all other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
- println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
foo()
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
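A sketch of how such a replacement can be wired up; the placeholder names MYVARIABLE and MYVALUE are our own invention, and only the values z and 1.0 + 2.0im come from the rendered output above:

```julia
using Literate

# For markdown output the postprocess function receives the content as a String.
function replace_placeholders(content)
    content = replace(content, "MYVARIABLE" => "z")
    content = replace(content, "MYVALUE" => "1.0 + 2.0im")
    return content
end

Literate.markdown("example.jl", "generated"; postprocess = replace_placeholders)
```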
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (e.g. for Documenter.jl) and Jupyter notebooks from the same source file. There is also an option to "clean" the source from all metadata and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input but generate different output:
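In brief, the three generators are used like this (the output directory is just an example):

```julia
using Literate

Literate.markdown("example.jl", "outputdir")  # markdown page, e.g. for Documenter.jl
Literate.notebook("example.jl", "outputdir")  # Jupyter notebook
Literate.script("example.jl", "outputdir")    # plain Julia script
```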
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
4. Output Formats
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
#' # Rational numbers
#'
#' In julia rational numbers can be constructed with the `//` operator.
#' Lets define two rational numbers, `x` and `y`:

x = 1//3

#-

y = 2//5

#' When adding `x` and `y` together we obtain a new rational number:

z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
# Rational numbers
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with #' are printed as regular markdown, and the code lines have been wrapped in @example blocks.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
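For example, a call using some of these keyword arguments might look like the following sketch; the paths and the alternative code fence are illustrative only:

```julia
Literate.markdown("docs/src/example.jl", "docs/src/generated";
                  name = "example",
                  documenter = true,
                  codefence = "```julia" => "```")
```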
The (default) notebook output of the source snippet above is as follows
│ # Rational numbers
│
│ In julia rational numbers can be constructed with the `//` operator.
│ Lets define two rational numbers, `x` and `y`:
Out[2]: │ 2//5
│ When adding `x` and `y` together we obtain a new rational number:
In[3]: │ z = x + y
Out[3]: │ 11//15
We note that lines starting with #' are put in markdown cells, and the code lines have been put in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
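As noted in the Custom pre- and post-processing section, for notebook output the postprocess function receives (and must return) the dictionary representing the notebook rather than a String. A sketch, where the metadata entry we add is purely illustrative:

```julia
function tag_notebook(nb)
    # `nb` is the notebook dictionary.
    nb["metadata"]["literate_generated"] = true
    return nb
end

Literate.notebook("docs/src/example.jl", "docs/src/generated";
                  execute = true, postprocess = tag_notebook)
```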
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
z = x + y
We note that lines starting with #' are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines (#') as comments in the output script. Defaults to false.
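For example, to generate a plain script that keeps the markdown lines as regular comments (the paths are illustrative):

```julia
Literate.script("docs/src/example.jl", "docs/src/generated"; keep_comments = true)
```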
3. Processing pipeline
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
#' # Rational numbers                                                     <- markdown
#'                                                                        <- markdown
#' In julia rational numbers can be constructed with the `//` operator.   <- markdown
#' Lets define two rational numbers, `x` and `y`:                         <- markdown
                                                                           <- code
x = 1 // 3                                                                 <- code
y = 2 // 5                                                                 <- code
                                                                           <- code
#' When adding `x` and `y` together we obtain a new rational number:      <- markdown
                                                                           <- code
z = x + y                                                                  <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
#' # Rational numbers                                                     ┐
#'                                                                        │
#' In julia rational numbers can be constructed with the `//` operator.   │ markdown
#' Lets define two rational numbers, `x` and `y`:                         ┘
                                                                           ┐
x = 1 // 3                                                                 │
y = 2 // 5                                                                 │ code
                                                                           ┘
#' When adding `x` and `y` together we obtain a new rational number:      ] markdown
                                                                           ┐
z = x + y                                                                  ┘ code
In the last parsing step all empty leading and trailing lines for each chunk are removed, but empty lines within the same block are kept. The leading #' tokens are also removed from the markdown chunks. Finally we would end up with the following 4 chunks:
Chunk #1:
# Rational numbers
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. In short, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl), where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore, since the idea is that the generated documents will be created as part of the build process rather than being files in the repo.
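A sketch of how this typically fits into a Documenter build script (docs/make.jl); the page layout and file names are illustrative only:

```julia
using Documenter, Literate

# Generate the markdown page into docs/src/generated as part of the doc build.
Literate.markdown("docs/src/example.jl", "docs/src/generated"; documenter = true)

# Let Documenter render it together with the rest of the manual.
makedocs(
    sitename = "Example.jl",
    pages = ["Home" => "index.md",
             "Example" => "generated/example.md"],
)
```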
2. File Format (v0.2.0)
The source file format for Literate is a regular, commented Julia (.jl) script. The idea is that the script also serves as documentation on its own, and that it is simple to include it in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown, we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
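A two-line sketch of the difference:

```julia
# This line is treated as markdown by Literate.
## This line is kept as a regular comment and will render as `#` in the output.
```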
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:

x = 1//3
y = 2//5

# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines.
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
#md # Literate.markdown
#md # Literate.notebook
#md # Literate.script
#md # ```
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
The preprocessing step also expands a number of convenience "macros" (see Default Replacements), for example:
@__REPO__ROOT_URL__
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
diff --git a/v0.2.0/generated/example.html b/v0.2.0/generated/example.html
index 1882c89..18f15a9 100644
--- a/v0.2.0/generated/example.html
+++ b/v0.2.0/generated/example.html
@@ -1,7 +1,48 @@
-
-7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter have generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple, lines starting with # is interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
-y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
- println("This string is printed to stdout.")
+7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter have generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple, lines starting with # is interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -10,4 +51,4 @@ foo()
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenters math syntax. Documenters syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenters math syntax. Documenters syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people wants to RTFM, others want to explore the package interactively in, for example, a notebook, and some people wants to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl@example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
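A minimal sketch of what this can look like in a Documenter build script (docs/make.jl), assuming a package called MyPackage and the usual docs layout:

using Documenter, Literate

# generate docs/src/generated/example.md before building the site
Literate.markdown(joinpath(@__DIR__, "src", "example.jl"),
                  joinpath(@__DIR__, "src", "generated"))

makedocs(sitename = "MyPackage.jl",
         pages = ["Home" => "index.md",
                  "Example" => "generated/example.md"])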
4. Output Formats · Literate.jl
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
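For example, to wrap the code chunks in plain fenced julia blocks instead of the default @example blocks, one could (hypothetically) pass a custom codefence pair:

Literate.markdown("example.jl", "outputdir";
                  name = "example",
                  codefence = "```julia" => "```")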
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
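A short usage sketch (paths are placeholders): generate the notebook but skip execution, for example because the code needs resources that are not available on the machine building the docs:

Literate.notebook("example.jl", "outputdir"; execute = false)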
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
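For instance, a hypothetical call that keeps the markdown lines as regular comments in the generated script:

Literate.script("example.jl", "outputdir"; keep_comments = true)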
3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
x = 1 // 3 <- code
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
Chunk #1:
# Rational numbers
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
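A minimal sketch of such a post-processing function for markdown or script output, where the function receives the content String and returns the transformed String (the footer text and the call in the comment are only an illustration):

function add_footer(content)
    return content * "\n*This file was generated with Literate.jl.*\n"
end

# e.g. Literate.markdown("example.jl", "outputdir"; postprocess = add_footer)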
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be created as part of the build process rather than being files in the repo.
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, lets define the preprocess function, for example
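A minimal sketch of such a function (the name update_date and the call in the comment are only for illustration), assuming the DATEOFTODAY placeholder from the snippet above:

using Dates

function update_date(content)
    return replace(content, "DATEOFTODAY" => string(Dates.today()))
end

# pass it to a generator, e.g.
# Literate.markdown("example.jl", "outputdir"; preprocess = update_date)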
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
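Concretely, the added block looks like the @meta block shown in the markdown output example above; with a placeholder URL it has the form:

```@meta
EditURL = "https://github.com/USER/PACKAGE.jl/blob/master/docs/src/example.jl"
```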
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math blocks (```math ... ```) are automatically changed to \begin{equation} ... \end{equation} in the notebook output so that they display correctly.
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the script also serves as documentation on its own and that it is simple to include it in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient to use when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
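For example, a hypothetical markdown-only line in the source file could use the macro to link to the generated notebook (the path generated/example.ipynb is just an illustration):

#md # [Open this example as a Jupyter notebook](@__NBVIEWER_ROOT_URL__generated/example.ipynb)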
7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
diff --git a/v0.2.2/customprocessing.html b/v0.2.2/customprocessing.html
index 96726fb..b03b48f 100644
--- a/v0.2.2/customprocessing.html
+++ b/v0.2.2/customprocessing.html
@@ -1,8 +1,49 @@
-
-5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
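A minimal sketch of such a function could look like the following (the name update_date and the use of the Dates standard library are illustrative choices); it is then passed to a generator via the preprocess keyword argument, e.g. Literate.markdown(inputfile, outputdir; preprocess = update_date):
using Dates

function update_date(content)
    # Replace the placeholder with the date at the time of generation.
    content = replace(content, "DATEOFTODAY" => string(Dates.today()))
    return content
end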
\ No newline at end of file
diff --git a/v0.2.2/documenter.html b/v0.2.2/documenter.html
index c7167fc..e66510a 100644
--- a/v0.2.2/documenter.html
+++ b/v0.2.2/documenter.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
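Such an @meta block has the following form (the URL shown is the one used for the Output Formats page of these docs; each generated page points back to its own source file):
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```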
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v0.2.2/fileformat.html b/v0.2.2/fileformat.html
index c1e5b83..9b54a1e 100644
--- a/v0.2.2/fileformat.html
+++ b/v0.2.2/fileformat.html
@@ -1,5 +1,46 @@
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the script also serves as documentation on its own, and it is simple to include it in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -9,9 +50,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
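A sketch of what such a block can look like, here listing Literate's own generator functions (the docstrings included are illustrative):
#md # ```@docs
#md # Literate.markdown
#md # Literate.notebook
#md # Literate.script
#md # ```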
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
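Because the markdown lines are plain comments, the same source file can be exercised by the test suite; a hypothetical test/runtests.jl entry (the path is illustrative) could be:
using Test

@testset "documentation examples" begin
    include(joinpath(@__DIR__, "..", "docs", "src", "example.jl"))
end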
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
\ No newline at end of file
diff --git a/v0.2.2/generated/example.html b/v0.2.2/generated/example.html
index c899e2f..5a0a908 100644
--- a/v0.2.2/generated/example.html
+++ b/v0.2.2/generated/example.html
@@ -1,7 +1,48 @@
+7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -10,121 +51,55 @@ foo()
This string is printed to stdout.
1
2
3
+ 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = linspace(0, 6π, 1000)
y1 = sin.(x)
y2 = cos.(x)
-plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
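For instance, a source file could contain the equation above as a Documenter-style math block (shown here as Literate source, i.e. comment lines):
# ```math
# \int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega
# ```
In the markdown output the block is kept for Documenter to render, while in the notebook output the same equation is wrapped in \begin{equation} ... \end{equation} so that Jupyter displays it correctly.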
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v0.2.2/outputformats.html b/v0.2.2/outputformats.html
index 5401af2..11983b8 100644
--- a/v0.2.2/outputformats.html
+++ b/v0.2.2/outputformats.html
@@ -1,5 +1,46 @@
-
-4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
-```
+```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
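Putting these together, a call might look like the following sketch (paths and the codefence value are illustrative; the codefence shown swaps the default @example block for a plain julia fence):
import Literate

Literate.markdown("docs/src/example.jl", "docs/generated";
                  name = "example",
                  documenter = true,
                  codefence = "```julia" => "```")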
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
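A corresponding notebook call could be sketched as follows (paths illustrative); execution is disabled here, which can be useful when the example needs packages that are unavailable during the doc build:
Literate.notebook("docs/src/example.jl", "docs/generated";
                  name = "example",
                  execute = false)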
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
-z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
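And a script call that keeps the markdown lines as comments (paths illustrative):
Literate.script("docs/src/example.jl", "docs/generated";
                name = "example",
                keep_comments = true)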
\ No newline at end of file
diff --git a/v0.2.2/pipeline.html b/v0.2.2/pipeline.html
index 071a080..5d994fd 100644
--- a/v0.2.2/pipeline.html
+++ b/v0.2.2/pipeline.html
@@ -1,5 +1,46 @@
-
-3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -9,7 +50,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
-z = x + y <- code
+z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -23,7 +64,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
-y = 2 // 5
+y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
-z = x + y
+z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
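A minimal sketch of such a function for the markdown output (the function name and footer text are made up for illustration) could be:
add_footer(content) = content * "\n*This page was generated with Literate.jl.*\n"

# passed to the generator, e.g.
# Literate.markdown("docs/src/example.jl", "docs/generated"; postprocess = add_footer)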
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user-supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents are created as part of the build process rather than being committed to the repo.
\ No newline at end of file
diff --git a/v0.3.0/customprocessing.html b/v0.3.0/customprocessing.html
index 96726fb..b03b48f 100644
--- a/v0.3.0/customprocessing.html
+++ b/v0.3.0/customprocessing.html
@@ -1,8 +1,49 @@
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
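For notebook output the postprocess function receives (and should return) the dictionary representing the notebook rather than a String; a minimal sketch (the metadata key added below is purely illustrative) could be:
function notebook_postprocess(nb)
    # nb is the Dict for the .ipynb JSON structure.
    nb["metadata"]["literate_generated"] = true   # illustrative extra metadata
    return nb
end

# Literate.notebook("docs/src/example.jl", "docs/generated"; postprocess = notebook_postprocess)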
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
\ No newline at end of file
diff --git a/v0.3.0/documenter.html b/v0.3.0/documenter.html
index 56f688d..fd832f3 100644
--- a/v0.3.0/documenter.html
+++ b/v0.3.0/documenter.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v0.3.0/fileformat.html b/v0.3.0/fileformat.html
index a1d454a..7d2615f 100644
--- a/v0.3.0/fileformat.html
+++ b/v0.3.0/fileformat.html
@@ -1,5 +1,46 @@
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the script also serves as documentation on its own, and it is simple to include it in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -9,9 +50,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
\ No newline at end of file
diff --git a/v0.3.0/generated/example.html b/v0.3.0/generated/example.html
index e81fa9a..f413939 100644
--- a/v0.3.0/generated/example.html
+++ b/v0.3.0/generated/example.html
@@ -1,7 +1,48 @@
+7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -10,121 +51,55 @@ foo()
This string is printed to stdout.
1
2
3
+ 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
-plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v0.3.0/outputformats.html b/v0.3.0/outputformats.html
index 74cfd45..fc194aa 100644
--- a/v0.3.0/outputformats.html
+++ b/v0.3.0/outputformats.html
@@ -1,5 +1,46 @@
-
-4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
-```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to use with Documenter.jl. Defaults to true. See the the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the the manual section on Interaction with Documenter.
The (default) script output of the source snippet above is as follows
x = 1//3
+```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
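As a hedged illustration of these keyword arguments (the input path and name are placeholders), a call might look like:

```julia
using Literate

Literate.markdown("docs/src/outputformats.jl", "docs/src/generated";
                  name = "outputformats", # names the file and the @example blocks
                  documenter = true)      # keep the Documenter-specific processing
```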
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
-z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
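For example, a script that keeps the markdown lines as comments could be generated with a call along these lines (paths and name are placeholders):

```julia
using Literate

Literate.script("docs/src/outputformats.jl", "examples";
                name = "outputformats",
                keep_comments = true) # keep markdown lines as `#` comments in the .jl file
```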
\ No newline at end of file
diff --git a/v0.3.0/pipeline.html b/v0.3.0/pipeline.html
index f2071db..0a01b69 100644
--- a/v0.3.0/pipeline.html
+++ b/v0.3.0/pipeline.html
@@ -1,5 +1,46 @@
-
-3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -9,7 +50,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
-z = x + y <- code
+z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -23,7 +64,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
-y = 2 // 5
+y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
-z = x + y
+z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user-supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents will be generated as part of the build process rather than being files in the repo.
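As a sketch of how this typically ties into a Documenter build (file names and page layout are illustrative, not from the original text), the generation can be run from docs/make.jl just before makedocs:

```julia
# docs/make.jl (sketch)
using Documenter, Literate

# Generate into a directory that is listed in .gitignore.
Literate.markdown("docs/src/example.jl", "docs/src/generated"; name = "example")

makedocs(
    sitename = "MyPackage.jl",
    pages = [
        "Home" => "index.md",
        "Example" => "generated/example.md",
    ],
)
```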
\ No newline at end of file
diff --git a/v1.0.0/customprocessing.html b/v1.0.0/customprocessing.html
index 749ceca..ab618a3 100644
--- a/v1.0.0/customprocessing.html
+++ b/v1.0.0/customprocessing.html
@@ -1,8 +1,49 @@
-
-5. Custom pre- and post-processing · Literate.jl
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
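A minimal sketch of such a function (the name update_date is our own choice) simply replaces the placeholder with today's date:

```julia
using Dates

# Replace the DATEOFTODAY placeholder with the date of generation.
function update_date(content)
    return replace(content, "DATEOFTODAY" => string(Dates.today()))
end

# Pass it to any of the generators via the preprocess keyword, e.g.
# Literate.markdown("docs/src/example.jl", "docs/src/generated"; preprocess = update_date)
```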
\ No newline at end of file
diff --git a/v1.0.0/documenter.html b/v1.0.0/documenter.html
index 7a431f3..3fabfab 100644
--- a/v1.0.0/documenter.html
+++ b/v1.0.0/documenter.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, on the assumption that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
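Conversely, if the source is not written with Documenter in mind, this extra processing can be turned off; a minimal sketch (paths are placeholders):

```julia
using Literate

# Skip the Documenter-specific filtering for a standalone notebook.
Literate.notebook("examples/tutorial.jl", "build"; documenter = false)
```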
\ No newline at end of file
diff --git a/v1.0.0/fileformat.html b/v1.0.0/fileformat.html
index 8a24e34..a02baee 100644
--- a/v1.0.0/fileformat.html
+++ b/v1.0.0/fileformat.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented julia (.jl) script. The idea is that the script also serves as documentation on its own, and it is simple to include it in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -9,9 +50,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
+z = x + y
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
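Putting the tags together, a small (hypothetical) source file using line filtering might look like this:

```julia
# # Filtering example
#
#md # This sentence only appears in the markdown output.
#nb # This sentence only appears in the notebook output.
#jl # This sentence only appears in the script output.

x = 1 // 3

using Test        #src
@test x == 1 // 3 #src  # only runs when the source file itself is included
```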
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
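For example, a source file could link to its own generated notebook with a line like the following (the path after the macro is hypothetical):

```julia
#md # [Open this example as a Jupyter notebook](@__NBVIEWER_ROOT_URL__generated/example.ipynb)
```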
\ No newline at end of file
diff --git a/v1.0.0/generated/example.html b/v1.0.0/generated/example.html
index c6095ec..3216229 100644
--- a/v1.0.0/generated/example.html
+++ b/v1.0.0/generated/example.html
@@ -1,7 +1,48 @@
-
-7. Example · Literate.jl
+7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -10,121 +51,55 @@ foo()
This string is printed to stdout.
1
2
3
- 4
+ 4
Both Documenter's @example blocks and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package:
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
-plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at the time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
\ No newline at end of file
diff --git a/v1.0.0/outputformats.html b/v1.0.0/outputformats.html
index 126f2d1..1f9c633 100644
--- a/v1.0.0/outputformats.html
+++ b/v1.0.0/outputformats.html
@@ -1,5 +1,46 @@
-
-4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
-```
+```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
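A call that skips execution, for example when the notebook depends on packages that are unavailable at build time, might look like this sketch (paths and name are placeholders):

```julia
using Literate

Literate.notebook("docs/src/outputformats.jl", "docs/src/generated";
                  name = "outputformats",
                  execute = false) # generate the cells but leave the output empty
```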
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
-z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
\ No newline at end of file
diff --git a/v1.0.0/pipeline.html b/v1.0.0/pipeline.html
index 14c7c0f..bc81974 100644
--- a/v1.0.0/pipeline.html
+++ b/v1.0.0/pipeline.html
@@ -1,5 +1,46 @@
-
-3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -9,7 +50,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
-z = x + y <- code
+z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -23,7 +64,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
-y = 2 // 5
+y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
-z = x + y
+z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user-supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents will be generated as part of the build process rather than being files in the repo.
\ No newline at end of file
diff --git a/v1.0.1/customprocessing.html b/v1.0.1/customprocessing.html
index 749ceca..ab618a3 100644
--- a/v1.0.1/customprocessing.html
+++ b/v1.0.1/customprocessing.html
@@ -1,8 +1,49 @@
-
-5. Custom pre- and post-processing · Literate.jl
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
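One possible definition (the function names are our own) replaces the placeholder with today's date; a postprocess function works the same way on the final String for markdown and script output, so a footer can be appended in the same spirit:

```julia
using Dates

# preprocess: fill in the DATEOFTODAY placeholder in the source.
set_date(content) = replace(content, "DATEOFTODAY" => string(Dates.today()))

# postprocess: for markdown and script output the final String is passed in,
# so we can, for example, append a footer before it is written to file.
add_footer(content) = content * "\n\n*This page was generated using Literate.jl.*\n"

# Literate.markdown("docs/src/example.jl", "docs/src/generated";
#                   preprocess = set_date, postprocess = add_footer)
```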
\ No newline at end of file
diff --git a/v1.0.1/documenter.html b/v1.0.1/documenter.html
index 7a431f3..3fabfab 100644
--- a/v1.0.1/documenter.html
+++ b/v1.0.1/documenter.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, on the assumption that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v1.0.1/fileformat.html b/v1.0.1/fileformat.html
index 8a24e34..a02baee 100644
--- a/v1.0.1/fileformat.html
+++ b/v1.0.1/fileformat.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) scripts. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up do date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines is treated as markdown we can not use that for regular julia comments, for this you can instead use ##, which will render as # in the output.
Lets look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -9,9 +50,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular Julia code. We note a couple of things:
The script is valid Julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines.
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines of the block, i.e. #md # ```@docs and so on.
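A complete block of this kind might look like the following sketch; the docstring targets listed here are illustrative (the package's three generator functions):
#md # ```@docs
#md # Literate.markdown
#md # Literate.notebook
#md # Literate.script
#md # ```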
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
@__REPO_ROOT_URL__
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
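As an illustration, a markdown-only line in the source could combine the #md tag with one of these macros to link to the notebook version of a page; the generated/example.ipynb path is an illustrative assumption:
#md # [View this page as a Jupyter notebook](@__NBVIEWER_ROOT_URL__generated/example.ipynb)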
7. Example
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and in two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
    println("This string is printed to stdout.")
    return [1, 2, 3, 4]
end

foo()
This string is printed to stdout.
 1
 2
 3
 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package:
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we inserted two placeholders, which we replace with something else at the time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented Julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output: Literate.markdown, Literate.notebook and Literate.script.
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as part of the doc build. Literate generates the output based on a single source file, which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
4. Output Formats
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers

In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:

```@example name
x = 1//3
y = 2//5
```

When adding `x` and `y` together we obtain a new rational number:

```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: a Pair of an opening and a closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) script output of the source snippet above is as follows:
x = 1//3
y = 2//5
z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
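To make the keyword arguments above concrete, here is a sketch of possible calls; the input file, output directory and the chosen option values are illustrative assumptions:
using Literate

# Markdown output with plain ```julia fences instead of @example blocks, and no credit line.
Literate.markdown("outputformats.jl", "docs/generated";
                  name = "outputformats",
                  codefence = "```julia" => "```",
                  credit = false)

# A notebook that is generated but not executed, and a script that keeps markdown lines as comments.
Literate.notebook("outputformats.jl", "docs/generated"; execute = false)
Literate.script("outputformats.jl", "docs/generated"; keep_comments = true)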
3. Processing pipeline
The first step is pre-processing of the input file. The file is read into a String, and the first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
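To illustrate what the filtering described above amounts to, here is a rough sketch; this is illustrative pseudologic written for this manual, not Literate's internal implementation:
# Sketch of the built-in pre-processing described above (illustrative, not the real implementation).
function default_replacements(content, target)        # target is :md, :nb or :jl
    content = replace(content, "\r\n" => "\n")         # normalize CRLF line endings to LF
    lines = String[]
    for line in split(content, '\n')
        stripped = lstrip(line)
        if startswith(stripped, "#src") || endswith(stripped, "#src")
            continue                                    # #src lines are dropped unconditionally
        elseif startswith(stripped, "#md ")
            target === :md && push!(lines, replace(line, "#md " => "", count = 1))
        elseif startswith(stripped, "#nb ")
            target === :nb && push!(lines, replace(line, "#nb " => "", count = 1))
        elseif startswith(stripped, "#jl ")
            target === :jl && push!(lines, replace(line, "#jl " => "", count = 1))
        else
            push!(lines, line)                          # all other lines pass through
        end
    end
    return join(lines, '\n')
end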
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers                                                      <- markdown
#                                                                         <- markdown
# In julia rational numbers can be constructed with the `//` operator.    <- markdown
# Lets define two rational numbers, `x` and `y`:                          <- markdown
x = 1 // 3                                                                <- code
y = 2 // 5                                                                <- code
                                                                          <- code
# When adding `x` and `y` together we obtain a new rational number:       <- markdown
                                                                          <- code
z = x + y                                                                 <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers                                                      ┐
#                                                                         │
# In julia rational numbers can be constructed with the `//` operator.    │ markdown
# Lets define two rational numbers, `x` and `y`:                          ┘
x = 1 // 3                                                                ┐
y = 2 // 5                                                                ┘ code
# When adding `x` and `y` together we obtain a new rational number:       ┘ markdown
z = x + y                                                                 ┘ code
Chunk #1:
 # Rational numbers
 In julia rational numbers can be constructed with the `//` operator.
 Lets define two rational numbers, `x` and `y`:
Chunk #2:
 x = 1 // 3
 y = 2 // 5
Chunk #3:
 When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
 z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells, the #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
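As a sketch of how these pieces typically fit together, a docs/make.jl could call the generators before building the manual with Documenter; the package name, paths and page layout are illustrative assumptions:
using Literate, Documenter

# Generate markdown and a notebook into docs/src/generated as part of the doc build.
Literate.markdown(joinpath(@__DIR__, "src", "example.jl"), joinpath(@__DIR__, "src", "generated"))
Literate.notebook(joinpath(@__DIR__, "src", "example.jl"), joinpath(@__DIR__, "src", "generated"))

# Build the documentation with Documenter as usual.
makedocs(sitename = "MyPackage.jl",
         pages = ["Home" => "index.md", "Example" => "generated/example.md"])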
diff --git a/v1.0.2/customprocessing.html b/v1.0.2/customprocessing.html
index 749ceca..ab618a3 100644
--- a/v1.0.2/customprocessing.html
+++ b/v1.0.2/customprocessing.html
@@ -1,8 +1,49 @@
-
-5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, lets define the preprocess function, for example
\ No newline at end of file
diff --git a/v1.0.2/documenter.html b/v1.0.2/documenter.html
index 7a431f3..3fabfab 100644
--- a/v1.0.2/documenter.html
+++ b/v1.0.2/documenter.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose, it spits out regular markdown files, and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) supports a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code have been written with Documenter.jl in mind. So lets take a look at what will happen if we set documenter = true:
Literate can be used for any purpose, it spits out regular markdown files, and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) supports a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code have been written with Documenter.jl in mind. So lets take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v1.0.2/fileformat.html b/v1.0.2/fileformat.html
index 8a24e34..a02baee 100644
--- a/v1.0.2/fileformat.html
+++ b/v1.0.2/fileformat.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) scripts. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up do date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines is treated as markdown we can not use that for regular julia comments, for this you can instead use ##, which will render as # in the output.
Lets look at a simple example:
# # Rational numbers
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) scripts. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up do date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines is treated as markdown we can not use that for regular julia comments, for this you can instead use ##, which will render as # in the output.
Lets look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -9,9 +50,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simple remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is a convenient way to use when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys too. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is a convenient way to use when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys too. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
\ No newline at end of file
diff --git a/v1.0.2/generated/example.html b/v1.0.2/generated/example.html
index b9c6100..245736f 100644
--- a/v1.0.2/generated/example.html
+++ b/v1.0.2/generated/example.html
@@ -1,7 +1,48 @@
-
-7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter have generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple, lines starting with # is interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
-y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
- println("This string is printed to stdout.")
+7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter have generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple, lines starting with # is interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -10,121 +51,55 @@ foo()
This string is printed to std
1
2
3
- 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
+ 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
-plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenters math syntax. Documenters syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenters math syntax. Documenters syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people wants to RTFM, others want to explore the package interactively in, for example, a notebook, and some people wants to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl@example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people wants to RTFM, others want to explore the package interactively in, for example, a notebook, and some people wants to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl@example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v1.0.2/outputformats.html b/v1.0.2/outputformats.html
index 9007a2d..93ee397 100644
--- a/v1.0.2/outputformats.html
+++ b/v1.0.2/outputformats.html
@@ -1,5 +1,46 @@
-
-4. Output Formats · Literate.jl
When the source is parsed, and have been processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
-```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block have been added, that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directoryoutputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to use with Documenter.jl. Defaults to true. See the the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
-z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
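For illustration (the paths are placeholders), a script generation call that keeps the markdown lines as comments could be:
using Literate
Literate.script("docs/src/example.jl", "docs/src/generated"; keep_comments = true)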
\ No newline at end of file
diff --git a/v1.0.2/pipeline.html b/v1.0.2/pipeline.html
index 14c7c0f..bc81974 100644
--- a/v1.0.2/pipeline.html
+++ b/v1.0.2/pipeline.html
@@ -1,5 +1,46 @@
-
-3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -9,7 +50,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
-z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -23,7 +64,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
-y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
-z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook-into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents are created as part of the build process rather than being files in the repo.
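A minimal sketch of such a setup, typically placed in docs/make.jl before makedocs is called (the paths below are illustrative):
using Literate
const OUTPUT_DIR = joinpath(@__DIR__, "generated")   # i.e. docs/generated
Literate.markdown(joinpath(@__DIR__, "..", "examples", "example.jl"), OUTPUT_DIR)
Literate.notebook(joinpath(@__DIR__, "..", "examples", "example.jl"), OUTPUT_DIR)
with a matching docs/generated/ entry in .gitignore.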
\ No newline at end of file
diff --git a/v1.0.3/customprocessing/index.html b/v1.0.3/customprocessing/index.html
index 48adc7c..ba1068a 100644
--- a/v1.0.3/customprocessing/index.html
+++ b/v1.0.3/customprocessing/index.html
@@ -1,29 +1,70 @@
-
-5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
+5. Custom pre- and post-processing · Literate.jl
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
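One possible definition is the following sketch (the function name update_date is our own choice, not something Literate requires):
using Dates

function update_date(content)
    # replace the placeholder with the date at the time of generation
    content = replace(content, "DATEOFTODAY" => string(Date(now())))
    return content
end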
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
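Something along these lines, where the input file and output directory are placeholders:
Literate.markdown("inputfile.jl", "outputdir"; preprocess = update_date)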
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could, for example, be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is to use preprocess to replace the include statements with the actual file content. First, create a runnable .jl file following the format of Literate:
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
    for ex in included
        content = read(path*ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end
(of course replace included with your respective files)
\ No newline at end of file
diff --git a/v1.0.3/documenter/index.html b/v1.0.3/documenter/index.html
index e8b7cb6..452d252 100644
--- a/v1.0.3/documenter/index.html
+++ b/v1.0.3/documenter/index.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
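It has the following shape, with the repository and file path being illustrative (compare the EditURL line shown in the Output Formats section):
```@meta
EditURL = "https://github.com/USER/PACKAGE.jl/blob/master/docs/src/inputfile.jl"
```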
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v1.0.3/fileformat/index.html b/v1.0.3/fileformat/index.html
index 6c9695d..0809092 100644
--- a/v1.0.3/fileformat/index.html
+++ b/v1.0.3/fileformat/index.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
The source file format for Literate is a regular, commented julia (.jl) script. The idea is that the script also serves as documentation on its own, and it is simple to include it in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
+2. File Format · Literate.jl
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -9,9 +50,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
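For instance, the complete filtered block might read as follows (the docstring names are illustrative); these lines show up as a regular @docs block in the markdown output and are dropped from the notebook and script output:
#md # ```@docs
#md # Literate.markdown
#md # Literate.notebook
#md # Literate.script
#md # ```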
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is a convenient way to use when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
\ No newline at end of file
diff --git a/v1.0.3/generated/example/index.html b/v1.0.3/generated/example/index.html
index 37a2807..52c2fc3 100644
--- a/v1.0.3/generated/example/index.html
+++ b/v1.0.3/generated/example/index.html
@@ -1,7 +1,48 @@
-
-7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
-y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
- println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -10,126 +51,55 @@ foo()
This string is printed to stdout.
1
2
3
4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
-plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
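In other words, a source block written with Documenter's fence, sketched here with the equation from above:
# ```math
# \int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega
# ```
ends up in the notebook markdown cell wrapped in \begin{equation} ... \end{equation} instead of the fence.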
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v1.0.3/outputformats/index.html b/v1.0.3/outputformats/index.html
index d897464..c92692a 100644
--- a/v1.0.3/outputformats/index.html
+++ b/v1.0.3/outputformats/index.html
@@ -1,5 +1,46 @@
-
-4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
-```
We note that lines starting with # are printed as regular markdown, and that the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended for use with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and that the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
-z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
\ No newline at end of file
diff --git a/v1.0.3/pipeline/index.html b/v1.0.3/pipeline/index.html
index 067ff5f..03f7ec6 100644
--- a/v1.0.3/pipeline/index.html
+++ b/v1.0.3/pipeline/index.html
@@ -1,5 +1,46 @@
-
-3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -9,7 +50,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
-z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -23,7 +64,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
-y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
-z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook-into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents are created as part of the build process rather than being files in the repo.
\ No newline at end of file
diff --git a/v1.0.4/customprocessing/index.html b/v1.0.4/customprocessing/index.html
index 48adc7c..ba1068a 100644
--- a/v1.0.4/customprocessing/index.html
+++ b/v1.0.4/customprocessing/index.html
@@ -1,29 +1,70 @@
-
-5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
+5. Custom pre- and post-processing · Literate.jl
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example
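A minimal sketch of such a function (the name update_date is just for illustration and is not part of the source above) could be:
using Dates
function update_date(content)
    # replace the placeholder with today's date
    content = replace(content, "DATEOFTODAY" => string(Date(now())))
    return content
end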
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and changes to them should also be reflected in the documentation.
A very easy way to do this is to use preprocess to replace the include statements with the file content. First, create a runnable .jl file following the format of Literate
# # Replace includes
+end
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
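Using the hypothetical update_date function sketched above, that could look like (the paths are placeholders):
Literate.markdown("docs/src/example.jl", "docs/generated"; preprocess = update_date)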
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and changes to them should also be reflected in the documentation.
A very easy way to do this is to use preprocess to replace the include statements with the file content. First, create a runnable .jl file following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then we define a suitable pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then we define a suitable pre-processing function:
function replace_includes(str)
- included = ["file1.jl", "file2.jl"]
+ included = ["file1.jl", "file2.jl"]
# Here the path loads the files from their proper directory,
# which may not be the directory of the `examples.jl` file!
- path = "directory/to/example/files/"
+ path = "directory/to/example/files/"
for ex in included
content = read(path*ex, String)
- str = replace(str, "include(\"$(ex)\")" => content)
+ str = replace(str, "include(\"$(ex)\")" => content)
end
return str
-end
(of course, replace the files listed in included with your respective files)
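Finally, pass this function to the generator when processing examples.jl (a sketch; the output directory is a placeholder):
Literate.markdown("examples.jl", "docs/generated"; preprocess = replace_includes)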
\ No newline at end of file
diff --git a/v1.0.4/documenter/index.html b/v1.0.4/documenter/index.html
index e8b7cb6..452d252 100644
--- a/v1.0.4/documenter/index.html
+++ b/v1.0.4/documenter/index.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
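The block has roughly the following form (the URL here is illustrative; Literate fills in the link to your actual source file):
```@meta
EditURL = "https://github.com/USER/PACKAGE.jl/blob/master/docs/src/inputfile.jl"
```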
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v1.0.4/fileformat/index.html b/v1.0.4/fileformat/index.html
index 6c9695d..0809092 100644
--- a/v1.0.4/fileformat/index.html
+++ b/v1.0.4/fileformat/index.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
The source file format for Literate is a regular, commented Julia (.jl) script. The idea is that the scripts also serve as documentation on their own, and that it is simple to include them in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since # lines are treated as markdown we can not use them for regular Julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented Julia (.jl) script. The idea is that the scripts also serve as documentation on their own, and that it is simple to include them in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since # lines are treated as markdown we can not use them for regular Julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -9,9 +50,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
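For example (a sketch following the pattern described above; the docstring name is only an illustration):
#md # ```@docs
#md # Literate.markdown
#md # ```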
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient to use when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient to use when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
\ No newline at end of file
diff --git a/v1.0.4/generated/example/index.html b/v1.0.4/generated/example/index.html
index 3626d8c..580045e 100644
--- a/v1.0.4/generated/example/index.html
+++ b/v1.0.4/generated/example/index.html
@@ -1,7 +1,48 @@
-
-7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
-y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
- println("This string is printed to stdout.")
+7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -10,126 +51,55 @@ foo()
This string is printed to stdout.
1
2
3
- 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
+ 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
-plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at the time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at the time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v1.0.4/outputformats/index.html b/v1.0.4/outputformats/index.html
index a956461..335a052 100644
--- a/v1.0.4/outputformats/index.html
+++ b/v1.0.4/outputformats/index.html
@@ -1,5 +1,46 @@
-
-4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
-```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
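As a sketch (the paths are placeholders), a call that overrides some of these keyword arguments could look like:
Literate.markdown("docs/src/example.jl", "docs/generated";
                  name = "example",
                  documenter = false,
                  codefence = "```julia" => "```",
                  credit = false)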
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
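For example (a sketch with placeholder paths), generating the notebook without executing it:
Literate.notebook("docs/src/example.jl", "docs/generated"; execute = false)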
The (default) script output of the source snippet above is as follows
x = 1//3
+```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
-z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
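For example (a sketch with placeholder paths), keeping the markdown lines as comments in the generated script:
Literate.script("docs/src/example.jl", "docs/generated"; keep_comments = true)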
\ No newline at end of file
diff --git a/v1.0.4/pipeline/index.html b/v1.0.4/pipeline/index.html
index 067ff5f..03f7ec6 100644
--- a/v1.0.4/pipeline/index.html
+++ b/v1.0.4/pipeline/index.html
@@ -1,5 +1,46 @@
-
-3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -9,7 +50,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
-z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
+z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -23,7 +64,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
-y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
+y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
-z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents will be generated as part of the build process rather than being files in the repo.
+z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents will be generated as part of the build process rather than being files in the repo.
\ No newline at end of file
diff --git a/v1.0.5/customprocessing/index.html b/v1.0.5/customprocessing/index.html
index 48adc7c..ba1068a 100644
--- a/v1.0.5/customprocessing/index.html
+++ b/v1.0.5/customprocessing/index.html
@@ -1,29 +1,70 @@
-
-5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the syntax this package provides by default. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: for markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the syntax this package provides by default. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: for markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and changes to them should also be reflected in the documentation.
A very easy way to do this is to use preprocess to replace the include statements with the file content. First, create a runnable .jl file following the format of Literate
# # Replace includes
+end
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and changes to them should also be reflected in the documentation.
A very easy way to do this is to use preprocess to replace the include statements with the file content. First, create a runnable .jl file following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then we define a suitable pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then we define a suitable pre-processing function:
function replace_includes(str)
- included = ["file1.jl", "file2.jl"]
+ included = ["file1.jl", "file2.jl"]
# Here the path loads the files from their proper directory,
# which may not be the directory of the `examples.jl` file!
- path = "directory/to/example/files/"
+ path = "directory/to/example/files/"
for ex in included
content = read(path*ex, String)
- str = replace(str, "include(\"$(ex)\")" => content)
+ str = replace(str, "include(\"$(ex)\")" => content)
end
return str
-end
(of course, replace the files listed in included with your respective files)
\ No newline at end of file
diff --git a/v1.0.5/documenter/index.html b/v1.0.5/documenter/index.html
index e8b7cb6..452d252 100644
--- a/v1.0.5/documenter/index.html
+++ b/v1.0.5/documenter/index.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v1.0.5/fileformat/index.html b/v1.0.5/fileformat/index.html
index 6c9695d..0809092 100644
--- a/v1.0.5/fileformat/index.html
+++ b/v1.0.5/fileformat/index.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
The source file format for Literate is a regular, commented Julia (.jl) script. The idea is that the scripts also serve as documentation on their own, and that it is simple to include them in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since # lines are treated as markdown we can not use them for regular Julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented Julia (.jl) script. The idea is that the scripts also serve as documentation on their own, and that it is simple to include them in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since # lines are treated as markdown we can not use them for regular Julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -9,9 +50,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient to use when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient to use when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
\ No newline at end of file
diff --git a/v1.0.5/generated/example/index.html b/v1.0.5/generated/example/index.html
index 94ede41..6e9ae68 100644
--- a/v1.0.5/generated/example/index.html
+++ b/v1.0.5/generated/example/index.html
@@ -1,7 +1,48 @@
-
-7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
-y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
- println("This string is printed to stdout.")
+7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the html-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be found here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -10,126 +51,55 @@ foo()
This string is printed to stdout.
1
2
3
 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
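For example, with placeholder paths (adjust these to your own package layout), the three generators can be called like this:
using Literate

inputfile = "docs/src/example.jl"   # placeholder path to the Literate source file
outputdir = "docs/src/generated"    # placeholder output directory

Literate.markdown(inputfile, outputdir)  # markdown page, e.g. for Documenter.jl
Literate.notebook(inputfile, outputdir)  # Jupyter notebook
Literate.script(inputfile, outputdir)    # plain Julia script stripped of metadata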
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v1.0.5/outputformats/index.html b/v1.0.5/outputformats/index.html
index 8f0d2d4..f67d5c9 100644
--- a/v1.0.5/outputformats/index.html
+++ b/v1.0.5/outputformats/index.html
@@ -1,5 +1,46 @@
4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended for use with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
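For example, to generate plain markdown without Documenter-specific blocks, one could call something along these lines (the file names are placeholders):
using Literate
Literate.markdown("example.jl", "output";
                  name = "example",
                  documenter = false,                 # use plain ```julia fences
                  codefence = "```julia" => "```")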
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory when the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
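For instance, to generate the notebook without executing it (file names again placeholders):
using Literate
Literate.notebook("example.jl", "output"; execute = false)  # leave output cells empty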
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
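For instance, to keep the markdown lines as comments in the generated script (placeholder file names):
using Literate
Literate.script("example.jl", "output"; keep_comments = true)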
\ No newline at end of file
diff --git a/v1.0.5/pipeline/index.html b/v1.0.5/pipeline/index.html
index 067ff5f..03f7ec6 100644
--- a/v1.0.5/pipeline/index.html
+++ b/v1.0.5/pipeline/index.html
@@ -1,5 +1,46 @@
3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -9,7 +50,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -23,7 +64,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following is happening:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
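For instance, a docs/make.jl script might contain something along these lines (the paths and the name are placeholders):
using Literate

const OUTPUT = joinpath(@__DIR__, "src", "generated")  # add this directory to .gitignore
mkpath(OUTPUT)
Literate.markdown(joinpath(@__DIR__, "..", "examples", "example.jl"), OUTPUT; name = "example")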
\ No newline at end of file
diff --git a/v1.1.0/customprocessing/index.html b/v1.1.0/customprocessing/index.html
index 48adc7c..ba1068a 100644
--- a/v1.1.0/customprocessing/index.html
+++ b/v1.1.0/customprocessing/index.html
@@ -1,29 +1,70 @@
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
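One possible implementation, using the Dates standard library (the function name update_date is chosen here only for illustration):
using Dates

function update_date(content)
    # Replace the placeholder with today's date.
    content = replace(content, "DATEOFTODAY" => string(Dates.today()))
    return content
end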
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
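Assuming the source file lives at examples/example.jl (a placeholder path), the function is passed via the preprocess keyword:
using Literate
Literate.markdown("examples/example.jl", "docs/src/generated"; preprocess = update_date)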
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could, for example, be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is to use preprocess to interchange include statements with file content. First, create a runnable .jl file following the format of Literate:
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
    for ex in included
        content = read(path*ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end
(of course replace included with your respective files)
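Finally, pass this function to the generator via the preprocess keyword (the output path and name below are placeholders):
using Literate
Literate.markdown("examples.jl", "docs/src/generated"; name = "examples", preprocess = replace_includes)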
\ No newline at end of file
diff --git a/v1.1.0/documenter/index.html b/v1.1.0/documenter/index.html
index e8b7cb6..452d252 100644
--- a/v1.1.0/documenter/index.html
+++ b/v1.1.0/documenter/index.html
@@ -1,12 +1,53 @@
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files, and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
```
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v1.1.0/fileformat/index.html b/v1.1.0/fileformat/index.html
index 1737a43..c9b2186 100644
--- a/v1.1.0/fileformat/index.html
+++ b/v1.1.0/fileformat/index.html
@@ -1,5 +1,46 @@
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the script also serves as documentation on its own and it is also simple to include it in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -9,9 +50,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
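As a quick, made-up illustration, a Literate source file might contain lines like these:
#md # Only this line ends up in the markdown output.
#nb # Only this line ends up in the notebook output.
#jl # Only this line ends up in the script output.
#src # This line is exclusive to the source file and is filtered out of every output.
x = 1 // 3 # an ordinary code line, included in every output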
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient when you want to link to files outside the doc-build directory. For example @__REPO__ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
@__BINDER_ROOT_URL__
expands to https://mybinder.org/v2/gh/$(ENV["TRAVIS_REPO_SLUG"])/$(branch)?filepath=$(folder)/ where branch/folder is the branch and folder where Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder-badge in e.g. the HTML output you can use:
\ No newline at end of file
diff --git a/v1.1.0/generated/example/index.html b/v1.1.0/generated/example/index.html
index f12f77d..c12152a 100644
--- a/v1.1.0/generated/example/index.html
+++ b/v1.1.0/generated/example/index.html
@@ -1,7 +1,48 @@
7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
function foo()
    println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -9,121 +50,55 @@ foo()
4-element Array{Int64,1}:
1
2
3
 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v1.1.0/outputformats/index.html b/v1.1.0/outputformats/index.html
index 570e247..febce45 100644
--- a/v1.1.0/outputformats/index.html
+++ b/v1.1.0/outputformats/index.html
@@ -1,5 +1,46 @@
4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended for use with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory when the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
\ No newline at end of file
diff --git a/v1.1.0/pipeline/index.html b/v1.1.0/pipeline/index.html
index 067ff5f..03f7ec6 100644
--- a/v1.1.0/pipeline/index.html
+++ b/v1.1.0/pipeline/index.html
@@ -1,5 +1,46 @@
3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
x = 1 // 3 <- code
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
x = 1 // 3 ┐
y = 2 // 5 ┘ code
# When adding `x` and `y` together we obtain a new rational number: │ markdown
z = x + y │ code
Chunk #1:
Rational numbers
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, you may want to split a block of code into two, so that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
After the parsing it is time to generate the output. What is done in this step differs a lot depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl), where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user-supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
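As a sketch of how this fits into a typical Documenter build, the generation can be driven from the docs build script. The file names, package name and page layout below are assumptions for illustration, not taken from this document:
# docs/make.jl
using Documenter, Literate

# Generate the markdown from the Literate source before building the site;
# docs/src/generated is assumed to be listed in .gitignore.
Literate.markdown(joinpath(@__DIR__, "src", "example.jl"),
                  joinpath(@__DIR__, "src", "generated"); name = "example")

makedocs(sitename = "MyPackage.jl",
         pages = ["Home" => "index.md",
                  "Example" => "generated/example.md"])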
diff --git a/v2.0.0/customprocessing/index.html b/v2.0.0/customprocessing/index.html
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: for markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example
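a minimal function along these lines (the name update_date and the use of the Dates standard library are illustrative assumptions):
using Dates
function update_date(content)
    # Splice today's date into the source in place of the placeholder.
    content = replace(content, "DATEOFTODAY" => Dates.today())
    return content
end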
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
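For instance (the input file and output directory are illustrative):
Literate.markdown("input.jl", "outputdir"; preprocess = update_date)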
Let's say that we have some individual example files file1, file2, etc. that are runnable and also follow the style of Literate. These files could, for example, be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is to use preprocess to interchange the include statements with the actual file content. First, create a runnable .jl file following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
included = ["file1.jl", "file2.jl"]
# Here the path loads the files from their proper directory,
# which may not be the directory of the `examples.jl` file!
path = "directory/to/example/files/"
for ex in included
content = read(path*ex, String)
str = replace(str, "include(\"$(ex)\")" => content)
end
return str
end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
                  name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
diff --git a/v2.0.0/documenter/index.html b/v2.0.0/documenter/index.html
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files, and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file.
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook. Documenter style markdown math (```math blocks) is automatically changed to \begin{equation} ... \end{equation} in the notebook output.
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
diff --git a/v2.0.0/fileformat/index.html b/v2.0.0/fileformat/index.html
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the script also serves as documentation on its own, and it is also simple to include it in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use that for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
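For instance, a filtered block along these lines (the particular docstrings listed are illustrative):
#md # ```@docs
#md #     Literate.markdown
#md #     Literate.notebook
#md #     Literate.script
#md # ```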
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
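The example file could then be pulled into the test suite roughly like this (the path is illustrative):
# test/runtests.jl
# The #src-tagged @test lines run here, but are stripped from every generated output.
include(joinpath(@__DIR__, "..", "docs", "src", "example.jl"))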
@__REPO_ROOT_URL__
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master/ and is convenient when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder)/ where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
@__BINDER_ROOT_URL__
expands to https://mybinder.org/v2/gh/$(ENV["TRAVIS_REPO_SLUG"])/$(branch)?filepath=$(folder)/ where branch/folder is the branch and folder where Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder-badge in e.g. the HTML output you can use:
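a markdown badge line along these lines (the badge image URL and notebook path are illustrative):
#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__generated/example.ipynb)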
diff --git a/v2.0.0/generated/example/index.html b/v2.0.0/generated/example/index.html
7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
foo()
4-element Array{Int64,1}:
1
2
3
4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
diff --git a/v2.0.0/outputformats/index.html b/v2.0.0/outputformats/index.html
4. Output Formats · Literate.jl
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: a Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
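For example, to generate a plain fenced markdown file without Documenter-specific blocks, a call along these lines could be used (the input and output paths are illustrative):
using Literate
Literate.markdown("docs/src/example.jl", "docs/src/generated";
                  name = "example", documenter = false,
                  codefence = "```julia" => "```")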
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
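For instance, a metadata line following this format could look like the one below; the cell type and JSON payload are illustrative, here for a RISE slideshow:
%% A markdown cell for the title slide [markdown] {"slideshow": {"slide_type": "slide"}}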
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
diff --git a/v2.0.0/pipeline/index.html b/v2.0.0/pipeline/index.html
3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
x = 1 // 3 <- code
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
x = 1 // 3 ┐
y = 2 // 5 ┘ code
# When adding `x` and `y` together we obtain a new rational number: │ markdown
z = x + y │ code
Chunk #1:
Rational numbers
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, you may want to split a block of code into two, so that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step differs a lot depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl), where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user-supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
diff --git a/v2.0.2/customprocessing/index.html b/v2.0.2/customprocessing/index.html
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: for markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
Let's say that we have some individual example files file1, file2, etc. that are runnable and also follow the style of Literate. These files could, for example, be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is to use preprocess to interchange the include statements with the actual file content. First, create a runnable .jl file following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you need to define a pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
    for ex in included
        content = read(path * ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
- name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file (an example of such a block is shown in the Output Formats section).
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math (```math ... ```) is converted to notebook-compatible math, since Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output.
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
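As a hypothetical example, generating a standalone notebook from a source file that does not use any Documenter syntax could disable this processing (file and directory names are placeholders):
Literate.notebook("example.jl", "build"; documenter = false)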
2. File Format · Literate.jl
The source file format for Literate is a regular, commented julia (.jl) script. The idea is that the script also serves as documentation on its own and it is simple to include it in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use that for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
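For context, a hedged sketch of how such a source file might then be pulled into test/runtests.jl (the path and file name are placeholders):
using Test
@testset "example.jl" begin
    # The #src-tagged @test lines remain in the source and run here,
    # but are filtered out of the generated outputs.
    include(joinpath(@__DIR__, "..", "examples", "example.jl"))
end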
@__REPO_ROOT_URL__
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master and is convenient when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder) where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
@__BINDER_ROOT_URL__
expands to https://mybinder.org/v2/gh/$(ENV["TRAVIS_REPO_SLUG"])/$(branch)?filepath=$(folder) where branch/folder is the branch and folder where Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in https://mybinder.org/, for example as the target of a binder badge in the HTML output.
@__REPO_ROOT_URL__ and @__NBVIEWER_ROOT_URL__ work for documentation built with DocumentationGenerator.jl, but @__BINDER_ROOT_URL__ does not, since mybinder.org requires the files to be located inside a git repository.
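As a hypothetical illustration, a source file could use these replacements in its markdown lines like the following (paths and file names are placeholders):
#md # The full source of this example is available [here](@__REPO_ROOT_URL__/examples/example.jl).
#nb # This notebook can also be viewed online at @__NBVIEWER_ROOT_URL__/generated/example.ipynb.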
7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
    return [1, 2, 3, 4]
end
foo()
4-element Array{Int64,1}:
1
2
3
4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions (Literate.markdown, Literate.notebook and Literate.script), all of which take the same script file as input, but generate different output.
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
4. Output Formats · Literate.jl
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
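A hypothetical invocation overriding a couple of these defaults (file names and paths are placeholders) might look like:
Literate.markdown("example.jl", "docs/src/generated";
    name = "example", documenter = true, codefence = "```julia" => "```")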
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
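Similarly, a sketch of generating a notebook without executing it (paths are placeholders):
Literate.notebook("example.jl", "docs/src/generated"; execute = false)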
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
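Constructed directly from the format above, a hypothetical line attaching RISE slideshow metadata to a cell could look like the following (the exact placement in a source file may differ):
%% A new slide [markdown] {"slideshow": {"slide_type": "slide"}}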
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
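And a sketch of producing a plain script that keeps the markdown lines as comments (paths are placeholders):
Literate.script("example.jl", "build"; keep_comments = true)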
3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read into a String, and the first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
<- code
## Define variable x and y <- code
x = 1 // 3 <- code
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
Chunk #1:
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens (see also the sketch after this list):
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
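To make this concrete, a docs build script could drive all three generators from the same source file along these lines (a hedged sketch; file names and paths are placeholders):
using Literate
inputfile = joinpath(@__DIR__, "..", "examples", "example.jl")
outputdir = joinpath(@__DIR__, "src", "generated")
Literate.markdown(inputfile, outputdir)   # @example-block markdown for Documenter
Literate.notebook(inputfile, outputdir)   # executed Jupyter notebook
Literate.script(inputfile, outputdir)     # plain .jl script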
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents are generated as part of the build process rather than being files in the repo.
diff --git a/v2.0.3/customprocessing/index.html b/v2.0.3/customprocessing/index.html
index c042d88..edfc29a 100644
--- a/v2.0.3/customprocessing/index.html
+++ b/v2.0.3/customprocessing/index.html
@@ -1,29 +1,70 @@
-
-5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, lets define the preprocess function, for example
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl following the format of Literate
# # Replace includes
+end
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
- included = ["file1.jl", "file2.jl"]
+ included = ["file1.jl", "file2.jl"]
# Here the path loads the files from their proper directory,
# which may not be the directory of the `examples.jl` file!
- path = "directory/to/example/files/"
+ path = "directory/to/example/files/"
for ex in included
content = read(path*ex, String)
- str = replace(str, "include(\"$(ex)\")" => content)
+ str = replace(str, "include(\"$(ex)\")" => content)
end
return str
-end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
- name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by literate via this make.jl file to generate the markdown and code cells of the documentation.
+end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
+ name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by literate via this make.jl file to generate the markdown and code cells of the documentation.
\ No newline at end of file
diff --git a/v2.0.3/documenter/index.html b/v2.0.3/documenter/index.html
index 4dd45f7..caf5e65 100644
--- a/v2.0.3/documenter/index.html
+++ b/v2.0.3/documenter/index.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose, it spits out regular markdown files, and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) supports a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code have been written with Documenter.jl in mind. So lets take a look at what will happen if we set documenter = true:
Literate can be used for any purpose, it spits out regular markdown files, and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) supports a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code have been written with Documenter.jl in mind. So lets take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v2.0.3/fileformat/index.html b/v2.0.3/fileformat/index.html
index effedf6..e3d434f 100644
--- a/v2.0.3/fileformat/index.html
+++ b/v2.0.3/fileformat/index.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) scripts. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up do date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines is treated as markdown we can not use that for regular julia comments, for this you can instead use ##, which will render as # in the output.
Lets look at a simple example:
# # Rational numbers
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) scripts. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up do date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines is treated as markdown we can not use that for regular julia comments, for this you can instead use ##, which will render as # in the output.
Lets look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,9 +51,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simple remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master and is a convenient way to use when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder) where folder is the folder that Documenter.deploydocs deploys too. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
@__BINDER_ROOT_URL__
expands to https://mybinder.org/v2/gh/$(ENV["TRAVIS_REPO_SLUG"])/$(branch)?filepath=$(folder) where branch/folder is the branch and folder where Documenter.deploydocs deploys too. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder-badge in e.g. the HTML output you can use:
@__REPO_ROOT_URL__ and @__NBVIEWER_ROOT_URL__ works for documentation built with DocumentationGenerator.jl but @__BINDER_ROOT_URL__ does not, since mybinder.org requires the files to be located inside a git repository.
\ No newline at end of file
diff --git a/v2.0.3/generated/example/index.html b/v2.0.3/generated/example/index.html
index eb97c46..3dc582e 100644
--- a/v2.0.3/generated/example/index.html
+++ b/v2.0.3/generated/example/index.html
@@ -1,7 +1,48 @@
7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -9,121 +50,55 @@ foo()
4-element Array{Int64,1}:
1
2
3
4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package:
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v2.0.3/outputformats/index.html b/v2.0.3/outputformats/index.html
index c4c7eaf..6fd49df 100644
--- a/v2.0.3/outputformats/index.html
+++ b/v2.0.3/outputformats/index.html
@@ -1,5 +1,46 @@
4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
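As a minimal usage sketch of the keyword arguments above (file names and paths are illustrative):
using Literate
Literate.markdown("example.jl", "docs/src/generated"; name = "example", documenter = true)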
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
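A corresponding usage sketch for the notebook generator (paths are again illustrative; here execution is turned off):
using Literate
Literate.notebook("example.jl", "docs/src/generated"; name = "example", execute = false)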
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
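Purely as an illustration of the format string above (the metadata payload is hypothetical):
%% this text is ignored [markdown] {"slideshow": {"slide_type": "slide"}}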
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
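A minimal usage sketch for the script generator (paths are illustrative):
using Literate
Literate.script("example.jl", "docs/src/generated"; name = "example", keep_comments = true)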
\ No newline at end of file
diff --git a/v2.0.3/pipeline/index.html b/v2.0.3/pipeline/index.html
index 2712f81..ee291b8 100644
--- a/v2.0.3/pipeline/index.html
+++ b/v2.0.3/pipeline/index.html
@@ -1,5 +1,46 @@
3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -10,7 +51,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -26,7 +67,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
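Putting the pieces together, a doc-build script could drive all three generators from the same source file. The following is only a sketch with illustrative paths:
using Literate
# generate example.md, example.ipynb and example.jl in docs/src/generated
Literate.markdown("examples/example.jl", "docs/src/generated"; name = "example")
Literate.notebook("examples/example.jl", "docs/src/generated"; name = "example")
Literate.script("examples/example.jl", "docs/src/generated"; name = "example")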
\ No newline at end of file
diff --git a/v2.0.4/customprocessing/index.html b/v2.0.4/customprocessing/index.html
index c042d88..edfc29a 100644
--- a/v2.0.4/customprocessing/index.html
+++ b/v2.0.4/customprocessing/index.html
@@ -1,29 +1,70 @@
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package offers. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
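A minimal sketch of such a function, assuming the Dates standard library is used for the date:
using Dates
function update_date(content)
    # replace the placeholder with the date of generation
    content = replace(content, "DATEOFTODAY" => string(Date(now())))
    return content
end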
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
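A sketch of such a call (input file and output directory are illustrative):
Literate.markdown("input.jl", "outputdir"; preprocess = update_date)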
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl file following the format of Literate:
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
    for ex in included
        content = read(path*ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end
(of course, replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
                  name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
\ No newline at end of file
diff --git a/v2.0.4/documenter/index.html b/v2.0.4/documenter/index.html
index 4dd45f7..caf5e65 100644
--- a/v2.0.4/documenter/index.html
+++ b/v2.0.4/documenter/index.html
@@ -1,12 +1,53 @@
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math (```math ... ``` blocks) is automatically changed to notebook-compatible math in the notebook output.
Similarly, for the script output, Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v2.0.4/fileformat/index.html b/v2.0.4/fileformat/index.html
index effedf6..e3d434f 100644
--- a/v2.0.4/fileformat/index.html
+++ b/v2.0.4/fileformat/index.html
@@ -1,5 +1,46 @@
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, Julia (.jl) script. The idea is that the script also serves as documentation on its own, and it is also simple to include it in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since lines starting with # are treated as markdown we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
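For instance (a minimal illustration):
## This is a regular comment, and will appear as "# This is a regular comment" in the output.
x = 1 // 3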
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,9 +51,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
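As a sketch, such lines in the source file could look like the following (the docstring reference here is only an illustration):
#md # ```@docs
#md # Literate.markdown
#md # ```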
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
@__REPO_ROOT_URL__
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master and is convenient when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder) where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
@__BINDER_ROOT_URL__
expands to https://mybinder.org/v2/gh/$(ENV["TRAVIS_REPO_SLUG"])/$(branch)?filepath=$(folder) where branch/folder is the branch and folder where Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder-badge in e.g. the HTML output you can use:
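A badge line in the source file might look like the following sketch (the notebook path is illustrative):
#md # [![binder](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/generated/example.ipynb)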
@__REPO_ROOT_URL__ and @__NBVIEWER_ROOT_URL__ work for documentation built with DocumentationGenerator.jl but @__BINDER_ROOT_URL__ does not, since mybinder.org requires the files to be located inside a git repository.
\ No newline at end of file
diff --git a/v2.0.4/generated/example/index.html b/v2.0.4/generated/example/index.html
index b488ea0..977cd44 100644
--- a/v2.0.4/generated/example/index.html
+++ b/v2.0.4/generated/example/index.html
@@ -1,7 +1,48 @@
7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -9,121 +50,55 @@ foo()
4-element Array{Int64,1}:
1
2
3
4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package:
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v2.0.4/outputformats/index.html b/v2.0.4/outputformats/index.html
index ab95b5b..63b2f11 100644
--- a/v2.0.4/outputformats/index.html
+++ b/v2.0.4/outputformats/index.html
@@ -1,5 +1,46 @@
4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,8 +51,8 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
The (default) markdown output of the source snippet above is as follows
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md. name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
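For illustration, here is a hypothetical call that combines a few of the keyword arguments listed above (the file and directory names are made up for the example):
using Literate
Literate.markdown("example.jl", "docs/src/generated";
                  name = "rational", documenter = true, credit = false)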
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
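An instance of this format could, for example, look as follows (the JSON shown is just the RISE slide-type metadata mentioned above, included purely for illustration):
%% a slide cell {"slideshow": {"slide_type": "slide"}}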
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
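As an illustration, a hypothetical call that keeps the markdown lines as comments in the generated script could be:
using Literate
Literate.script("example.jl", "build/scripts"; keep_comments = true)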
diff --git a/v2.0.4/pipeline/index.html b/v2.0.4/pipeline/index.html
index 2712f81..ee291b8 100644
--- a/v2.0.4/pipeline/index.html
+++ b/v2.0.4/pipeline/index.html
@@ -1,5 +1,46 @@
3. Processing pipeline · Literate.jl
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -10,7 +51,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y                                                                <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -26,7 +67,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split, for example if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be created as part of the build process rather than being files in the repo.
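For example, if the generated files end up in docs/generated, as in the example above, the corresponding .gitignore entry would simply be:
docs/generated/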
diff --git a/v2.1.0/customprocessing.html b/v2.1.0/customprocessing.html
index ff53385..724adda 100644
--- a/v2.1.0/customprocessing.html
+++ b/v2.1.0/customprocessing.html
@@ -1,29 +1,70 @@
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
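As a rough sketch of what a notebook postprocess function can look like (assuming the dictionary follows the usual Jupyter layout with a top-level "cells" list; the function name and the appended cell are made up for the example):
function notebook_postprocess(nb)
    # append an extra markdown cell at the end of the notebook
    push!(nb["cells"], Dict("cell_type" => "markdown",
                            "metadata"  => Dict(),
                            "source"    => "*Appended by the postprocess step.*"))
    return nb
end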
Example: Adding current date
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
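A function along these lines would do the job (a sketch; update_date is just an illustrative name):
using Dates
function update_date(content)
    content = replace(content, "DATEOFTODAY" => string(Date(now())))
    return content
end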
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
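A call along these lines would do it (the input file name and output directory are illustrative):
Literate.markdown("input.jl", "outputdir"; preprocess = update_date)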
Example: Replacing include calls with included code
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could, for example, be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is to use preprocess to interchange the include statements with the content of the included files. First, create a runnable .jl file following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
include("file2.jl")
Let's say we have saved this file as examples.jl. Then we define a suitable pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
    for ex in included
        content = read(path*ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
    name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
diff --git a/v2.1.0/documenter.html b/v2.1.0/documenter.html
index 34cb052..33fc836 100644
--- a/v2.1.0/documenter.html
+++ b/v2.1.0/documenter.html
@@ -1,12 +1,53 @@
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
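Such a block has the same shape as the one shown in the Markdown Output example earlier, e.g.:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```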
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
diff --git a/v2.1.0/fileformat.html b/v2.1.0/fileformat.html
index 2ef3b97..999a154 100644
--- a/v2.1.0/fileformat.html
+++ b/v2.1.0/fileformat.html
@@ -1,5 +1,46 @@
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
2.1. Syntax
The basic syntax is simple:
lines starting with # are treated as markdown,
all other lines are treated as julia code.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
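For instance, a source line like the following (an illustrative line, not part of the example below):
## this is a regular julia comment
would show up as # this is a regular julia comment in the generated output.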
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,9 +51,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
There is also some default convenience replacements that will always be performed, see Default Replacements.
2.2. Filtering Lines
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
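#md # Literate.markdown
#md # ```
(The docstring reference, Literate.markdown, and the closing line shown here are an illustrative completion of the block started above.)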
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
2.3. Default Replacements
The following convenience "macros" are always expanded:
@__REPO_ROOT_URL__
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master and is convenient when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder) where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
@__BINDER_ROOT_URL__
expands to https://mybinder.org/v2/gh/$(ENV["TRAVIS_REPO_SLUG"])/$(branch)?filepath=$(folder) where branch/folder is the branch and folder where Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder-badge in e.g. the HTML output you can use:
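a line along these lines in the source file (the badge image URL is the standard mybinder badge, and the notebook path after the macro is illustrative):
#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/generated/example.ipynb)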
@__REPO_ROOT_URL__ and @__NBVIEWER_ROOT_URL__ work for documentation built with DocumentationGenerator.jl but @__BINDER_ROOT_URL__ does not, since mybinder.org requires the files to be located inside a git repository.
diff --git a/v2.1.0/generated/example.html b/v2.1.0/generated/example.html
index c86597c..2d372d2 100644
--- a/v2.1.0/generated/example.html
+++ b/v2.1.0/generated/example.html
@@ -1,7 +1,48 @@
7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
Basic syntax
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
x + y
11//15
x * y
2//15
Output Capturing
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
    return [1, 2, 3, 4]
end
@@ -9,121 +50,55 @@ foo()
4-element Array{Int64,1}:
1
2
3
 4
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
Custom processing
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
z = 1.0 + 2.0im
1.0 + 2.0im
Documenter.jl interaction
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Welcome to the documentation for Literate – a simplistic package for Literate Programming.
What?
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Literate.script: generates a plain script file, removing all metadata and special syntax.
Why?
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
diff --git a/v2.1.0/outputformats.html b/v2.1.0/outputformats.html
index 584296d..41a5956 100644
--- a/v2.1.0/outputformats.html
+++ b/v2.1.0/outputformats.html
@@ -1,5 +1,46 @@
4. Output Formats · Literate.jl
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -11,7 +52,7 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
4.1. Markdown Output
The (default) markdown output of the source snippet above is as follows
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and that the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md; name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
4.2. Notebook Output
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
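For illustration, a hypothetical call that generates the notebook without executing it could look like:
using Literate
Literate.notebook("example.jl", "build/notebooks"; execute = false)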
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
4.3. Script Output
The (default) script output of the source snippet above is as follows
x = 1//3
y = 2//5
z = x + y
We note that lines starting with # are removed and only the code lines have been kept. Some of the output rendering can be controlled with keyword arguments to Literate.script:
Generate a plain script file from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .jl. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during script generation. Defaults to true. See the manual section on Interaction with Documenter.
keep_comments: boolean that, if set to true, keeps markdown lines as comments in the output script. Defaults to false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
3. Processing pipeline · Literate.jl
3.1. Pre-processing
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
3.2. Parsing
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
x = 1 // 3 <- code
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
Chunk #1:
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Custom control over chunk splits
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
3.3. Document generation
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
3.4. Post-processing
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
3.5. Writing to file
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be created as part of the build process rather than being files in the repo.
5. Custom pre- and post-processing
Since all packages are different and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
Example: Adding current date
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example
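A minimal sketch of such a function, using the Dates standard library (the name update_date is just an illustrative choice), could be:
using Dates
function update_date(content)
    # Splice today's date into the source before Literate parses it
    content = replace(content, "DATEOFTODAY" => string(Dates.today()))
    return content
end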
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
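Assuming the source file above is saved as example.jl and the output should go to a hypothetical docs/generated directory, the call could look like:
Literate.markdown("example.jl", "docs/generated"; preprocess = update_date)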
Example: Replacing include calls with included code
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could for example be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl file following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
- included = ["file1.jl", "file2.jl"]
+ included = ["file1.jl", "file2.jl"]
# Here the path loads the files from their proper directory,
# which may not be the directory of the `examples.jl` file!
- path = "directory/to/example/files/"
+ path = "directory/to/example/files/"
for ex in included
content = read(path*ex, String)
- str = replace(str, "include(\"$(ex)\")" => content)
+ str = replace(str, "include(\"$(ex)\")" => content)
end
return str
-end
(of course, replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
                  name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
6. Interaction with Documenter.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
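Concretely, the inserted block looks something like this (using the EditURL value shown in the Output Formats section):
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```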
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
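For example, a markdown line in the source can contain a Documenter-style reference like the following; with documenter = true the @ref will not leak into the notebook or script output:
# See the [Output Formats](@ref) section of the manual.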
2. File Format
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
2.1. Syntax
The basic syntax is simple:
lines starting with # are treated as markdown,
all other lines are treated as julia code.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
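For instance, a ##-line stays with the code and is rendered with a single # (a minimal illustration):
## This is a regular code comment
x = 1 // 3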
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
There is also some default convenience replacements that will always be performed, see Default Replacements.
2.2. Filtering Lines
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
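For example, the following code line would end up in the markdown and script output but be dropped from the notebook:
#!nb x = 1 // 3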
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
2.3. Default Replacements
The following convenience "macros" are always expanded:
@__REPO_ROOT_URL__
expands to https://github.com/$(ENV["TRAVIS_REPO_SLUG"])/blob/master and is convenient to use when you want to link to files outside the doc-build directory. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module.
@__NBVIEWER_ROOT_URL__
expands to https://nbviewer.jupyter.org/github/$(ENV["TRAVIS_REPO_SLUG"])/blob/gh-pages/$(folder) where folder is the folder that Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/.
@__BINDER_ROOT_URL__
expands to https://mybinder.org/v2/gh/$(ENV["TRAVIS_REPO_SLUG"])/$(branch)?filepath=$(folder) where branch/folder is the branch and folder where Documenter.deploydocs deploys to. This can be used if you want a link that opens the generated notebook in https://mybinder.org/. To add a binder-badge in e.g. the HTML output you can use:
@__REPO_ROOT_URL__ and @__NBVIEWER_ROOT_URL__ work for documentation built with DocumentationGenerator.jl but @__BINDER_ROOT_URL__ does not, since mybinder.org requires the files to be located inside a git repository.
7. Example
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
Basic syntax
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
x + y
11//15
x * y
2//15
Output Capturing
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
foo()
4-element Array{Int64,1}:
1
2
3
 4
Just like in the REPL, lines ending with a semicolon hide the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
Custom processing
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
z = 1.0 + 2.0im
1.0 + 2.0im
Documenter.jl interaction
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Welcome to the documentation for Literate – a simplistic package for Literate Programming.
What?
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Literate.script: generates a plain script file, removing all metadata and special syntax.
Why?
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
4. Output Formats
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
4.1. Markdown Output
The (default) markdown output of the source snippet above is as follows
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Some of the output rendering can be controlled with keyword arguments to Literate.markdown:
Generate a markdown file from inputfile and write the result to the directory outputdir.
Keyword arguments:
name: name of the output file, excluding .md; name is also used to name all the @example blocks, and to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
documenter: boolean that tells if the output is intended to be used with Documenter.jl. Defaults to true. See the manual section on Interaction with Documenter.
codefence: A Pair of opening and closing code fence. Defaults to
"```@example $(name)" => "```"
if documenter = true and
"```julia" => "```"
if documenter = false.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
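For instance, to wrap the code chunks in plain julia fences instead of @example blocks, a call could look like this (the paths and the name are hypothetical):
Literate.markdown("docs/src/tutorial.jl", "docs/generated";
                  name = "tutorial", codefence = "```julia" => "```")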
4.2. Notebook Output
The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory while the notebook is executed. Some of the output rendering can be controlled with keyword arguments to Literate.notebook:
Generate a notebook from inputfile and write the result to outputdir.
Keyword arguments:
name: name of the output file, excluding .ipynb. name is also used to replace @__NAME__. Defaults to the filename of inputfile.
preprocess, postprocess: custom pre- and post-processing functions, see the Custom pre- and post-processing section of the manual. Defaults to identity.
execute: a boolean deciding if the generated notebook should also be executed or not. Defaults to true. The current working directory is set to outputdir when executing the notebook.
documenter: boolean that says if the source contains Documenter.jl specific things to filter out during notebook generation. Defaults to true. See the manual section on Interaction with Documenter.
credit: boolean that controls the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this to the default, which is true.
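For example, to generate the notebook without executing it, a call could look like this (hypothetical paths and name):
Literate.notebook("docs/src/tutorial.jl", "docs/generated";
                  name = "tutorial", execute = false)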
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
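For example, following the format above, a markdown cell could be marked as a RISE slide with a line like the following (the metadata JSON here is just an illustration):
%% A markdown cell [markdown] {"slideshow": {"slide_type": "slide"}}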
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, lets define the preprocess function, for example
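One possible sketch of such a function (the name update_date and the use of the Dates standard library are illustrative assumptions, not something this page mandates):
using Dates
function update_date(content)
    # replace the placeholder with today's date, e.g. "2019-11-26"
    content = replace(content, "DATEOFTODAY" => string(Dates.today()))
    return content
end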
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
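A sketch of that call (the input file and output directory are placeholders):
Literate.markdown("inputfile.jl", "outputdir"; preprocess = update_date)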
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could for example be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is to use preprocess to interchange include statements with file content. First, create a runnable .jl file following the format of Literate:
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
include("file2.jl")
Let's say we have saved this file as examples.jl. Then, define a pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
    for ex in included
        content = read(path*ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end
(of course replace included with your respective files)
Finally, pass this function to the generator:
Literate.markdown("examples.jl", "path/to/save/markdown";
    name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
diff --git a/v2.2.0/documenter/index.html b/v2.2.0/documenter/index.html
index 861b244..6151510 100644
--- a/v2.2.0/documenter/index.html
+++ b/v2.2.0/documenter/index.html
@@ -1,12 +1,53 @@
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
For markdown output, the following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file.
For notebook output, Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook. Documenter style markdown math (```math ... ```) is also rewritten so that it displays correctly in the notebook.
For script output, Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
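As a sketch, enabling this looks like the following (the paths are placeholders):
Literate.markdown("docs/src/example.jl", "docs/src/generated"; documenter = true)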
diff --git a/v2.2.0/fileformat/index.html b/v2.2.0/fileformat/index.html
index 00acb7b..a58c0d2 100644
--- a/v2.2.0/fileformat/index.html
+++ b/v2.2.0/fileformat/index.html
@@ -1,5 +1,46 @@
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the script also serves as documentation on its own, and it is simple to include it in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
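As a small sketch of the negated form mentioned in the tip above (the wording of the comment is just an illustration):
#!nb # This line is included in the markdown and script output, but filtered out for notebooks.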
@__REPO_ROOT_URL__:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
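A sketch of such a badge line (the notebook path is a placeholder; the badge image URL is the standard mybinder one):
#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/path/to/notebook.ipynb)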
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
diff --git a/v2.2.0/generated/example/index.html b/v2.2.0/generated/example/index.html
index b834c32..eb4df42 100644
--- a/v2.2.0/generated/example/index.html
+++ b/v2.2.0/generated/example/index.html
@@ -1,7 +1,48 @@
7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
    return [1, 2, 3, 4]
end

foo()
4-element Array{Int64,1}:
 1
 2
 3
 4
Just like in the REPL, outputs ending with a semicolon hide the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package:
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl) and Jupyter notebooks from the same source file. There is also an option to "clean" the source from all metadata and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output (see the sketch below).
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file, which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
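For reference, a sketch of the three generator calls mentioned above (the input file and output directory are placeholders):
import Literate
Literate.markdown("docs/src/example.jl", "docs/generated")   # markdown page, e.g. for Documenter.jl
Literate.notebook("docs/src/example.jl", "docs/generated")   # Jupyter notebook
Literate.script("docs/src/example.jl", "docs/generated")     # plain julia script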
diff --git a/v2.2.0/outputformats/index.html b/v2.2.0/outputformats/index.html
index efee599..0776322 100644
--- a/v2.2.0/outputformats/index.html
+++ b/v2.2.0/outputformats/index.html
@@ -1,5 +1,46 @@
4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory while the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows:
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
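As an illustration, a metadata line following the format above might look like this (the slideshow JSON is just an example of RISE metadata, not something prescribed by this page):
%% A new slide [markdown] {"slideshow": {"slide_type": "slide"}}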
Script output is generated by Literate.script. The (default) script output of the source snippet above is as follows:
x = 1//3
y = 2//5
z = x + y
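A sketch of generating this script output, optionally keeping the markdown lines as comments (the paths are placeholders; keep_comments is described in the configuration reference below):
Literate.script("docs/src/outputformats.jl", "output/directory"; keep_comments = true)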
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this: pass config::Dict as a keyword argument, or pass individual keyword arguments.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the defaults.
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
Literate.DEFAULT_CONFIGURATION
Default configuration for Literate.markdown, Literate.notebook and Literate.script which is used for everything not specified by the user. See the manual section about Configuration for more information.
name: Name of the output file (excluding the file extension). Default: filename(inputfile).
preprocess: Custom preprocessing function mapping a String to a String. Default: the identity function.
Boolean for controlling the addition of "This file was generated with Literate.jl ..." to the bottom of the page. If you find Literate.jl useful then feel free to keep this. Default: true.
keep_comments: When true, keeps markdown lines as comments in the output script. Default: false. Only applicable for Literate.script.
codefence: Pair containing the opening and closing fence for wrapping code blocks. Default: "```julia" => "```". If documenter is true the default is "```@example" => "```".
execute: Whether to execute and capture the output. Default: true. Only applicable for Literate.notebook.
devurl: URL for "in-development" docs. Default: "dev". See the Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url: URL to the root of the repository. Default: none; determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url: URL to the root of the repository as seen on nbviewer. Default: none; determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url: URL to the root of the repository as seen on mybinder. Default: none; determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path: Filepath to the root of the repository. Default: none; determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenter's EditURL.
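A small sketch of the two ways to pass configuration described above (the file names are placeholders); per the precedence rule, the explicit keyword wins:
config = Dict("name" => "hello", "codefence" => ("```julia" => "```"))
Literate.markdown("input.jl", "outputdir"; config = config, name = "world")
# the resulting value of name is "world", since the keyword argument overrides the config dict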
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
x = 1 // 3 <- code
y = 2 // 5 <- code
 <- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
 <- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
Chunk #1:
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells, the #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl), where outputdir is the output directory supplied by the user (for example docs/generated) and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
diff --git a/v2.2.1/customprocessing/index.html b/v2.2.1/customprocessing/index.html
index fc5cb58..7f0af9d 100644
--- a/v2.2.1/customprocessing/index.html
+++ b/v2.2.1/customprocessing/index.html
@@ -1,29 +1,70 @@
diff --git a/v2.2.1/documenter/index.html b/v2.2.1/documenter/index.html
index 779f230..bb4680c 100644
--- a/v2.2.1/documenter/index.html
+++ b/v2.2.1/documenter/index.html
@@ -1,12 +1,53 @@
diff --git a/v2.2.1/fileformat/index.html b/v2.2.1/fileformat/index.html
index 3dfaae2..a0c6308 100644
--- a/v2.2.1/fileformat/index.html
+++ b/v2.2.1/fileformat/index.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) scripts. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up do date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines is treated as markdown we can not use that for regular julia comments, for this you can instead use ##, which will render as # in the output.
Lets look at a simple example:
# # Rational numbers
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) scripts. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up do date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines is treated as markdown we can not use that for regular julia comments, for this you can instead use ##, which will render as # in the output.
Lets look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,9 +51,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
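Putting the tags together, a small hypothetical source file (none of it taken from the manual's own example) could look like this:
# # Filtering demo
#md # This line only appears in the markdown output.
#nb # This line only appears in the notebook output.
#!nb # This line is included in markdown and script output, but not in the notebook.
x = sum(1:10)
using Test #src
@test x == 55 #src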
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
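One way to add such a badge, sketched here with a hypothetical notebook path and the badge image commonly served by mybinder, is a markdown line in the source like:
#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/generated/example.ipynb)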
Settings
This document was generated with Documenter.jl on Tuesday 3 December 2019. Using Julia version 1.3.0.
\ No newline at end of file
diff --git a/v2.2.1/generated/example/index.html b/v2.2.1/generated/example/index.html
index aebb500..a3975e7 100644
--- a/v2.2.1/generated/example/index.html
+++ b/v2.2.1/generated/example/index.html
@@ -1,7 +1,48 @@
-
-7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
-y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
- println("This string is printed to stdout.")
+7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -9,121 +50,55 @@ foo()
4-element Array{Int64,1}:
1
2
3
- 4
Just like in the REPL, ending an expression with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
+ 4
Just like in the REPL, ending an expression with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
-plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output so that it displays correctly.
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output so that it displays correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format, since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
Settings
This document was generated with Documenter.jl on Tuesday 3 December 2019. Using Julia version 1.3.0.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format, since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
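As a minimal sketch of those three entry points (the file and directory names below are placeholders):
using Literate
Literate.markdown("docs/src/example.jl", "docs/src/generated")  # -> example.md
Literate.notebook("docs/src/example.jl", "docs/src/generated")  # -> example.ipynb
Literate.script("docs/src/example.jl", "docs/src/generated")    # -> example.jl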
Settings
This document was generated with Documenter.jl on Tuesday 3 December 2019. Using Julia version 1.3.0.
\ No newline at end of file
diff --git a/v2.2.1/outputformats/index.html b/v2.2.1/outputformats/index.html
index ee9c39b..1107e6f 100644
--- a/v2.2.1/outputformats/index.html
+++ b/v2.2.1/outputformats/index.html
@@ -1,5 +1,46 @@
-
-4. Output Formats · Literate.jl
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -11,7 +52,7 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
-EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
-```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory while the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
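A hypothetical instance of that pattern, attaching reveal.js/RISE slide metadata to a markdown cell, could look like:
%% A slide [markdown] {"slideshow": {"slide_type": "slide"}}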
Script output is generated by Literate.script. The (default) script output of the source snippet above is as follows:
x = 1//3
+```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory while the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this: pass config::Dict as a keyword argument, or pass individual keyword arguments.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the defaults.
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
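For example, assuming an input file example.jl and an output directory output (both placeholders), the following two calls are equivalent ways of setting name:
using Literate
Literate.markdown("example.jl", "output"; config = Dict("name" => "my_example"))
Literate.markdown("example.jl", "output"; name = "my_example")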
Default configuration for Literate.markdown, Literate.notebook and Literate.script which is used for everything not specified by the user. See the manual section about Configuration for more information.
Each entry below lists the configuration key, its description, its default value, and an optional comment.
name: Name of the output file (excluding file extension). Default: filename(inputfile).
preprocess: Custom preprocessing function mapping String to String.
Boolean for controlling the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this. Default: true.
keep_comments: When true, keeps markdown lines as comments in the output script. Default: false. Only applicable for Literate.script.
codefence: Pair containing opening and closing fence for wrapping code blocks. Default: "```julia" => "```". If documenter is true the default is "```@example" => "```".
execute: Whether to execute and capture the output. Default: true. Only applicable for Literate.notebook.
devurl: URL for "in-development" docs. Default: "dev". See the Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url: URL to the root of the repository. No default. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url: URL to the root of the repository as seen on nbviewer. No default. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url: URL to the root of the repository as seen on mybinder. No default. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path: Filepath to the root of the repository. No default. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenter's EditURL.
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this: pass config::Dict as a keyword argument, or pass individual keyword arguments.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the defaults.
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
Default configuration for Literate.markdown, Literate.notebook and Literate.script which is used for everything not specified by the user. See the manual section about Configuration for more information.
Each entry below lists the configuration key, its description, its default value, and an optional comment.
name: Name of the output file (excluding file extension). Default: filename(inputfile).
preprocess: Custom preprocessing function mapping String to String.
Boolean for controlling the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this. Default: true.
keep_comments: When true, keeps markdown lines as comments in the output script. Default: false. Only applicable for Literate.script.
codefence: Pair containing opening and closing fence for wrapping code blocks. Default: "```julia" => "```". If documenter is true the default is "```@example" => "```".
execute: Whether to execute and capture the output. Default: true. Only applicable for Literate.notebook.
devurl: URL for "in-development" docs. Default: "dev". See the Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url: URL to the root of the repository. No default. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url: URL to the root of the repository as seen on nbviewer. No default. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url: URL to the root of the repository as seen on mybinder. No default. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path: Filepath to the root of the repository. No default. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenter's EditURL.
The first step is pre-processing of the input file. The file is read into a String. The first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
The first step is pre-processing of the input file. The file is read into a String. The first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -10,7 +51,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
-z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
+z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -26,7 +67,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
-y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
+y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
-z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
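The usage is the same as for #-. A minimal hypothetical snippet:
x = 1 // 3
#+
y = x + 2 // 5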
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl), where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user-supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents are generated as part of the build process rather than being files in the repo.
Settings
This document was generated with Documenter.jl on Tuesday 3 December 2019. Using Julia version 1.3.0.
+z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl), where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user-supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents are generated as part of the build process rather than being files in the repo.
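For instance, with hypothetical paths, the call below would write docs/src/generated/example.md:
Literate.markdown("examples/example.jl", "docs/src/generated"; name = "example")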
Settings
This document was generated with Documenter.jl on Tuesday 3 December 2019. Using Julia version 1.3.0.
\ No newline at end of file
diff --git a/v2.2.1/search/index.html b/v2.2.1/search/index.html
index 3f3dd12..1bb22a8 100644
--- a/v2.2.1/search/index.html
+++ b/v2.2.1/search/index.html
@@ -1,2 +1,43 @@
-
-Search · Literate.jl
Loading search...
Settings
This document was generated with Documenter.jl on Tuesday 3 December 2019. Using Julia version 1.3.0.
+Search · Literate.jl
Loading search...
Settings
This document was generated with Documenter.jl on Tuesday 3 December 2019. Using Julia version 1.3.0.
\ No newline at end of file
diff --git a/v2.3.0/customprocessing/index.html b/v2.3.0/customprocessing/index.html
index 7a28ac4..06cc8c3 100644
--- a/v2.3.0/customprocessing/index.html
+++ b/v2.3.0/customprocessing/index.html
@@ -1,29 +1,70 @@
-
-5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
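A minimal sketch of such a function, using the Dates standard library (the function name update_date is our own placeholder):
using Dates
function update_date(content)
    # splice today's date into the source in place of the placeholder
    content = replace(content, "DATEOFTODAY" => string(Date(now())))
    return content
end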
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
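A sketch of that call, with placeholder file and directory names:
Literate.markdown("input.jl", "outputdir"; preprocess = update_date)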
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could, for example, be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is to use preprocess to interchange include statements with file content. First, create a runnable .jl file following the format of Literate:
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
    for ex in included
        content = read(path * ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
                  name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
Settings
This document was generated with Documenter.jl on Tuesday 3 March 2020. Using Julia version 1.3.1.
\ No newline at end of file
diff --git a/v2.3.0/documenter/index.html b/v2.3.0/documenter/index.html
index 26f8a8f..c195d64 100644
--- a/v2.3.0/documenter/index.html
+++ b/v2.3.0/documenter/index.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
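As shown in the Output Formats section above, the block has this form (the URL is just an example):
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```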
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
Settings
This document was generated with Documenter.jl on Tuesday 3 March 2020. Using Julia version 1.3.1.
\ No newline at end of file
diff --git a/v2.3.0/fileformat/index.html b/v2.3.0/fileformat/index.html
index 318e223..dbe703e 100644
--- a/v2.3.0/fileformat/index.html
+++ b/v2.3.0/fileformat/index.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,9 +51,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
Settings
This document was generated with Documenter.jl on Tuesday 3 March 2020. Using Julia version 1.3.1.
\ No newline at end of file
diff --git a/v2.3.0/generated/example/index.html b/v2.3.0/generated/example/index.html
index 776bbef..b928b44 100644
--- a/v2.3.0/generated/example/index.html
+++ b/v2.3.0/generated/example/index.html
@@ -1,7 +1,48 @@
-
-7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
-y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
- println("This string is printed to stdout.")
+7. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -9,121 +50,55 @@ foo()
4-element Array{Int64,1}:
1
2
3
- 4
Just like in the REPL, ending an expression with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
+ 4
Just like in the REPL, ending an expression with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
-plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output so that it displays correctly.
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert two placeholders, which we will replace with something else at time of generation. We have here replaced our placeholders with z and 1.0 + 2.0im:
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output so that it displays correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format, since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
Settings
This document was generated with Documenter.jl on Tuesday 3 March 2020. Using Julia version 1.3.1.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists mainly of three functions, all of which take the same script file as input, but generate different output:
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format, since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
Settings
This document was generated with Documenter.jl on Tuesday 3 March 2020. Using Julia version 1.3.1.
\ No newline at end of file
diff --git a/v2.3.0/outputformats/index.html b/v2.3.0/outputformats/index.html
index b6669e0..6463227 100644
--- a/v2.3.0/outputformats/index.html
+++ b/v2.3.0/outputformats/index.html
@@ -1,5 +1,46 @@
-
4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:

x = 1//3
y = 2//5

# When adding `x` and `y` together we obtain a new rational number:

z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```

# Rational numbers

When adding `x` and `y` together we obtain a new rational number:

```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed.
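If this is not desired, execution can be turned off with the execute keyword listed in the configuration section below (paths are illustrative):

# Generate the notebook without executing it, for example when execution is
# slow or needs packages that are unavailable during the docs build:
Literate.notebook("docs/src/example.jl", "docs/src/generated"; execute = false)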
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
Script output is generated by Literate.script. The (default) script output of the source snippet above is as follows:
x = 1//3
y = 2//5

z = x + y
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this: pass a config::Dict as a keyword argument, or pass individual keyword arguments.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the defaults.
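To illustrate the precedence rule with made-up values:

config = Dict("name" => "hello", "documenter" => true)

# The individual keyword argument wins for "name", so the output file will
# be named world; "documenter" is still picked up from the config dictionary:
Literate.markdown("docs/src/example.jl", "docs/src/generated";
                  config = config, name = "world")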
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
Default configuration for Literate.markdown, Literate.notebook and Literate.script which is used for everything not specified by the user. See the manual section about Configuration for more information.
name: Name of the output file (excluding the file extension). Default: filename(inputfile).
preprocess: Custom preprocessing function mapping a String to a String. Default: identity.
credit: Boolean for controlling the addition of "This file was generated with Literate.jl ..." to the bottom of the page. If you find Literate.jl useful then feel free to keep this. Default: true.
keep_comments: When true, keeps markdown lines as comments in the output script. Default: false. Only applicable for Literate.script.
codefence: Pair containing opening and closing fence for wrapping code blocks. Default: "```julia" => "```". If documenter is true the default is "```@example" => "```".
execute: Whether to execute and capture the output. Default: true. Only applicable for Literate.notebook.
devurl: URL for "in-development" docs. Default: "dev". See the Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url: URL to the root of the repository. No default; determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url: URL to the root of the repository as seen on nbviewer. No default; determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url: URL to the root of the repository as seen on mybinder. No default; determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path: Filepath to the root of the repository. No default; determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenter's EditURL.
The first step is pre-processing of the input file. The file is read into a String. The first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
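Conceptually, and only as an illustrative sketch rather than Literate's actual implementation, the built-in pre-processing amounts to something like:

function builtin_preprocess(content::AbstractString)
    # Normalize CRLF line endings to LF:
    content = replace(content, "\r\n" => "\n")
    # ... line filtering of #md/#nb/#jl/#src tokens for the current target ...
    # ... expansion of the convenience "macros" (Default Replacements) ...
    return content
end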
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
 <- code
x = 1 // 3 <- code
y = 2 // 5 <- code
 <- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
 <- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
Chunk #1:
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
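A minimal sketch of such a function for markdown or script output (the footer text is made up; for notebook output the function receives the notebook dictionary instead of a String):

function append_footer(content::String)
    # Append a note that should only show up in the rendered document:
    return content * "\n*This page was generated automatically.*\n"
end

Literate.markdown("docs/src/example.jl", "docs/src/generated";
                  postprocess = append_footer)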
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation it is important that the package maintainer does not feel limited by the by default provided syntax that this package offers. While you can generally come a long way by utilizing line filtering there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accepts preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, lets say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, lets define the preprocess function, for example
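One possible definition, assuming the standard Dates library (a sketch, since the exact function is not shown here), is:

using Dates

function update_date(content)
    # Replace the placeholder with the date of generation:
    content = replace(content, "DATEOFTODAY" => string(Dates.today()))
    return content
end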
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
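For example, hypothetically (paths are illustrative):

Literate.markdown("docs/src/example.jl", "docs/src/generated";
                  preprocess = update_date)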
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could, for example, be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is to use preprocess to interchange include statements with file content. First, create a runnable .jl file following the format of Literate:
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
include("file1.jl")

# Cool, we just saw the result of the above code snippet. Here is one more:
include("file2.jl")

Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:

function replace_includes(str)

    included = ["file1.jl", "file2.jl"]

    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"

    for ex in included
        content = read(path*ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end

(of course replace included with your respective files)

Literate.markdown("examples.jl", "path/to/save/markdown";
                  name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
An @meta block, setting the EditURL variable (as shown in the Output Formats section above), will be added to the top of the markdown page. This redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file.
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math, written in ```math ... ``` blocks, is replaced with notebook-compatible \begin{equation} ... \end{equation} blocks in the notebook output.
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
2. File Format · Literate.jl
The source file format for Literate is a regular, commented julia (.jl) script. The idea is that the scripts also serve as documentation on their own, and it is simple to include them in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:

x = 1//3
y = 2//5

# When adding `x` and `y` together we obtain a new rational number:

z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
A number of convenience "macros" (the Default Replacements) are also expanded, for example:
@__REPO_ROOT_URL__:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
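The badge snippet itself is not shown here, but a hypothetical line in the source file, using the standard mybinder badge image and an illustrative notebook path, might be:

#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/generated/example.ipynb)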
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
8. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
    return [1, 2, 3, 4]
end
foo()
4-element Array{Int64,1}:
1
2
3
 4
Just like in the REPL, ending a line with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value x = 123 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
    s = replace(s, "x = 123" => "y = 321")
+ " style="stroke:#e26f46; stroke-width:4; stroke-opacity:1; fill:none">y2
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value y = 321 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
+ s = replace(s, "x = 123" => "y = 321")
    return s
end
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
diff --git a/v2.3.1/outputformats/index.html b/v2.3.1/outputformats/index.html
index 0b8743d..40ee0a4 100644
--- a/v2.3.1/outputformats/index.html
+++ b/v2.3.1/outputformats/index.html
@@ -1,5 +1,46 @@
-
-4. Output Formats · Literate.jl
When the source is parsed, and have been processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -11,7 +52,7 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
-EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,8 +72,8 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
-```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block have been added, that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
Script output is generated by Literate.script. The (default) script output of the source snippet above is as follows:
x = 1//3
+```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block have been added, that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this; pass config::Dict as a keyword argument, or pass individual keyword arguments.
Configuration precedence
Individual keyword arguments takes precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary takes precedence over the default.
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
Default configuration for Literate.markdown, Literate.notebook and [Literate.script] which is used for everything not specified by the user. See the manual section about Configuration for more information.
Configuration key
Description
Default value
Comment
name
Name of the output file (excluding file extension).
filename(inputfile)
preprocess
Custom preprocessing function mapping String to String.
Boolean for controlling the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this.
true
keep_comments
When true, keeps markdown lines as comments in the output script.
false
Only applicable for Literate.script.
codefence
Pair containing opening and closing fence for wrapping code blocks.
"```julia" => "```"
If documenter is true the default is "```@example"=>"```".
execute
Whether to execute and capture the output.
true
Only applicable for Literate.notebook.
devurl
URL for "in-development" docs.
"dev"
See Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url
URL to the root of the repository.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url
URL to the root of the repository as seen on nbviewer.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url
URL to the root of the repository as seen on mybinder.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path
Filepath to the root of the repository.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenters EditURL.
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this; pass config::Dict as a keyword argument, or pass individual keyword arguments.
Configuration precedence
Individual keyword arguments takes precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary takes precedence over the default.
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
Default configuration for Literate.markdown, Literate.notebook and [Literate.script] which is used for everything not specified by the user. See the manual section about Configuration for more information.
Configuration key
Description
Default value
Comment
name
Name of the output file (excluding file extension).
filename(inputfile)
preprocess
Custom preprocessing function mapping String to String.
Boolean for controlling the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this.
true
keep_comments
When true, keeps markdown lines as comments in the output script.
false
Only applicable for Literate.script.
codefence
Pair containing opening and closing fence for wrapping code blocks.
"```julia" => "```"
If documenter is true the default is "```@example"=>"```".
execute
Whether to execute and capture the output.
true
Only applicable for Literate.notebook.
devurl
URL for "in-development" docs.
"dev"
See Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url
URL to the root of the repository.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url
URL to the root of the repository as seen on nbviewer.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url
URL to the root of the repository as seen on mybinder.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path
Filepath to the root of the repository.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenters EditURL.
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements is expanded.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Lets consider the example from the previous section with each line categorized:
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements is expanded.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Lets consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -10,7 +51,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
-z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
+z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -26,7 +67,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
-y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
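As a rough sketch of invoking the three generators with default settings (the input file and output directory below are placeholders, not part of the original text):
import Literate
Literate.markdown("docs/src/example.jl", "docs/generated")   # markdown chunks as-is, code chunks in @example fences
Literate.notebook("docs/src/example.jl", "docs/generated")   # markdown cells and code cells, executed by default
Literate.script("docs/src/example.jl", "docs/generated")     # code chunks only, markdown chunks discarded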
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents are generated as part of the build process rather than being files in the repo.
\ No newline at end of file
diff --git a/v2.3.1/tips/index.html b/v2.3.1/tips/index.html
index 2c7978d..721d869 100644
--- a/v2.3.1/tips/index.html
+++ b/v2.3.1/tips/index.html
@@ -1,5 +1,46 @@
7. Tips and Tricks · Literate.jl
When Literate executes a notebook the return value, i.e. the result of the last Julia expression in each cell is captured. By default Literate generates multiple renderings of the result in different output formats or MIMEs, just like IJulia.jl does. All of these renderings are embedded in the notebook and it is up to the notebook frontend viewer to select the most appropriate format to show to the user.
A common example is images, which can often be displayed in multiple formats, e.g. PNG (image/png), SVG (image/svg+xml) and HTML (text/html). As a result, the filesize of the generated notebook can become large.
In order to remedy this you can use the clever Julia package DisplayAs to limit the output capabilities of an object. For example, to "force" an image to be captured as image/png only, you can use
import DisplayAs
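The rest of this snippet is cut off in the diff; a minimal sketch of the idea, assuming a Plots.jl plot as the displayed object, could look like:
import DisplayAs
using Plots                  # assumption: any object with a PNG rendering works here

plt = plot(rand(10))         # showable as PNG, SVG, HTML, ...
plt = DisplayAs.PNG(plt)     # only the image/png rendering is captured in the notebook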
This can save some memory, since the image is never captured in e.g. SVG or HTML formats.
Note
It is best to always let the object be showable as text/plain. This can be achieved by nested calls to DisplayAs output types. For example, to limit an image img to be showable as just image/png and text/plain you can use
img = DisplayAs.Text(DisplayAs.PNG(img))
\ No newline at end of file
diff --git a/v2.4.0/customprocessing/index.html b/v2.4.0/customprocessing/index.html
index 04517bb..7894434 100644
--- a/v2.4.0/customprocessing/index.html
+++ b/v2.4.0/customprocessing/index.html
@@ -1,29 +1,70 @@
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package provides. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example
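The function body itself is not part of this diff; a minimal sketch, assuming the placeholder should become today's date via the Dates standard library, could be:
using Dates

function update_date(content)
    return replace(content, "DATEOFTODAY" => string(Dates.today()))
end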
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
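The call itself is also missing from the diff; with the sketch above it would be along the lines of (paths and the function name are placeholders):
Literate.markdown("docs/src/example.jl", "docs/generated"; preprocess = update_date)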
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could for example be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy-paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to replace the include statements with the actual file content. First, create a runnable .jl file following the format of Literate:
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
# Here the path loads the files from their proper directory,
# which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
for ex in included
content = read(path*ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
end
return str
end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
    name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
\ No newline at end of file
diff --git a/v2.4.0/documenter/index.html b/v2.4.0/documenter/index.html
index d0ef534..33f1515 100644
--- a/v2.4.0/documenter/index.html
+++ b/v2.4.0/documenter/index.html
@@ -1,12 +1,53 @@
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
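Enabling it is just a keyword argument to any of the generators; a minimal sketch with placeholder paths:
import Literate
Literate.markdown("docs/src/example.jl", "docs/generated"; documenter = true)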
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
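The block itself is not shown in this diff; its shape matches the EditURL block that appears on the Output Formats page further down, e.g.:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```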
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math (```math blocks) is automatically changed to \begin{equation} ... \end{equation} in the notebook output so that it displays correctly.
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
\ No newline at end of file
diff --git a/v2.4.0/fileformat/index.html b/v2.4.0/fileformat/index.html
index c1f7ec5..ff49acd 100644
--- a/v2.4.0/fileformat/index.html
+++ b/v2.4.0/fileformat/index.html
@@ -1,5 +1,46 @@
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, Julia (.jl) script. The idea is that the scripts also serve as documentation on their own, and it is also simple to include them in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular Julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step.
Literate 2.3
Filter tokens at the end of the line require at least Literate version 2.3.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
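The rest of the block is cut off in this diff; presumably it lists the docstrings and closes the fence, along the lines of (the docstring name here is only illustrative):
#md # Literate.markdown
#md # ```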
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
@__REPO_ROOT_URL__:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
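The badge snippet itself is not included in this diff; a sketch of such a line, with an illustrative notebook path, could be:
#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/generated/example.ipynb)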
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
Literate 2.1
GitHub Actions support for the macros above requires at least Literate version 2.1.
Literate 2.2
GitLab CI support for the macros above requires at least Literate version 2.2.
\ No newline at end of file
diff --git a/v2.4.0/generated/example/index.html b/v2.4.0/generated/example/index.html
index 2dfff8b..f1dddb6 100644
--- a/v2.4.0/generated/example/index.html
+++ b/v2.4.0/generated/example/index.html
@@ -1,7 +1,48 @@
8. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
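Execution of the notebook output can be turned off if it is not wanted; a minimal sketch with placeholder paths:
Literate.notebook("docs/src/example.jl", "docs/generated"; execute = false)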
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
foo()
4-element Array{Int64,1}:
1
2
3
4
Just like in the REPL, statements ending with a semicolon hide their output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value x = 123 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
    s = replace(s, "x = 123" => "y = 321")
return s
end
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists of three functions, all of which take the same script file as input, but generate different output:
Literate.markdown generates a markdown file. Code snippets can be executed and the results included in the output.
Literate.notebook generates a notebook. Code snippets can be executed and the results included in the output.
Literate.script generates a plain script file scrubbed from all metadata and special syntax.
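A sketch of the three public functions called on one source file, with a couple of the keyword options mentioned elsewhere in these docs (paths and names are placeholders):
import Literate
Literate.markdown("docs/src/tutorial.jl", "docs/generated"; name = "tutorial", execute = true)
Literate.notebook("docs/src/tutorial.jl", "docs/generated"; execute = false)
Literate.script("docs/src/tutorial.jl", "docs/generated"; keep_comments = true)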
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v2.4.0/outputformats/index.html b/v2.4.0/outputformats/index.html
index d7a781c..a13c787 100644
--- a/v2.4.0/outputformats/index.html
+++ b/v2.4.0/outputformats/index.html
@@ -1,5 +1,46 @@
4. Output Formats · Literate.jl
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
It is possible to configure Literate.markdown to also evaluate code snippets, capture the result and include it in the output, by passing execute=true as a keyword argument. The result of the first code-block in the example above would then become
```julia
x = 1//3
```
```
1//3
```
In this example the output is just plain text. However, if the resulting value of the code block can be displayed as an image (png or jpeg) Literate will include the image representation of the output.
Note
Since Documenter executes and captures results of @example blocks it is not necessary to use execute=true for markdown output that is meant to be used as input to Documenter.
Literate 2.4
Code execution of markdown output requires at least Literate version 2.4.
See the section about Configuration for more information about how to configure the behavior and resulting output of Literate.markdown.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory when the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
The following would create a 3 slide deck with RISE:
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
# # Some title
#
# We're using `#nb` so the metadata is only included in notebook output
#nb %% A slide [code] {"slideshow": {"slide_type": "fragment"}}
x = 1//3
y = 2//5
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "subslide"}}
# For more information about RISE, see [the docs](https://rise.readthedocs.io/en/stable/usage.html)
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this; pass config::Dict as a keyword argument, or pass individual keyword arguments.
Literate 2.2
Passing configuration as a dictionary requires at least Literate version 2.2.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the default.
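A sketch of that precedence rule (the file name and output directory are placeholders):
# name is given both ways; the explicit keyword argument wins, so the output file is named "world".
Literate.markdown("input.jl", "outdir"; config = Dict("name" => "hello"), name = "world")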
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
Default configuration for Literate.markdown, Literate.notebook and Literate.script which is used for everything not specified by the user. See the manual section about Configuration for more information.
Configuration keys, with description, default value, and comment:
name: Name of the output file (excluding file extension). Default: filename(inputfile).
preprocess: Custom preprocessing function mapping String to String.
Boolean for controlling the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this. Default: true.
keep_comments: When true, keeps markdown lines as comments in the output script. Default: false. Only applicable for Literate.script.
execute: Whether to execute and capture the output. Default: true (notebook), false (markdown). Only applicable for Literate.notebook and Literate.markdown. For markdown this requires at least Literate 2.4.
codefence: Pair containing opening and closing fence for wrapping code blocks. Default: "```julia" => "```". If documenter is true the default is "```@example" => "```".
devurl: URL for "in-development" docs. Default: "dev". See Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url: URL to the root of the repository. Default: -. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url: URL to the root of the repository as seen on nbviewer. Default: -. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url: URL to the root of the repository as seen on mybinder. Default: -. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path: Filepath to the root of the repository. Default: -. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenter's EditURL.
The first step is pre-processing of the input file. The file is read into a String, and the first processing step is to apply the user-specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
x = 1 // 3 <- code
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
Chunk #1:
# Rational numbers
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents are generated as part of the build process rather than being files in the repo.
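For example (paths are placeholders), a call like the following would write docs/generated/tutorial.md:
Literate.markdown("examples/tutorial.jl", "docs/generated"; name = "tutorial")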
\ No newline at end of file
diff --git a/v2.4.0/tips/index.html b/v2.4.0/tips/index.html
index c2d8610..ee8e45e 100644
--- a/v2.4.0/tips/index.html
+++ b/v2.4.0/tips/index.html
@@ -1,5 +1,46 @@
7. Tips and Tricks · Literate.jl
When Literate executes a notebook the return value, i.e. the result of the last Julia expression in each cell is captured. By default Literate generates multiple renderings of the result in different output formats or MIMEs, just like IJulia.jl does. All of these renderings are embedded in the notebook and it is up to the notebook frontend viewer to select the most appropriate format to show to the user.
A common example is images, which can often be displayed in multiple formats, e.g. PNG (image/png), SVG (image/svg+xml) and HTML (text/html). As a result, the filesize of the generated notebook can become large.
In order to remedy this you can use the clever Julia package DisplayAs to limit the output capabilities of an object. For example, to "force" an image to be captured as image/png only, you can use
import DisplayAs
This can save some memory, since the image is never captured in e.g. SVG or HTML formats.
Note
It is best to always let the object be showable as text/plain. This can be achieved by nested calls to DisplayAs output types. For example, to limit an image img to be showable as just image/png and text/plain you can use
This document was generated with Documenter.jl on Thursday 23 April 2020. Using Julia version 1.4.1.
+img = DisplayAs.Text(DisplayAs.PNG(img))
Settings
This document was generated with Documenter.jl on Thursday 23 April 2020. Using Julia version 1.4.1.
\ No newline at end of file
diff --git a/v2.5.0/customprocessing/index.html b/v2.5.0/customprocessing/index.html
index 00e4959..098b89a 100644
--- a/v2.5.0/customprocessing/index.html
+++ b/v2.5.0/customprocessing/index.html
@@ -1,29 +1,70 @@
-
-5. Custom pre- and post-processing · Literate.jl
Since all packages are different and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package offers. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package offers. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
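As an illustration of the notebook case (this sketch is not from the original manual and assumes the standard nbformat dictionary layout), a postprocess function could append a final markdown cell before the notebook is written:
function add_footer_cell(nb)
    # `nb` is the Dict representing the notebook
    cell = Dict("cell_type" => "markdown",
                "metadata" => Dict(),
                "source" => ["*This notebook was generated by a custom postprocess function.*"])
    push!(nb["cells"], cell)
    return nb
end
It would then be passed as Literate.notebook("input.jl", outputdir; postprocess = add_footer_cell).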
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example
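The following is a sketch of such a function (it uses the Dates standard library; the function name is arbitrary):
using Dates
function update_date(content)
    # replace the placeholder with today's date
    content = replace(content, "DATEOFTODAY" => string(Date(now())))
    return content
end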
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl following the format of Literate
# # Replace includes
+end
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
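For instance, with the update_date sketch from above (the paths are illustrative):
Literate.markdown("src/example.jl", "docs/src/generated"; preprocess = update_date)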
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
- included = ["file1.jl", "file2.jl"]
+ included = ["file1.jl", "file2.jl"]
# Here the path loads the files from their proper directory,
# which may not be the directory of the `examples.jl` file!
- path = "directory/to/example/files/"
+ path = "directory/to/example/files/"
for ex in included
content = read(path*ex, String)
- str = replace(str, "include(\"$(ex)\")" => content)
+ str = replace(str, "include(\"$(ex)\")" => content)
end
return str
-end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
- name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by literate via this make.jl file to generate the markdown and code cells of the documentation.
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
+end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
+ name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by literate via this make.jl file to generate the markdown and code cells of the documentation.
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
\ No newline at end of file
diff --git a/v2.5.0/documenter/index.html b/v2.5.0/documenter/index.html
index cdbcc42..d406dc0 100644
--- a/v2.5.0/documenter/index.html
+++ b/v2.5.0/documenter/index.html
@@ -1,12 +1,53 @@
-
-6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
Literate can be used for any purpose; it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math
```math
+```
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
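A typical call that enables this behavior could look like the following sketch (the paths are illustrative):
using Literate
Literate.markdown("docs/src/example.jl", "docs/src/generated"; documenter = true)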
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
\ No newline at end of file
diff --git a/v2.5.0/fileformat/index.html b/v2.5.0/fileformat/index.html
index 59ddb02..990dae1 100644
--- a/v2.5.0/fileformat/index.html
+++ b/v2.5.0/fileformat/index.html
@@ -1,5 +1,46 @@
-
-2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
+2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -10,9 +51,9 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
-z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step.
Literate 2.3
Filter tokens at the end of the line requires at least Literate version 2.3.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step.
Literate 2.3
Filter tokens at the end of the line requires at least Literate version 2.3.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
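For instance, a @docs block in the source could be written as in the sketch below (the listed functions are just placeholders):
#md # ```@docs
#md # Literate.markdown
#md # Literate.notebook
#md # Literate.script
#md # ```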
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
-@test result == expected_result #src
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
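For instance, a markdown-only line like the sketch below would render such a badge (the notebook path is a placeholder):
#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/generated/example.ipynb)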
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
Literate 2.1
GitHub Actions support for the macros above requires at least Literate version 2.1.
Literate 2.2
GitLab CI support for the macros above requires at least Literate version 2.2.
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
\ No newline at end of file
diff --git a/v2.5.0/generated/example/index.html b/v2.5.0/generated/example/index.html
index 85815ba..d32f5d6 100644
--- a/v2.5.0/generated/example/index.html
+++ b/v2.5.0/generated/example/index.html
@@ -1,7 +1,48 @@
-
-8. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
-y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
- println("This string is printed to stdout.")
+8. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
+y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
@@ -9,94 +50,55 @@ foo()
4-element Array{Int64,1}:
1
2
3
- 4
Just like in the REPL, an expression ending with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
+ 4
Just like in the REPL, an expression ending with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
-plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value x = 123 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
- s = replace(s, "x = 123" => "y = 321")
+ " style="stroke:#e26f46; stroke-width:4; stroke-opacity:1; fill:none">
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value x = 123 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
+ s = replace(s, "x = 123" => "y = 321")
return s
-end
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists of three functions, all of which take the same script file as input, but generate different output:
Literate.markdown generates a markdown file. Code snippets can be executed and the results included in the output.
Literate.notebook generates a notebook. Code snippets can be executed and the results included in the output.
Literate.script generates a plain script file scrubbed from all metadata and special syntax.
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl@example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists of three functions, all of which take the same script file as input, but generate different output:
Literate.markdown generates a markdown file. Code snippets can be executed and the results included in the output.
Literate.notebook generates a notebook. Code snippets can be executed and the results included in the output.
Literate.script generates a plain script file scrubbed from all metadata and special syntax.
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason is that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl@example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
\ No newline at end of file
diff --git a/v2.5.0/outputformats/index.html b/v2.5.0/outputformats/index.html
index a8593e9..c43cf46 100644
--- a/v2.5.0/outputformats/index.html
+++ b/v2.5.0/outputformats/index.html
@@ -1,5 +1,46 @@
-
-4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
@@ -11,7 +52,7 @@ y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
-EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
+EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
@@ -31,23 +72,23 @@ When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
-```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
It is possible to configure Literate.markdown to also evaluate code snippets, capture the result and include it in the output, by passing execute=true as a keyword argument. The result of the first code-block in the example above would then become
```julia
+```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
It is possible to configure Literate.markdown to also evaluate code snippets, capture the result and include it in the output, by passing execute=true as a keyword argument. The result of the first code-block in the example above would then become
```julia
x = 1//3
```
```
1//3
-```
In this example the output is just plain text. However, if the resulting value of the code block can be displayed as an image (png or jpeg) Literate will include the image representation of the output.
Note
Since Documenter executes and captures results of @example block it is not necessary to use execute=true for markdown output that is meant to be used as input to Documenter.
Literate 2.4
Code execution of markdown output requires at least Literate version 2.4.
See the section about Configuration for more information about how to configure the behavior and resulting output of Literate.markdown.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
The following would create a 3 slide deck with RISE:
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
+```
In this example the output is just plain text. However, if the resulting value of the code block can be displayed as an image (png or jpeg) Literate will include the image representation of the output.
Note
Since Documenter executes and captures results of @example block it is not necessary to use execute=true for markdown output that is meant to be used as input to Documenter.
Literate 2.4
Code execution of markdown output requires at least Literate version 2.4.
See the section about Configuration for more information about how to configure the behavior and resulting output of Literate.markdown.
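For reference, a call that enables execution for markdown output could look like this sketch (file and directory names are illustrative):
Literate.markdown("example.jl", "output"; execute = true)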
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and the output cells populated. The current working directory is set to the specified output directory when the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
The following would create a 3 slide deck with RISE:
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
# # Some title
#
-# We're using `#nb` so the metadata is only included in notebook output
+# We're using `#nb` so the metadata is only included in notebook output
-#nb %% A slide [code] {"slideshow": {"slide_type": "fragment"}}
+#nb %% A slide [code] {"slideshow": {"slide_type": "fragment"}}
x = 1//3
y = 2//5
-#nb # %% A slide [markdown] {"slideshow": {"slide_type": "subslide"}}
+#nb # %% A slide [markdown] {"slideshow": {"slide_type": "subslide"}}
# For more information about RISE, see [the docs](https://rise.readthedocs.io/en/stable/usage.html)
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this; pass config::Dict as a keyword argument, or pass individual keyword arguments.
Literate 2.2
Passing configuration as a dictionary requires at least Literate version 2.2.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the default.
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
Default configuration for Literate.markdown, Literate.notebook and Literate.script which is used for everything not specified by the user. See the manual section about Configuration for more information.
Configuration key
Description
Default value
Comment
name
Name of the output file (excluding file extension).
filename(inputfile)
preprocess
Custom preprocessing function mapping String to String.
Boolean for controlling the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this.
true
keep_comments
When true, keeps markdown lines as comments in the output script.
false
Only applicable for Literate.script.
execute
Whether to execute and capture the output.
true (notebook), false (markdown)
Only applicable for Literate.notebook and Literate.markdown. For markdown this requires at least Literate 2.4.
codefence
Pair containing opening and closing fence for wrapping code blocks.
"```julia" => "```"
If documenter is true the default is "```@example"=>"```".
devurl
URL for "in-development" docs.
"dev"
See Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url
URL to the root of the repository.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url
URL to the root of the repository as seen on nbviewer.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url
URL to the root of the repository as seen on mybinder.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path
Filepath to the root of the repository.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenters EditURL.
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this; pass config::Dict as a keyword argument, or pass individual keyword arguments.
Literate 2.2
Passing configuration as a dictionary requires at least Literate version 2.2.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the default.
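Spelled out as code, the precedence example above corresponds to the following sketch (file and directory names are illustrative):
config = Dict("name" => "hello", "documenter" => true)
# the individual keyword argument overrides the "name" entry in the config dictionary
Literate.markdown("example.jl", "outputdir"; config = config, name = "world")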
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
Default configuration for Literate.markdown, Literate.notebook and Literate.script which is used for everything not specified by the user. See the manual section about Configuration for more information.
Configuration key
Description
Default value
Comment
name
Name of the output file (excluding file extension).
filename(inputfile)
preprocess
Custom preprocessing function mapping String to String.
Boolean for controlling the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this.
true
keep_comments
When true, keeps markdown lines as comments in the output script.
false
Only applicable for Literate.script.
execute
Whether to execute and capture the output.
true (notebook), false (markdown)
Only applicable for Literate.notebook and Literate.markdown. For markdown this requires at least Literate 2.4.
codefence
Pair containing opening and closing fence for wrapping code blocks.
"```julia" => "```"
If documenter is true the default is "```@example"=>"```".
devurl
URL for "in-development" docs.
"dev"
See Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url
URL to the root of the repository.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url
URL to the root of the repository as seen on nbviewer.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url
URL to the root of the repository as seen on mybinder.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path
Filepath to the root of the repository.
-
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenters EditURL.
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
@@ -10,7 +51,7 @@ y = 2 // 5 <- c
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
-z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
+z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
@@ -26,7 +67,7 @@ z = x + y ┘ cod
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
-y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
+y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
-z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following is happening:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
+z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
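As a minimal sketch, replacing #- with #+ in the snippet above gives the same two code chunks, but lets Documenter treat the second one as a continuation of the first:
x = 1 // 3
y = 2 // 5
#+
z = x + y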
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following is happening:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
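For example, a call like the following sketch (paths and names are illustrative) writes docs/generated/tutorial.md:
Literate.markdown("examples/tutorial.jl", "docs/generated"; name = "tutorial")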
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
\ No newline at end of file
diff --git a/v2.5.0/search/index.html b/v2.5.0/search/index.html
index c96920e..ef26354 100644
--- a/v2.5.0/search/index.html
+++ b/v2.5.0/search/index.html
@@ -1,2 +1,43 @@
-
-Search · Literate.jl
Loading search...
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
+Search · Literate.jl
Loading search...
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
\ No newline at end of file
diff --git a/v2.5.0/tips/index.html b/v2.5.0/tips/index.html
index f2aad1b..65f603c 100644
--- a/v2.5.0/tips/index.html
+++ b/v2.5.0/tips/index.html
@@ -1,5 +1,46 @@
-
-7. Tips and Tricks · Literate.jl
When Literate executes a notebook the return value, i.e. the result of the last Julia expression in each cell is captured. By default Literate generates multiple renderings of the result in different output formats or MIMEs, just like IJulia.jl does. All of these renderings are embedded in the notebook and it is up to the notebook frontend viewer to select the most appropriate format to show to the user.
A common example is images, which can often be displayed in multiple formats, e.g. PNG (image/png), SVG (image/svg+xml) and HTML (text/html). As a result, the filesize of the generated notebook can become large.
In order to remedy this you can use the clever Julia package DisplayAs to limit the output capabilities of an object. For example, to "force" an image to be captured as image/png only, you can use
import DisplayAs
+7. Tips and Tricks · Literate.jl
When Literate executes a notebook the return value, i.e. the result of the last Julia expression in each cell is captured. By default Literate generates multiple renderings of the result in different output formats or MIMEs, just like IJulia.jl does. All of these renderings are embedded in the notebook and it is up to the notebook frontend viewer to select the most appropriate format to show to the user.
A common example is images, which can often be displayed in multiple formats, e.g. PNG (image/png), SVG (image/svg+xml) and HTML (text/html). As a result, the filesize of the generated notebook can become large.
In order to remedy this you can use the clever Julia package DisplayAs to limit the output capabilities of an object. For example, to "force" an image to be captured as image/png only, you can use
This can save some memory, since the image is never captured in e.g. SVG or HTML formats.
Note
It is best to always let the object be showable as text/plain. This can be achieved by nested calls to DisplayAs output types. For example, to limit an image img to be showable as just image/png and text/plain you can use
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
+img = DisplayAs.Text(DisplayAs.PNG(img))
Settings
This document was generated with Documenter.jl on Thursday 14 May 2020. Using Julia version 1.4.1.
\ No newline at end of file
diff --git a/v2.5.1/customprocessing/index.html b/v2.5.1/customprocessing/index.html
index 1ca2f22..42b9b33 100644
--- a/v2.5.1/customprocessing/index.html
+++ b/v2.5.1/customprocessing/index.html
@@ -1,29 +1,70 @@
-
-5. Custom pre- and post-processing · Literate.jl
Since all packages are different and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package offers. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
+5. Custom pre- and post-processing · Literate.jl
Since all packages are different and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package offers. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl following the format of Literate
# # Replace includes
+end
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
include("file2.jl")
Let's say we have saved this file as examples.jl. Then, define a pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
    for ex in included
        content = read(path*ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end
(of course, replace the entries of included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
                  name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose: it spits out regular markdown files and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
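It has the following shape (the URL here is illustrative; in practice it points at the Literate source file of the page):
```@meta
EditURL = "https://github.com/USER/PACKAGE.jl/blob/master/docs/src/input.jl"
```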
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math (```math ... ``` blocks) will be converted to notebook-compatible math (\begin{equation} ... \end{equation}).
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own and that it is simple to include them in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for those you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step.
Literate 2.3
Filter tokens at the end of the line requires at least Literate version 2.3.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
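A minimal illustration (this particular line is not from the original page):
#!nb # This markdown line ends up in the markdown and script output, but not in the notebook.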
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
#md # Literate.markdown
#md # Literate.notebook
#md # Literate.script
#md # ```
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
@__REPO_ROOT_URL__:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
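A sketch of such a badge line (the notebook path is a placeholder):
#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/generated/example.ipynb)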
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
Literate 2.1
GitHub Actions support for the macros above requires at least Literate version 2.1.
Literate 2.2
GitLab CI support for the macros above requires at least Literate version 2.2.
8. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML-output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
    return [1, 2, 3, 4]
end
foo()
4-element Array{Int64,1}:
1
2
3
4
Just like in the REPL, ending a line with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert the placeholder value x = 123 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
    s = replace(s, "x = 123" => "y = 321")
+ " style="stroke:#e26f46; stroke-width:4; stroke-opacity:1; fill:none">
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value y = 321 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
+ s = replace(s, "x = 123" => "y = 321")
    return s
end
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists of three functions, all of which take the same script file as input, but generate different output:
Literate.markdown generates a markdown file. Code snippets can be executed and the results included in the output.
Literate.notebook generates a notebook. Code snippets can be executed and the results included in the output.
Literate.script generates a plain script file scrubbed from all metadata and special syntax.
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
4. Output Formats · Literate.jl
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
It is possible to configure Literate.markdown to also evaluate code snippets, capture the result and include it in the output, by passing execute=true as a keyword argument. The result of the first code-block in the example above would then become
```julia
x = 1//3
```
```
1//3
```
In this example the output is just plain text. However, if the resulting value of the code block can be displayed as an image (png or jpeg) Literate will include the image representation of the output.
Note
Since Documenter executes and captures results of @example blocks it is not necessary to use execute=true for markdown output that is meant to be used as input to Documenter.
Literate 2.4
Code execution of markdown output requires at least Literate version 2.4.
See the section about Configuration for more information about how to configure the behavior and resulting output of Literate.markdown.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory while the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
The following would create a 3 slide deck with RISE:
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
# # Some title
#
# We're using `#nb` so the metadata is only included in notebook output
#nb %% A slide [code] {"slideshow": {"slide_type": "fragment"}}
x = 1//3
y = 2//5
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "subslide"}}
# For more information about RISE, see [the docs](https://rise.readthedocs.io/en/stable/usage.html)
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this; pass config::Dict as a keyword argument, or pass individual keyword arguments.
Literate 2.2
Passing configuration as a dictionary requires at least Literate version 2.2.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the default.
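For instance, these two calls are equivalent (the file, directory and name here are placeholders):
Literate.markdown("example.jl", "docs/generated"; config = Dict("name" => "example"))
Literate.markdown("example.jl", "docs/generated"; name = "example")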
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
credit (default: true): Boolean for controlling the addition of "This file was generated with Literate.jl ..." to the bottom of the page. If you find Literate.jl useful then feel free to keep this.
keep_comments (default: false): When true, keeps markdown lines as comments in the output script. Only applicable for Literate.script.
execute (default: true for notebook, false for markdown): Whether to execute and capture the output. Only applicable for Literate.notebook and Literate.markdown. For markdown this requires at least Literate 2.4.
codefence (default: "```julia" => "```"): Pair containing opening and closing fence for wrapping code blocks. If documenter is true the default is "```@example" => "```".
devurl (default: "dev"): URL for "in-development" docs. See the Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url (default: -): URL to the root of the repository. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url (default: -): URL to the root of the repository as seen on nbviewer. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url (default: -): URL to the root of the repository as seen on mybinder. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path (default: -): Filepath to the root of the repository. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenter's EditURL.
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
x = 1 // 3 <- code
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
Chunk #1:
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following is happening:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be produced as part of the build process rather than being files in the repo.
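For instance (the paths and name here are placeholders):
Literate.notebook("docs/src/example.jl", "docs/generated"; name = "example")
# writes docs/generated/example.ipynb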
7. Tips and Tricks · Literate.jl
When Literate executes a notebook the return value, i.e. the result of the last Julia expression in each cell, is captured. By default Literate generates multiple renderings of the result in different output formats or MIMEs, just like IJulia.jl does. All of these renderings are embedded in the notebook and it is up to the notebook frontend viewer to select the most appropriate format to show to the user.
A common example is images, which can often be displayed in multiple formats, e.g. PNG (image/png), SVG (image/svg+xml) and HTML (text/html). As a result, the filesize of the generated notebook can become large.
In order to remedy this you can use the clever Julia package DisplayAs to limit the output capabilities of an object. For example, to "force" an image to be captured as image/png only, you can use
import DisplayAs
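img = DisplayAs.PNG(img)  # a sketch: `img` is assumed to be an image-like object (e.g. a plot) defined earlier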
This can save some memory, since the image is never captured in e.g. SVG or HTML formats.
Note
It is best to always let the object be showable as text/plain. This can be achieved by nested calls to DisplayAs output types. For example, to limit an image img to be showable as just image/png and text/plain you can use
img = DisplayAs.Text(DisplayAs.PNG(img))
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, julia (.jl) script. The idea is that the scripts also serve as documentation on their own and that it is simple to include them in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we cannot use them for regular julia comments; for those you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step. In addition, for markdown output, lines ending with #hide are filtered out similar to Documenter.jl.
Difference between `#src` and `#hide`
#src and #hide are quite similar. The only difference is that #src lines are filtered out before execution (if execute=true) and #hide lines are filtered out after execution.
Literate 2.6
The #hide token requires at least Literate version 2.6.
Literate 2.3
Filter tokens at the end of the line requires at least Literate version 2.3.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
#md # ```@docs
+z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular Julia code. We note a couple of things:
The script is valid Julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines work as comments and thus serve as good documentation on their own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step. In addition, for markdown output, lines ending with #hide are filtered out similar to Documenter.jl.
Difference between `#src` and `#hide`
#src and #hide are quite similar. The only difference is that #src lines are filtered out before execution (if execute=true) and #hide lines are filtered out after execution.
Literate 2.6
The #hide token requires at least Literate version 2.6.
Literate 2.3
Filter tokens at the end of the line requires at least Literate version 2.3.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
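The code block that followed here was lost in extraction; a sketch of what such a block can look like (the specific docstrings listed are illustrative):
#md # ```@docs
#md # Literate.markdown
#md # Literate.notebook
#md # Literate.script
#md # ```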
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
@__REPO_ROOT_URL__:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
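The example markdown that followed here was lost in extraction; a sketch of such a badge line (the notebook path is illustrative):
#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/generated/example.ipynb)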
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
Literate 2.1
GitHub Actions support for the macros above requires at least Literate version 2.1.
Literate 2.2
GitLab CI support for the macros above requires at least Literate version 2.2.
\ No newline at end of file
diff --git a/v2.6.0/generated/example/index.html b/v2.6.0/generated/example/index.html
index eb33c62..7a69ef1 100644
--- a/v2.6.0/generated/example/index.html
+++ b/v2.6.0/generated/example/index.html
@@ -1,7 +1,48 @@
8. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
foo()
4-element Array{Int64,1}:
1
2
3
 4
Just like in the REPL, outputs ending with a semicolon hide the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value x = 123 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
    s = replace(s, "x = 123" => "y = 321")
    return s
end
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists of three functions, all of which take the same script file as input, but generate different output:
Literate.markdown generates a markdown file. Code snippets can be executed and the results included in the output.
Literate.notebook generates a notebook. Code snippets can be executed and the results included in the output.
Literate.script generates a plain script file scrubbed from all metadata and special syntax.
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
\ No newline at end of file
diff --git a/v2.6.0/outputformats/index.html b/v2.6.0/outputformats/index.html
index a9775d4..9c0a8c7 100644
--- a/v2.6.0/outputformats/index.html
+++ b/v2.6.0/outputformats/index.html
@@ -1,5 +1,46 @@
4. Output Formats · Literate.jl
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
It is possible to configure Literate.markdown to also evaluate code snippets, capture the result and include it in the output, by passing execute=true as a keyword argument. The result of the first code-block in the example above would then become
```julia
x = 1//3
```
```
1//3
```
In this example the output is just plain text. However, if the resulting value of the code block can be displayed as an image (png or jpeg) Literate will include the image representation of the output.
Note
Since Documenter executes and captures results of @example blocks it is not necessary to use execute=true for markdown output that is meant to be used as input to Documenter.
Literate 2.4
Code execution of markdown output requires at least Literate version 2.4.
See the section about Configuration for more information about how to configure the behavior and resulting output of Literate.markdown.
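For reference, a minimal sketch of such a call (file and directory names are illustrative):
using Literate
Literate.markdown("example.jl", "build"; execute = true)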
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory while the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
The following would create a 3 slide deck with RISE:
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
# # Some title
#
# We're using `#nb` so the metadata is only included in notebook output
#nb %% A slide [code] {"slideshow": {"slide_type": "fragment"}}
x = 1//3
y = 2//5
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "subslide"}}
# For more information about RISE, see [the docs](https://rise.readthedocs.io/en/stable/usage.html)
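To generate the notebook from such a file you would call Literate.notebook on it; a minimal sketch (file and output directory names are illustrative):
Literate.notebook("slides.jl", "build")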
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this; pass config::Dict as a keyword argument, or pass individual keyword arguments.
Literate 2.2
Passing configuration as a dictionary requires at least Literate version 2.2.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the default.
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
credit
Boolean for controlling the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this.
Default: true
keep_comments
When true, keeps markdown lines as comments in the output script.
Default: false
Only applicable for Literate.script.
execute
Whether to execute and capture the output.
Default: true (notebook), false (markdown)
Only applicable for Literate.notebook and Literate.markdown. For markdown this requires at least Literate 2.4.
codefence
Pair containing opening and closing fence for wrapping code blocks.
Default: "```julia" => "```"
If documenter is true the default is "```@example" => "```".
devurl
URL for "in-development" docs.
Default: "dev"
See Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url
URL to the root of the repository.
Default: -
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url
URL to the root of the repository as seen on nbviewer.
Default: -
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url
URL to the root of the repository as seen on mybinder.
Default: -
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path
Filepath to the root of the repository.
Default: -
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenter's EditURL.
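As a sketch of how these options can be passed, either through a config dictionary or as individual keyword arguments (file and directory names are illustrative):
using Literate
config = Dict("credit" => true, "keep_comments" => true)
Literate.script("example.jl", "build"; config = config, name = "example")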
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
x = 1 // 3 <- code
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
Chunk #1:
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
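A short sketch of the same snippet using #+ instead, which would in addition mark the second block as a Documenter "continued" block:
x = 1 // 3
y = 2 // 5
#+
z = x + y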
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following is happening:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
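For example, a call along the following lines (paths and name are illustrative) would write docs/generated/example.md:
Literate.markdown("src/example.jl", "docs/generated"; name = "example")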
\ No newline at end of file
diff --git a/v2.6.0/tips/index.html b/v2.6.0/tips/index.html
index 1082002..cbc4518 100644
--- a/v2.6.0/tips/index.html
+++ b/v2.6.0/tips/index.html
@@ -1,5 +1,46 @@
7. Tips and Tricks · Literate.jl
When Literate executes a notebook the return value, i.e. the result of the last Julia expression in each cell is captured. By default Literate generates multiple renderings of the result in different output formats or MIMEs, just like IJulia.jl does. All of these renderings are embedded in the notebook and it is up to the notebook frontend viewer to select the most appropriate format to show to the user.
A common example is images, which can often be displayed in multiple formats, e.g. PNG (image/png), SVG (image/svg+xml) and HTML (text/html). As a result, the filesize of the generated notebook can become large.
In order to remedy this you can use the clever Julia package DisplayAs to limit the output capabilities of an object. For example, to "force" an image to be captured as image/png only, you can use
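A minimal sketch, assuming img is an already created plot or image object:
import DisplayAs
img = DisplayAs.PNG(img)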
This can save some memory, since the image is never captured in e.g. SVG or HTML formats.
Note
It is best to always let the object be showable as text/plain. This can be achieved by nested calls to DisplayAs output types. For example, to limit an image img to be showable as just image/png and text/plain you can use
img = DisplayAs.Text(DisplayAs.PNG(img))
\ No newline at end of file
diff --git a/v2.7.0/customprocessing/index.html b/v2.7.0/customprocessing/index.html
index b10af72..cc5cc15 100644
--- a/v2.7.0/customprocessing/index.html
+++ b/v2.7.0/customprocessing/index.html
@@ -1,29 +1,70 @@
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package offers. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may do transformation of the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: For markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example:
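The function definition that followed here did not survive extraction; a sketch of what it might look like (the function name and the use of the Dates standard library are assumptions):
using Dates

function update_date(content)
    # Replace the placeholder with today's date before Literate processes the file
    content = replace(content, "DATEOFTODAY" => string(Dates.today()))
    return content
end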
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
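The call that followed here was also lost; a sketch using the update_date function from the sketch above (paths are illustrative):
Literate.markdown("input.jl", "build"; preprocess = update_date)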
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also following the style of Literate. These files could be for example used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl file following the format of Literate:
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
    included = ["file1.jl", "file2.jl"]
    # Here the path loads the files from their proper directory,
    # which may not be the directory of the `examples.jl` file!
    path = "directory/to/example/files/"
    for ex in included
        content = read(path*ex, String)
        str = replace(str, "include(\"$(ex)\")" => content)
    end
    return str
end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
+ name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
\ No newline at end of file
diff --git a/v2.7.0/documenter/index.html b/v2.7.0/documenter/index.html
index 807c07a..6a54295 100644
--- a/v2.7.0/documenter/index.html
+++ b/v2.7.0/documenter/index.html
@@ -1,12 +1,53 @@
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose, it spits out regular markdown files, and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
For markdown output, the following @meta block will be added to the top of the page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file.
For notebook output, Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook. Documenter style markdown math (```math ... ```) is rewritten to \begin{equation} ... \end{equation} so that it displays correctly in the notebook.
For script output, Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
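The @meta block itself did not survive extraction; based on the markdown output shown earlier in this document it has the following shape (the EditURL value is a placeholder for the URL of the specific source file):
```@meta
EditURL = "<url to the source file in the repository>"
```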
\ No newline at end of file
diff --git a/v2.7.0/fileformat/index.html b/v2.7.0/fileformat/index.html
index 2f1e359..169b531 100644
--- a/v2.7.0/fileformat/index.html
+++ b/v2.7.0/fileformat/index.html
@@ -1,5 +1,46 @@
2. File Format · Literate.jl
The source file format for Literate is a regular, commented, Julia (.jl) script. The idea is that the script also serves as documentation on its own and it is also simple to include it in the test suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since #-lines are treated as markdown we can not use them for regular Julia comments; for this you can instead use ##, which will render as # in the output.
Let's look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
There are also some default convenience replacements that will always be performed, see Default Replacements.
Multiline comments in Literate 2.7
Literate version 2.7 adds support for Julia multiline comments for markdown input. All multiline comments in the input are rewritten to regular comments as part of the preprocessing step, before any other processing is performed. For Literate to recognize multiline comments it is required that the start token (#=) and end token (=#) are placed on their own lines. Note also that it is allowed to have more than one = in the tokens, for example
#=
This multiline comment
is treated as markdown.
=#
#=====================
This is also markdown.
=====================#
is rewritten to
# This multiline comment
# is treated as markdown.
# This is also markdown.
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step. In addition, for markdown output, lines ending with #hide are filtered out similar to Documenter.jl.
Difference between `#src` and `#hide`
#src and #hide are quite similar. The only difference is that #src lines are filtered out before execution (if execute=true) and #hide lines are filtered out after execution.
Literate 2.6
The #hide token requires at least Literate version 2.6.
Literate 2.3
Filter tokens at the end of the line requires at least Literate version 2.3.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
@__REPO_ROOT_URL__:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
Literate 2.1
GitHub Actions support for the macros above requires at least Literate version 2.1.
Literate 2.2
GitLab CI support for the macros above requires at least Literate version 2.2.
diff --git a/v2.7.0/generated/example/index.html b/v2.7.0/generated/example/index.html
8. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
    println("This string is printed to stdout.")
    return [1, 2, 3, 4]
end
foo()
4-element Array{Int64,1}:
1
2
3
 4
Just like in the REPL, outputs ending with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value x = 123 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
    s = replace(s, "x = 123" => "y = 321")
+ " style="stroke:#e26f46; stroke-width:4; stroke-opacity:1; fill:none">
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value y = 321 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
+ s = replace(s, "x = 123" => "y = 321")
    return s
end
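The function could then be passed to the generator; a sketch of such a call (file and directory names here are illustrative):
Literate.markdown("example.jl", "output/directory"; preprocess = pre)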
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists of three functions, all of which take the same script file as input, but generate different output:
Literate.markdown generates a markdown file. Code snippets can be executed and the results included in the output.
Literate.notebook generates a notebook. Code snippets can be executed and the results included in the output.
Literate.script generates a plain script file scrubbed from all metadata and special syntax.
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
diff --git a/v2.7.0/outputformats/index.html b/v2.7.0/outputformats/index.html
4. Output Formats · Literate.jl
When the source has been parsed and processed, it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added that sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
It is possible to configure Literate.markdown to also evaluate code snippets, capture the result and include it in the output, by passing execute=true as a keyword argument. The result of the first code-block in the example above would then become
```julia
x = 1//3
```
```
1//3
```
In this example the output is just plain text. However, if the resulting value of the code block can be displayed as an image (png or jpeg) Literate will include the image representation of the output.
Note
Since Documenter executes and captures results of @example block it is not necessary to use execute=true for markdown output that is meant to be used as input to Documenter.
Literate 2.4
Code execution of markdown output requires at least Literate version 2.4.
See the section about Configuration for more information about how to configure the behavior and resulting output of Literate.markdown.
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory when the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
The following would create a 3 slide deck with RISE:
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
# # Some title
#
# We're using `#nb` so the metadata is only included in notebook output
#nb %% A slide [code] {"slideshow": {"slide_type": "fragment"}}
x = 1//3
y = 2//5
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "subslide"}}
# For more information about RISE, see [the docs](https://rise.readthedocs.io/en/stable/usage.html)
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this: pass config::Dict as a keyword argument, or pass individual keyword arguments.
Literate 2.2
Passing configuration as a dictionary requires at least Literate version 2.2.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the defaults.
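A short sketch of the two equivalent styles (the input file and output directory here are only placeholders):
using Literate
Literate.markdown("example.jl", "docs/src/generated"; name = "my_example")
Literate.markdown("example.jl", "docs/src/generated"; config = Dict("name" => "my_example"))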
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
credit
Boolean for controlling the addition of This file was generated with Literate.jl ... to the bottom of the page. If you find Literate.jl useful then feel free to keep this.
Default: true
keep_comments
When true, keeps markdown lines as comments in the output script.
Default: false
Only applicable for Literate.script.
execute
Whether to execute and capture the output.
Default: true (notebook), false (markdown)
Only applicable for Literate.notebook and Literate.markdown. For markdown this requires at least Literate 2.4.
codefence
Pair containing opening and closing fence for wrapping code blocks.
Default: "```julia" => "```"
If documenter is true the default is "```@example" => "```".
devurl
URL for "in-development" docs.
Default: "dev"
See Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
repo_root_url
URL to the root of the repository.
Default: -
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
nbviewer_root_url
URL to the root of the repository as seen on nbviewer.
Default: -
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
binder_root_url
URL to the root of the repository as seen on mybinder.
Default: -
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
repo_root_path
Filepath to the root of the repository.
Default: -
Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenter's EditURL.
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
After the preprocessing the file is parsed. The first step is to categorize each line and mark them as either markdown or code according to the rules described in the Syntax section. Lets consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
x = 1 // 3 <- code
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following is happening:
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the documents will be generated as part of the build process rather than being files in the repo.
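For example, a sketch of how the three generators map to output files (the paths and the name are just placeholders):
using Literate
Literate.markdown("docs/src/example.jl", "docs/generated"; name = "example")  # -> docs/generated/example.md
Literate.notebook("docs/src/example.jl", "docs/generated"; name = "example")  # -> docs/generated/example.ipynb
Literate.script("docs/src/example.jl", "docs/generated"; name = "example")    # -> docs/generated/example.jl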
diff --git a/v2.7.0/tips/index.html b/v2.7.0/tips/index.html
7. Tips and Tricks · Literate.jl
When Literate executes a notebook the return value, i.e. the result of the last Julia expression in each cell is captured. By default Literate generates multiple renderings of the result in different output formats or MIMEs, just like IJulia.jl does. All of these renderings are embedded in the notebook and it is up to the notebook frontend viewer to select the most appropriate format to show to the user.
A common example is images, which can often be displayed in multiple formats, e.g. PNG (image/png), SVG (image/svg+xml) and HTML (text/html). As a result, the filesize of the generated notebook can become large.
In order to remedy this you can use the clever Julia package DisplayAs to limit the output capabilities of an object. For example, to "force" an image to be captured as image/png only, you can use
import DisplayAs
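A minimal sketch of what the call might look like (img stands for some image-showable object, e.g. a plot):
img = DisplayAs.PNG(img) # only the image/png representation will be captured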
This can save some memory, since the image is never captured in e.g. SVG or HTML formats.
Note
It is best to always let the object be showable as text/plain. This can be achieved by nested calls to DisplayAs output types. For example, to limit an image img to be showable as just image/png and text/plain you can use
img = DisplayAs.Text(DisplayAs.PNG(img))
diff --git a/v2.8.0/customprocessing/index.html b/v2.8.0/customprocessing/index.html
5. Custom pre- and post-processing · Literate.jl
Since all packages are different, and may have different demands on how to create a nice example for the documentation, it is important that the package maintainer does not feel limited by the default syntax that this package offers. While you can generally come a long way by utilizing line filtering, there might be situations where you need to manually hook into the generation and change things. In Literate this is done by letting the user supply custom pre- and post-processing functions that may transform the content.
All of the generators (Literate.markdown, Literate.notebook and Literate.script) accept preprocess and postprocess keyword arguments. The default "transformation" is the identity function. The input to the transformation functions is a String, and the output should be the transformed String.
preprocess is sent the raw input that is read from the source file (modulo the default line ending transformation). postprocess is given different things depending on the output: for markdown and script output postprocess is given the content String just before writing it to the output file, but for notebook output postprocess is given the dictionary representing the notebook, since, in general, this is more useful.
As an example, let's say we want to splice the date of generation into the output. We could of course update our source file before generating the docs, but we could instead use a preprocess function that splices the date into the source for us. Consider the following source file:
# # Example
# This example was generated DATEOFTODAY
x = 1 // 3
where DATEOFTODAY is a placeholder, to make it easier for our preprocess function to find the location. Now, let's define the preprocess function, for example
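The body of that function is not preserved in this extract; a minimal sketch (the function name update_date and the use of the Dates standard library are assumptions) could be:
using Dates
function update_date(content)
    content = replace(content, "DATEOFTODAY" => string(Dates.today()))
    return content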
end
which would replace every occurrence of "DATEOFTODAY" with the current date. We would now simply give this function to the generator, for example:
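A sketch of such a call, using the update_date function sketched above (the file name and output directory are placeholders):
Literate.markdown("input.jl", "outputdir"; preprocess = update_date)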
Let's say that we have some individual example files file1, file2, ... etc. that are runnable and also follow the style of Literate. These files could for example be used in the test suite of your package.
We want to group them all into a single page in our documentation, but we do not want to copy paste the content of file1, ... for robustness: the files are included in the test suite and some changes may occur to them. We want these changes to also be reflected in the documentation.
A very easy way to do this is using preprocess to interchange include statements with file content. First, create a runnable .jl following the format of Literate
# # Replace includes
# This is an example to replace `include` calls with the actual file content.
-include("file1.jl")
+include("file1.jl")
# Cool, we just saw the result of the above code snippet. Here is one more:
-include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
+include("file2.jl")
Let's say we have saved this file as examples.jl. Then, you want to properly define a pre-processing function:
function replace_includes(str)
included = ["file1.jl", "file2.jl"]
# Here the path loads the files from their proper directory,
# which may not be the directory of the `examples.jl` file!
path = "directory/to/example/files/"
for ex in included
content = read(path*ex, String)
str = replace(str, "include(\"$(ex)\")" => content)
end
return str
end
(of course replace included with your respective files)
Literate.markdown("examples.jl", "path/to/save/markdown";
    name = "markdown_file_name", preprocess = replace_includes)
and you will see that in the final output file (here markdown_file_name.md) the include statements are replaced with the actual code to be included!
This approach is used for generating the examples in the documentation of the TimeseriesPrediction.jl package. The example files, included together in the stexamples.jl file, are processed by Literate via this make.jl file to generate the markdown and code cells of the documentation.
diff --git a/v2.8.0/documenter/index.html b/v2.8.0/documenter/index.html
6. Interaction with Documenter.jl · Literate.jl
Literate can be used for any purpose; it spits out regular markdown files, and notebooks. Typically, though, these files will be used to render documentation for your package. The generators (Literate.markdown, Literate.notebook and Literate.script) support a keyword argument documenter that lets the generator perform some extra things, keeping in mind that the source code has been written with Documenter.jl in mind. So let's take a look at what will happen if we set documenter = true:
The following @meta block will be added to the top of the markdown page, which redirects the "Edit on GitHub" link on the top of the page to the source file rather than the generated .md file:
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the notebook.
Documenter style markdown math, i.e. ```math ... ``` blocks, is changed to \begin{equation} ... \end{equation} blocks in the notebook output, so that the equations display correctly.
Documenter style @refs and @id will be removed. This means that you can use @ref and @id in the source file without them leaking to the script.
diff --git a/v2.8.0/fileformat/index.html b/v2.8.0/fileformat/index.html
2. File Format · Literate.jl
The source file format for Literate is a regular, commented julia (.jl) script. The idea is that the scripts also serve as documentation on their own and it is also simple to include them in the test-suite, with e.g. include, to make sure the examples stay up to date with other changes in your package.
Leading whitespace is allowed before #, but it will be removed when generating the output. Since lines starting with # are treated as markdown, we cannot use them for regular julia comments; for this you can instead use ##, which will render as # in the output.
Lets look at a simple example:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
x = 1//3
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
In the lines starting with # we can use regular markdown syntax, for example the # used for the heading and the backticks for formatting code. The other lines are regular julia code. We note a couple of things:
The script is valid julia, which means that we can include it and the example will run (for example in the test/runtests.jl script, to include the example in the test suite).
The script is "self-explanatory", i.e. the markdown lines works as comments and thus serve as good documentation on its own.
For simple use this is all you need to know. The following additional special syntax can also be used:
#md, #nb, #jl, #src: tags to filter lines, see Filtering Lines,
There is also some default convenience replacements that will always be performed, see Default Replacements.
Multiline comments in Literate 2.7
Literate version 2.7 adds support for Julia multiline comments for markdown input. All multiline comments in the input are rewritten to regular comments as part of the preprocessing step, before any other processing is performed. For Literate to recognize multiline comments it is required that the start token (#=) and end token (=#) are placed on their own lines. Note also that it is allowed to have more than one = in the tokens, for example
#=
This multiline comment
is treated as markdown.
=#
#=====================
This is also markdown.
=====================#
is rewritten to
# This multiline comment
# is treated as markdown.
# This is also markdown.
It is often useful to filter out lines in the source depending on the output format. For this purpose there are a number of "tokens" that can be used to mark the purpose of certain lines:
#md: line exclusive to markdown output,
#nb: line exclusive to notebook output,
#jl: line exclusive to script output,
#src: line exclusive to the source code and thus filtered out unconditionally.
Lines starting or ending with one of these tokens are filtered out in the preprocessing step. In addition, for markdown output, lines ending with #hide are filtered out similar to Documenter.jl.
Difference between `#src` and `#hide`
#src and #hide are quite similar. The only difference is that #src lines are filtered out before execution (if execute=true) and #hide lines are filtered out after execution.
Literate 2.6
The #hide token requires at least Literate version 2.6.
Literate 2.3
Filter tokens at the end of the line requires at least Literate version 2.3.
Tip
The tokens can also be negated, for example a line starting with #!nb would be included in markdown and script output, but filtered out for notebook output.
Suppose, for example, that we want to include a docstring within a @docs block using Documenter. Obviously we don't want to include this in the notebook, since @docs is Documenter syntax that the notebook will not understand. This is a case where we can prepend #md to those lines:
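As in the corresponding section above, a hypothetical #md-prefixed block might look like:
#md # ```@docs
#md # Literate.markdown
#md # ```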
The lines in the example above would be filtered out in the preprocessing step, unless we are generating a markdown file. When generating a markdown file we would simply remove the leading #md from the lines. Beware that the space after the tag is also removed.
The #src token can also be placed at the end of a line. This is to make it possible to have code lines exclusive to the source code, and not just comment lines. For example, if the source file is included in the test suite we might want to add a @test at the end without this showing up in the outputs:
using Test #src
@test result == expected_result #src
@__REPO_ROOT_URL__:
Can be used to link to files in the repository. For example @__REPO_ROOT_URL__/src/Literate.jl would link to the source of the Literate module. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__NBVIEWER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in http://nbviewer.jupyter.org/. This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
@__BINDER_ROOT_URL__:
Can be used if you want a link that opens the generated notebook in https://mybinder.org/. For example, to add a binder-badge in e.g. the HTML output you can use:
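As above, a sketch of such a badge line (the notebook path is a placeholder):
#md # [![](https://mybinder.org/badge_logo.svg)](@__BINDER_ROOT_URL__/path/to/notebook.ipynb)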
This variable is automatically determined on Travis CI, GitHub Actions and GitLab CI, but can be configured, see Configuration.
Literate 2.1
GitHub Actions support for the macros above requires at least Literate version 2.1.
Literate 2.2
GitLab CI support for the macros above requires at least Literate version 2.2.
diff --git a/v2.8.0/generated/example/index.html b/v2.8.0/generated/example/index.html
8. Example · Literate.jl
This is an example generated with Literate based on this source file: example.jl. You are seeing the HTML output which Documenter has generated based on a markdown file generated with Literate. The corresponding notebook can be viewed in nbviewer here: example.ipynb, and opened in binder here: example.ipynb, and the plain script output can be found here: example.jl.
It is recommended to have the source file available when reading this, to better understand how the syntax in the source file corresponds to the output you are seeing.
The basic syntax for Literate is simple: lines starting with # are interpreted as markdown, and all the other lines are interpreted as code. Here is some code:
x = 1//3
y = 2//5
2//5
In markdown sections we can use markdown syntax. For example, we can write text in italic font, text in bold font and use links.
It is possible to filter out lines depending on the output using the #md, #nb, #jl and #src tags (see Filtering Lines):
This line starts with #md and is thus only visible in the markdown output.
The source file is parsed in chunks of markdown and code. Starting a line with #- manually inserts a chunk break. For example, if we want to display the output of the following operations we may insert #- in between. These two code blocks will now end up in different @example-blocks in the markdown output, and two different notebook cells in the notebook output.
Code chunks are by default placed in Documenter @example blocks in the generated markdown. This means that the output will be captured in a block when Documenter is building the docs. In notebooks the output is captured in output cells, if the execute keyword argument is set to true. Output to stdout/stderr is also captured.
Note
Note that Documenter currently only displays output to stdout/stderr if there is no other result to show. Since the vector [1, 2, 3, 4] is returned from foo, the printing of "This string is printed to stdout." is hidden.
function foo()
+ println("This string is printed to stdout.")
return [1, 2, 3, 4]
end
foo()
4-element Array{Int64,1}:
 1
 2
 3
 4
Just like in the REPL, ending an expression with a semicolon hides the output:
1 + 1;
Both Documenter's @example block and notebooks can display images. Here is an example where we generate a simple plot using the Plots.jl package
using Plots
x = range(0, stop=6π, length=1000)
y1 = sin.(x)
y2 = cos.(x)
plot(x, [y1, y2])
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value x = 123 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
    s = replace(s, "x = 123" => "y = 321")
+ " style="stroke:#e26f46; stroke-width:4; stroke-opacity:1; fill:none">
It is possible to give Literate custom pre- and post-processing functions. For example, here we insert a placeholder value y = 321 in the source, and use a preprocessing function that replaces it with y = 321 in the rendered output.
x = 123
123
In this case the preprocessing function is defined by
function pre(s::String)
+ s = replace(s, "x = 123" => "y = 321")
    return s
end
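This function can then be passed to the generator when building the docs. A minimal sketch of such a call (the file name and output directory here are hypothetical):
using Literate
Literate.markdown("example.jl", "docs/src/generated"; preprocess = pre)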
In the source file it is possible to use Documenter.jl style references, such as @ref and @id. These will be filtered out in the notebook output. For example, here is a link, but it is only visible as a link if you are reading the markdown output. We can also use equations:
\[\int_\Omega \nabla v \cdot \nabla u\ \mathrm{d}\Omega = \int_\Omega v f\ \mathrm{d}\Omega\]
using Documenter's math syntax. Documenter's syntax is automatically changed to \begin{equation} ... \end{equation} in the notebook output to display correctly.
Literate is a package that generates markdown pages (for e.g. Documenter.jl), and Jupyter notebooks, from the same source file. There is also an option to "clean" the source from all metadata, and produce a pure Julia script.
The main design goal is simplicity. It should be simple to use, and the syntax should be simple. In short, all you have to do is to write a commented julia script!
The public interface consists of three functions, all of which take the same script file as input, but generate different output:
Literate.markdown generates a markdown file. Code snippets can be executed and the results included in the output.
Literate.notebook generates a notebook. Code snippets can be executed and the results included in the output.
Literate.script generates a plain script file scrubbed from all metadata and special syntax.
Examples are (probably) the best way to showcase your awesome package, and examples are often the best way for a new user to learn how to use it. It is therefore important that the documentation of your package contains examples for users to read and study. However, people are different, and we all prefer different ways of trying out a new package. Some people want to RTFM, others want to explore the package interactively in, for example, a notebook, and some people want to study the source code. The aim of Literate is to make it easy to give the user all of these options, while still keeping maintenance to a minimum.
It is quite common that packages have "example notebooks" to showcase the package. Notebooks are great for showcasing a package, but they are not so great with version control, like git. The reason being that a notebook is a very "rich" format since it contains output and other metadata. Changes to the notebook thus result in large diffs, which makes it harder to review the actual changes.
It is also common that packages include examples in the documentation, for example by using Documenter.jl @example-blocks. This is also great, but it is not quite as interactive as a notebook, for the users who prefer that.
Literate tries to solve the problems above by creating the output as a part of the doc build. Literate generates the output based on a single source file which makes it easier to maintain, test, and keep the manual and your example notebooks in sync.
4. Output Formats · Literate.jl
When the source has been parsed and processed it is time to render the output. We will consider the following source snippet:
# # Rational numbers
#
# In julia rational numbers can be constructed with the `//` operator.
# Lets define two rational numbers, `x` and `y`:
y = 2//5
# When adding `x` and `y` together we obtain a new rational number:
z = x + y
and see how this is rendered in each of the output formats.
Markdown output is generated by Literate.markdown. The (default) markdown output of the source snippet above is as follows:
```@meta
EditURL = "https://github.com/fredrikekre/Literate.jl/blob/master/docs/src/outputformats.jl"
```
# Rational numbers
When adding `x` and `y` together we obtain a new rational number:
```@example name
z = x + y
```
We note that lines starting with # are printed as regular markdown, and the code lines have been wrapped in @example blocks. We also note that an @meta block has been added, which sets the EditURL variable. This is used by Documenter to redirect the "Edit on GitHub" link for the page, see Interaction with Documenter.
It is possible to configure Literate.markdown to also evaluate code snippets, capture the result and include it in the output, by passing execute=true as a keyword argument. The result of the first code-block in the example above would then become
```julia
x = 1//3
```
```
1//3
```
In this example the output is just plain text. However, if the resulting value of the code block can be displayed as an image (png or jpeg) Literate will include the image representation of the output.
Note
Since Documenter executes and captures results of @example blocks it is not necessary to use execute=true for markdown output that is meant to be used as input to Documenter.
Literate 2.4
Code execution of markdown output requires at least Literate version 2.4.
See the section about Configuration for more information about how to configure the behavior and resulting output of Literate.markdown.
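As a sketch, a call that enables execution for markdown output might look like this (the file name and output directory are hypothetical):
using Literate
Literate.markdown("outputformats.jl", "build"; execute = true)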
Notebook output is generated by Literate.notebook. The (default) notebook output of the source snippet can be seen here: notebook.ipynb.
We note that lines starting with # are placed in markdown cells, and the code lines have been placed in code cells. By default the notebook is also executed and output cells populated. The current working directory is set to the specified output directory while the notebook is executed.
Jupyter notebook cells (both code cells and markdown cells) can contain metadata. This is enabled in Literate by the %% token, similar to Jupytext. The format is as follows
%% optional ignored text [type] {optional metadata JSON}
Cell metadata can, for example, be used for nbgrader and the reveal.js notebook extension RISE.
The following would create a 3 slide deck with RISE:
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "slide"}}
# # Some title
#
# We're using `#nb` so the metadata is only included in notebook output
#nb %% A slide [code] {"slideshow": {"slide_type": "fragment"}}
x = 1//3
y = 2//5
#nb # %% A slide [markdown] {"slideshow": {"slide_type": "subslide"}}
# For more information about RISE, see [the docs](https://rise.readthedocs.io/en/stable/usage.html)
The behavior of Literate.markdown, Literate.notebook and Literate.script can be configured by keyword arguments. There are two ways to do this: pass a config::Dict as a keyword argument, or pass individual keyword arguments.
Literate 2.2
Passing configuration as a dictionary requires at least Literate version 2.2.
Configuration precedence
Individual keyword arguments take precedence over the config dictionary, so for e.g. Literate.markdown(...; config = Dict("name" => "hello"), name = "world") the resulting configuration for name will be "world". Both individual keyword arguments and the config dictionary take precedence over the defaults.
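A minimal sketch of the two styles combined (the input file and output directory are hypothetical):
using Literate
# "name" is given both in the config dictionary and as a keyword argument;
# the keyword argument wins, so the generated file will be named "world.md".
Literate.markdown("example.jl", "output";
                  config = Dict("name" => "hello"),
                  name = "world")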
Available configurations with description and default values are given in the reference for Literate.DEFAULT_CONFIGURATION just below.
credit
Boolean for controlling the addition of "This file was generated with Literate.jl ..." to the bottom of the page. If you find Literate.jl useful then feel free to keep this.
Default: true
keep_comments
When true, keeps markdown lines as comments in the output script. Only applicable for Literate.script.
Default: false
execute
Whether to execute and capture the output. Only applicable for Literate.notebook and Literate.markdown. For markdown this requires at least Literate 2.4.
Default: true (notebook), false (markdown)
codefence
Pair containing opening and closing fence for wrapping code blocks. If documenter is true the default is "```@example" => "```".
Default: "```julia" => "```"
devurl
URL for "in-development" docs. See the Documenter docs. Unused if repo_root_url/nbviewer_root_url/binder_root_url are set.
Default: "dev"
repo_root_url
URL to the root of the repository. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__REPO_ROOT_URL__.
Default: none
nbviewer_root_url
URL to the root of the repository as seen on nbviewer. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__NBVIEWER_ROOT_URL__.
Default: none
binder_root_url
URL to the root of the repository as seen on mybinder. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for @__BINDER_ROOT_URL__.
Default: none
repo_root_path
Filepath to the root of the repository. Determined automatically on Travis CI, GitHub Actions and GitLab CI. Used for computing Documenter's EditURL.
Default: none
The first step is pre-processing of the input file. The file is read to a String. The first processing step is to apply the user specified pre-processing function, see Custom pre- and post-processing.
The next step is to perform all of the built-in default replacements. CRLF style line endings ("\r\n") are replaced with LF line endings ("\n") to simplify internal processing. Next, line filtering is performed, see Filtering Lines, meaning that lines starting with #md, #nb or #jl are handled (either just the token itself is removed, or the full line, depending on the output target). The last pre-processing step is to expand the convenience "macros" described in Default Replacements.
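A conceptual sketch of these pre-processing steps, in the order described above (this is an illustration, not Literate's actual implementation; user_preprocess stands for the user-supplied function):
function preprocess_sketch(content::String, user_preprocess = identity)
    content = user_preprocess(content)          # user hook runs first
    content = replace(content, "\r\n" => "\n")  # normalize CRLF line endings to LF
    # line filtering of #md/#nb/#jl tokens and expansion of the convenience
    # "macros" from Default Replacements would follow here
    return content
end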
After the preprocessing the file is parsed. The first step is to categorize each line and mark it as either markdown or code according to the rules described in the Syntax section. Let's consider the example from the previous section with each line categorized:
# # Rational numbers <- markdown
# <- markdown
# In julia rational numbers can be constructed with the `//` operator. <- markdown
# Lets define two rational numbers, `x` and `y`: <- markdown
y = 2 // 5 <- code
<- code
# When adding `x` and `y` together we obtain a new rational number: <- markdown
<- code
z = x + y <- code
In the next step the lines are grouped into "chunks" of markdown and code. This is done by simply collecting adjacent lines of the same "type" into chunks:
# # Rational numbers ┐
# │
# In julia rational numbers can be constructed with the `//` operator. │ markdown
# Lets define two rational numbers, `x` and `y`: ┘
z = x + y ┘ code
In julia rational numbers can be constructed with the `//` operator.
Lets define two rational numbers, `x` and `y`:
Chunk #2:
# Define variable x and y
x = 1 // 3
y = 2 // 5
Chunk #3:
When adding `x` and `y` together we obtain a new rational number:
Chunk #4:
z = x + y
It is then up to the Document generation step to decide how these chunks should be treated.
Sometimes it is convenient to be able to manually control how the chunks are split. For example, if you want to split a block of code into two, such that they end up in two different @example blocks or notebook cells. The #- token can be used for this purpose. All lines starting with #- are used as "chunk-splitters":
x = 1 // 3
y = 2 // 5
#-
z = x + y
The example above would result in two consecutive code-chunks.
Tip
The rest of the line, after #-, is discarded, so it is possible to use e.g. #------------- as a chunk splitter, which may make the source code more readable.
It is also possible to use #+ as a chunk splitter. The difference between #+ and #- is that #+ enables Documenter's "continued"-blocks, see the Documenter manual.
After the parsing it is time to generate the output. What is done in this step is very different depending on the output target, and it is described in more detail in the Output format sections: Markdown Output, Notebook Output and Script Output. Using the default settings, the following happens (a small sketch of the markdown case follows the list):
Markdown output: markdown chunks are printed as-is, code chunks are put inside a code fence (defaults to @example-blocks),
Notebook output: markdown chunks are printed in markdown cells, code chunks are put in code cells,
Script output: markdown chunks are discarded, code chunks are printed as-is.
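For illustration, a hypothetical sketch of the markdown case, wrapping a code chunk in the configured codefence pair (this is not Literate's internal code; the fence strings are just the documented defaults):
function wrap_code_chunk(code::AbstractString;
                         codefence::Pair = "```@example name" => "```")
    openfence, closefence = codefence
    return string(openfence, "\n", code, "\n", closefence, "\n")
end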
When the document is generated the user, again, has the option to hook into the generation with a custom post-processing function. The reason is that one might want to change things that are only visible in the rendered document. See Custom pre- and post-processing.
The last step of the generation is writing to file. The result is written to $(outputdir)/$(name)(.md|.ipynb|.jl) where outputdir is the output directory supplied by the user (for example docs/generated), and name is a user supplied filename. It is recommended to add the output directory to .gitignore since the idea is that the generated documents will be generated as part of the build process rather than being files in the repo.
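As a sketch of how the output path is assembled (the paths and name below are hypothetical):
using Literate
Literate.markdown("examples/tutorial.jl", "docs/generated"; name = "tutorial")
# writes docs/generated/tutorial.md
Literate.notebook("examples/tutorial.jl", "docs/generated"; name = "tutorial")
# writes docs/generated/tutorial.ipynb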
7. Tips and Tricks · Literate.jl
When Literate executes a notebook the return value, i.e. the result of the last Julia expression in each cell is captured. By default Literate generates multiple renderings of the result in different output formats or MIMEs, just like IJulia.jl does. All of these renderings are embedded in the notebook and it is up to the notebook frontend viewer to select the most appropriate format to show to the user.
A common example is images, which can often be displayed in multiple formats, e.g. PNG (image/png), SVG (image/svg+xml) and HTML (text/html). As a result, the filesize of the generated notebook can become large.
In order to remedy this you can use the clever Julia package DisplayAs to limit the output capabilities of an object. For example, to "force" an image to be captured as image/png only, you can use
import DisplayAs
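The original snippet is truncated at this point; a minimal sketch of how the wrapping might look, assuming a Plots.jl plot object p (the variable name is illustrative):
using Plots
import DisplayAs
p = plot(rand(10))
p = DisplayAs.PNG(p)  # p is now only displayable as image/png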
This can save some memory, since the image is never captured in e.g. SVG or HTML formats.
Note
It is best to always let the object be showable as text/plain. This can be achieved by nested calls to DisplayAs output types. For example, to limit an image img to be showable as just image/png and text/plain you can use
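A minimal sketch of such a nested call (img is an illustrative image-like object; DisplayAs.PNG and DisplayAs.Text are the wrapper types provided by DisplayAs.jl):
import DisplayAs
using Plots
img = plot(rand(10))
img = DisplayAs.Text(DisplayAs.PNG(img))  # showable as image/png and text/plain only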