Since all packages are different, and may have different demands on how
to create a nice example for the documentation, it is important that
the package maintainer does not feel limited by the default syntax
that this package offers. While you can generally come a long way by utilizing
[line filtering](@ref Filtering-Lines) there might be situations where you need
to manually hook into the generation and change things. In Literate this
is done by letting the user supply custom pre- and post-processing functions.
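As a rough sketch (the file paths and replacement text here are made up for illustration), such functions take the content as a `String` and return the processed `String`, and are supplied with the `preprocess` and `postprocess` keyword arguments:

```julia
using Literate

# Pre-processing: runs on the raw input before any other processing.
# Here we swap out a placeholder token for some generated text.
function my_preprocess(content)
    return replace(content, "PLACEHOLDER" => "text inserted by the pre-processing step")
end

# Post-processing: runs on the generated output just before it is written to file.
my_postprocess(content) = content * "\n<!-- appended by the post-processing step -->\n"

# The input file and output directory below are hypothetical examples.
Literate.markdown("examples/rational.jl", "docs/src";
                  preprocess = my_preprocess,
                  postprocess = my_postprocess)
```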
The source file format for Literate is a regular, commented, julia (`.jl`) script.
The idea is that the scripts serve as documentation on their own, and that it is
simple to include them in the test-suite, with e.g. `include`, to make sure the examples
stay up to date with other changes in your package.
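For example, a small source file could look like this (the rational-numbers snippet whose generated output is shown further down):

```julia
#' # Rational numbers
#'
#' In julia rational numbers can be constructed with the `//` operator.
#' Let's define two rational numbers, `x` and `y`:

x = 1//3
y = 2//5

#' When adding `x` and `y` together we obtain a new rational number:

z = x + y
```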
In the lines starting with `#'` we can use regular markdown syntax, for example the `#`
used for the heading and the backticks for formatting code. The other lines are regular
julia code. We note a couple of things:
- The script is valid julia, which means that we can `include` it and the example will run
  (for example in the `test/runtests.jl` script, to include the example in the test suite;
  see the sketch after this list).
- The script is "self-explanatory", i.e. the markdown lines work as comments and
  thus serve as good documentation on their own.
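A minimal sketch of hooking the example into the test suite could look like this (the directory layout and file name are hypothetical):

```julia
# test/runtests.jl -- adjust the path to wherever the Literate source script lives
using Test

@testset "documentation examples" begin
    include(joinpath(@__DIR__, "..", "examples", "rational.jl"))
end
```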
For simple use this is all you need to know. The following additional special syntax can also be used:
- `#md`, `#nb`, `#jl`: tags for filtering of lines, see [Filtering Lines](@ref Filtering-Lines).
- `#-`: tag for manually controlling chunk-splits, see [Custom control over chunk splits](@ref).

There are also some default convenience replacements that will always be performed, see
[Default Replacements](@ref).

Let's take a look at what the example snippet above would generate, with default settings:
- [`Literate.markdown`](@ref): leading `#'` are removed, and code lines are wrapped in
  `@example`-blocks:
````markdown
# Rational numbers

In julia rational numbers can be constructed with the `//` operator.
Let's define two rational numbers, `x` and `y`:

```@example filename
x = 1//3
y = 2//5
```
When adding `x` and `y` together we obtain a new rational number:
```@example filename
z = x + y
```
````
- [`Literate.notebook`](@ref): leading `#'` are removed, markdown lines are placed in
`"markdown"` cells, and code lines in `"code"` cells:
```
         │ # Rational numbers
         │
         │ In julia rational numbers can be constructed with the `//` operator.
         │ Let's define two rational numbers, `x` and `y`:
In [1]:  │ x = 1//3
         │ y = 2//5
Out [1]: │ 2//5
         │ When adding `x` and `y` together we obtain a new rational number:
In [2]:  │ z = x + y
Out [2]: │ 11//15
```
- [`Literate.script`](@ref): all lines starting with `#'` are removed:
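Based on the code lines in the examples above, the generated script would contain roughly the following:

```julia
x = 1//3
y = 2//5

z = x + y
```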
The generation of output follows the same pipeline for all output formats.
## [**3.1.** Pre-processing](@id Pre-processing)
The first step is pre-processing of the input file. The file is read to a `String`,
and the first processing step is to apply the user-specified pre-processing function,
see [Custom pre- and post-processing](@ref Custom-pre-and-post-processing).
Next, CRLF style line endings (`"\r\n"`) are replaced with LF line endings (`"\n"`) to simplify
internal processing, and line filtering is performed, see [Filtering Lines](@ref),
meaning that lines starting with `#md `, `#nb ` or `#jl ` are handled (either just
the token itself is removed, or the full line, depending on the output target).
The last pre-processing step is to expand the convenience "macros" described
in [Default Replacements](@ref).
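As a small illustration of the filtering described above, consider the following lines (the variable names are made up):

```julia
#md #' This note ends up only in the markdown output (just the `#md ` token is stripped).
#nb #' This note ends up only in the notebook output.
#jl x = 42  # this code line ends up only in the plain script output
y = 1 + 1   # a line without a filter token is included in every output
```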
## [**3.2.** Parsing](@id Parsing)
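Parsing splits the file into chunks: consecutive `#'` lines form markdown chunks and
consecutive code lines form code chunks, with the `#-` token forcing a split between
two code chunks. For instance, consider a snippet like the following (an illustrative sketch):

```julia
x = 1//3
y = 2//5
#-
z = x + y
```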
The example above would result in two consecutive code-chunks.
After the parsing it is time to generate the output. What is done in this step is
very different depending on the output target, and it is described in more detail in
the Output format sections: [Markdown Output](@ref), [Notebook Output](@ref) and
[Script Output](@ref). In short, the following is happening:
* Markdown output: markdown chunks are printed as-is, code chunks are put inside