Jsonnet itself has no concept of traditional packages, classes or similar constructs.
For documentation and distribution purposes, however, it seems reasonable to introduce the concept of **loose packages**: a single importable file that holds all of your **public API**.
As an example, a hypothetical `url` library could define its package just like the example above does.
Packages are defined by assigning a `d.pkg` call to a key literally named `#` (hash). All fields of the object that contains this `#` key, including nested packages, belong to that package.
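For illustration, a minimal package header could look like the following sketch (the import path, the example URL and the exact `d.pkg` parameters are assumptions; adjust them to your setup and doc-util version):

```jsonnet
local d = import 'doc-util/main.libsonnet';

{
  // the `#` key marks this object as a package
  '#': d.pkg(
    name='url',
    url='github.com/example/url/main.libsonnet',
    help='`url` provides utilities for working with URLs',
  ),
  // all fields of this object now belong to the `url` package
}
```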
### Functions
The most common part of an API will be functions. These are annotated in a similar fashion:
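The following is a sketch of how a function and its documentation could be paired, assuming doc-util's `d.fn` and `d.arg` helpers (the function name `greet` and the import path are made up for illustration):

```jsonnet
local d = import 'doc-util/main.libsonnet';

{
  // documentation lives under `#` + the field's name
  '#greet': d.fn(
    help='`greet` returns a friendly greeting',
    args=[d.arg(name='who', type=d.T.string)],
  ),
  // the actual implementation keeps its plain name
  greet(who):: 'hello %s!' % who,
}
```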
Again, the naming rule must be followed: the documentation key is `#` joined with the field's name, so the `docsonnet` utility can automatically join the contents of your object with its annotated description.
## FAQ
#### Do my projects need to have `doc-util` installed to vendor/?
No! `doc-util` comes included with the `docsonnet` binary, and during normal Jsonnet use the docsonnet keys are never accessed, so your Jsonnet runs just fine without it.
> Linters like [jsonnet-lint](https://pkg.go.dev/github.com/google/go-jsonnet/linter) or `tk lint` require the imports to be resolvable, so you should add `doc-util` to `vendor/` when using these linters.
#### What's wrong with comments? Why not parse regular comments?
I made some attempts at this, especially because it feels more natural. However, the language properties of Jsonnet make it quite challenging:
- AST parsing is insufficient:
  https://github.com/grafana/tanka/issues/223#issuecomment-590569198. Just by
  parsing the syntax tree of Jsonnet, we only receive a representation of the
  file contents, not the logical structure a human might infer.
- No effective view of things: Jsonnet is a lazily evaluated, highly dynamic
  language. Just by looking at a single file, we might not even see what ends up
  at the user when importing the library, because during evaluation things can
  be heavily overwritten.
Because of that, we would need to perform a slimmed-down evaluation of the AST before extracting our information from it. This is a lot of work, especially when we can just use the real Jsonnet compiler to do it for us. That's docsonnet.
#### But docsonnet is ugly. And verbose
I know. Think of docsonnet as a proof of concept and a technology preview. Only _what_ you specify is fixed, not the way you do it.
Of course nobody wants these ugly function calls as docs. But they are incredibly powerful, because we can use Jsonnet merging and patching on the generated docsonnet fields, and the Jsonnet compiler handles that for us.
If this idea works out well, we might very well consider adding docsonnet as syntactic sugar to Jsonnet, which might look like this:
```jsonnet
{
  ## myFunc greets you
  ## @params:
  ##   who: string
  myFunc(who):: "hello %s!" % who,
}
```
Note the double hash (`##`) as a special indicator for the compiler, so it can desugar the above to:
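(A sketch of a possible expansion using doc-util's `d.fn` and `d.arg` helpers; the exact desugaring is not defined yet.)

```jsonnet
{
  '#myFunc': d.fn(
    help='myFunc greets you',
    args=[d.arg(name='who', type=d.T.string)],
  ),
  myFunc(who):: 'hello %s!' % who,
}
```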
This will all happen transparently, without any user interaction.
#### What else can it do?
Because docsonnet gives you the missing logical representation of your Jsonnet library, it enables straightforward implementation of other language tooling, such as **code completion**.
Instead of inferring what fields are available in a library, we can _just_ look at its docsonnet output and provide the fields specified there, along with nice descriptions and argument types.