* Single-node remote execution service: ~just execute~
~just execute~ starts a single-node remote build execution service in
the environment in which the command has been issued. Being able to
easily set up a remote build execution service improves the
development experience whenever the build environment (and the cache)
can or should be shared among developers, for example (and certainly
not limited to)
- when developers build on the same machine: multiple users can build
  in the same environment and share the cache, thus avoiding
  duplicated work;
- when a testing environment has to be set up quickly for use by other
  developers.
For the sake of completeness, these are the files used to compile the
examples
#+BEGIN_SRC
latex-hello-world/
+--hello.tex
+--repos.json
+--TARGETS
#+END_SRC
They read as follows
File ~repos.json~:
#+SRCNAME: repos.json
#+BEGIN_SRC js
{ "main": "tutorial"
, "repositories":
{ "latex-rules":
{ "repository":
{ "type": "git"
, "branch": "master"
, "commit": "ffa07d6f3b536f1a4b111c3bf5850484bb9bf3dc"
, "repository": "https://github.com/just-buildsystem/rules-typesetting"
}
}
, "tutorial":
{ "repository": {"type": "file", "path": "."}
, "bindings": {"latex-rules": "latex-rules"}
}
}
}
#+END_SRC
File ~TARGETS~:
#+SRCNAME: TARGETS
#+BEGIN_SRC js
{ "tutorial":
{ "type": ["@", "latex-rules", "latex", "latexmk"]
, "main": ["hello"]
, "srcs": ["hello.tex"]
}
}
#+END_SRC
File ~hello.tex~:
#+SRCNAME: hello.tex
#+BEGIN_SRC tex
\documentclass[a4paper]{article}
\author{JustBuild developers}
\date{}
\title {just execute}
\begin{document}
\maketitle
Hello from \LaTeX!
\end{document}
#+END_SRC
** Simple usage of ~just execute~ in the same environment
In this first example, we simply call ~just execute~; the environment
of the caller is the one in which the actions are executed. We
therefore recommend having a dedicated non-privileged ~build~ user run
the execution service. In the following, we will use ~%~ to indicate
the prompt of the ~build~ user and ~$~ for the prompt of a /normal/
user.
To start such a single-node execution service, it is sufficient to
run in one shell (as the ~build~ user)
#+BEGIN_SRC bash
% just execute -p <N>
#+END_SRC
where ~<N>~ is a port number that is expected to be available.
By default, the native ~git~-based protocol will be used, but it
is also possible to use the original protocol with ~sha256~ hashes
by providing the ~--compatible~ option.
#+BEGIN_SRC bash
% just execute --compatible -p <N>
#+END_SRC
This is particularly useful when providing the remote-execution service
to a different build tool.
To use it, as a /normal/ user, run in a different shell
#+BEGIN_SRC bash
$ just [...] -r localhost:<N>
#+END_SRC
Let's run these commands to understand the output.
#+BEGIN_SRC bash
% just execute -p 8080
INFO: execution service started: {"interface":"127.0.0.1","pid":4911,"port":8080}
#+END_SRC
Once the execution service has started, it reports three essential
pieces of information:
- the interface it is bound to (in this case, the default one, i.e.,
  the loopback device)
- the process id (which will differ between runs)
- the port it listens on
To use the execution service, run from a different shell
#+BEGIN_SRC bash
$ just [...] -r localhost:8080
#+END_SRC
*** Use a random port
If we don't need (or cannot choose in advance) a fixed port number, we
can simply omit the ~-p~ option. In this case, ~just execute~ will
listen on a random free port.
#+BEGIN_SRC bash
% just execute
INFO: execution service started: {"interface":"127.0.0.1","pid":7217,"port":33841}
#+END_SRC
The port number will typically be different each time we invoke the
above command.
Finally, to connect to the remote endpoint, type
#+BEGIN_SRC bash
$ just [...] -r localhost:33841
#+END_SRC
*** Info file
Copying and pasting port numbers and pids can be error-prone, or even
unfeasible, if we manage many execution-service instances. Therefore,
the invocation of ~just execute~ can be decorated with the option
~--info-file <PATH>~, which stores in ~<PATH>~, in JSON format, the
interface, pid, and port of the running instance. The user can then
easily parse this file to extract the required information, as
sketched below.
For example
#+BEGIN_SRC bash
% just execute --info-file /tmp/foo.json
INFO: execution service started: {"interface":"127.0.0.1","pid":7680,"port":44115}
#+END_SRC
#+BEGIN_SRC bash
$ cat /tmp/foo.json
{"interface":"127.0.0.1","pid":7680,"port":44115}
#+END_SRC
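To extract a value for further use, any JSON-aware tool will do; here
is a minimal sketch, assuming ~jq~ is installed and the info file was
written to ~/tmp/foo.json~ as above.
#+BEGIN_SRC bash
$ just [...] -r "localhost:$(jq -r .port /tmp/foo.json)"
#+END_SRC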
Please note that the info file is /not automatically deleted/ when the
service is terminated. The user is responsible for removing it from
the file system.
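For instance, the recorded pid can be used to stop the service and to
clean up the info file afterwards; again a sketch, assuming ~jq~ is
available and the info file is at ~/tmp/foo.json~.
#+BEGIN_SRC bash
% kill "$(jq -r .pid /tmp/foo.json)" && rm -f /tmp/foo.json
#+END_SRC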
*** Enable mTLS
It is worth mentioning that mTLS must be enabled when the execution
service starts, and it cannot be activated (or deactivated) while the
instance runs.
#+BEGIN_SRC bash
% just execute [...] --tls-ca-cert <path_to_CA_cert> --tls-server-cert <path_to_server_cert> --tls-server-key <path_to_server_key>
#+END_SRC
When a client connects, it must provide the same ~CA certificate~ as
well as its own certificate and private key, signed by that
certificate authority.
#+BEGIN_SRC bash
$ just [...] --tls-ca-cert <path_to_CA_cert> --tls-client-cert <path_to_client_cert> --tls-client-key <path_to_client_key>
#+END_SRC
**** How to generate self-signed certificates
This section does not claim to be an exhaustive guide to the
generation and management of certificates, which is well beyond the
aim of this tutorial. We just want to provide a minimal reference to
let users start using mTLS and benefit from mutual authentication.
***** Certification Authority certificate
As a first step, we need a Certification Authority certificate (~ca.crt~)
#+BEGIN_SRC bash
% openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -keyout ca.key -out ca.crt
#+END_SRC
***** Server certificate and key
If the clients will connect through the loopback device, i.e., the
users are logged in on the same machine where ~just execute~ will run,
the /server certificate/ can be generated with the following
instructions
#+BEGIN_SRC bash
% openssl req -new -nodes -newkey rsa:4096 -keyout server.key -out server.csr -subj "/CN=localhost"
% openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 0 -out server.crt
% rm server.csr
#+END_SRC
On the other hand, if the clients will connect from a different
machine, and ~just execute~ will use a different interface (see [[Expose
a particular interface]] below), the steps are a bit more involved. We
need an additional configuration file stating the IP address of the
interface to be used. For example, if the interface's IP address is
~192.168.1.14~, we write
#+BEGIN_SRC bash
% cat << EOF > ssl-ext-x509.cnf
[v3_ca]
subjectAltName = IP.1:192.168.1.14
EOF
#+END_SRC
Then, the certificate and key pair can be obtained with
#+BEGIN_SRC bash
% openssl req -new -nodes -newkey rsa:4096 -keyout server.key -out server.csr -subj "/CN=localhost"
% openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 0 -out server.crt -extensions v3_ca -extfile ssl-ext-x509.cnf
% rm server.csr
#+END_SRC
***** Client certificate and key
The client, which needs the ~ca.crt~ and ~ca.key~ files, can run the
following
#+BEGIN_SRC bash
$ openssl req -new -nodes -newkey rsa:4096 -keyout client.key -out client.csr
$ openssl x509 -req -days 365 -signkey client.key -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
$ rm client.csr
#+END_SRC
*** Expose a particular interface
To use an interface other than the loopback device, we have to specify
it with the ~-i~ option
#+BEGIN_SRC bash
$ just execute -i 192.168.1.14 -p 8080 --tls-ca-cert <path_to_CA_cert> --tls-server-cert <path_to_server_cert> --tls-server-key <path_to_server_key>
INFO: execution service started: {"interface":"192.168.1.14","pid":7917,"port":8080}
#+END_SRC
If the interface is accessible from another machine, it is also
recommended to enable mutual TLS (mTLS) authentication.
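A client on another machine can then reach this endpoint by combining
the remote-execution address with the client-side TLS options shown
earlier, using the certificates generated above.
#+BEGIN_SRC bash
$ just [...] -r 192.168.1.14:8080 --tls-ca-cert ca.crt --tls-client-cert client.crt --tls-client-key client.key
#+END_SRC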
** Managing multiple build environments
Since multiple instances of ~just execute~ can run in parallel
(listening on different ports), the same machine can serve as the
worker for various projects. However, to avoid conflicts between
dependencies and to guarantee a clean environment for each project, it
is recommended to invoke ~just execute~ from within a container or a
chroot environment.
In the following sections, we will set up, step by step, a dedicated
execution service for compiling latex documents in these two
scenarios.
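To illustrate running several instances side by side, here is a
minimal sketch (ports and build-root paths are arbitrary examples):
each instance is started in its own shell and gets its own port and
its own local build root, so that the caches do not interfere.
#+BEGIN_SRC bash
# in one shell: execution service for project A
% just execute -p 8080 --local-build-root ~/.cache/just-execute/project-a
# in another shell: execution service for project B
% just execute -p 8081 --local-build-root ~/.cache/just-execute/project-b
#+END_SRC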
*** How to run ~just execute~ inside a chroot environment
**** TL;DR
- create a suitable chroot environment
- chroot into it
- run ~just execute~ from there
- in a different shell, ~just build -r <interface>:<port>~
**** Full latex chroot: walkthrough
This short tutorial will use ~debootstrap~ and ~schroot~ to create and
enter the chroot environment. Of course, different tools can be used
as well.
***** Prepare the root file system
Install Debian Bullseye in the directory ~/chroot/bullseye-latex~
#+BEGIN_SRC bash
sudo debootstrap bullseye /chroot/bullseye-latex
#+END_SRC
***** Create a configuration file
~schroot~ needs a proper configuration file, which can be generated as
follows
#+BEGIN_SRC bash
$ echo "[bullseye-latex]
description=bullseye latex env
directory=/chroot/bullseye-latex
root-users=$(whoami)
users=$(whoami)
type=directory" | sudo tee /etc/schroot/chroot.d/bullseye-latex
#+END_SRC
Note that ~type=directory~, apart from performing the necessary
bindings, will make ~$HOME~ shared between the host and chroot
environment. While this can be useful for sharing artifacts, the user
should specify a ~--local-build-root~ (aka, the cache root) different
from the default one to avoid conflicts between the host and the
chroot environment.
***** Install required packages in the chroot environment
~schroot~ also allows running commands inside the environment by
stating them after the ~--~
#+BEGIN_SRC bash
$ schroot -c bullseye-latex -u root -- sh -c 'apt update && apt install -y texlive-full'
#+END_SRC
***** Start the execution service
To start the execution service inside the chroot environment, run
#+BEGIN_SRC bash
$ schroot -c bullseye-latex -- /bin/just execute --local-build-root ~/.cache/chroot/bullseye-latex -p 8080
#+END_SRC
We assume that the binary ~just~ is available in the chroot
environment at the path ~/bin/just~. If you don't know how to make
~just~ available in the chroot environment, read the section [[How to
have the binary just inside the chroot environment]] below.
Since ~$HOME~ is shared, specifying a local build root (aka, cache
root) different from the default is highly recommended. For
convenience, we also set the port (using the flag ~-p~) that the
execution service will listen on.
If the chosen port is available, the following output should be
produced (note that the pid number will differ).
#+BEGIN_SRC bash
INFO: execution service started: {"interface":"127.0.0.1","pid":48880,"port":8080}
#+END_SRC
As a test, let's compile the example listed in the introduction
#+BEGIN_SRC bash
$ just-mr -C repos.json install -o . -r localhost:8080
#+END_SRC
which should report
#+BEGIN_SRC bash
INFO: Performing repositories setup
INFO: Found 2 repositories to set up
INFO: Setup finished, exec ["just","install","-C","...","-o",".","-r","localhost:8080"]
INFO: Requested target is [["@","tutorial","doc/just-execute/latex-hello-world","tutorial"],{}]
INFO: Analysed target [["@","tutorial","doc/just-execute/latex-hello-world","tutorial"],{}]
INFO: Discovered 1 actions, 0 trees, 1 blobs
INFO: Building [["@","tutorial","doc/just-execute/latex-hello-world","tutorial"],{}].
INFO: Processed 1 actions, 0 cache hits.
INFO: Artifacts can be found in:
/tmp/work/doc/just-execute/latex-hello-world/hello.pdf [25e05d3560e344b0180097f21a8074ecb0d9f343:37614:f]
#+END_SRC
In the shell where ~just execute~ is running, a line like the
following should have appeared, confirming that the compilation
happened on the remote side
#+BEGIN_SRC bash
INFO (execution-service): Execute 6237d87faed1ec239512ad952eeb412cdfab372562
#+END_SRC
*** How to start ~just execute~ inside a docker container
Building inside a container is another strategy to ensure no
undeclared dependencies are pulled and to build in a fixed
environment.
We will replicate what we did for the chroot environment and create a
suitable docker image.
**** Build a suitable docker image
Let's write a ~Dockerfile~ that has ~just execute~ as ~ENTRYPOINT~. We
assume the binary ~just~ is available inside the container at path
~/bin/just~. The easiest way is to use a
[[https://github.com/just-buildsystem/justbuild-static-binaries][static just binary]]
and copy it into the container.
#+SRCNAME: Dockerfile
#+BEGIN_SRC docker
FROM debian:bullseye-slim
COPY ./just /bin/just
RUN apt update
RUN apt install -y --no-install-recommends texlive-full
ENTRYPOINT ["/bin/just", "execute"]
#+END_SRC
We build the image with
#+BEGIN_SRC bash
$ sudo docker image build -t bullseye-latex .
#+END_SRC
Finally, we can start the execution service. Note that, since the
entrypoint of the image is ~just execute~, the trailing ~-p 8080~ is
passed to the execution service itself; as the container runs with
~--network host~, no Docker port mapping is involved.
#+BEGIN_SRC bash
$ docker run --network host --name execute-latex bullseye-latex -p 8080
#+END_SRC
From a different shell, we can build the latex hello-world example
listed in the introduction by running
#+BEGIN_SRC bash
$ just-mr -C repos.json install -o . -r localhost:8080
#+END_SRC
Note that the cache that ~just execute~ populates is confined within
the container and is lost once the container is removed. If you want
the cache to survive the container life cycle, you can bind-mount a
host directory into the container as follows
#+BEGIN_SRC bash
$ docker run --network host --name execute-latex --mount type=bind,source="${HOME}/.cache",target=/cache bullseye-latex -p 8080 --local-build-root /cache/docker/latex
#+END_SRC