+-----------------+
+ Release process +
+-----------------+

Key management and associated commits:
--------------------------------------
Replicant releases use several different sets of keys:
- The images being built are signed at the end with a GPG key. That key
  needs to be defined in the RELEASE_KEY variable in the releasevars.sh
  file. Users are expected to manually verify at least the recovery with the
  associated public key as part of the installation instructions.
- In addition, during the build process, additional keys and certificates are
  generated. These are for instance used to sign the Android applications that
  are built as part of the Replicant images. They are also used by the recovery
  to check if the zip being installed was built with the same keys. If not, it
  will simply refuse to install it. As the users already verified the GPG
  signature of the recovery, this can also be used to simplify the installation
  instructions by making users check the signature only for the recovery.
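
The user-side verification mentioned above boils down to a standard GPG
check. The file names below are illustrative, not the real ones, which are
given in the installation instructions for each device:

```shell
# Hypothetical sketch of the verification users are expected to do;
# the file names are examples, the real ones come from the installation
# instructions.
gpg --import replicant-release-key.asc
gpg --verify recovery.img.asc recovery.img
```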

When the developer doing the release changes, none of these keys are passed
on to the new developer:
- Having developers use their personal GPG keys enables people to more easily
  verify the developer's key, as they can check that key directly instead of
  checking the keys of the people who signed a shared Replicant release key.
  In addition, there is more security as the key doesn't need to be
  transferred from person to person.
- As for the keys and certificates generated during the build process, they
  don't need to be passed around either.

So when a developer is doing a release, that developer's GPG public key is
published automatically as part of the release process, and the installation
instructions already contain instructions on how to download and use that public
key, so no extra steps are needed during the release.

However the key (or subkey) signing the release needs to be kept valid (not
expired) for as long as it is relevant for less technical users to install
any of the releases signed by that key.

As for the keys generated during the build, the release instructions already
take care of making users install the recovery associated with a given release,
so that part is being taken care of.

However if the keys signing system applications changed between one release
and another, without any key migration procedure, users won't be able to
upgrade to the new release without wiping their data. As this would be extremely
inconvenient for users (which potentially includes yourself as well), we can
generate a shell script that will take care of the migration from the old set
of keys to the new one during the first run of the new Replicant image.

In vendor/replicant-scripts/images/gen_key_migration_script, there is a python
script (gen_key_migration_script.py) that can generate that shell script.

Once generated, that shell script needs to be copied in
vendor/replicant/prebuilt/common/bin/key-migration.sh and a commit needs to be
made in vendor/replicant to add it to the Replicant source code.
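
The copy-and-commit steps above could look like the following sketch, run
from the top of the Replicant source tree. The name of the generated script
is an assumption; use whatever name gen_key_migration_script.py produced:

```shell
# Sketch of the copy-and-commit steps; the paths follow this README,
# the generated script's file name is an assumption.
cp vendor/replicant-scripts/images/gen_key_migration_script/key-migration.sh \
   vendor/replicant/prebuilt/common/bin/key-migration.sh
cd vendor/replicant
git add prebuilt/common/bin/key-migration.sh
git commit -m "Add key migration script for the new release keys"
```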

Once the commit is made, it doesn't necessarily need to be pushed to a branch:
if it is in the Replicant source code that will be tagged later on, it will
then be pushed as part of the tag. The next sections have more details on that.

As the generated keys for a given release are in vendor/replicant-security/,
it is also possible to keep the same keys from one release to the next.

The downside of keeping the same keys is that it slightly increases the
security risk, as these keys are also used by the recovery to check the zip.
Still, since Replicant 6.0 0004, the release version is part of the image and
can be found in Settings->About phone, which mitigates the fact that the same
set of keys could also end up signing images that are not part of a release.

Changing keys too often currently has huge downsides given how the migration
script is designed:
- If the script checked whether it needs to run at each boot, it would need
  to handle cases like upgrade->downgrade->upgrade to avoid corrupting users'
  data.
- If the script replaces the old keys (if found) with the new ones at each
  boot, it greatly increases the risk of data loss over time. In addition, at
  the time of writing, running that script takes about 5 seconds, even if no
  keys need to be changed. The more old keys we need to migrate from, the more
  time it will take to run.

So to keep the code simple and robust, we want to make sure that the script is
only run at the first boot after an installation, without using any complicated
logic. However:
- /etc/init.d scripts cannot delete themselves. Allowing that triggers some
  neverallow errors during the compilation of the selinux policies.
- Importing other *.rc init files from /system (once it is mounted) results
  in a phone that doesn't boot.

So the solution that has been chosen is to do 2 releases at the same time:
- The first release would migrate the keys automatically.
- The second release will keep the script but not run it automatically
  anymore.

This way, once the migration is done, it doesn't increase the risk of data
loss anymore. In that regard it is similar to the unofficial LineageOS data
migration procedure (https://blunden.se/migration) that is done only once in
the recovery, through an update zip. However here it also handles encrypted
partitions, as our script runs in Replicant. In addition users can more easily
generate scripts to downgrade/upgrade between different keys.

Adding the last commits before the release:
--------------------------------------------
Before a release, we need to make some additional changes in several
repositories.

In this repository:
- We need to edit releasevars.sh and change the RELEASE field to reflect the
  release version we will have, as it is used by the scripts. Additionally we
  might want to fix bugs and/or improve things along the way.

In the vendor/replicant repository:
- We need to modify REPLICANT_VERSION in config/common.mk and RELEASE
  in sign-build.sh to add the new release version.
- We might also need to modify the ChangeLog or not depending on the type
  of release.

In the manifest repository:
- We need to change all the revisions to use the new tag.
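
As a sketch, the revision change can be done mechanically over the manifest
XML. The <project> line and the tag below are illustrative, not taken from
the real manifest:

```shell
# Hypothetical sketch: point a project's revision attribute at the new
# release tag. The tag name and the <project> line are examples only.
tag="refs/tags/replicant-6.0-0004-rc2"
line='<project path="vendor/replicant" revision="replicant-6.0" />'
updated=$(printf '%s\n' "$line" \
  | sed -E "s|revision=\"[^\"]+\"|revision=\"${tag}\"|")
echo "$updated"
```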

However it's best not to push these vendor/replicant and manifest changes
yet. Since Replicant 4.2, every repository used to produce the Replicant
images being released is tagged with the specific version of that release
(like replicant-6.0-0004-rc2). In addition it's also possible to build the
latest version of a major Replicant version by using a branch instead
(like replicant-6.0).

As the branches need to be buildable at any moment, if we push the manifest
and vendor/replicant modifications described above in the branch of a major
Replicant version (like replicant-6.0), we will end up with images that will
indicate that they are coming from a release (like replicant-6.0-0003-rc2)
instead.

In some cases, this could become a serious issue: if you are using keys that
have been or will be used in a release, and build an image from a branch,
and that branch carries the Replicant version of a release, people could be
(accidentally) misled into thinking that they are installing a release while
they are not.

In turn this could lead to a huge number of hours being lost on trying to
track down regressions due to making the wrong assumption about which image
is being run.

As we will need to tag all the source code later on, we can just add the
commits that have the modifications described above in the source code, without
pushing them: they will then be picked up by the tagging process and be pushed
in the tag.

Preparing the source code for tagging:
--------------------------------------
First you need to make sure that all the commits with the changes mentioned
above are in the replicant directory before proceeding because otherwise the
tagging script will tag revisions that don't contain any of these commits.

You also need to make sure that you have no other local modifications that
aren't intended to go in the release for the same reason.

Here's a procedure you can follow to do that very fast:
- save the commits you have on top, like the "last commits before the release"
  which were discussed previously.
- move the .repo directory outside of the replicant directory
- delete the replicant directory
- re-create the replicant directory
- move the .repo directory in it
- re-run the repo init and repo sync commands. This will re-use the .repo
  cache, which greatly speeds up the download.
- re-import all the commits you need to add on top of the existing source code.
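
The steps above could be sketched as follows, assuming the source code lives
in a directory named replicant/, the extra commits sit in vendor/replicant,
and the saved commits are exported with git format-patch. The repo init URL
and branch are illustrative:

```shell
# Sketch of the fast re-checkout; the URL, branch and patch location
# are examples, adjust them to your setup.
cd replicant
git -C vendor/replicant format-patch -o ~/release-patches origin/replicant-6.0
cd ..
mv replicant/.repo .
rm -rf replicant
mkdir replicant
mv .repo replicant/
cd replicant
repo init -u https://git.replicant.us/replicant/manifest.git -b replicant-6.0
repo sync
# then re-apply the saved patches with git am in each repository
```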

Make sure that the manifest commits are added in the git repository that is
checked out in manifest/ as the tagging script will use that to push the tags
for the manifest.

If you also want to check if all the source code fetches fine (for instance if
you recently migrated some repositories to the Replicant mirrors system) you can
also re-download all the source code from scratch instead.

These are probably not the only ways to do it. If you have other ideas, feel
free to send patches to add them to this README as well.

Tagging the source code:
------------------------
Once this is done we can tag the various repositories with releasetag.sh.

This script will tag all the repositories. With the exception of the manifest
(which is pushed from the manifest/ checkout), it expects the repositories to
be mirrored on the Replicant git server.

If you build Replicant on a machine that cannot push the code, for instance
because it doesn't have access to your ssh keys, you can simply export the
Replicant source directory with sshfs to your main computer which has all the
access, and run releasetag.sh from it.

Building the source tarball:
----------------------------
Once the tagging is done, you need to re-download the source code again from the
new manifest tag as tools like release.sh expect the manifest to have the
release tag.

Doing it from scratch will make sure that the repositories are fetchable and
that nothing went wrong.

The make_source_tarball.sh script can do that in an efficient way: it will not
download the full history but only the last commit and the state at the last
commit of each git repository, and it will then generate a tarball out of it.
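
The shallow download that make_source_tarball.sh performs is conceptually
similar to the following git command. The repository URL and tag are
illustrative:

```shell
# Conceptual equivalent of the shallow fetch: only the tagged commit and
# its tree are downloaded, not the full history.
git clone --depth 1 --branch replicant-6.0-0004-rc2 \
    https://git.replicant.us/replicant/manifest.git
```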

The Replicant release will have to be built in the directory that
make_source_tarball.sh created. This way we can simply publish that tarball
along with the images, to make sure that we are providing complete and
corresponding source code in the same location as the images.

For Replicant 6.0 0004 RC5, the replicant-6.0-0004-rc5.tar.lz tarball is about
6.5GiB (6969502733 bytes) and once the images for all the supported devices are
built, the replicant-6.0-0004-rc5 directory takes about 211GiB (223219395971
bytes).

So you also need to make sure to have at least about 220 GiB of free space
before starting the build.
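
A simple pre-flight check based on the figures above could look like this.
The 220 GiB threshold comes from the Replicant 6.0 0004 RC5 numbers and may
differ for other releases; the check assumes GNU df:

```shell
# Abort early if the current directory's filesystem lacks ~220 GiB.
need=$((220 * 1024 * 1024 * 1024))           # 220 GiB in bytes
avail=$(df --output=avail -B1 . | tail -n 1) # free bytes (GNU df)
if [ "$avail" -lt "$need" ]; then
    echo "Not enough free space to build the release"
else
    echo "Enough free space to start the build"
fi
```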

Preventing fetching unknown code:
--------------------------------
In Replicant 6 we found that some nonfree code was downloaded through maven
during the build.

While that code has been removed as part of the removal of the nonfree ambient
SDK, it is good practice to prevent the builder from fetching code while
building, to be able to more easily detect issues like that in the future.
Since the builders had network access, we only found out about it after being
notified of the issue through the Replicant mailing list by someone who also
removed it in another Android distribution.

If you need to access your builder through SSH, and you have it on a local
network, you can simply get the default route and remove it:
# ip route
default via 192.168.1.1 dev eth0
# ip route del default via 192.168.1.1 dev eth0

Or you could simply disconnect it from the network completely and use a
keyboard, mouse and monitor to interact with it to launch and monitor the build.

Building the images:
--------------------
At this point, you can finally start building all the images with
build_release.sh.

The build process can take many hours on an older computer, so it's a good
idea to plan ahead. For instance, if your build computer is noisy and its
noise can prevent you from sleeping, you might want to consider launching the
build as soon as you wake up in the morning to have it finished before you
need to go to sleep, or if possible you might also want to consider moving
that computer before starting the build.

At the time of writing, around the end of the build of the first target, it
will ask you for some information for a certificate, so make sure to be
around, as that part is interactive. Alternatively you can try to make it
fill in the defaults and make the build proceed by pressing enter multiple
times right after having launched the build.

Then it will proceed for many hours uninterrupted if everything goes fine.

You can check the build logs in logs/ to see if the build has finished.

Releasing the images:
---------------------
When the build is done, you can finally use the release.sh script to
populate the final directory with all the files to release.

For instance for releasing Replicant 6.0 0004 RC2 I used the following:
$ release.sh replicant-6.0 replicant-6.0-0004-rc2

The replicant-6.0 directory contained the Replicant source code and
replicant-6.0-0004-rc2 was an empty directory.

This script can take a long time if the connection to your builder is slow, as
this downloads all the images.

You need to sign the images with the release.sh script. I used the following
for the replicant-6.0-0004-rc2 release:
$ release.sh replicant-6.0 replicant-6.0-0004-rc2 signatures

The signing is done as a separate step to enable use cases where the
connection to the build server is slow and GPG is set up to be run
interactively (for instance by typing a password).
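
For a single file, the signing pass presumably amounts to a detached GPG
signature made with the key from RELEASE_KEY; a manual equivalent could look
like this. The image name and key ID are examples, not real values:

```shell
# Hypothetical manual equivalent of the signing step; release.sh uses
# RELEASE_KEY from releasevars.sh, the key ID below is an example.
gpg --default-key 0xDEADBEEF --armor --detach-sign recovery.img
gpg --verify recovery.img.asc recovery.img
```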

You can then push that directory to the FTP server and run the FTP server
trigger script when all the files have been copied.

Announcing the release:
-----------------------
As releases also need to be announced, it might be a good idea to work on the
release text in parallel.

We typically inform people through both the mailing list and the blog system we
use. At the time of writing, we use WordPress for our blog, which isn't well
suited to our needs as the drafts are not public. However we wrote a script to
make it easier to review drafts and to publish them on the mailing list.

To use the script, you need to save the html page that contains the blog post,
and run release_notes.py on it. If the blog post is a draft, you can simply
preview the draft and save the resulting html page.

In addition to the blog post and the mailing list:
- We might also need to modify the Replicant IRC channel(s) topic(s) to indicate
  the last Replicant version.
- We also need to push a patch to the replicant website (website.git) to do the
  same: the last Replicant version is mentioned on the main page of the
  replicant.us website.

Copyright:
----------
This README is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.