Chromebook Firmware Utilities
-----------------------------

# Installing

This repository contains tools to build and flash Chromebook firmware images.
It uses Git LFS to manage binary files, so follow these steps to set it up:

```
sudo apt install git-lfs  # on Debian...
git clone https://gitlab.collabora.com/chromium/firmware-tools.git
cd firmware-tools
git lfs fetch
git lfs checkout
```
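
To check that the LFS-managed binaries were fetched correctly, you can list
the files tracked by Git LFS:

```
git lfs ls-files  # each entry shows a short object ID and the file path
```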

# Flashing

The [`servoflash.py`](https://gitlab.collabora.com/chromium/firmware-tools/-/blob/master/servoflash.py) tool can be used to flash firmware images onto Chromebook
devices using a Servo interface.  It uses a specific `flashrom` binary with its
library dependencies copied from the Chromium OS SDK.  They can be found in the
local `bin` and `lib` directories in this repository.
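
`servoflash.py` takes care of invoking this bundled `flashrom` itself.  If you
want to run the binary manually, pointing the dynamic linker at the bundled
libraries should be enough; a minimal sketch, run from the top of the
repository:

```
# Illustrative only: use the bundled libraries with the bundled flashrom.
LD_LIBRARY_PATH=lib ./bin/flashrom --version
```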

To flash a firmware image, the target device needs to be listed in the Google
Servo config file, typically `/etc/google-servo.conf`, which associates device
serial numbers with their names.  A `servod` service also needs to be running
in order to access the device.
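
As a rough sketch, a `servod` instance could be started like this; the board
name and port below are only placeholders, and the exact options depend on how
the Servo boards are set up:

```
# Placeholder values; adjust for your board and Servo setup.
sudo servod --board=gru --port=9999
```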

Here's a sample command to flash a firmware image, assuming all the
preconditions mentioned above are met:

```
./servoflash.py \
  --device=rk3399-gru-kevin-cbg-0 \
  --firmware=depthcharge-rk3399-gru-kevin-20180806.dev.bin
```

Flashing can take a few minutes.  Messages like these should appear near the
end, although they can vary depending on the type of Chromebook:

```
Erasing and writing flash chip... Verifying flash... VERIFIED.
SUCCESS
```

# Building

The Chromebook firmware needs to be rebuilt with extra patches and
configuration options, and then flashed, in order to enable the serial console
and to boot interactively with a kernel image and ramdisk supplied over TFTP.
The standard firmware shipped with the products is only configured to boot the
Chrome OS image present on the device, with no serial console and no way to
override it.  This is not suitable for automating the Chromebook in a test lab
such as LAVA, or for doing low-level kernel development.

Collabora maintains a set of
[`Depthcharge`](https://gitlab.collabora.com/chromium/depthcharge/) branches
with such changes.  The tools in this repository make use of them to build the
firmware images used in LAVA.

## Docker containers

Building the firmware for a Chromebook can be non-trivial.  To make things
easier, this repository provides tools to create Docker containers, along with
helper scripts to build firmware images for some known device types.  All of
this is kept under the
[`cros-build`](https://gitlab.collabora.com/chromium/firmware-tools/-/blob/master/cros-build/)
directory.

The containers use the following local directories on the host (mounted
roughly as sketched below):

* `cache`: used by the SDK tools, for example to store SDK tarballs
* `firmware`: where the firmware binaries are kept and new ones are placed
* `chroot-*`: dedicated chroot directory for each device type
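
Conceptually, the container setup looks something like the sketch below; the
actual invocation, image name and container-side paths are handled by the
`bootstrap.sh` script described later and may differ:

```
# Conceptual sketch only; bootstrap.sh performs the real container setup.
docker run -it \
  -v "$PWD/cache:/home/cros-build/cache" \
  -v "$PWD/firmware:/home/cros-build/firmware" \
  -v "$PWD/chroot-octopus:/home/cros-build/chroot" \
  cros-build-octopus
```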

## Chromebook device types

Each device type uses a different revision of the Chromium OS source tree,
which means a separate Docker image and Docker container with its own chroot
directory.  Each device type also has its own build script, as the steps can
vary slightly between them.

The
[`cros-build/bootstrap.sh`](https://gitlab.collabora.com/chromium/firmware-tools/-/blob/master/cros-build/bootstrap.sh)
script can be used to create a Docker container for a particular device type.
It downloads the Chromium OS source code necessary to build a Chromebook
firmware image and sets up the Chromium OS SDK chroot.  It is then possible to
run the build scripts provided in this repository to build a new image.

All these device-specific files can be found in the
[`setup`](https://gitlab.collabora.com/chromium/firmware-tools/-/blob/master/cros-build/setup/)
sub-directory.  See the example below for the `octopus` device type.

## Example: octopus

For example, the `octopus` device type has an
[`octopus.env`](https://gitlab.collabora.com/chromium/firmware-tools/-/blob/master/cros-build/setup/octopus.env)
file with environment variables defining the parameters for the Docker
container, and an
[`octopus.sh`](https://gitlab.collabora.com/chromium/firmware-tools/-/blob/master/cros-build/setup/octopus.sh)
build script which gets copied into the container.
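
Judging from the variables printed by `bootstrap.sh` in the output further
below, the env file defines at least the device name and the Chromium OS SDK
branch to use; an illustrative excerpt (see the actual file in the repository
for the authoritative contents):

```
# setup/octopus.env (illustrative excerpt)
CROS_DEVICE=octopus
CROS_SDK_BRANCH=firmware-octopus-11297.83.B
```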

To set up a container for `octopus`:

```
cd cros-build
./bootstrap.sh octopus
Using environment file: setup/octopus.env
------------------------
CROS_DEVICE=octopus
CROS_SDK_BRANCH=firmware-octopus-11297.83.B
------------------------
[...]
(cr) (firmware-octopus-11297.83.B/(8c78090...)) cros-build@4d71a209fc9f ~/trunk/src/scripts $
```

This can take a while the first time.  Once it has completed, exiting from the
container and running the `bootstrap.sh` script again should only take a few
seconds as everything is kept in the `cache` and `chroot-octopus` directories.

Then, to start building the firmware, the `octopus.sh` script is available:

```
./octopus.sh setup    # to configure the chroot for "octopus"
./octopus.sh checkout # to check out the Depthcharge branch
./octopus.sh build    # to build Depthcharge
./octopus.sh image    # to create a new firmware image
```

Likewise, the `setup` and `build` steps can take a while the first time but
should be very quick when run again in the same chroot.

If everything went well, there should be a new firmware image:
```
-rw-r--r-- 1 cros-build chronos 16777216 Oct 21 10:20 firmware/octopus-new.bin
```
This can be accessed from the host, in the `firmware` directory.
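
The new image can then be flashed with the `servoflash.py` tool described in
the Flashing section above.  For example, assuming the image is picked up from
`cros-build/firmware` on the host and that the device name matches your Servo
configuration:

```
./servoflash.py \
  --device=<device-name> \
  --firmware=cros-build/firmware/octopus-new.bin
```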

A more in-depth walkthrough of this process, including Depthcharge
customization and the details of firmware image generation, can be found in
the [building_firmware_images.md](building_firmware_images.md) document.