LFS documentation and example updates (#2458)

Updates to LFS documentation and the LFS HTTP_OTA module before release to master
Terry Ellison 2018-08-22 11:09:04 +01:00 committed by GitHub
parent fe40323ec4
commit add0938d81
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
3 changed files with 205 additions and 184 deletions

View File

@ -1,205 +1,138 @@
# Lua Flash Store (LFS)
## LFS Quick Start
LFS is a way to execute your Lua code directly out of ESP8266 flash memory, so that more RAM is available for variables and data structures. This allows large Lua applications to be run on the ESP8266 when there would not be enough RAM to execute them in the "normal" way (out of SPIFFS).
This is a simple step-by-step recipe for using the LFS feature. It assumes that you are able to build a NodeMCU firmware on a Windows 10 host.
### Get and flash LFS enabled firmware
Get the firmware from [NodeMCU Build](https://nodemcu-build.com/). In the section "LFS options (currently just for dev)" choose the size of the LFS partition (64 KB should be fine). The default SPIFFS settings are fine.
Another possibility is to compile your own firmware with LFS enabled in [user_config.h](../../app/include/user_config.h) by setting `#define LUA_FLASH_STORE 0x10000`. The comments in this file explain how to configure LFS.
For details see section [Selecting the firmware](#selecting-the-firmware) of the LFS documentation below.
[Flash the firmware](flash.md) to the ESP8266.
### Select Lua files to be run from LFS
The easiest way is to maintain the Lua files for your project in their own directory tree on your host. These project files will be compiled by `luac.cross` to build the LFS image in the next step.
A nice example is to run the Telnet and FTP servers from LFS. To do this, put the following files in one directory:
* [lua_examples/lfs/_init.lua](../../lua_examples/lfs/_init.lua)
* [lua_examples/lfs/dummy_strings.lua](../../lua_examples/lfs/dummy_strings.lua)
* [lua_examples/telnet/telnet.lua](../../lua_examples/telnet/telnet.lua)
* [lua_modules/ftp/ftpserver.lua](../../lua_modules/ftp/ftpserver.lua)
### Build the LFS image
Windows 10 users can use the Windows Subsystem for Linux (WSL). From the directory containing the Lua files from the previous section, run `bash` and execute the following command (adjust the path as needed):
```bash
/mnt/c/GitHub/nodemcu-firmware/luac.cross -f *.lua
```
Or use the following command, which includes all Lua files except `init.lua` (`init.lua` needs to stay in SPIFFS, so it does not make sense to include it in the LFS image):
```bash
/mnt/c/GitHub/nodemcu-firmware/luac.cross -f `find *.lua -not -name 'init.lua'`
```
As a result, the `luac.out` LFS image file is created.
### Upload the LFS image
Now upload the generated LFS image file (`luac.out`) as a normal SPIFFS file. Several tools can be used to upload files from the host to SPIFFS.
One way is to use the [ESPlorer](https://github.com/4refr0nt/ESPlorer) tool and its "Upload..." button.
### Flash the LFS image to LFS partition
Run the following command on the ESP8266:
```lua
node.flashreload("luac.out")
```
This reboots the ESP8266, and the modules in the image are available from LFS after the reboot.
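As a quick sanity check after the reboot, you can confirm that a module made it into LFS (this assumes the Telnet example above was included in the image):
```lua
-- Prints a function value if telnet.lua was compiled into the LFS image
print(node.flashindex("telnet"))
```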
### Adjust the `init.lua` file
`init.lua` is the file that is first executed by the NodeMCU firmware. Usually it sets up the WiFi connection and executes the main Lua application.
Add the following lines:
```lua
-- Execute the LFS init
node.flashindex("_init")()
-- Start Telnet server
require("telnet"):open()
-- Start FTP server
require("ftpserver").createServer('user', 'password')
```
See [lua_examples/lfs/lfs_fragments.lua](../../lua_examples/lfs/lfs_fragments.lua) for another example.
## Background
Lua was originally designed as a general purpose embedded extension language for use in applications run on a conventional computer such as a PC, where the processor is mounted on a motherboard together with multiple Gb of RAM and a lot of other chips providing CPU and I/O support to connect to other devices.
ESP8266 modules are on a very different scale: they cost a few dollars; they are postage stamp-sized and only mount two main components, an ESP [SoC](https://en.wikipedia.org/wiki/System_on_a_chip) and a flash memory chip. The SoC includes limited on-chip RAM, but also provides hardware support to map part of the external flash memory into a separate memory address region so that firmware can be executed directly out of this flash memory, a type of [modified Harvard architecture](https://en.wikipedia.org/wiki/Modified_Harvard_architecture) found on many [IoT](https://en.wikipedia.org/wiki/Internet_of_things) devices. Even so, Lua's design goals of speed, portability, small kernel size, extensibility and ease-of-use have made it a good choice for embedded use on an IoT platform, but with one major limitation: the standard Lua core runtime system (**RTS**) assumes that both Lua data _and_ code are stored in RAM; this isn't a material constraint on a conventional computer, but it can be if your system only has some 48Kb RAM available for application use.
The Lua Flash Store (**LFS**) patch modifies the Lua RTS to support a modified Harvard architecture by allowing the Lua code and its associated constant data to be executed directly out of flash memory (just as the NodeMCU firmware is itself executed). This allows NodeMCU Lua developers to create Lua applications with up to 256Kb of Lua code and read-only (**RO**) constants executing out of flash, with all of the RAM available for read-write (**RW**) data.
Unfortunately, the ESP architecture provides very restricted write operations to flash memory (writing to NAND flash involves bulk erasing complete 4Kb memory pages, before overwriting each erased page with any new content). Whilst it is possible to develop a R/W file system within this constraint (as SPIFFS demonstrates), this makes it impractical to modify Lua code pages on the fly. Hence the LFS patch works within a reflash-and-restart paradigm for reloading the LFS, and does this by adding two new API calls: one to reflash the LFS and restart the processor, and one to access LFS stored functions. The patch also addresses all of the technical issues 'under the hood' to make this magic happen.
The remainder of this paper is split into two parts. The first provides an overview for Lua developers wanting to use LFS effectively at an application programming level, and as most of our developers use a Windows platform, it gives a quick start for these developers before covering some of the application issues in more detail. The second part is for those who want to understand a little of how this magic happens, and gives more details on the technical issues that were addressed in order to implement the patch.
## Using LFS
### A Quick Start for Windows Developers
This is a simple step-by-step guide for ESP8266 Lua developers who want to start to use `luac.cross` on a Windows development environment.
#### Get and flash LFS enabled firmware
Use the [NodeMCU Build](https://nodemcu-build.com/) service to build the firmware. In the "LFS options" section choose the size of the LFS partition (64 KB should be fine). The SPIFFS default settings will be fine. See section [selecting the firmware](#selecting-the-firmware) for further details. Once you have received the build confirmation email and downloaded the flash image, you can then [flash the firmware](flash.md) to your ESP8266.
#### Build or download your local copy of `luac.cross`
Windows 10 users can install and use the Windows Subsystem for Linux (WSL). Alternatively all Windows users can [install Cygwin](https://www.cygwin.com/install.html). (You will only need the Cygwin core). Either way, you will need a copy of the `luac.cross` compiler:
- You can either download this from my fileserver: the [ELF variant](http://files.ellisons.org.uk/esp8266/luac.cross) for all recent Linux and WSL flavours, or the [Cygwin binary](http://files.ellisons.org.uk/esp8266/luac.cross.cygwin) for the Cygwin environment.
- Or you can compile it yourself by downloading the current NodeMCU sources (this [ZIPfile](https://github.com/nodemcu/nodemcu-firmware/archive/master.zip)); edit the `app/include/user_config.h` file, then `cd` to `app/lua/luac_cross` and run `make` to build the compiler in the NodeMCU firmware root directory. Note that the `luac.cross` make only needs the host toolchain, which is installed by default in WSL; in Cygwin you will need to tick the _gcc-core_ + _gnu make_ options during setup.
#### Select Lua files to be run from LFS
The easiest approach is to maintain all the Lua files for your project in a single directory on your host. (These files will be compiled by `luac.cross` to build the LFS image in the next step.)
For example to run the Telnet and FTP servers from LFS, put the following files in your project directory:
* [lua_examples/lfs/_init.lua](https://github.com/nodemcu/nodemcu-firmware/tree/dev/lua_examples/lfs/_init.lua). LFS helper routines and functions.
* [lua_examples/lfs/dummy_strings.lua](https://github.com/nodemcu/nodemcu-firmware/tree/dev/lua_examples/lfs/dummy_strings.lua). Moving common strings into LFS.
* [lua_examples/telnet/telnet.lua](https://github.com/nodemcu/nodemcu-firmware/tree/dev/lua_examples/telnet/telnet.lua). A simple **telnet** server.
* [lua_modules/ftp/ftpserver.lua](https://github.com/nodemcu/nodemcu-firmware/tree/dev/lua_modules/ftp/ftpserver.lua). A simple **FTP** server.
You should always include the first two modules, but the remaining files would normally be replaced by your own project files. Also remember that these are examples and that you are entirely free to modify or to replace them for your own application needs.
#### Compile the LFS image
You will do this from within the WSL or Cygwin command window, both of which use the `bash` shell; do a `cd` to the project directory, and execute the command:
```bash
luac.cross -o lfs.img -f *.lua
```
You will need to adjust the `img` and `lua` paths according to their location, and ensure that `luac.cross` is in your `$PATH` search list. For example, if you are using WSL and your project files are in `D:\myproject`, then the Lua path would be `/mnt/d/myproject/*.lua` (by default Cygwin mounts Windows drives under `/cygdrive` rather than `/mnt`). This will create the `lfs.img` file if there are no Lua compile errors (again, specify an explicit directory path for the image if needed).
You might also want to add a simple one-line script file to your `~/bin` directory to wrap this command up.
#### Upload the LFS image and flash it to the LFS region
Now upload the compiled LFS image file (`lfs.img` in this case) as a normal SPIFFS file. Several tools are available to upload files from the host to SPIFFS, including [ESPlorer](https://github.com/4refr0nt/ESPlorer). There is also a new example, [HTTP_OTA.lua](https://github.com/nodemcu/nodemcu-firmware/tree/dev/lua_examples/lfs/HTTP_OTA.lua), in `lua_examples` that can retrieve images from a standard web service.
Once the LFS image file is on SPIFFS, you can execute the [node.flashreload()](/en/dev/en/modules/node/#nodeflashreload) command and, if the image file is valid, the loader will load it into flash and immediately restart the ESP module with the new LFS loaded. However, the call will return with an error _if_ the file is found to be invalid, so your reflash code should include logic to handle such an error return.
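A minimal sketch of such a guarded reload, assuming the image was uploaded to SPIFFS as `lfs.img` (the file name is illustrative):
```Lua
-- On success node.flashreload() does not return: the loader reflashes the LFS
-- region and reboots the module. Any code after the call therefore only runs
-- if the image was rejected.
if file.exists("lfs.img") then
  node.flashreload("lfs.img")
  print("lfs.img was rejected: not a valid LFS image for this firmware")
else
  print("No LFS image found in SPIFFS")
end
```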
#### Edit your `init.lua` file
`init.lua` is the file that is first executed by the NodeMCU firmware. Usually it sets up the WiFi connection and executes the main Lua application. Assuming that you have included the `_init` file discussed above, then executing this will add a simple API for LFS module access:
- Individual functions can be executed directly, e.g. `LFS.myfunc(a,b)`
- LFS is now in the require path, so `require 'myModule'` works as expected.
Do a protected call of this `_init` code, `pcall(node.flashindex("_init"))`, and check the error status. See [Programming Techniques and Approaches](#programming-techniques-and-approaches) below for a more detailed description.
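For example, a bootstrap `init.lua` might end with something along these lines (a sketch only; the error handling you need will depend on your application):
```Lua
-- Protected call: if the LFS is empty or _init is missing, fall back gracefully
-- instead of panicking on a nil value.
local ok, err = pcall(node.flashindex("_init"))
if not ok then
  print("LFS _init not loaded: " .. tostring(err))
  -- e.g. trigger an LFS reload here, or continue with SPIFFS-only modules
end
```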
The remainder of this section describes how to use LFS in further detail.
### Selecting the firmware
Power developers might want to use Docker or their own build environment as per our [Building the firmware](https://nodemcu.readthedocs.io/en/master/en/build/) documentation, and so `app/include/user_config.h` has now been updated to include the necessary documentation on how to select the configuration options to make an LFS firmware build.
However, most Lua developers seem to prefer the convenience of our [Cloud Build Service](https://nodemcu-build.com/), so we have added extra LFS menu options to facilitate building LFS images:

Variable | Option
---------|------------
LFS size | (none, 32, 64, 96 or 128Kb) The default is none, in which case LFS is disabled. Selecting a numeric value enables LFS with the LFS region sized at this value.
SPIFFS base | If you have a 4Mb flash module then I suggest you choose the 1024Kb option, as this will preserve the SPIFFS even if you reflash with a larger firmware image; otherwise leave this at the default 0.
SPIFFS size | (default or various multiples of 64Kb) Choose the size that you need. Larger file systems require more time to format on first boot.

You must choose an explicit (non-default) LFS size to enable the use of LFS. Most developers find it more useful to work with a fixed SPIFFS size matched to their application requirements.
### Choosing your development life-cycle
The build environment for generating the firmware images is Linux-based, but you can still develop NodeMCU applications on pretty much any platform including Windows and MacOS, as you can use our cloud build service to generate these images. Unfortunately LFS images must be built off-ESP on a host platform, so you must be able to run the `luac.cross` cross compiler on your development machine to build LFS images.
- For Windows 10 developers, one method of achieving this is to install the [Windows Subsystem for Linux](https://en.wikipedia.org/wiki/Windows_Subsystem_for_Linux). The default installation uses the GNU `bash` shell and includes the core GNU utilities. WSL extends the NT kernel to support the direct execution of Linux ELF images, and it can directly run the `luac.cross` and `spiffsimg` that are built as part of the firmware. You will also need the `esptool.py` tool, but `python.org` already provides Python releases for Windows. Of course all Windows developers can use the Cygwin environment instead, as this runs on all Windows versions and it also takes up less than ½Gb of HDD (WSL takes up around 5Gb).
- Linux users can just use these tools natively. Windows users can also do this in a Linux VM or use our standard Docker image. Another alternative is to get yourself a Raspberry Pi or equivalent SBC and use a package like [DietPi](http://www.dietpi.com/), which makes it easy to install the OS, a web server and Samba and make the RPi look like a NAS to your PC. It is also straightforward to write a script to automatically recompile a Samba folder after updates and to make the LFS image available on the web service, so that your ESP modules can update themselves OTA using the new `HTTP_OTA.lua` example.
- In principle, the only environment component needed to support application development is `luac.cross`, built by the `app/lua/luac_cross` make. (Some developers might also use the `spiffsimg` executable, made in the `tools/spiffsimg` subdirectory.) Both of these components use the host toolchain (that is, the compiler and associated utilities) rather than the Xtensa cross-compiler toolchain, so it is straightforward to make them under any environment which provides POSIX runtime support, including WSL, MacOS and Cygwin.
Most Lua developers seem to start with the [ESPlorer](https://github.com/4refr0nt/ESPlorer) tool, a 'simple to use' IDE that enables beginning Lua developers to get started. However, ESPlorer can be slow and cumbersome for larger ESP applications, and it requires a direct UART connection. So many experienced Lua developers switch to a rapid development cycle where they use a development machine to maintain their master Lua source. Going this route allows you to use your favourite program editor and source control, with one of various techniques for compiling the Lua on-host and downloading the compiled code to the ESP:
- If you use a fixed SPIFFS image (I find 128Kb is enough for most of my applications) and are developing on a UART-attached ESP module, then you can recompile any LC files and the LFS image, then rebuild a SPIFFS file system image before loading it onto the ESP using `esptool.py`; if you script this you will find that this cycle takes less than a minute. You can either embed the LFS.img in the SPIFFS, or you can use the `luac.cross -a` option to build an absolute address format image that you can flash directly into the LFS region within the firmware.
- If you only need to update the Lua components, then you can work over-the-air (OTA). For example see my [HTTP_OTA.lua](https://github.com/nodemcu/nodemcu-firmware/tree/dev/lua_examples/lfs/HTTP_OTA.lua), which pulls a new LFS image from a web service and reloads it into the LFS region. This only takes seconds, so I often use this in preference to UART-attached loading.
- I now have an LFS-aware version of my LuaOTA provisioning system (see `lua_examples/luaOTA`). This handles all of the incremental compiling and LFS reloads transparently, and is typically integrated into the ESP application.
- Another option would be to include the FTP and Telnet modules in the base LFS image and to use Telnet and FTP to update your system. (Given that a 64Kb LFS can store thousands of lines of Lua, doing this isn't much of an issue.)
My current practice is to use a small bootstrap `init.lua` file in SPIFFS to connect to the WiFi, and also to load the `_init` module from LFS to do all of the actual application initialisation. There is a delay of a few seconds whilst connecting to the WiFi, and this delay also acts as a "just in case" window when I am developing: it is enough to allow me to paste a `file.remove('init.lua')` into the UART if my test application is stuck in a panic loop, or to set up a different development path for debugging.
Under rare circumstances, for example a power failure during the flashing process, the flash can be left in a part-written state following a `flashreload()`. The Lua RTS start-up sequence will detect this and take the failsafe option of resetting the LFS to empty, and if this happens then the LFS `_init` function will be unavailable. Your `init.lua` should therefore not assume that the LFS contains any modules (such as `_init`), and should contain logic to detect whether an LFS reset has occurred and, if necessary, reload the LFS again. Calling `node.flashindex("_init")()` directly will result in a panic loop in these circumstances. Therefore first check that `node.flashindex("_init")` returns a function, or protect the call, `pcall(node.flashindex("_init"))`, and decode the error status to validate that initialisation was successful.
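The following sketch shows the general shape of such a bootstrap `init.lua`; the SSID, password and timings are placeholders, and the LFS reload fallback is left as a stub for your own logic:
```Lua
local SSID, PWD = "my-ssid", "my-password"   -- placeholders

local function startApp()
  -- Protect the LFS access so that an empty or reset LFS cannot panic-loop the module
  local initFn = node.flashindex("_init")
  if type(initFn) == "function" then
    pcall(initFn)
  else
    print("LFS is empty or has been reset; reload an LFS image before continuing")
  end
end

wifi.setmode(wifi.STATION)
wifi.sta.config{ssid = SSID, pwd = PWD}
-- The few seconds needed to associate also give a window to paste
-- file.remove('init.lua') into the UART if the application misbehaves.
tmr.create():alarm(3000, tmr.ALARM_SINGLE, startApp)
```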
No doubt some standard use cases / templates will be developed by the community over the next six months.
### Programming Techniques and Approaches
I have found that moving code into LFS has changed my coding style, as I tend to use larger modules and I don't worry about in-memory code size. This makes it a lot easier to adopt a clearer coding style, so my ESP Lua code now looks more similar to host-based Lua code. Lua code can still be loaded from SPIFFS, so you still have the option to keep code under test in SPIFFS, and only move modules into LFS once they are stable.
#### Accessing LFS functions and loading LFS modules
See [lua_examples/lfs/_init.lua](https://github.com/nodemcu/nodemcu-firmware/tree/dev/lua_examples/lfs/_init.lua) for the code that I use in my `_init` module to create a simple access API for LFS. There are two parts to this.
The first sets up a table in the global variable `LFS` with the `__index` and `__newindex` metamethods. The main purpose of the `__index()` is to resolve any names against the LFS using a `node.flashindex()` call, so that `LFS.someFunc(params)` does exactly what you would expect it to do: this will call `someFunc` with the specified parameters, if it exists in the LFS. The LFS properties `_time`, `_config` and `_list` can be used to access the other LFS metadata that you need. See the code to understand what they do, but `LFS._list` is the array of all module names in the LFS. The `__newindex` method makes `LFS` read-only.
The second part uses standard Lua functionality to add the LFS to the require [package.loaders](http://pgl.yoyo.org/luai/i/package.loaders) list. (Read the link if you want more detail.) There are four standard loaders, which the require loader searches in turn. NodeMCU only uses the second of these (the Lua loader from the file system), and since loaders 1, 3 and 4 aren't used, we can simply replace the 1st or the 3rd with code that uses `node.flashindex()` to return the LFS module. The supplied `_init` puts the LFS loader at entry 3, so if a module is in both SPIFFS and LFS, then the SPIFFS version will be loaded. One result of this has burnt me during development: if there is an out-of-date version in SPIFFS, then it will still get loaded instead of the one in LFS.
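For illustration, here is a condensed sketch of those two parts, adapted from the `_init` example (the real module adds fuller error handling and comments):
```Lua
do
  local index = node.flashindex

  -- Part 1: a read-only LFS proxy table, so that LFS.someFunc(...) and the
  -- _time / _config / _list properties resolve against the flash store.
  local lfs_t = {
    __index = function(_, name)
      local fn_ut, ba, ma, size, modules = index(name)
      if not ba then return fn_ut                 -- a function stored in LFS
      elseif name == '_time' then return fn_ut
      elseif name == '_config' then
        local fs_ma, fs_size = file.fscfg()
        return {ba, ma, fs_ma, size, fs_size}
      elseif name == '_list' then return modules
      end
    end,
    __newindex = function(_, name)
      error("LFS is readonly. Invalid write to LFS." .. name, 2)
    end }
  rawset(getfenv(), 'LFS', setmetatable(lfs_t, lfs_t))

  -- Part 2: slot an LFS-aware loader into the require search list (entry 3).
  package.loaders[3] = function(module)  -- loader_flash
    local fn, ba = index(module)
    return ba and "Module not in LFS" or fn
  end
end
```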
If you want to swap this search order so that the LFS is searched first, then set `package.loaders[1] = loader_flash` in your `_init` code. If you need to swap the search order temporarily for development or debugging, then do this after you've run the `_init` code:
```Lua
do local pl = package.loaders; pl[1],pl[3] = pl[3],pl[1]; end
```
#### Moving common string constants into LFS
LFS is mainly used to store compiled modules, but it also includes its own string table, and any strings loaded into this can be used in your Lua application without taking any space in RAM. Hence, you might also want to preload any other frequently used strings into LFS, as this will both save RAM and reduce the Lua Garbage Collector (**LGC**) overheads.
The new debug function `debug.getstrings()` can help you determine what strings are worth adding to LFS. It takes an optional string argument `'RAM'` (the default) or `'ROM'`, and returns a list of the strings in the corresponding table. So the following example can be used to get a listing of the strings in RAM.
```Lua
do
  local a=debug.getstrings'RAM'
@ -208,12 +141,18 @@ do
end
```
You can do this at the interactive prompt or call it as a debug function during a running application in order to generate this string list (but note that calling this still creates the overhead of an array in RAM, so you do need to have enough "head room" to make the call).
You can then create a file, say `LFS_dummy_strings.lua`, and insert these `local preload` lines into it. By including this file in your `luac.cross` compile, the cross compiler will also include all strings referenced in this dummy module in the generated ROM string table. Note that you don't need to call this module; its inclusion in the LFS build is enough to add the strings to the ROM table. Once in the ROM table, you can use them subsequently in your application without incurring any RAM or LGC overhead.
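A hypothetical, much-abbreviated `LFS_dummy_strings.lua` might look like this (the real `dummy_strings.lua` example lists far more strings; the module is never executed, so the assignment simply forces the literals into the ROM string table):
```Lua
-- luac.cross will intern every literal below into the LFS ROM string table.
local preload = "_G", "__index", "__newindex", "ipairs", "pairs", "print",
  "table", "string", "format", "insert", "concat",
  "wifi", "sta", "config", "tmr", "create", "alarm"
```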
A useful starting point may be found in [lua_examples/lfs/dummy_strings.lua](https://github.com/nodemcu/nodemcu-firmware/tree/dev/lua_examples/lfs/dummy_strings.lua); this saves about 4Kb of RAM by moving a lot of common compiler and Lua VM strings into ROM.
Another good use of this technique is when you have resources such as CSS, HTML and JS fragments that you want to output over the internet. Instead of having lots of small resource files, you can just use string assignments in an LFS module and this will keep these constants in LFS instead.
## Technical issues
Whilst memory capacity isn't a material constraint on most conventional machines, the Lua RTS still includes some features to minimise overall memory usage. In particular:
- The more resource intensive data types are known as _collectable objects_, and the RTS includes an LGC which regularly scans these collectable resources to determine which are no longer in use, so that their associated memory can be reclaimed and reused.
@ -221,22 +160,24 @@ Whilst memory capacity isn't a material constraint on most conventional machines
The compiled code, as executed by the Lua RTS, internally comprises one or more function prototypes (which use a `Proto` structure type) plus their associated vectors (constants, instructions and metadata for debug). Most of these compiled constant types are basic (e.g. numbers) and the only collectable constant data type is strings. The other collectable types such as arrays are actually created at runtime by executing Lua compiled instructions to build each resource dynamically.
When any Lua file is loaded without LFS into an ESP application, the RTS loads the corresponding compiled version into RAM. Each compiled function has its own `Proto` structure hierarchy, but this hierarchy is not exposed directly to the running application; instead the compiler generates a `CLOSURE` instruction which is executed at runtime to bind the `Proto` to a Lua function value, thus creating a [closure](https://en.wikipedia.org/wiki/Closure_(computer_programming)). Since this occurs at runtime, any `Proto` can be bound to multiple closures. A Lua closure can also have multiple RW [Upvalues](https://www.lua.org/pil/27.3.3.html) bound to it, and so a function value is a Lua RW object in that it refers to something that can contain RW state, even though the `Proto` hierarchy itself is intrinsically RO.
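A small illustration of that last point (plain Lua, nothing LFS-specific): the two counters below share one compiled `Proto`, but each closure carries its own RW upvalue:
```Lua
-- Each call of makeCounter() executes a CLOSURE instruction that binds the
-- same compiled Proto to a new closure with its own upvalue n.
local function makeCounter()
  local n = 0
  return function() n = n + 1; return n end
end

local c1, c2 = makeCounter(), makeCounter()
print(c1())  --> 1
print(c1())  --> 2
print(c2())  --> 1  (independent state)
```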
Whilst advanced ESP Lua programmers can use overlay techniques to ensure that only active functions are loaded into RAM and thus increase the effective application size, this adds to runtime and program complexity. Moving Lua "program" resources into ESP flash-addressable memory typically at least doubles the effective RAM available, and removes the need to complicate application code by implementing overlaying.
Any RO resources that are relocated to a flash address space:
- Must not be collected. Also RW references to RO resources must be robustly handled by the LGC.
- Cannot reference any volatile RW data elements (though RW resources can refer to RO resources).
All strings in Lua are [interned](https://en.wikipedia.org/wiki/String_interning), so that only one copy of any string is kept in memory, and most string manipulation uses the address of this single copy as a unique reference. This uniqueness and the LGC of strings is facilitated by using a global string table that is hooked into the Lua global state. Within the standard Lua VM, any new string is first resolved against the RAM string table, so that only the string-misses are added to the string table.
The LFS patch adds a second RO string table in flash, and this contains all strings used in the LFS Protos. Maintaining integrity across the two string tables is simple and low-cost, with the LFS resolution process extended across both the RAM and ROM string tables. Hence any strings already in the ROM string table already have a unique string reference, avoiding the need to add an additional entry in the RAM table. This both significantly reduces the size of the RAM string table, and removes a lot of strings from the LGC scanning.
Note that my early development implementations of the LFS build process allowed on-target ESP builds, but I found that the Lua compiler was too resource hungry for usable application sizes, and it was impractical to get this approach to scale. So we abandoned this approach and moved the LFS build process onto the development host machine by embedding it into `luac.cross`. This approach also avoids all of the update integrity issues involved in building a new LFS which might require RO resources already referenced in the RW ones.
An LFS image can be loaded into the LFS store by one of two mechanisms:
- The image can be built on the host and then copied into SPIFFS. Calling the `node.flashreload()` API with this filename will load the image, and then schedule a restart to leave the ESP in normal application mode, but with an updated flash block. This sequence is essentially atomic: once called, and once the format of the LFS image has been validated, the only exit is the reboot.
- The second option is to build the LFS image using the `-a` option to base it at the correct absolute address of the LFS store for a given firmware image. The LFS can then be flashed to the ESP along with the firmware image.
@ -244,7 +185,7 @@ The LFS store is a fixed size for any given firmware build (configurable by the
A separate `node.flashindex()` function creates a new Lua closure based on a module loaded into LFS, and more specifically its flash-based prototype; whilst this access function is not transparent at a coding level, this is no different functionally than already having to handle `lua` and `lc` files and the existing range of load functions (`load`, `loadfile`, `loadstring`). Either way, creating a closure on a flash-based prototype is _fast_ in terms of runtime. (It is basically a single instruction rather than a compile, and it has minimal RAM impact.)
### Implementation details
This **LFS** patch uses two string tables: the standard Lua RAM-based table (`RWstrt`) and a second RO flash-based one (`ROstrt`). The `RWstrt` is searched first when resolving new string requests, and then the `ROstrt`. Any string not already in either table is then added to the `RWstrt`, so this means that the RAM-based string table only contains application strings that are not already defined in the `ROstrt`.
@ -260,7 +201,7 @@ Any Lua file compiled into the LFS image includes its main function prototype an
TString *source String name associated with source file
```
Such LFS images are created by `luac.cross` using the `-f` option, and this builds a flash image using the list of modules provided, but with a master "main" function of the form:
```Lua
local n = ...,1518283691 -- The Unix Time of the compile
@ -271,26 +212,28 @@ if n == "moduleN" then return module2 end
return 1518283691,"module1","module2", --[[ ... ]] "moduleN"
```
Note that you can't actually code this Lua because the modules are in separate compilation units, but the compiler, being a compiler, can just emit the compiled code directly. (See `app/lua/luac_cross/luac.c` for the details.)
The deep cross-copy of the compiled `Proto` hierarchy is also complicated because current hosts are typically 64-bit whereas the ESPs are 32-bit, so the structures need repacking. (See `app/lua/luac_cross/luac.c` for the details.)
This patch moves the `luac.cross` build into the overall application make hierarchy, and so it is now simply a part of the NodeMCU make. The old Lua script has been removed from the `tools` directory, together with the need to have Lua pre-installed on the host.
The LFS image is by default position independent, so it is independent of the actual NodeMCU target image. You just have to copy it to the target file system and execute a `flashreload`, and this copies the image from SPIFFS to the correct flash location, relocating all addresses to the correct base. (See `app/lua/lflash.c` for the details.) This process is fast.
A `luac.cross -a` option also allows absolute address images to be built, for flashing the LFS store directly onto the module during provisioning.
### Impact of the Lua Garbage Collector
The LGC applies to what the Lua VM classifies as collectable objects (strings, tables, functions, userdata, threads -- known collectively as `GCObjects`). A simple two "colour" LGC was used in previous Lua versions, but Lua 5.1 introduced Dijkstra's 3-colour (*white*, *grey*, *black*) variant, which enables the LGC to operate in an incremental mode. This permits smaller LGC steps interspersed by LGC pauses, and is very useful for larger scale Lua implementations. Whilst this is probably not really needed for IoT devices, NodeMCU follows this standard Lua 5.1 implementation, albeit with the `elua` EGC changes.
In fact, two *white* flavours are used to support incremental working (so this 3-colour algorithm really uses 4). All newly allocated collectable objects are marked as the current *white*, and a link in the `GCObject` header enables scanning through all such Lua objects. Collectable objects can be referenced directly or indirectly via one of the Lua application's *roots*: the global environment, the Lua registry and the stack.
The standard LGC algorithm is quite complex and assumes that all GCObjects are RW, so that a flag byte within each object can be updated during the mark and sweep processing. LFS introduces GCObjects that are stored in RO memory and are therefore truly RO.
The LFS patch therefore modifies the LGC processing to avoid such updates to GCObjects in RO memory, whilst still maintaining overall object integrity, as any attempt to update their content during LGC will result in the firmware crashing with a memory exception; the remainder of this section provides further detail on how this was achieved. The LGC operates in two broad phases: **mark** and **sweep**.
- The **mark** phase walks collectable objects by a recursive walk starting at the LGC roots. (This is referred to as _traverse_.) Any object that is visited in this walk has its colour flipped from *white* to *grey* to denote that it is in use, and it is relinked into a grey list. The grey list is iteratively processed, removing one grey object at a time. Such objects can reference other objects (e.g. a table has many keys and values which can also be collectable objects), so each one is then also traversed and all objects reachable from it are marked, as above. After an object has been traversed, it is turned from grey to black. The LGC walks all RW collectable objects, traversing the dependents of each in turn. As RW objects can now refer to RO ones, the traverse routines have additional tests to skip trying to mark any RO LFS references.
- The white flavour is flipped just before entering the **sweep** phase. This phase then loops over all collectable objects. Any objects found with the previous white are no longer in use, and so can be freed. The 'current' white ones are kept; this prevents any new objects created during a paused sweep from being accidentally collected before being marked, but it means that it takes two sweeps to free all unused objects. There are other subtleties introduced in this 3-colour algorithm, such as barriers and back-tracking to maintain the integrity of the LGC, and these also needed extra rules to handle RO GCObjects correctly, but a detailed explanation of these is really outside the scope of this paper.
As well as standard collectable GCObjects:
@ -306,12 +249,13 @@ As far as the LGC algorithm is concerned, encountering any _flash_ object in a s
### General comments
- **Reboot implementation**. Whilst the application-initiated LFS reload might seem an overhead, it typically only adds a few seconds per reboot.
- **LGC reduction**. Since the cost of LGC is directly related to the size of the LGC sweep lists, moving RO resources into LFS memory removes them from the LGC scope and therefore reduces LGC runtime accordingly.
- **Typical use case**. The rebuilding of a store is an occasional step in the development cycle (say up to 10-20 times a day in a typical intensive development process). Modules and source files under development can also be executed from SPIFFS in `.lua` format. The developer is free to reorder the `package.loaders` and load any SPIFFS files in preference to Flash ones. And if stable code is moved into Flash, then there is little to be gained in storing development Lua code in SPIFFS in `lc` compiled format.
- **Flash caching coherency**. The ESP chipset employs hardware-enabled caching of the `ICACHE_FLASH` address space, and writing to the flash does not flush this cache. However, in this restart model the CPU is always restarted before any updates are read programmatically, so this (lack of) coherence isn't an issue.
- **Failsafe reversion**. Since the entire image is precompiled and validated before loading into LFS, the chances of failure during reload are small. The loader uses the flash NAND rules to write the flash header flag in two parts: one at the start of the load and again at the end. If on reboot the flag is in an inconsistent state, then the LFS is cleared and disabled until the next reload.

View File

@ -0,0 +1,77 @@
--
-- If you have the LFS _init loaded then you invoke the provision by
-- executing LFS.HTTP_OTA('your server','directory','image name'). Note
-- that this transfer is unencrypted and unsigned. But the loader does validate that
-- the image file is a valid and complete LFS image before loading.
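-- For example (hypothetical host and path values):
--   LFS.HTTP_OTA('example.com','/nodemcu/','lfs.img')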
--
local host, dir, image = ...
local doRequest, firstRec, subsRec, finalise
local n, total, size = 0, 0
doRequest = function(sk,hostIP)
if hostIP then
local con = net.createConnection(net.TCP,0)
con:connect(80,hostIP)
-- Note that the current dev version can only accept uncompressed LFS images
con:on("connection",function(sck)
local request = table.concat( {
"GET "..dir..image.." HTTP/1.1",
"User-Agent: ESP8266 app (linux-gnu)",
"Accept: application/octet-stream",
"Accept-Encoding: identity",
"Host: "..host,
"Connection: close",
"", "", }, "\r\n")
print(request)
sck:send(request)
sck:on("receive",firstRec)
end)
end
end
firstRec = function (sck,rec)
-- Process the headers; only interested in content length
local i = rec:find('\r\n\r\n',1,true) or 1
local header = rec:sub(1,i+1):lower()
size = tonumber(header:match('\ncontent%-length: *(%d+)\r') or 0)
print(rec:sub(1, i+1))
if size > 0 then
sck:on("receive",subsRec)
file.open(image, 'w')
subsRec(sck, rec:sub(i+4))
else
sck:on("receive", nil)
sck:close()
print("GET failed")
end
end
subsRec = function(sck,rec)
total, n = total + #rec, n + 1
if n % 4 == 1 then
sck:hold()
node.task.post(0, function() sck:unhold() end)
end
uart.write(0,('%u of %u, '):format(total, size))
file.write(rec)
if total == size then finalise(sck) end
end
finalise = function(sck)
file.close()
sck:on("receive", nil)
sck:close()
local s = file.stat(image)
if (s and size == s.size) then
wifi.setmode(wifi.NULLMODE)
collectgarbage();collectgarbage()
-- run as separate task to maximise RAM available
node.task.post(function() node.flashreload(image) end)
else
print"Invalid save of image file"
end
end
net.dns.resolve(host, doRequest)

View File

@ -66,10 +66,10 @@ G.LFS = setmetatable(lfs_t,lfs_t)
---------------------------------------------------------------------------------]]
package.loaders[3] = function(module) -- loader_flash
  local fn, ba = index(module)
  return ba and "Module not in LFS" or fn
end
--[[-------------------------------------------------------------------------------
You can add any other initialisation here, for example a couple of the globals