README 'spiffsimg' tool: fix broken link (#3430)

Fixing several typos
Andreas Deininger 2021-05-10 22:18:11 +02:00 committed by GitHub
parent 136e09739b
commit d4ae3c364b
8 changed files with 14 additions and 14 deletions

View File

@@ -11,7 +11,7 @@
* under the standard NodeMCU MIT licence, but is available to the other
* contributors to this source under any permissive licence.
*
-* My primary algorthmic reference is RFC 1951: "DEFLATE Compressed Data
+* My primary algorithmic reference is RFC 1951: "DEFLATE Compressed Data
* Format Specification version 1.3", dated May 1996.
*
* Also because the code in this module is drawn from different sources,
@@ -161,7 +161,7 @@ struct outputBuf {
* Set up the constant tables used to drive the compression
*
* Constants are stored in flash memory on the ESP8266 NodeMCU firmware
-* builds, but only word aligned data access are supported in hardare so
+* builds, but only word aligned data access are supported in hardware so
* short and byte accesses are handled by a S/W exception handler and are
* SLOW. RAM is also at a premium, so these short routines are driven by
* byte vectors copied into RAM and then used to generate temporary RAM

View File

@@ -525,7 +525,7 @@ static int uncompress_stream (UZLIB_DATA *d) {
}
/*
-* This implementation has a different usecase to Paul Sokolovsky's
+* This implementation has a different use case to Paul Sokolovsky's
* uzlib implementation, in that it is designed to target IoT devices
* such as the ESP8266. Here clarity and compact code size is an
* advantage, but the ESP8266 only has 40-45Kb free heap, and has to
@@ -533,14 +533,14 @@ static int uncompress_stream (UZLIB_DATA *d) {
* implementation is essential.
*
* I have taken the architectural decision to hide the implementation
-* detials from the uncompress routines and the caller must provide
+* details from the uncompress routines and the caller must provide
* three support routines to handle the streaming:
*
* void get_byte(void)
* void put_byte(uchar b)
* uchar recall_byte(uint offset)
*
-* This last must be able to recall an output byte with an offet up to
+* This last must be able to recall an output byte with an offset up to
* the maximum dictionary size.
*/

View File

@@ -82,7 +82,7 @@ Essentially testing any eLua compiler or runtime changes are a total pain, becau
I tested my patch in standard Lua built with "make generic" and against the [Lua 5.1 suite](http://lua-users.org/lists/lua-l/2006-03/msg00723.html). The test suite was an excellent testing tool, and it revealed a number of cases that exposed logic flaws in my approach, resulting from Lua not carrying out inline status testing but instead implementing a throw / catch strategy. In fact I realised that I had to redesign the vector generation algorithm to handle this robustly.
-As with all eLua builds the patch assumes Lua will not be executing in a multithreaded environment with OS threads running different lua_States. (This is also the case for the NodeMCU firmware). It executes the full test suite cleanly as maximum test levels and I also added some specific tests to cover new **stripdebug** usecases.
+As with all eLua builds the patch assumes Lua will not be executing in a multithreaded environment with OS threads running different lua_States. (This is also the case for the NodeMCU firmware). It executes the full test suite cleanly as maximum test levels and I also added some specific tests to cover new **stripdebug** use cases.
Once this testing was completed, I then ported the patch to the NodeMCU build. This was pretty straightforward as this code is essentially independent of the NodeMCU functional changes. The only real issue was to ensure that the NodeMCU `c_strlen()` calls replaced the standard `strlen()`, etc.

View File

@@ -183,7 +183,7 @@ Another good use of this technique is when you have resources such as CSS, HTML
- Linux users can just use these tools natively. Windows users can also do this in a Linux VM or use our standard Docker image. Another alternative is to get yourself a Raspberry Pi or equivalent SBC and use a package like [DietPi](http://www.dietpi.com/) which makes it easy to install the OS, a Webserver and Samba and make the RPi look like a NAS to your PC. It is also straightforward to write a script to automatically recompile a Samba folder after updates and to make the LFS image available on the webservice so that your ESP modules can update themselves OTA using the new `HTTP_OTA.lua` example.
-- In principle, only the environment component needed to support application development is `luac.cross`, built by the `app/lua/lua_cross` make. (Some developers might also use the `spiffsimg` exectable, made in the `tools/spifsimg` subdirectory). Both of these components use the host toolchain (that is the compiler and associated utilities), rather than the Xtensa cross-compiler toolchain, so it is therefore straightforward to make under any environment which provides POSIX runtime support, including WSL, MacOS and Cygwin.
+- In principle, only the environment component needed to support application development is `luac.cross`, built by the `app/lua/lua_cross` make. (Some developers might also use the `spiffsimg` executable, made in the `tools/spifsimg` subdirectory). Both of these components use the host toolchain (that is the compiler and associated utilities), rather than the Xtensa cross-compiler toolchain, so it is therefore straightforward to make under any environment which provides POSIX runtime support, including WSL, MacOS and Cygwin.
Most Lua developers seem to start with the [ESPlorer](https://github.com/4refr0nt/ESPlorer) tool, a 'simple to use' IDE that enables beginning Lua developers to get started. ESPlorer can be slow and cumbersome for larger ESP applications, and it requires a direct UART connection. So many experienced Lua developers switch to a rapid development cycle where they use a development machine to maintain their master Lua source. Going this route will allow you to use your favourite program editor and source control, with one of various techniques for compiling the Lua on-host and downloading the compiled code to the ESP:
@@ -198,7 +198,7 @@ My current practice is to use a small bootstrap `init.lua` file in SPIFFS to con
Under rare circumstances, for example a power failure during the flashing process, the flash can be left in a part-written state following a `flashreload()`. The Lua RTS start-up sequence will detect this and take the failsafe option of resetting the LFS to empty, and if this happens then the LFS `_init` function will be unavailable. Your `init.lua` should therefore not assume that the LFS contains any modules (such as `_init`), and should contain logic to detect if an LFS reset has occurred and if necessary reload the LFS again. Calling `node.flashindex("_init")()` directly will result in a panic loop in these circumstances. Therefore first check that `node.flashindex("_init")` returns a function, or protect the call, `pcall(node.flashindex("_init"))`, and decode the error status to validate that initialisation was successful, as sketched below.
-No doubt some standard usecase / templates will be developed by the community over the next six months.
+No doubt some standard use case / templates will be developed by the community over the next six months.
A LFS image can be loaded into the LFS store by one of two mechanisms: either during provisioning of the initial firmware image, or programmatically at runtime, as discussed further in [Compiling and Loading LFS Images](#compiling-and-loading-lfs-images) below.
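A minimal Lua sketch of the protective start-up check described above, assuming the module is named `_init` as in the text; the fallback messages and the `node.flashreload()` comment are illustrative only:

```lua
-- init.lua bootstrap sketch: never assume the LFS is populated.
local init = node.flashindex and node.flashindex("_init")
if type(init) == "function" then
  -- Protect the call so a broken or half-written LFS cannot cause a panic loop.
  local ok, err = pcall(init)
  if not ok then
    print("LFS _init failed: " .. tostring(err))
    -- Recovery (e.g. re-provisioning the image via node.flashreload()) would go here.
  end
else
  print("LFS empty or reset; reload an LFS image before using its modules")
end
```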
@@ -219,5 +219,5 @@ A separate `node.flashindex()` function creates a new Lua closure based on a mod
- **Flash caching coherency**. The ESP chipset employs hardware enabled caching of the `ICACHE_FLASH` address space, and writing to the flash does not flush this cache. However, in this restart model, the CPU is always restarted before any updates are read programmatically, so this (lack of) coherence isn't an issue.
-- **Failsafe reversion**. Since the entire image is precompiled and validated before loading into LFS, the chances of failure during reload are small. The loader uses the Flash NAND rules to write the flash header flag in two parts: one at start of the load and again at the end. If on reboot, the flag in on incostent state, then the LFS is cleared and disabled until the next reload.
+- **Failsafe reversion**. Since the entire image is precompiled and validated before loading into LFS, the chances of failure during reload are small. The loader uses the Flash NAND rules to write the flash header flag in two parts: one at start of the load and again at the end. If on reboot, the flag is in an inconsistent state, then the LFS is cleared and disabled until the next reload.

View File

@@ -275,7 +275,7 @@ If you are used coding in a procedural paradigm then it is understandable that y
If you look at the `app/modules/tmr.c` code for this function, then you will see that it executes a low level `ets_delay_us(delay)`. This function isn't part of the NodeMCU code or the SDK; it's actually part of the xtensa-lx106 boot ROM, and is a simple timing loop which polls against the internal CPU clock. `tmr.delay()` is really intended to be used where you need to have more precise timing control on an external hardware I/O (e.g. lifting a GPIO pin high for 20 μSec, as sketched below). It does this with interrupts enabled, so there is no guarantee that the delay will be as requested, and the Lua RTS itself may inject operations such as GC, so if you need this level of precise control then you should encode your application as a C library.
-It will achieve no functional purpose in pretty much every other usecase, as any other system code-based activity will be blocked from execution; at worst it will break your application and create hard-to-diagnose timeout errors. We therefore deprecate its general use.
+It will achieve no functional purpose in pretty much every other use case, as any other system code-based activity will be blocked from execution; at worst it will break your application and create hard-to-diagnose timeout errors. We therefore deprecate its general use.
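A short sketch of the one legitimate pattern mentioned above, a brief precise pulse on an output pin; the pin number is an assumed example, and everything else belongs in timer callbacks:

```lua
-- Only for short, precise hardware pulses; this busy-waits and blocks all other activity.
local pin = 4                       -- assumed example GPIO index
gpio.mode(pin, gpio.OUTPUT)
gpio.write(pin, gpio.HIGH)
tmr.delay(20)                       -- roughly 20 µs, with interrupts still enabled
gpio.write(pin, gpio.LOW)
```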
### How do I avoid a PANIC loop in init.lua?

View File

@@ -122,7 +122,7 @@ The same Lua51 ROTable functionality and limitations also apply to Lua53 in orde
### Proto Structures
-Standard Lua 5.3 contains a new peep hole optimisation relating to closures: the Proto structure now contains one RW field pointing to the last closure created, and the GC adopts a lazy approach to recovering these closures. When a new closure is created, if the old one exists _and the upvals are the same_ then it is reused instead of creating a new one. This allows peephole optimisation of a usecase where a function closure is embedded in a do loop, so the higher cost closure creation is done once rather than `n` times.
+Standard Lua 5.3 contains a new peep hole optimisation relating to closures: the Proto structure now contains one RW field pointing to the last closure created, and the GC adopts a lazy approach to recovering these closures. When a new closure is created, if the old one exists _and the upvals are the same_ then it is reused instead of creating a new one. This allows peephole optimisation of a use case where a function closure is embedded in a do loop, so the higher cost closure creation is done once rather than `n` times.
This reduces runtime at the cost of RAM overhead. However, for RAM-limited IoTs this change introduced two major issues: first, LFS relies on Protos being read-only and this RW `cache` field breaks this assumption; second, closures can now exist past their lifetime, and this delays their GC. Memory constrained NodeMCU applications rely on the fact that dead closed upvals can be GCed once the closure is complete. This optimisation changes this behaviour. Not good.
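A small Lua illustration (the names are mine) of the loop-embedded closure pattern that this Lua 5.3 cache targets: every iteration requests a closure over the same upvalue, so standard Lua 5.3 can hand back the cached instance instead of allocating `n` separate closures:

```lua
local function make_handlers(n)
  local log = print                 -- the single shared upvalue
  local handlers = {}
  for i = 1, n do
    -- Same Proto and same upvalue each time round the loop, so the RW cache
    -- field lets standard Lua 5.3 reuse the previously created closure.
    handlers[i] = function(msg) log(msg) end
  end
  return handlers
end
```

On NodeMCU the read-only LFS Protos cannot carry that RW cache field, which is the first of the two issues noted above.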

View File

@@ -84,7 +84,7 @@ file system will start on a 64k boundary. A newly formatted file system will sta
system will survive lots of reflashing and at least 64k of firmware growth.
The standard build process for the firmware builds the `spiffsimg` tool (found in the `tools/spiffsimg` subdirectory).
-The top level Makfile also checks if
+The top level Makefile also checks if
there is any data in the `local/fs` directory tree, and it will then copy these files
into the flash disk image. Two images will normally be created -- one for the 512k flash part and the other for the 4M flash part. If the data doesn't
fit into the 512k part after the firmware is included, then the file will not be generated.

View File

@@ -2,6 +2,6 @@
Ever wished you could prepare a SPIFFS image offline and flash the whole
thing onto your microprocessor's storage instead of painstakingly uploading
-file-by-file through your app on the micro? With spiffsimg you can!
+file-by-file through your app on the micro? With `spiffsimg` you can!
-For the full gory details see [spiffs.md](../../docs/en/spiffs.md)
+For the full gory details see [spiffs.md](../../docs/spiffs.md)