README 'spiffsimg' tool: fix broken link (#3430)

Fixing several typos
Andreas Deininger 2021-05-10 22:18:11 +02:00 committed by GitHub
parent 136e09739b
commit d4ae3c364b
8 changed files with 14 additions and 14 deletions

@@ -11,7 +11,7 @@
 * under the standard NodeMCU MIT licence, but is available to the other
 * contributors to this source under any permissive licence.
 *
-* My primary algorthmic reference is RFC 1951: "DEFLATE Compressed Data
+* My primary algorithmic reference is RFC 1951: "DEFLATE Compressed Data
 * Format Specification version 1.3", dated May 1996.
 *
 * Also because the code in this module is drawn from different sources,
@@ -161,7 +161,7 @@ struct outputBuf {
 * Set up the constant tables used to drive the compression
 *
 * Constants are stored in flash memory on the ESP8266 NodeMCU firmware
-* builds, but only word aligned data access are supported in hardare so
+* builds, but only word aligned data access are supported in hardware so
 * short and byte accesses are handled by a S/W exception handler and are
 * SLOW. RAM is also at premium, so these short routines are driven by
 * byte vectors copied into RAM and then used to generate temporary RAM
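As background for the comment above, here is a minimal sketch of the word-aligned access pattern it describes; the helper name and masking details are illustrative assumptions rather than the module's actual code. On the ESP8266 only aligned 32-bit loads from mapped flash are serviced directly by the hardware, so fetching the enclosing word and shifting out the wanted byte avoids the slow software exception path.

```c
#include <stdint.h>

/* Illustrative only: read one byte from memory-mapped flash using a single
 * aligned 32-bit load instead of a byte load (which would trap into the
 * slow S/W exception handler on the ESP8266).  Assumes little-endian byte
 * order, as on the Xtensa core. */
static uint8_t flash_read_byte(const uint8_t *addr) {
  const uint32_t *word = (const uint32_t *)((uintptr_t)addr & ~(uintptr_t)3); /* align down to a word boundary */
  uint32_t v = *word;                                    /* one aligned 32-bit load   */
  return (uint8_t)(v >> (((uintptr_t)addr & 3) * 8));    /* extract the requested byte */
}
```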

@@ -533,14 +533,14 @@ static int uncompress_stream (UZLIB_DATA *d) {
 * implementation is essential.
 *
 * I have taken the architectural decision to hide the implementation
-* detials from the uncompress routines and the caller must provide
+* details from the uncompress routines and the caller must provide
 * three support routines to handle the streaming:
 *
 * void get_byte(void)
 * void put_byte(uchar b)
 * uchar recall_byte(uint offset)
 *
-* This last must be able to recall an output byte with an offet up to
+* This last must be able to recall an output byte with an offset up to
 * the maximum dictionary size.
 */
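To make the interface described in the corrected comment concrete, below is a minimal sketch of the three streaming hooks using stdio and a ring buffer for the dictionary window. The names, signatures, buffer size and file-backed I/O are assumptions for illustration, not the firmware's actual implementation; `recall_byte()` simply has to return a byte that was already emitted, up to one DEFLATE window (32 KB) back.

```c
#include <stdint.h>
#include <stdio.h>

#define DICT_SIZE 32768u                 /* maximum DEFLATE look-back distance     */

static uint8_t  window[DICT_SIZE];       /* ring buffer of recently emitted bytes  */
static uint32_t out_count;               /* total bytes emitted so far             */
static FILE    *in_file, *out_file;      /* hypothetical stream endpoints          */

static uint8_t get_byte(void) {          /* supply the next compressed input byte  */
  return (uint8_t)fgetc(in_file);
}

static void put_byte(uint8_t b) {        /* emit one uncompressed output byte      */
  fputc(b, out_file);
  window[out_count++ % DICT_SIZE] = b;   /* keep it for later back-references      */
}

static uint8_t recall_byte(uint32_t offset) {
  /* offset counts back from the last byte written; assumed <= bytes emitted
   * and <= DICT_SIZE, as is the case for valid compressed data. */
  return window[(out_count - offset) % DICT_SIZE];
}
```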

@@ -183,7 +183,7 @@ Another good use of this technique is when you have resources such as CSS, HTML
 - Linux users can just use these tools natively. Windows users can also to do this in a linux VM or use our standard Docker image. Another alternative is to get yourself a Raspberry Pi or equivalent SBC and use a package like [DietPi](http://www.dietpi.com/) which makes it easy to install the OS, a Webserver and Samba and make the RPi look like a NAS to your PC. It is also straightforward to write a script to automatically recompile a Samba folder after updates and to make the LFS image available on the webservice so that your ESP modules can update themselves OTA using the new `HTTP_OTA.lua` example.
-- In principle, only the environment component needed to support application development is `luac.cross`, built by the `app/lua/lua_cross` make. (Some developers might also use the `spiffsimg` exectable, made in the `tools/spifsimg` subdirectory). Both of these components use the host toolchain (that is the compiler and associated utilities), rather than the Xtensa cross-compiler toolchain, so it is therefore straightforward to make under any environment which provides POSIX runtime support, including WSL, MacOS and Cygwin.
+- In principle, only the environment component needed to support application development is `luac.cross`, built by the `app/lua/lua_cross` make. (Some developers might also use the `spiffsimg` executable, made in the `tools/spifsimg` subdirectory). Both of these components use the host toolchain (that is the compiler and associated utilities), rather than the Xtensa cross-compiler toolchain, so it is therefore straightforward to make under any environment which provides POSIX runtime support, including WSL, MacOS and Cygwin.
 Most Lua developers seem to start with the [ESPlorer](https://github.com/4refr0nt/ESPlorer) tool, a 'simple to use' IDE that enables beginning Lua developers to get started. ESPlorer can be slow cumbersome for larger ESP application, and it requires a direct UART connection. So many experienced Lua developers switch to a rapid development cycle where they use a development machine to maintain your master Lua source. Going this route will allow you use your favourite program editor and source control, with one of various techniques for compiling the lua on-host and downloading the compiled code to the ESP:
@@ -219,5 +219,5 @@ A separate `node.flashindex()` function creates a new Lua closure based on a mod
 - **Flash caching coherency**. The ESP chipset employs hardware enabled caching of the `ICACHE_FLASH` address space, and writing to the flash does not flush this cache. However, in this restart model, the CPU is always restarted before any updates are read programmatically, so this (lack of) coherence isn't an issue.
-- **Failsafe reversion**. Since the entire image is precompiled and validated before loading into LFS, the chances of failure during reload are small. The loader uses the Flash NAND rules to write the flash header flag in two parts: one at start of the load and again at the end. If on reboot, the flag in on incostent state, then the LFS is cleared and disabled until the next reload.
+- **Failsafe reversion**. Since the entire image is precompiled and validated before loading into LFS, the chances of failure during reload are small. The loader uses the Flash NAND rules to write the flash header flag in two parts: one at start of the load and again at the end. If on reboot, the flag is in an inconsistent state, then the LFS is cleared and disabled until the next reload.
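As an aside on the failsafe mechanism mentioned in the hunk above: under NAND rules a flash word can only have bits cleared (1 to 0) without a sector erase, so a loader can record progress by clearing one bit when a reload starts and another when it completes. The flag values and names below are purely hypothetical, sketched to show the idea rather than the actual loader code.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical two-phase progress flag, exploiting the NAND write rule that
 * bits can only go 1 -> 0 without an erase. */
#define LFS_FLAG_ERASED   0xFFFFFFFFu    /* sector erased: no image present      */
#define LFS_FLAG_LOADING  0xFFFFFFFEu    /* bit 0 cleared at the start of a load */
#define LFS_FLAG_LOADED   0xFFFFFFFCu    /* bit 1 also cleared when load is done */

static bool lfs_flag_consistent(uint32_t flag) {
  /* Anything other than "no image" or "fully loaded" means the reload was
   * interrupted, so the LFS should be cleared and disabled until reloaded. */
  return flag == LFS_FLAG_ERASED || flag == LFS_FLAG_LOADED;
}
```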

@@ -84,7 +84,7 @@ file system will start on a 64k boundary. A newly formatted file system will sta
 system will survive lots of reflashing and at least 64k of firmware growth.
 The standard build process for the firmware builds the `spiffsimg` tool (found in the `tools/spiffsimg` subdirectory).
-The top level Makfile also checks if
+The top level Makefile also checks if
 there is any data in the `local/fs` directory tree, and it will then copy these files
 into the flash disk image. Two images will normally be created -- one for the 512k flash part and the other for the 4M flash part. If the data doesn't
 fit into the 512k part after the firmware is included, then the file will not be generated.

@@ -2,6 +2,6 @@
 Ever wished you could prepare a SPIFFS image offline and flash the whole
 thing onto your microprocessor's storage instead of painstakingly upload
-file-by-file through your app on the micro? With spiffsimg you can!
+file-by-file through your app on the micro? With `spiffsimg` you can!
-For the full gory details see [spiffs.md](../../docs/en/spiffs.md)
+For the full gory details see [spiffs.md](../../docs/spiffs.md)