| Commit message | Author | Age | Files | Lines |
... | |
|
|
|
|
|
|
|
|
| |
Create a bios.py file to hold all the BIOS-specific functions.
Implement _boot_bios_linux in Python. The new boot process
tries to find the vmlinuz and initrd binaries on the desired
partition, then loads them via kexec with the proper
GRUB boot parameters.
One step closer to the removal of the legacy boot script.
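A minimal sketch of what the kexec-based boot can look like, assuming a
hypothetical helper name and vmlinuz/initrd locations; the real logic
lives in bios.py and derives the command line from the GRUB
configuration:

    import subprocess

    def _boot_bios_linux_sketch(mountpoint, root_dev, kernel_args):
        # Kernel and initrd paths are assumptions; the real code searches
        # the target partition for the right binaries.
        vmlinuz = f'{mountpoint}/boot/vmlinuz'
        initrd = f'{mountpoint}/boot/initrd.img'
        cmdline = f'root={root_dev} {kernel_args}'
        # Stage kernel, initrd and command line with kexec...
        subprocess.run(['kexec', '-l', vmlinuz,
                        f'--initrd={initrd}',
                        f'--command-line={cmdline}'], check=True)
        # ...and jump into the loaded kernel.
        subprocess.run(['kexec', '-e'], check=True)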
|
|
|
|
|
|
| |
The image restore command must check that the cache partition is
available. Otherwise, if the user forgets to create the cache,
tiptorrent fails.
|
|
|
|
|
|
| |
The image creation process was being interrupted by an error when the
Hivex library tried to read the Windows registry.
Now the exceptions are handled and an error is reported.
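A sketch of the hardened registry read, assuming the python3-hivex
binding and an illustrative helper name; any failure is logged instead
of aborting the whole image creation:

    import logging
    import hivex

    def _open_windows_hive(hive_path):
        try:
            return hivex.Hivex(hive_path)
        except Exception:
            # Report the error and let the caller continue without registry data.
            logging.exception('cannot read Windows registry hive %s', hive_path)
            return None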
|
|
|
|
|
|
|
| |
The OS probe logic must be able to check a distro programmatically;
add get_linux_distro_id to return an id without versioning.
This allows checking for 'ubuntu' when certain features must only be
used with a supported system.
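A sketch of what get_linux_distro_id() can look like, assuming it
parses the ID= field of os-release; the signature and the None fallback
are assumptions:

    def get_linux_distro_id(os_release_path='/etc/os-release'):
        try:
            with open(os_release_path) as f:
                for line in f:
                    if line.startswith('ID='):
                        # Unversioned distro id, e.g. 'ubuntu' or 'fedora'.
                        return line.split('=', 1)[1].strip().strip('"').lower()
        except OSError:
            return None
        return None

A caller can then gate Ubuntu-only features with a simple equality
check against 'ubuntu'.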
|
|
|
|
|
|
|
|
| |
This change is preparatory work for reimplementing the BIOS boot
in order to deprecate the legacy script. All the codepaths that
boot systems located on a partition are now called from the
boot_os_at function, enabling an easier structure for the incoming
code.
|
|
|
|
|
|
|
| |
Do not rely on the existence of /sys/firmware/efi, as it might
sometimes appear in BIOS installs if the BIOS configuration is not
proper. Checking for the EFI partition is the safest method to
verify the install type.
|
|
|
|
|
|
| |
The function getlinuxversion receives a path to the os-release
file. The case of not being able to open it was not handled, which
caused an unwanted exception.
|
|
|
|
|
| |
Log each partition that gets checked and make the exception messages
more informative.
|
|
|
|
| |
The Debian package with JSON support provides the binary through this path; update it.
|
|
|
|
|
|
| |
The JSON functionality proposed upstream might one day be merged
into efibootmgr, so deploying a fork would no longer be needed.
This change aims to ease the migration once that day comes.
|
|
|
|
|
|
| |
Replace the IniciarSesion script with native Python code when booting
a UEFI system into Linux. This completes the implementation of booting
into an OS on a UEFI-compliant system.
|
|
|
|
|
|
|
|
| |
Replace the IniciarSesion script with native Python code when booting
a UEFI system. This applies when running the "session" command.
WIP: only Windows systems are booted via UEFI for now; trying to boot
a Linux system using UEFI raises a NotImplementedError exception.
|
|
|
|
|
|
|
|
|
|
| |
Add a utility module related to the process of booting a system from a
client's partition.
The main utility function to boot a client's system is boot_os_at(), from
which firmware- (UEFI or BIOS) and OS-family-specific private functions are invoked.
This initial commit adds the UEFI Windows boot function.
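A sketch of the dispatch described above, with placeholder boot
routines and an assumed signature; the real boot_os_at() probes the
firmware and OS family itself:

    import logging

    # Placeholder boot routines; the real ones wrap efibootmgr/kexec calls.
    def _boot_uefi_windows(disk, partition):
        logging.info('booting Windows via UEFI, disk %s partition %s', disk, partition)

    def _boot_bios_linux(disk, partition):
        logging.info('booting Linux via BIOS/kexec, disk %s partition %s', disk, partition)

    def boot_os_at(disk, partition, os_family='windows', uefi=True):
        # Pick the firmware- and OS-family specific private boot routine.
        if uefi and os_family == 'windows':
            return _boot_uefi_windows(disk, partition)
        if not uefi and os_family == 'linux':
            return _boot_bios_linux(disk, partition)
        raise NotImplementedError('unsupported firmware/OS-family combination')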
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add UEFI-related utilities inside a new utility module: uefi.py
_check_efibootmgr_json
======================
Check if the system efibootmgr executable supports JSON output. This is
a private function used only by other functions from uefi.py.
is_uefi_supported
=================
Check if the system supports UEFI firmware.
run_efibootmgr_json
===================
Run efibootmgr with JSON output support. Return the JSON output as a
Python dict.
efibootmgr_create_bootentry
===========================
Create an NVRAM boot entry. This boot entry is usually set afterwards
to boot next just once via the "BootNext" NVRAM variable.
efibootmgr_delete_bootentry
===========================
Delete an NVRAM boot entry. Used to avoid duplicates when booting the
same disk and partition from a given client.
efibootmgr_bootnext
===================
Set the NVRAM "BootNext" variable to a given boot entry so that after
the client reboots, PXE is not executed and the given boot entry takes
precedence.
Add a dependency on efibootmgr version >= 18 and on the efibootmgr JSON
output support, which is currently out of tree from the util-linux repo.
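A sketch of the two probing helpers, assuming the patched efibootmgr
exposes a --json flag (the exact option name may differ from the real
fork):

    import json
    import subprocess

    def run_efibootmgr_json():
        # Run efibootmgr with JSON output and return it as a Python dict.
        proc = subprocess.run(['efibootmgr', '--json'],
                              stdout=subprocess.PIPE, text=True, check=True)
        return json.loads(proc.stdout)

    def _check_efibootmgr_json():
        # True if the installed efibootmgr supports JSON output.
        try:
            run_efibootmgr_json()
        except (OSError, subprocess.CalledProcessError, json.JSONDecodeError):
            return False
        return True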
|
|
|
|
|
|
|
|
| |
Add a basic OS family enumeration: OSFamily.
Add a utility function that probes for an installed Linux or Windows
system and returns the corresponding enum value, or OSFamily.UNKNOWN
otherwise.
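A sketch of the enumeration and of a marker-based probe, assuming the
partition is already mounted; the marker paths and the signature are
assumptions:

    import os
    from enum import Enum

    class OSFamily(Enum):
        WINDOWS = 'windows'
        LINUX = 'linux'
        UNKNOWN = 'unknown'

    def get_os_family(mountpoint):
        # Look for well-known markers of each OS family on the mounted partition.
        if os.path.exists(os.path.join(mountpoint, 'Windows', 'System32')):
            return OSFamily.WINDOWS
        if os.path.exists(os.path.join(mountpoint, 'etc', 'os-release')):
            return OSFamily.LINUX
        return OSFamily.UNKNOWN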
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add a utility function inside disk.py to find, if any, the first ESP
partition of a given disk.
The disk is provided as an integer (starting at 1, following the usual
values of the OpenGnsys scripts), meaning the (n-1)th disk from the disk
array returned by get_disks(). In the future a better mechanism should be
put in place to fetch probed disks from a running client.
This change is part of the upcoming drop of the "IniciarSesion" script in
favor of a native Python approach, specifically regarding UEFI systems.
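A sketch of the ESP lookup under those conventions; the lsblk-based
probe and the stand-in get_disks() are assumptions, the real helpers
live in disk.py:

    import os
    import subprocess

    ESP_PARTTYPE = 'c12a7328-f81f-11d2-ba4b-00a0c93ec93b'  # EFI System Partition GUID

    def get_disks():
        # Stand-in for the disk.py helper mentioned above: list block devices.
        return sorted(d for d in os.listdir('/sys/block')
                      if not d.startswith(('loop', 'ram')))

    def get_efi_partition(disk):
        # 'disk' is 1-based, following the OpenGnsys script convention.
        device = '/dev/' + get_disks()[disk - 1]
        out = subprocess.run(['lsblk', '-n', '-o', 'PATH,PARTTYPE', device],
                             stdout=subprocess.PIPE, text=True, check=True).stdout
        for line in out.splitlines():
            fields = line.split()
            if len(fields) == 2 and fields[1].lower() == ESP_PARTTYPE:
                return fields[0]   # first ESP found on the disk
        return None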
|
|
|
|
|
| |
use info instead of debug to make it easier to debug problems when creating the
cache.
|
|
|
|
|
|
| |
Improve logging when setting up the partition; provide more hints on
progress.
Fail in case the partition layout is not supported.
|
| |
|
|
|
|
|
|
|
| |
... the exception shows the samba password in the logs.
Specify the error, which tells us what happened according to the Return
Codes section of mount(8).
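A sketch of the intended error handling, assuming an illustrative
helper; the return-code table comes from the RETURN CODES section of
mount(8):

    import subprocess

    # Return codes documented in the RETURN CODES section of mount(8).
    MOUNT_ERRORS = {
        1: 'incorrect invocation or permissions',
        2: 'system error (out of memory, cannot fork, no more loop devices)',
        4: 'internal mount bug',
        8: 'user interrupt',
        16: 'problems writing or locking /etc/mtab',
        32: 'mount failure',
        64: 'some mount succeeded',
    }

    def mount_samba(share, mountpoint, user, password):
        cmd = ['mount', '-t', 'cifs', share, mountpoint,
               '-o', f'username={user},password={password}']
        # check=False so no CalledProcessError carries the credentials into the logs.
        proc = subprocess.run(cmd, check=False)
        if proc.returncode != 0:
            reason = MOUNT_ERRORS.get(proc.returncode, 'unknown error')
            raise RuntimeError(f'mount {share} failed: {reason}')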
|
|
|
|
|
| |
otherwise partprobe does its best to find the disk, according to what I see
through strace.
|
|
|
|
| |
Remove leftover fallback to directly call utilities to poweroff and reboot.
|
|
|
|
|
|
| |
Value extraction did not have error checking and was handled in
a one-liner. The new implementation expands the parsing logic
and moves it into a function.
|
|
|
|
| |
just split this log message.
|
|
|
|
| |
this is broken, it uses a default user and password, remove it.
|
|
|
|
| |
writing to file might fail (permission denied, disk full), check for errors.
|
|
|
|
| |
instead of raising an exception
|
|
|
|
| |
log error in case resize2fs fails.
|
|
|
|
|
| |
According to ntfsresize.c, this returns 0 in case nothing needs to be done.
It should be safe to check for a non-zero error and bail out in that case.
|
|
|
|
|
|
|
|
| |
Revisit 5056b8f0d5ab ("fs: validate ntfsresize dry-run output"), which
introduced a possible infinite loop.
Disentangle this loop while at it: iterate until the best (smallest)
size is found by probing.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Do not return the returncode; instead return an integer.
Do not use
except CalledProcessError as e:
as it causes another exception while handling the original exception.
Remount the original image repository.
It should be possible to simplify this further by:
- stacking mounts: there is no need to umount the initial repo and mount
  it again when switching to the new repo, because remounting the
  initial repo might fail (!)
- using check=False and simply checking x.returncode (see the sketch
  below)
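A sketch of the "stacking mounts" idea from the list above, using
check=False and plain returncode checks; the helper names and mount
invocation are assumptions:

    import subprocess

    def switch_repo(new_repo, mountpoint):
        # Mount the new repository on top of the current one instead of
        # umounting it first, so the initial repo stays usable on failure.
        proc = subprocess.run(['mount', new_repo, mountpoint], check=False)
        return proc.returncode == 0

    def restore_repo(mountpoint):
        # Undo the stacked mount; the original repository becomes visible again.
        return subprocess.run(['umount', mountpoint], check=False).returncode == 0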
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Remove mbuffer, this is never used.
mbuffer has never been used since ogClient supports native image restore.
Originally this was used like this:
partclone [...] | mbuffer -q -M 40M | lzop [...]
supposedly to speed up partclone in case the device where the read happens is
slower than the device that is used for writes.
See the mbuffer(1) manpage examples.
In any case, this needs benchmarking to really make sure it helps.
Remove it until that ever happens.
|
|
|
|
|
| |
Provide more context information for debugging issues with image creation and
restore.
|
|
|
|
|
|
|
|
|
| |
Cover more error cases where exceptions need to be raised.
Check the return code of the invoked subprocess.
restoreImageCustom has been intentionally left untouched, since it
is unclear what this custom script returns on success and
on error.
|
|
|
|
| |
make whitespace coherent with the rest of the file contents.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Validate that 'Needed relocations : ' is in place before indexing into the split chunks:
(2024-01-11 10:28:16) ogClient: [ERROR] - Exception when running "image create" subprocess
Traceback (most recent call last):
File "/opt/opengnsys/ogClient/src/live/ogOperations.py", line 454, in image_create
ogReduceFs(disk, partition)
File "/opt/opengnsys/ogClient/src/utils/fs.py", line 105, in ogReduceFs
_reduce_ntfsresize(partdev)
File "/opt/opengnsys/ogClient/src/utils/fs.py", line 235, in _reduce_ntfsresize
extra_size = int(out_resize_dryrun.split('Needed relocations : ')[1].split(' ')[0])*1.1+1024
IndexError: list index out of range
If the marker is not present, there is no need to adjust the size.
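A sketch of the guarded parse, mirroring the expression in the
traceback above; the helper name is an assumption:

    def _get_extra_size(out_resize_dryrun):
        # Only compute the extra size when ntfsresize reports relocations.
        marker = 'Needed relocations : '
        if marker not in out_resize_dryrun:
            return 0
        relocations = int(out_resize_dryrun.split(marker)[1].split(' ')[0])
        return relocations * 1.1 + 1024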
|
|
|
|
|
| |
- suggest to check permissions in samba folder
- fix typo, s/filesyste/filesystem/
|
|
|
|
| |
Just informational, provide a notice that the file already exists.
|
|
|
|
| |
check that the file exists and that it is accessible
|
|
|
|
| |
display filesystem and path to device.
|
|
|
|
| |
check that it is readable and writable
|
|
|
|
|
|
| |
Otherwise it shows:
ValueError: Unable to process image {image_path}
|
|
|
|
| |
add .permissions and .lastupdate to the JSON reported to ogserver.
|
|
|
|
| |
add .size json field to report the real size of the image file.
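A sketch of how these fields can be gathered with a single stat() call;
the field names follow this commit and the .permissions/.lastupdate one
above, everything else is an assumption:

    import json
    import os
    import stat

    def image_info(image_path):
        st = os.stat(image_path)
        return json.dumps({
            'size': st.st_size,                        # real size of the image file
            'permissions': stat.filemode(st.st_mode),  # e.g. '-rw-r--r--'
            'lastupdate': int(st.st_mtime),            # last modification time (epoch)
        })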
|
|
|
|
|
|
| |
Users can create an image of a filesystem that contains no OS; therefore,
instead of raising an exception when no OS is detected, deliver an
"unknown" OS and an empty list of software.
|
|
|
|
|
|
|
|
|
|
|
| |
Image backup is considered a legacy feature. Use the legacy mechanism of
naming image backups by adding the ".ant" suffix.
Previously, by using the strftime suffix, clients were reporting that
the disk was getting full rather quickly.
When a good method for image deletion is implemented, a proper backup
naming mechanism should be reconsidered.
|
|
|
|
|
|
|
|
|
|
|
| |
When a client's hardware presents an empty PCI storage child, there is
an invalid call to _bytes_to_human: a string is supplied as a default
value if the storage child does not present a 'size' attribute.
Fix this by checking whether 'size' is present in the JSON output from
lshw. If size is present, map the bytes to a human-readable string using
_bytes_to_human; if no size is present, use 'Empty slot' to indicate
that the slot is not being used.
|
|
|
|
|
|
| |
Add missing underscore to _bytes_to_human call.
Fixes: 39c13287c53bd8 ("live: hw_inventory: fix empty memory bank bug")
|
|
|
|
|
|
|
|
|
|
|
|
| |
When a client's hardware presents an empty memory bank, an invalid call
to _bytes_to_human is performed because None is passed as a parameter:
size = _bytes_to_human(obj.get('size', None))
Fix this by checking whether 'size' is present in the JSON output from
lshw. If size is present, map the bytes to a human-readable string using
_bytes_to_human; if no size is present, use 'Empty slot' to indicate
that the memory bank is not being used.
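A sketch of the fix, with a stand-in _bytes_to_human(); it assumes obj
is one node of the JSON tree produced by lshw:

    def _describe_size(obj):
        # Only call _bytes_to_human() when lshw reports a 'size' for the node.
        if 'size' in obj:
            return _bytes_to_human(obj['size'])
        return 'Empty slot'

    def _bytes_to_human(n):
        # Stand-in for the real helper: map a byte count to a readable string.
        for unit in ('B', 'KB', 'MB', 'GB', 'TB'):
            if n < 1024:
                return f'{n:.0f}{unit}'
            n /= 1024
        return f'{n:.0f}PB'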
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Some users have mistakenly reported tiptorrent problems when the process
takes a long time, specifically by rebooting or powering off the client
in the middle of the md5sum computation stage, just after the tiptorrent
transfer.
The same problem occurs when the image creation command takes a long
period of time.
In order to help the user understand the different stages of commands
such as image creation or image restore using tiptorrent, the following
changes have been made to the current logging solution:
- Add log messages warning users not to reboot or shut down the client
  during a tiptorrent transfer, and also during the md5sum computation
  stage.
- Add a log message telling the user that the image creation process
  has started.
- Use logging.exception inside "except:" blocks to print a traceback
  with the log message.
(https://docs.python.org/3/library/logging.html#logging.exception)
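A minimal illustration of the logging.exception pattern mentioned
above; the function name is only illustrative:

    import hashlib
    import logging

    def compute_md5(path):
        try:
            with open(path, 'rb') as f:
                return hashlib.md5(f.read()).hexdigest()
        except OSError:
            # Logs at ERROR level and appends the traceback to the message.
            logging.exception('md5sum computation failed for %s', path)
            raise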
|