| Commit message | Author | Age | Files | Lines |
| |
Log each partition that gets checked and make the exception messages
more informative.
|
| |
The Debian package with JSON support provides the binary through this path, so update it.
|
| |
The JSON functionality proposed upstream might one day be merged into
efibootmgr, so deploying a fork would no longer be needed.
This change aims to ease the migration once that day comes.
|
| |
Replace the IniciarSesion script with native Python code when booting
a UEFI system into Linux. This completes the implementation of booting
into an OS on a UEFI-compliant system.
|
| |
Add a utility module related to the process of booting a system from a
client's partition.
The main utility function to boot a client's system is boot_os_at(), from
which firmware-specific (UEFI or BIOS) and OS-family-specific private
functions are invoked.
This initial commit adds the UEFI Windows boot function.
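A minimal sketch of how such a dispatcher could look; apart from
boot_os_at(), every helper name below is hypothetical:

    def boot_os_at(disk, partition):
        # Probe the installed OS family first (hypothetical helper).
        os_family = _probe_os_family(disk, partition)
        if is_uefi_supported():
            if os_family == OSFamily.WINDOWS:
                _boot_uefi_windows(disk, partition)
            elif os_family == OSFamily.LINUX:
                _boot_uefi_linux(disk, partition)
            else:
                raise RuntimeError('Unknown OS family, cannot boot')
        else:
            # BIOS path, not part of this commit.
            _boot_bios(disk, partition)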
|
| |
Add UEFI related utilities inside a new utility module: uefi.py

_check_efibootmgr_json
======================
Check whether the system's efibootmgr executable supports JSON output.
This is a private function used only by other functions in uefi.py.

is_uefi_supported
=================
Check whether the system supports UEFI firmware.

run_efibootmgr_json
===================
Run efibootmgr with JSON output and return the output as a Python dict.

efibootmgr_create_bootentry
===========================
Create an NVRAM boot entry. This boot entry is usually set later to boot
next just once via the "BootNext" NVRAM variable.

efibootmgr_delete_bootentry
===========================
Delete an NVRAM boot entry. Used to avoid duplicates when booting the
same disk and partition from a given client.

efibootmgr_bootnext
===================
Set the "BootNext" NVRAM variable to a given boot entry so that after the
client reboots, PXE is not executed and the given boot entry takes
precedence.

Add a dependency on efibootmgr version >= 18 and on efibootmgr JSON
output, which is currently out of tree from the util-linux repo.
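As a rough illustration, the JSON helpers could wrap efibootmgr like the
sketch below. The --json flag and the shape of the output are assumptions
based on the out-of-tree JSON support mentioned above, not something this
log confirms:

    import json
    import subprocess

    def _check_efibootmgr_json():
        # Probe whether the installed efibootmgr accepts JSON output.
        proc = subprocess.run(['efibootmgr', '--json'],
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL)
        return proc.returncode == 0

    def run_efibootmgr_json():
        # Return the current boot entries parsed into a Python dict.
        proc = subprocess.run(['efibootmgr', '--json'],
                              stdout=subprocess.PIPE, check=True)
        return json.loads(proc.stdout)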
|
| |
Add a basic OS family enumeration: OSFamily.
Add a utility function that probes for an installed Linux or Windows
system and returns the corresponding enum value, or OSFamily.UNKNOWN
otherwise.
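A minimal sketch of what the enumeration and probe could look like; the
probe's name and its detection heuristics are hypothetical:

    import os
    from enum import Enum

    class OSFamily(Enum):
        UNKNOWN = 0
        LINUX = 1
        WINDOWS = 2

    def get_os_family(mountpoint):
        # Hypothetical detection: look for well-known OS markers on the
        # mounted partition.
        if os.path.exists(os.path.join(mountpoint, 'Windows/System32')):
            return OSFamily.WINDOWS
        if os.path.exists(os.path.join(mountpoint, 'etc/os-release')):
            return OSFamily.LINUX
        return OSFamily.UNKNOWN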
|
| |
Add a utility function inside disk.py to find, if any, the first ESP
partition of a given disk.
The disk is provided as an integer (starting at 1, following the usual
values of the OpenGnsys scripts), meaning the (n-1)th disk from the disk
array returned by get_disks(). In the future a better mechanism should be
put in place to fetch probed disks from a running client.
This change is part of the upcoming drop of the "IniciarSesion" script in
favor of a native Python approach, specifically regarding UEFI systems.
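Under those conventions the lookup could look roughly like this; the
partition-scanning helper and partition attributes are assumptions, only
get_disks() and the 1-based disk index come from the commit:

    ESP_GUID = 'c12a7328-f81f-11d2-ba4b-00a0c93ec93b'

    def get_efi_partition(disk_index):
        # disk_index is 1-based, as in the legacy OpenGnsys scripts.
        disk = get_disks()[disk_index - 1]
        for part in get_partitions(disk):      # hypothetical helper
            if part.type_guid == ESP_GUID:
                return part.device
        return None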
|
| |
Use the info log level instead of debug to make it easier to debug
problems when creating the cache.
|
| |
... the exception shows the Samba password in the logs.
Specify the error, which tells us what has happened, according to the
Return Codes section of mount(8).
|
| |
Value extraction had no error checking and was handled in a one-liner.
The new implementation expands the parsing logic and moves it into a
function.
|
| |
This is broken: it uses the default user and password. Remove it.
|
| |
Instead of raising an exception.
|
| |
Log an error in case resize2fs fails.
|
| |
According to ntfsresize.c, this returns 0 in case nothing needs to be done.
It should be safe to check for a non-zero error and bail out in that case.
|
| |
Revisit 5056b8f0d5ab ("fs: validate ntfsresize dry-run output"), which
introduced a possible infinite loop.
Disentangle this loop while at it: iterate until the best (smallest) size
is found by probing.
|
| |
Do not return the returncode; return an integer instead.
Do not use
    except CalledProcessError as e:
because it causes another exception while handling an exception.
Remount the original image repository.
It should be possible to simplify this further by:
- stacking mounts: there is no need to umount the initial repo and mount
  it again when switching to the new repo, because remounting the initial
  repo back might fail (!)
- using check=False and simply checking x.returncode
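The suggested check=False simplification would look roughly like this
(a sketch, not the code in this change; the repository path is an example):

    import subprocess

    new_repo = '//10.0.0.2/ogimages'   # example repository share

    # With check=False no CalledProcessError is raised, so the caller can
    # simply inspect the return code instead of raising a second exception
    # from inside an exception handler.
    proc = subprocess.run(['mount', new_repo, '/opt/opengnsys/images'],
                          check=False)
    if proc.returncode != 0:
        ...  # fall back to the previous repository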
|
| |
Cover more error cases where exceptions need to be raised.
Check the return code of the invoked subprocess.
restoreImageCustom has been intentionally left untouched: it is unclear
what this custom script returns on success and on error.
|
| |
Validate that 'Needed relocations : ' is present before stepping on the split chunks.
(2024-01-11 10:28:16) ogClient: [ERROR] - Exception when running "image create" subprocess
Traceback (most recent call last):
File "/opt/opengnsys/ogClient/src/live/ogOperations.py", line 454, in image_create
ogReduceFs(disk, partition)
File "/opt/opengnsys/ogClient/src/utils/fs.py", line 105, in ogReduceFs
_reduce_ntfsresize(partdev)
File "/opt/opengnsys/ogClient/src/utils/fs.py", line 235, in _reduce_ntfsresize
extra_size = int(out_resize_dryrun.split('Needed relocations : ')[1].split(' ')[0])*1.1+1024
IndexError: list index out of range
If the string is not present, there is no need to adjust the size.
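The defensive version of that parsing could look roughly like this
(variable names follow the traceback above):

    extra_size = 0
    if 'Needed relocations : ' in out_resize_dryrun:
        relocations = int(out_resize_dryrun.split('Needed relocations : ')[1]
                          .split(' ')[0])
        extra_size = relocations * 1.1 + 1024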
|
| |
Otherwise it shows:
ValueError: Unable to process image {image_path}
|
| |
Add .permissions and .lastupdate fields to the JSON reported to ogserver.
|
| |
Add a .size JSON field to report the real size of the image file.
|
| |
Users can create an image of a filesystem that contains no OS. Therefore,
instead of raising an exception when no OS is detected, report an
"unknown" OS and an empty list of software.
|
| |
When a client's hardware presents an empty PCI storage child there is an
invalid call to _bytes_to_human: a string is supplied as the default value
when the storage child does not present a 'size' attribute.
Fix this by checking whether 'size' is present in the JSON output from
lshw. If size is present, map the bytes to a human-readable string using
_bytes_to_human; if no size is present, use 'Empty slot' to indicate
that the slot is not being used.
|
| |
Add missing underscore to _bytes_to_human call.
Fixes: 39c13287c53bd8 ("live: hw_inventory: fix empty memory bank bug")
|
| |
When a client's hardware presents an empty memory bank, an invalid call
to _bytes_to_human is performed because None is passed as a parameter:
size = _bytes_to_human(obj.get('size', None))
Fix this by checking whether 'size' is present in the JSON output from
lshw. If size is present, map the bytes to a human-readable string using
_bytes_to_human; if no size is present, use 'Empty slot' to indicate
that the memory bank is not being used.
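The fixed logic boils down to something like this sketch, based on the
line quoted above:

    size = obj.get('size')
    if size is not None:
        size_str = _bytes_to_human(size)   # bytes reported by lshw
    else:
        size_str = 'Empty slot'            # memory bank not in use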
|
| |
Some users have mistakenly reported tiptorrent problems when the process
takes a long time, specifically after rebooting or powering off the client
in the middle of the md5sum computation stage, just after the tiptorrent
transfer.
The same problem occurs when the image creation command takes a long time.
To help the user understand the different stages of commands such as
image creation or image restore using tiptorrent, the following changes
have been made to the current logging solution:
- Add log messages warning users not to reboot or shut down the client
  during a tiptorrent transfer, and also during the md5sum computation
  stage.
- Add a log message telling the user that the image creation process has
  started.
- Use logging.exception inside "except:" blocks to print a traceback
  with the log message.
  (https://docs.python.org/3/library/logging.html#logging.exception)
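For reference, logging.exception() logs at ERROR level and appends the
traceback of the exception currently being handled, so inside an except
block it replaces a plain logging.error() call. A minimal illustration
(the wrapped function is hypothetical):

    import logging

    try:
        do_tiptorrent_transfer()   # hypothetical transfer step
    except OSError:
        # Message at ERROR level plus the full traceback.
        logging.exception('tiptorrent transfer failed')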
|
| |
The first stage of parsing the "lshw -json" command output is to load
the JSON string into a Python dictionary. lshw output is large and
varies from machine to machine, so it is not safe to assume that a given
key will be present in the dictionary.
Use dict.get() instead of dict[key] to avoid KeyError exceptions.
|
| |
The subprocess module expects a bytes-like object for the "input" parameter
by default. Passing a string object results in the following error:
(2023-06-13 14:44:43) ogClient: [ERROR] - Exception when running "image create" subprocess
(2023-06-13 14:44:43) ogClient: [ERROR] - Unexpected error
Traceback (most recent call last):
File "/opt/opengnsys/ogClient/src/live/ogOperations.py", line 465, in image_create
ogExtendFs(disk, partition)
File "/opt/opengnsys/ogClient/src/utils/fs.py", line 124, in ogExtendFs
_extend_ntfsresize(partdev)
File "/opt/opengnsys/ogClient/src/utils/fs.py", line 250, in _extend_ntfsresize
proc = subprocess.run(cmd, input='y')
File "/usr/lib/python3.8/subprocess.py", line 495, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "/usr/lib/python3.8/subprocess.py", line 1013, in communicate
self._stdin_write(input)
File "/usr/lib/python3.8/subprocess.py", line 962, in _stdin_write
self.stdin.write(input)
TypeError: a bytes-like object is required, not 'str'
Fixes: dd999bfe34e7 ("utils: rewrite ogReduceFs")
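The two usual ways to make such a call valid, shown as a generic
illustration of the subprocess API rather than the exact fix applied here:

    import subprocess

    cmd = ['ntfsresize', '-f', '/dev/sda1']   # example command for this code path

    # Option 1: pass bytes, matching subprocess's default binary mode.
    subprocess.run(cmd, input=b'y')

    # Option 2: enable text mode so a str input is accepted.
    subprocess.run(cmd, input='y', text=True)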
|
| |
There is a corner case in which a target NTFS filesystem is already
shrunk. When this happens, parsing of the ntfsresize text output breaks.
Check whether ntfsresize reports nothing to do, warn the user about it,
and stop the dry-run ntfsresize loop.
|
| |
_extend_ntfsresize contains an incorrect variable name inside
subprocess.run referring to the resize command value.
Simplify this variable name inside each specific _extend_* function:
s/cmd_resize2fs/cmd
s/cmd_ntfsresize/cmd
|
| |
Don't raise an exception if a Windows program is missing the DisplayName
node in the Windows registry.
This attribute/node should contain the program's name, which is used as
the package name in the software set (software inventory).
This patch should be considered a hotfix: python-hivex does not report
any helpful message about this error.
(2023-05-09 14:43:13) ogClient: [ERROR] - Unexpected error
Traceback (most recent call last):
[...]
RuntimeError: Success
Before this patch, image creation *might* fail because it cannot create
the software inventory associated with the image due to the previously
described error. The software inventory is part of the response payload
of the image creation command (see src/ogRest:image_create).
Fixes: 04bb35bd86b5 (live: rewrite software inventory)
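A minimal sketch of the defensive pattern, assuming the inventory code
walks the uninstall subkeys with the python-hivex bindings; the helper
name and the surrounding traversal are hypothetical:

    def _display_name(h, node):
        # Return the program's DisplayName, or None if the value is missing.
        # python-hivex raises RuntimeError with an unhelpful message
        # ("Success") when the value does not exist.
        try:
            val = h.node_get_value(node, 'DisplayName')
            # Registry strings are UTF-16LE; value_value() returns
            # (type, raw bytes).
            return h.value_value(val)[1].decode('utf-16-le').rstrip('\x00')
        except RuntimeError:
            return None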
|
| |
Add a utility function to unmount any mountpoint present under the /mnt
folder.
This function is a simplified version of the legacy Bash function
ogUnmountAll used in several operations.
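A minimal sketch of such a helper (the actual implementation in ogClient
may differ):

    import os
    import subprocess

    def umount_all():
        # Unmount every mountpoint found directly under /mnt.
        for entry in os.scandir('/mnt'):
            if entry.is_dir() and os.path.ismount(entry.path):
                subprocess.run(['umount', entry.path], check=False)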
|
| |
Drop the subprocess call to the Bash function ogExtendFs. Use a native
Python solution with subprocess calls to the required underlying tools.
Use get_filesystem_type to get the filesystem present on a partition
and call the corresponding filesystem grow function.
Filesystem-specific functions are named "_extend_{filesystem}" and
should not be imported elsewhere.
Each filesystem-specific function wraps a subprocess call to the
required underlying program:
- NTFS filesystems: "ntfsresize -f [partition]"
- ext4 filesystems: "resize2fs -f [partition]"
Set the NTFS-related subprocess stdin to 'y' because the interactive
confirmation cannot be disabled with other ntfsresize parameters.
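A rough sketch of the pattern described above; the dispatcher name and
the exact code in fs.py may differ:

    import subprocess

    def _extend_ntfsresize(partdev):
        # ntfsresize asks for interactive confirmation, so feed 'y' on stdin.
        subprocess.run(['ntfsresize', '-f', partdev], input=b'y', check=True)

    def _extend_resize2fs(partdev):
        subprocess.run(['resize2fs', '-f', partdev], check=True)

    def extendfs(partdev):
        fs = get_filesystem_type(partdev)
        if fs == 'ntfs':
            _extend_ntfsresize(partdev)
        elif fs == 'ext4':
            _extend_resize2fs(partdev)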
|
| |
Drop the subprocess call to the Bash function ogReduceFs. Use a native
Python solution with subprocess calls to the required underlying tools.
Use get_filesystem_type to get the filesystem of a partition and call
the corresponding supported filesystem shrink function.
Filesystem-specific functions are named "_reduce_{filesystem}" and
should not be imported elsewhere.
In the case of NTFS filesystems, the output of 'ntfsresize' is processed
directly. This is dirty, but we can expect no changes to the output
strings if we read the following comment in the ntfsresize.c source
code:
https://github.com/tuxera/ntfs-3g/blob/edge/ntfsprogs/ntfsresize.c#L12
ntfsresize requires previous dry-run executions to confirm that the
resizing is possible.
If a dry run fails but a 10% increase in size is still smaller than the
original filesystem, retry the operation until the dry run reports
success or the size increase exceeds the original size.
If resizing to a smaller NTFS filesystem is not possible, ogReduceFs
will do nothing.
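A simplified sketch of the dry-run probing loop described above; the real
parsing of the ntfsresize output and the exact sizes involved are more
elaborate:

    import subprocess

    def _ntfs_dryrun_ok(partdev, size):
        # Ask ntfsresize whether shrinking to `size` bytes would work.
        cmd = ['ntfsresize', '--no-action', '-f', '-s', str(size), partdev]
        proc = subprocess.run(cmd, input=b'y',
                              stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        return proc.returncode == 0

    def _reduce_ntfs(partdev, original_size, min_size):
        size = min_size
        # Grow the candidate by 10% until a dry run succeeds or the
        # candidate is no longer smaller than the original filesystem.
        while size < original_size:
            if _ntfs_dryrun_ok(partdev, int(size)):
                subprocess.run(['ntfsresize', '-f', '-s', str(int(size)), partdev],
                               input=b'y', check=True)
                return
            size *= 1.1
        # Shrinking is not possible: do nothing.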
|
| |
Retrieve the filesystem type of a partition using get_filesystem_type,
which encapsulates a subprocess call to blkid.
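A minimal sketch of such a wrapper (the exact blkid invocation used by
ogClient is not shown in this log):

    import subprocess

    def get_filesystem_type(partdev):
        # Returns e.g. 'ntfs' or 'ext4', or an empty string if unknown.
        cmd = ['blkid', '-o', 'value', '-s', 'TYPE', partdev]
        proc = subprocess.run(cmd, stdout=subprocess.PIPE, text=True)
        return proc.stdout.strip()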
|
| |
hw_inventory.py defines classes and helper functions enabling the
fetching of the hardware inventory from a running client.
It uses a subprocess call to the command 'lshw -json' to obtain hardware
information.
Relevant public functions:
> get_hardware_inventory()
  Main function encapsulating the subprocess and output-processing logic.
  Returns a HardwareInventory object.
> legacy_list_hardware_inventory(inventory)
  Legacy string representation of the given HardwareInventory object.
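The data-gathering part of get_hardware_inventory() presumably reduces to
something like this sketch; building the HardwareInventory object itself
is elided:

    import json
    import subprocess

    def _fetch_lshw_tree():
        # Hypothetical helper: run lshw and parse its JSON output.
        proc = subprocess.run(['lshw', '-json'],
                              stdout=subprocess.PIPE, check=True, text=True)
        return json.loads(proc.stdout)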
|
| |
Rename the software inventory file to sw_inventory to better distinguish
it from future hardware inventory code.
In the future, sw_inventory and hw_inventory might be merged together
once each file is tidied up.
|
| |
Replace the legacy Bash script with native Python code. This improves
error traceability and eases further development.
The software inventory operation mounts the target partition and fetches
the list of installed software (package set). Once the operation is
complete, it unmounts the target partition.
For Windows, introduce the hivex library Python bindings for accessing
Windows registry hive files (https://libguestfs.org/hivex.3.html).
This operation is still processed by legacy code on the server side
(ogAdmServer.c in ogServer). The legacy backend process expects the
software inventory to look like the following example:
"software": "Windows 10 Enterprise Evaluation 2004 \nIntel(R) Network Connections 24.0.0.11 24.0.0.11 ..."
The OS name is inserted first in this list, followed by a '\n'-separated
string of the software packages.
The legacy server code can be found in the function actualizaSoftware in
ogServer/src/ogAdmServer.c.
The software inventory payload is expected to change in the future to a
simpler solution using just a JSON array of strings.
|
| |
Change the name of the helper functions used when getting OpenGnsys
image information (legacy ogGetImageInfo Bash script). As of now the
process consists of decompressing the image file with lzop and feeding
that output to partclone.info.
Prefer a more explicit function name rather than "process_image_*".
Add a comment about skipping the first two lines of partclone.info output.
Usually, partclone.info starts by printing these two lines, which are not
related to the partclone image information:
Partclone v0.3.23 http://partclone.org
Showing info of image (-)
As long as the partclone.info output doesn't change we'll be fine, but we
should not depend on human-readable output. This might change in the
future (i.e. adding a JSON output format to partclone.info).
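The pipeline and the line skipping could look roughly like this; the
command-line details are assumptions based on the description, not taken
from the code:

    import subprocess

    def partclone_info_output(image_path):
        # Decompress with lzop and feed the stream to partclone.info on stdin.
        lzop = subprocess.Popen(['lzop', '-dc', image_path],
                                stdout=subprocess.PIPE)
        info = subprocess.run(['partclone.info', '-s', '-'],
                              stdin=lzop.stdout, stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT, text=True)
        lzop.stdout.close()
        lines = info.stdout.splitlines()
        # Skip the two banner lines before parsing the image fields.
        return lines[2:]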
|
| |
Rewrite this legacy script's behavior using native Python code, using the
subprocess module to execute programs like partclone.info or lzop.
ogGetImageInfo is a Bash script that retrieves information regarding an
OpenGnsys partition image, specifically:
- clonator
- compressor
- filesystem
- datasize (size of the partition image)
This rewrite only supports partclone and lzop-compressed images. This is
the standard behavior; we have no reports of other programs or compression
algorithms in use.
Keep this legacy function name with Hungarian notation to emphasize that
this is still a legacy component that may be replaced in the future.
|
| |
Drop the ogChangeRepo Bash script in favor of a native Python approach.
Use only the necessary subprocess calls instead of hiding all the logic
of this function in a Bash script black box.
ogChangeRepo unmounts the current OpenGnsys image Samba folder
(/opt/opengnsys/images) and mounts (connects to) a new directory using
the newly provided IP address, keeping the access mode from the previous
mount.
If anything goes wrong when mounting the new directory, it falls back to
mounting the previous directory.
If no previous OpenGnsys image Samba directory is detected, this function
tries to mount the new directory anyway. In this case, it raises
CalledProcessError if something goes wrong.
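The control flow described above, sketched with hypothetical helpers;
mount options, credentials and helper names are not taken from the code:

    import subprocess

    def og_change_repo(new_ip, access_mode):
        images_dir = '/opt/opengnsys/images'
        prev_ip = get_current_repo_ip()            # hypothetical helper
        if prev_ip:
            subprocess.run(['umount', images_dir], check=True)
        try:
            mount_repo(new_ip, images_dir, access_mode)    # hypothetical helper
        except subprocess.CalledProcessError:
            if prev_ip:
                # Fall back to the previous repository.
                mount_repo(prev_ip, images_dir, access_mode)
            else:
                raise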
|
| |
Expand the function docstring and do not use CalledProcessError handling
to return True or False. Simply checking the returncode value is simpler.
|
| |
init_cache() creates the default directory in which OpenGnsys stores
images when using any cache-enabled transfer method.
As of this commit, this folder must exist for tiptorrent.py to work
properly: the subprocess Popen objects inside tiptorrent.py use the
optional 'cwd' parameter like:
cwd='/opt/opengnsys/cache/opt/opengnsys/images/'
This folder convention might change in the future.
|
| |
Add a utility module which wraps several mkfs.* calls as subprocesses.
The main utility function is mkfs(fs, disk, partition, label), which
subsequently calls the corresponding mkfs_*(partition_device) function.
mkfs() supports specifying a drive label where the filesystem supports it.
Other modules using fs.py should only call mkfs().
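A sketch of that dispatch pattern; the set of supported filesystems and
the partition-device helper are illustrative:

    import subprocess

    def mkfs_ext4(partition_device, label=None):
        cmd = ['mkfs.ext4', '-F', partition_device]
        if label:
            cmd += ['-L', label]
        subprocess.run(cmd, check=True)

    def mkfs_ntfs(partition_device, label=None):
        cmd = ['mkfs.ntfs', '-f', partition_device]
        if label:
            cmd += ['-L', label]
        subprocess.run(cmd, check=True)

    def mkfs(fs, disk, partition, label=None):
        partition_device = get_partition_device(disk, partition)   # hypothetical helper
        dispatch = {'ext4': mkfs_ext4, 'ntfs': mkfs_ntfs}
        dispatch[fs](partition_device, label)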
|
| |
Fix error paths in live operations that do not reset the "browser" to the
main page (the one with the menu).
Add error logging messages when:
* _restartBrowser fails.
* ogChangeRepo fails.
Improve checksum fetch error handling, for example when an invalid
repository IP is specified.
|
| |
Raise an exception when the tiptorrent-client subprocess runs normally
but exits with a non-zero code (for example, if the download file
allocation failed).
|
| |
Integrate the image restore command into native ogClient code. This
further reduces the need for external Bash scripts.
After a successful image restore, OS configuration still uses the
external Bash script "osConfigure/osConfigureCustom".
|