| Commit message | Author | Age | Files | Lines |
| |
ogServer searches queued commands (formerly actions) in the DB by
session. This can lead to problems because the session is not a unique
identifier but an identifier that several commands can share to group
them.
This commit changes the query filter from session to id, ensuring
correct results.
This reverts commit d9b6aadf66655a6713bcacb25d2ea6b01c07e3b5.
| |
Add ogServer support for procedure execution. Now users can send a
procedure and a list of clients to ogServer; ogServer then breaks down
the procedure into commands (formerly actions) and queues them for each
indicated client.
TODO: Do not reply 200 OK when the procedure does not exist.
Request:
POST /procedure/run
{
"clients": ["192.168.56.11", "192.168.56.12"],
"procedure": "33"
}
Response:
200 OK
| |
Delete operation for procedures stored in the database.
POST /procedure/delete
{
"id": "7"
}
If no procedure is found, ogServer still returns 200 OK, but a syslog
message is issued to warn about it. Such behavior will likely change in
the future.
| |
Commit 141b079 introduced a slight change in how rows from table
"acciones" are filtered when queuing a command, from "sesion" column to
"idaccion" column.
This seemed reasonable, as the id column is the autoincrementing one.
But remotepc queues commands inserting the action id as a timestamp
instead of the action row id.
See https://github.com/opengnsys/OpenGnsys/blob/c17ffa5d032a82e8eca61481dd8a8adb8b3fc5b1/admin/WebConsole/rest/remotepc.php#L188
Revert said change as long as remotepc keeps such behavior.
| |
The json_t * parameter is not modified; constify it so the compiler can
emit warnings in case the function tries to modify it.
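For illustration, a minimal sketch of the idea, with a hypothetical helper name rather than the actual ogServer function:

#include <jansson.h>

/* The json element is only read, so take it as const json_t *; the
 * compiler now warns if the function ever tries to modify it. */
static int og_json_string_length(const json_t *element, size_t *len)
{
	if (!json_is_string(element))
		return -1;

	*len = json_string_length(element);
	return 0;
}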
| |
These function declarations belong to json.h:
int og_json_parse_partition_setup(json_t *element, struct og_msg_params *params);
int og_json_parse_create_image(json_t *element, struct og_msg_params *params);
int og_json_parse_restore_image(json_t *element, struct og_msg_params *params);
| |
Adds the possibility to create a procedure with commands and other
procedures integrated as steps.
Note: the "steps" parameter is optional, and the order of objects in the
"steps" array defines the execution order.
Request:
POST /procedure/add
{
"center": "1",
"name": "procedure",
"description": "My procedure",
"steps": [
{
"command": "wol",
"params": { "type": "broadcast" }
},
{
"procedure": 22
},
{
"command": "poweroff",
"params": {}
}
]
}
Response:
200 OK
This commit also updates unit tests for /procedure/add POST method to
include steps.
| |
Enables ogServer to schedule commands (also referred to as actions in
legacy web console jargon).
This feature enables ogServer to write to the "acciones" table in order
to have full command scheduling capabilities, thus not depending on the
legacy web console to insert into the "acciones" table.
| |
When trying to open a connection to a database, an instance of
libdbi is created before any connection attempt. If the connection is
unsuccessful, the og_dbi struct is freed but the libdbi instance member
is not, thus leaking its memory.
Use libdbi's dbi_shutdown_r() to shut down the libdbi instance member
before freeing the og_dbi struct inside og_dbi_open().
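A sketch of the fixed open path; the struct layout and helper name approximate ogServer's og_dbi code and are not a verbatim copy:

#include <stdlib.h>
#include <dbi/dbi.h>

struct og_dbi {
	dbi_conn	conn;
	dbi_inst	inst;
};

static struct og_dbi *og_dbi_open_sketch(const char *host, const char *user,
					  const char *passwd, const char *db)
{
	struct og_dbi *dbi;

	dbi = calloc(1, sizeof(struct og_dbi));
	if (!dbi)
		return NULL;

	dbi_initialize_r(NULL, &dbi->inst);
	dbi->conn = dbi_conn_new_r("mysql", dbi->inst);
	if (!dbi->conn)
		goto err_shutdown;

	dbi_conn_set_option(dbi->conn, "host", host);
	dbi_conn_set_option(dbi->conn, "username", user);
	dbi_conn_set_option(dbi->conn, "password", passwd);
	dbi_conn_set_option(dbi->conn, "dbname", db);

	if (dbi_conn_connect(dbi->conn) < 0) {
		dbi_conn_close(dbi->conn);
		goto err_shutdown;
	}

	return dbi;

err_shutdown:
	/* the fix: release the libdbi instance member as well,
	 * otherwise its memory is leaked on every failed open */
	dbi_shutdown_r(dbi->inst);
	free(dbi);
	return NULL;
}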
| |
This method adds a procedure associated with a center to the database.
Required payload parameters are center and name; description is
optional.
Note: ogServer does not allow adding more than one procedure with the
same name and center.
Request:
POST /procedure/add
{
"center": "1",
"name": "procedure1",
"description": "My procedure"
}
Response:
200 OK
This commit also adds unit tests for /procedure/add POST method.
| |
/usr/bin/ld: src/schema.o:/home/soleta/opengnsys/ogServer/src/schema.c:50: multiple definition of `ogconfig'; src/main.o:/home/soleta/opengnsys/ogServer/src/main.c:31: first defined here
collect2: error: ld returned 1 exit status
make: *** [Makefile:411: ogserver] Error 1
| |
Simplify database update v3, no need for iteration.
Fixes: 12d8fff (#1037 Add disk type)
| |
Add ogServer support for parsing disk type data from the ogClient
refresh response and storing it in the DB.
See also commits tagged #1037 in the ogClient and WebConsole repos.
| |
Otherwise, ogServer rejects the response if ogClient sends more
parameters than required.
| |
This method deletes a room (lab) from the DB and deletes, on cascade,
its computers and computer partitions.
Note: if the room id does not exist in the database, ogServer still
tries to delete it and replies with 200 OK.
Request:
POST /room/delete
{
"id": "1"
}
Response:
200 OK
| |
This method deletes a center from the DB and deletes, on cascade, its
rooms/labs, computers and computer partitions.
Note: if the center id does not exist in the database, ogServer still
tries to delete it and replies with 200 OK.
Request:
POST /center/delete
{
"id": "1"
}
Response:
200 OK
| |
If ogClient sends an unknown attribute, ignore it.
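A minimal jansson-style sketch of the skip-unknown pattern; the attribute name and helper are illustrative, not ogServer's actual parser:

#include <string.h>
#include <syslog.h>
#include <jansson.h>

static int og_json_parse_sketch(json_t *element)
{
	const char *key;
	json_t *value;

	json_object_foreach(element, key, value) {
		if (!strcmp(key, "serial_number")) {
			/* handle the attributes we know about ... */
		} else {
			/* unknown attribute: log it and carry on
			 * instead of rejecting the whole message */
			syslog(LOG_DEBUG, "ignoring attribute %s\n", key);
		}
	}

	return 0;
}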
| |
If a probe response contains speed information, parse and store
it inside the client struct. Speed is interpreted as an unsigned
integer representing Mbit/s.
| |
Update license header in files.
| |
The socket hidra API has been removed; all connections use the REST API.
| |
It was needed by the old socket Hydra, which does not exist anymore.
| |
Avoids multiple entries for the same client, like
{"clients": [{"addr": "192.168.2.230", "state": "WOL_SENT"}, {"addr": "192.168.2.230", "state": "OPG"}]}
These can arise when ogServer processes a WoL request for an already
connected client.
When processing the WoL request, search for the target address in the
clients list; if it is found, avoid creating the WoL entry.
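A sketch of the lookup, assuming a BSD sys/queue list of connected clients; ogServer keeps its own client list type, so the names here are illustrative:

#include <stdbool.h>
#include <string.h>
#include <sys/queue.h>

struct og_client_entry {
	char				addr[32];
	LIST_ENTRY(og_client_entry)	list;
};

LIST_HEAD(og_client_list, og_client_entry);

/* Return true if addr is already in the client list; the caller then
 * skips creating the WOL_SENT entry, avoiding the duplicate. */
static bool og_client_addr_exists(struct og_client_list *clients,
				  const char *addr)
{
	struct og_client_entry *cli;

	LIST_FOREACH(cli, clients, list) {
		if (!strcmp(cli->addr, addr))
			return true;
	}

	return false;
}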
| |
WOL_SENT indicates that a WakeOnLAN packet was sent to the computer; if
the computer does not boot after 60 seconds, this state is released.
| |
og_json_client_append() adds a client object to the json tree.
| |
Add foreign keys (version 1 introduced InnoDB as the default DB engine)
allowing cascade deletions for some tables:
- perfilessoft_softwares
If a software profile or a software component is deleted, the
corresponding row in this table will be deleted too.
- ordenadores_particiones
If a computer or a partition is deleted from the DB, the
corresponding row in this table is deleted too.
- aulas
If the center a room belongs to is removed, the room is deleted too.
- ordenadores
If the room a computer is in is removed, the computer is deleted
accordingly.
Take into account that this schema supersedes some deletion code inside
WebConsole that is probably not needed any more, at least for the tables
mentioned.
(See admin/WebConsole/gestores/relaciones/*.php in OpenGnsys repo)
| |
Enable TCP keepalive to detect if the ogClient is gone (hard reset). If
there is no reply after 120 seconds, release the connection to the client.
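A sketch of the socket options involved, assuming Linux-specific TCP keepalive knobs; the timing values are illustrative and roughly add up to the 120-second window:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* ~60s idle + 4 probes * 15s apart ~= 120s until a dead peer is detected */
static int og_socket_enable_keepalive(int fd)
{
	int on = 1, idle = 60, intvl = 15, cnt = 4;

	if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
		return -1;
	if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
		return -1;
	if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
		return -1;

	return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));
}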
| |
Adds a POST method to add rooms (labs). Required payload parameters are
name, netmask and center; any additional attributes are optional.
Required JSON:
{ "center": 0,
"name": "classroom10",
"netmask": "255.255.255.0" }
Full JSON:
{ "center": 0,
"name": "classroom11",
"netmask": "255.255.255.0",
"group": 0,
"location": "First floor",
"gateway": "192.168.56.1",
"ntp": "hora.cica.es",
"dns": "1.1.1.1",
"remote": True }
This commit also adds unit tests for /room/add POST method.
| |
This patch adds database schema management capabilities to ogServer:
- ogServer now tracks the version of its database schema; if no version
is detected, it creates a 'version' table with a single row starting at 0.
- ogServer can upgrade its database schema to a newer version if one is
available (ogServer ships the required SQL commands to do so).
If ogServer is unable to upgrade the schema at startup (should that be
needed), it *will not* start.
Defines schema update v1, which migrates the tables of the ogServer
database (usually named 'ogAdmBD') from MyISAM to InnoDB.
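A rough sketch of the upgrade loop described above; the update table and symbol names are assumptions, the real SQL ships with ogServer's schema code:

#include <syslog.h>
#include <dbi/dbi.h>

typedef int (*og_schema_update_fn)(dbi_conn conn);

/* Hypothetical layout: updates[N] upgrades the schema from version N to
 * version N + 1. If any step fails, ogServer refuses to start. */
static int og_dbi_schema_update_sketch(dbi_conn conn, unsigned int version,
				       const og_schema_update_fn *updates,
				       unsigned int num_updates)
{
	unsigned int i;

	for (i = version; i < num_updates; i++) {
		if (updates[i](conn) < 0) {
			syslog(LOG_ERR, "failed to update schema to v%u\n", i + 1);
			return -1;
		}
		syslog(LOG_INFO, "database schema updated to v%u\n", i + 1);
	}

	return 0;
}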
| |
GET /scope could generate a response larger than 64 Kbytes.
Raise the maximum REST API response size to 256 Kbytes.
| |
Otherwise, ogServer sends "200 OK" after a "500 Internal Server Error"
response.
| |
Otherwise, copying the response json to the response buffer could lead
to stack smashing if the json response is too large.
stdout example:
*** stack smashing detected ***: <unknown> terminated
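A sketch of the guard, assuming the response is serialized with jansson into a fixed reply buffer; buffer handling and names are illustrative:

#include <string.h>
#include <jansson.h>

/* Only serialize the json response if it fits in the reply buffer,
 * instead of overflowing an on-stack array. */
static int og_rest_dump_json(char *buf, size_t buflen, const json_t *root)
{
	size_t len;

	len = json_dumpb(root, NULL, 0, 0);	/* size query, writes nothing */
	if (len == 0 || len >= buflen)
		return -1;

	json_dumpb(root, buf, buflen, 0);
	buf[len] = '\0';

	return 0;
}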
| |
Several universities have reported that creating a software profile
hangs the machine running ogServer for a while, sometimes up to
minutes.
Legacy SQL code is producing said bottleneck; it is responsible for
pruning an intermediate table between "perfilessoft" and "softwares".
There is redundant code: "perfilessoft" should be pruned first, speeding
up the later task of pruning the intermediate table
"perfilessoft_softwares".
There is no need to execute:
DELETE FROM perfilessoft_softwares
WHERE idperfilsoft IN (
SELECT idperfilsoft
FROM perfilessoft
WHERE idperfilsoft NOT IN (
SELECT DISTINCT idperfilsoft
from ordenadores_particiones)
AND idperfilsoft NOT IN (
SELECT DISTINCT idperfilsoft from imagenes))
since "perfilessoft" is pruned afterwards and "perfilessoft_softwares"
is pruned again anyway:
DELETE FROM perfilessoft WHERE idperfilsoft NOT IN
(SELECT DISTINCT idperfilsoft from ordenadores_particiones)
AND idperfilsoft NOT IN
(SELECT DISTINCT idperfilsoft from imagenes)
DELETE FROM perfilessoft_softwares WHERE idperfilsoft NOT IN
(SELECT idperfilsoft from perfilessoft)
The two latter commands suffice.
This should not happen when using a relational database supporting
foreign keys and ON DELETE CASCADE, like InnoDB, which will be adopted
soon.
| |
mktime() modifies the struct tm it receives and takes into account
whether DST is active or not (tm_isdst). tm_isdst == 0 adjusts the time,
which causes the time mismatch error.
All fields were being initialized to 0, therefore it was assumed that
the time passed is not in daylight saving time.
When tm.tm_isdst is negative, mktime() is left to guess whether daylight
saving time is in effect or not; this works 99% of the time.
The best approach would be for ogServer to know its timezone and when
daylight saving applies, so tm_isdst can be set to 0 or 1 accordingly.
Meanwhile, "tm_isdst = -1" provides the hotfix.
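A minimal standalone example of the hotfix: leaving tm_isdst at -1 lets mktime() decide whether DST applies to the given local time:

#include <stdio.h>
#include <time.h>

int main(void)
{
	struct tm tm = {0};
	time_t t;

	tm.tm_year = 2020 - 1900;	/* years since 1900 */
	tm.tm_mon = 6;			/* July, months are 0-based */
	tm.tm_mday = 15;
	tm.tm_hour = 10;
	tm.tm_isdst = -1;		/* let mktime() guess DST; 0 would force
					 * "no DST" and shift the time */

	t = mktime(&tm);
	printf("%s", ctime(&t));

	return 0;
}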
| |
Adds a POST method to add centers (organizational units). The required
payload parameter is the name; an additional comment is optional.
{"name": "ACME"}
{"name": "ACME", "comment": "Some comment"}
| |
/create/image adds an entry to the database for the given partition
image when the payload contains a "description" attribute. This
insertion into the database lacks a check for duplicates, which are
not supported in the images table.
Add a duplicate check before inserting. Exit with code -1 if an
image with the same name is found.
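A hedged sketch of the duplicate lookup with libdbi; the table and column names are an assumption based on the legacy schema, and the real code must also escape the name:

#include <dbi/dbi.h>

/* Return 1 if an image with this name already exists, 0 if not,
 * -1 on query error. The caller aborts the insertion on a duplicate. */
static int og_dbi_image_exists(dbi_conn conn, const char *name)
{
	dbi_result result;
	int exists;

	result = dbi_conn_queryf(conn,
				 "SELECT idimagen FROM imagenes WHERE nombreca='%s'",
				 name);
	if (!result)
		return -1;

	exists = dbi_result_get_numrows(result) > 0;
	dbi_result_free(result);

	return exists;
}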
| |
This function returns the ogLive images installed in the server and
available to boot from.
Request:
GET /oglive/list
NO BODY
Response
200 OK
{
"oglive": [
{
"distribution": "bionic",
"kernel": "5.4.0-40-generic",
"architecture": "amd64",
"revision": "r20200629",
"directory": "ogLive-5.4.0-r20200629",
"iso": "ogLive-bionic-5.4.0-40-generic-amd64-r20200629.85eceaf.iso"
},
{
"distribution": "bionic",
"kernel": "5.0.0-27-generic",
"architecture": "amd64",
"revision": "r20190830",
"directory": "ogLive-5.0.0-r20190830",
"iso": "ogLive-bionic-5.0.0-27-generic-amd64-r20190830.7208cc9.iso"
}
],
"default": 0
}
This commit also adds tests for the GET /oglive/list method.
| |
Fix incorrect error if json is missing.
| |
==28831== 1 errors in context 1 of 2:
==28831== Invalid read of size 1
==28831== at 0x55AC6FD: inet_aton (inet_addr.c:127)
==28831== by 0x10ECCA: WakeUp (ogAdmServer.c:337)
==28831== by 0x10EED6: Levanta (ogAdmServer.c:292)
==28831== by 0x11651E: og_cmd_wol (rest.c:498)
==28831== by 0x11651E: og_client_state_process_payload_rest (rest.c:3970)
==28831== by 0x110CF3: og_client_read_cb (core.c:143)
==28831== by 0x4E41D72: ev_invoke_pending (in /usr/lib/x86_64-linux-gnu/libev.so.4.0.0)
==28831== by 0x4E453DD: ev_run (in /usr/lib/x86_64-linux-gnu/libev.so.4.0.0)
==28831== by 0x10E3E5: ev_loop (ev.h:835)
==28831== by 0x10E3E5: main (main.c:100)
==28831== Address 0x0 is not stack'd, malloc'd or (recently) free'd
Use the number of matching IP addresses in the database; skip if zero.
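A sketch of the guard: only read the queried address when the query actually matched a row; the helper and column names are illustrative:

#include <stdio.h>
#include <syslog.h>
#include <dbi/dbi.h>

static int og_dbi_client_addr(dbi_conn conn, int client_id,
			      char *addr, size_t addrlen)
{
	dbi_result result;

	result = dbi_conn_queryf(conn,
				 "SELECT ip FROM ordenadores WHERE idordenador=%d",
				 client_id);
	if (!result)
		return -1;

	if (dbi_result_get_numrows(result) == 0) {
		/* this is the case that made inet_aton() read from NULL */
		syslog(LOG_ERR, "no computer matches id %d\n", client_id);
		dbi_result_free(result);
		return -1;
	}

	dbi_result_next_row(result);
	snprintf(addr, addrlen, "%s", dbi_result_get_string(result, "ip"));
	dbi_result_free(result);

	return 0;
}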
| |
Otherwise dbi_result_get_uint returns 0.
| |
ogclient might return an empty serial number.
| |
ogServer gets the netmask from the computers (ordenadores) table, see
commit a35b7c4. That netmask field is empty in most cases: it is only
filled when the user adds computers with dhcpd.conf syntax, and it
cannot be edited in the computer properties view.
The labs/rooms (aulas) table also has a netmask field; the WebConsole
backend ensures it is not empty and it can be edited in the lab
properties view.
Get the netmask from the labs table to ensure it is not empty.
| |
inet_aton() reports 0 on failure
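For reference, the return value must be checked, since inet_aton() reports 0 when the string is not a valid address:

#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
	struct in_addr addr;

	if (inet_aton("not-an-ip", &addr) == 0) {
		fprintf(stderr, "invalid address\n");
		return 1;
	}

	printf("%s\n", inet_ntoa(addr));
	return 0;
}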
| |
Tests for e68fefe were made after 12:00 (noon), so we did not cover the
<12:00 cases for immediate commands that are logged (scheduled for the
exact moment they are processed, ignoring the fact that they are stale,
so they are executed right away).
In addition, libdbi was complaining about the data type used to
represent the hours; they were not being inserted properly. From syslog:
failed to query database (og_dbi_schedule_create:3288) 1264: Out of
range value for column 'horas' at row 1
Fix og_tm_hours_mask so <12:00 immediate schedules are handled correctly.
Change the return type to uint16_t, as the 'hours' column type is smallint(4).
Fixes e68fefe ("#997 Set stale check flag when processing schedule/create")
| |
Commit e68fefe introduced the 'check_stale' flag to better distinguish
real scheduled actions, which do not execute if they are stale, from
immediate actions that we want to be logged in the action queue
(by creating a decoy schedule for the exact moment they are processed,
meaning that we ignore whether they are stale).
Add this feature to schedule update too, in order to avoid executing
stale commands that were not meant to run, i.e. real scheduled commands.
Follows e68fefe ("Set stale check flag when processing schedule/create")
| |
Return an error if the json parser fails; ignore unknown json attributes.
Also fix a missing initialization of the error value.
| |
After executing a scheduled command/procedure/task, valgrind reported
leaks inside og_dbi_queue_{command,procedure,task}. String
duplications were not being freed after use.
==21281== 36 bytes in 1 blocks are definitely lost in loss record 470 of
592
...
==21281== by 0x113DCB: og_dbi_queue_procedure (rest.c:2748)
==21281== by 0x113F91: og_dbi_queue_task (rest.c:2804)
==21281== by 0x114392: og_schedule_run (rest.c:2916)
==21281== by 0x112059: og_agent_timer_cb (schedule.c:441)
...
==21281== by 0x10E2A5: main (main.c:100)
These strdup calls are not necessary: the dbi result is not freed before
the strings are used, so it is safe to use the dbi result's own
reference to the string.
Fix the previous memleaks when executing scheduled commands, procedures
and tasks.
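A sketch of the change: dbi_result_get_string() returns a pointer owned by the result object, so while the result is alive there is no need to strdup() it; the field name is illustrative:

#include <dbi/dbi.h>

static void og_queue_sketch(dbi_result result)
{
	const char *name;

	/* before: name = strdup(dbi_result_get_string(result, "nombre"));
	 *         the copy was never freed, leaking on every run */
	dbi_result_next_row(result);
	name = dbi_result_get_string(result, "nombre");

	/* ... use name to queue the command ... */
	(void)name;

	dbi_result_free(result);	/* name becomes invalid from here on */
}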
| |
If you schedule a command in the past, the scheduler executes such a
command immediately.
When expanding a schedule that results in commands that run weekly,
commands in the past are also executed, which is not expected.
Fix this by using the check_stale flag (formerly on_start) so that
commands in the past that result from expansions are skipped.
| |
The image_json object is created to store the json representation of
an image returned by the database. This object is appended to a json
array that composes the overall root json object.
Use json_array_append_new() to let "images" steal the reference to
image_json, so that after the later decref(root) no json reference is
left hanging around.