Is it normal that the table import_job_log is this large?

I just noticed that my database is somewhat large for the number of items/products.
After investigating, I found that my top 8 largest tables are:

+-------------------------+------------+
| Table                   | Size in MB |
+-------------------------+------------+
| import_job_log          |    7604.89 |
| product_attribute_value |     314.80 |
| note                    |     204.80 |
| export_job              |      40.44 |
| note_user               |      30.13 |
| queue_item              |      23.11 |
| action_history_record   |      10.09 |
| product                 |       7.22 |
+-------------------------+------------+

Why do I have a bit over 11 000 000 records in import_job_log?
Would it cause any problem for the system if I cleaned this table of every record older than, say, 14 days?
Edit: The oldest row is from 2023-05-24, and the exact number right now is 11878539.
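In plain SQL, the cleanup I have in mind would be roughly the following (just a sketch; I'm assuming the timestamp column is called created_at, which may not be its actual name):

-- delete log rows older than 14 days (created_at is an assumption)
DELETE FROM import_job_log
WHERE created_at < NOW() - INTERVAL 14 DAY;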

Import Job Log is an entity where we keep all information about import actions: how many records were created, updated, deleted, or skipped. This information needs to be deleted from time to time. When an Import Job is deleted, all related import job logs are deleted too. You can delete import jobs manually or via Scheduled Jobs. For Scheduled Jobs, go to Administration / Scheduled Jobs and create an Import Job Remover.
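If you want to see which import jobs still hold the most log rows, a rough query like this can help (it assumes the log table references the job via an import_job_id column; the column name may differ in your version):

-- count remaining log rows per import job (import_job_id is an assumption)
SELECT import_job_id, COUNT(*) AS log_rows
FROM import_job_log
GROUP BY import_job_id
ORDER BY log_rows DESC
LIMIT 20;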

I have set up and run the scheduled job “Import Job Remover” once every day, but I can't see any difference. Are there other jobs that have to run before the rows are deleted?
For example, the system jobs “Remove Deleted Items” or “Delete Jobs”?

Deletion works in two steps. First, the system marks the record as deleted (there is a special column for this). The second step is to delete the record permanently, which the system does via a special job. All records already marked as deleted will be deleted permanently after 2 months. We have done this because sometimes customers remove records by mistake. If you want to delete records right now, just do it manually in the database: delete all records marked as deleted.
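In plain SQL, that manual cleanup would be roughly:

-- remove rows already marked as deleted (the deleted column is non-zero for them)
DELETE FROM import_job_log WHERE deleted <> 0;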

Thank you for the info. I have seen the “deleted” field, but none of the rows in my table have anything other than 0 in it.

mysql> select ( select count(id) as total_rows from import_job_log ) as total_rows,
    -> ( select count(id) as deleted_rows from import_job_log WHERE deleted <> 0 ) as deleted_rows;
+------------+--------------+
| total_rows | deleted_rows |
+------------+--------------+
| 11 974 966 |            0 |
+------------+--------------+
1 row in set (5,94 sec)

But I will check on this next week, after more jobs have run, and see if any rows have been marked as deleted.

Please check whether you actually delete the import jobs. If you don't, the import job logs will continue to exist.
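A quick way to check this from the database side (a sketch using the table names mentioned in this thread):

-- if import_jobs stays high, the logs will not be removed either
SELECT
  (SELECT COUNT(*) FROM import_job)     AS import_jobs,
  (SELECT COUNT(*) FROM import_job_log) AS import_job_logs;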

For the moment I have bigger problems. Yesterday I tried to upgrade (1.10.35 => 1.10.40); it hung on something, and I had to attempt a restore, which didn't work.
Not even importing an SQL dump into the database was working.
So I ended up deleting all 12 million import_job_log rows from the .sql file, and then it could be imported and the system came alive again on version 1.10.35.
I'm now in the process of trying the upgrade again to see if this fixes it.
As I write this, the upgrade has been stuck for 45 minutes on “Run migration Atro 1.10.37”.

Edit: It has now been stuck for almost 3 hours on “Run migration Atro 1.10.37”. How long should I wait before rebooting and running “php composer.phar restore”?

Edit 2: I have now found what I think is my problem. Looking at the processes inside MySQL, I can see that it is updating the table 'note' (this is what migration V1dot10dot37 does). It is making progress, but it takes FOREVER. I'm guessing some settings differ on this server, because the same migration takes seconds on another server with the same database.
So now I know to just wait it out, in the worst case until tomorrow.
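(For reference, the running statements can be watched from the MySQL side with a standard command like this, which is roughly what I did:)

-- lists currently executing statements, including the long-running UPDATE on note
SHOW FULL PROCESSLIST;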

More updates:
Yesterday's migration and upgrade was successful after a bit under 8 hours.
I was clearly seeing slowness in the database while the SQL update was running.
The same slowness doesn't exist on my test server.
I tried updating the “deleted” field, and this returned OK on 2887 rows; it took 0.92 sec on my test server and 5.84 sec on my PROD server.

I think something is wrong with my PROD database that makes UPDATE statements slow, and jobs like “xxxxx Job Remover” never got through updating 12 million records. I now have only newly created rows from this morning in import_job, import_job_log and export_job, and I have activated the cleaning jobs to see if this keeps the number of rows to a minimum going forward.
Almost 110 000 rows are created in import_job_log every morning.
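If you want to keep an eye on that growth, something like this should count new rows per day (again assuming a created_at timestamp column, which may be named differently):

-- daily row counts for import_job_log (created_at is an assumption)
SELECT DATE(created_at) AS day, COUNT(*) AS new_rows
FROM import_job_log
GROUP BY DATE(created_at)
ORDER BY day DESC
LIMIT 14;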

There was a bug in Import Job deletion. We have already fixed it, so just keep your installation up to date. I also recommend using PostgreSQL for the production server. PostgreSQL is much faster and better from all points of view.