Nov 1 02:20:54.555964 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 02:20:54.555977 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 02:20:54.555984 kernel: BIOS-provided physical RAM map:
Nov 1 02:20:54.555988 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Nov 1 02:20:54.555992 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Nov 1 02:20:54.555996 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Nov 1 02:20:54.556000 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Nov 1 02:20:54.556004 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Nov 1 02:20:54.556008 kernel: BIOS-e820: [mem 0x0000000040400000-0x00000000819c3fff] usable
Nov 1 02:20:54.556012 kernel: BIOS-e820: [mem 0x00000000819c4000-0x00000000819c4fff] ACPI NVS
Nov 1 02:20:54.556017 kernel: BIOS-e820: [mem 0x00000000819c5000-0x00000000819c5fff] reserved
Nov 1 02:20:54.556020 kernel: BIOS-e820: [mem 0x00000000819c6000-0x000000008afcdfff] usable
Nov 1 02:20:54.556024 kernel: BIOS-e820: [mem 0x000000008afce000-0x000000008c0b2fff] reserved
Nov 1 02:20:54.556028 kernel: BIOS-e820: [mem 0x000000008c0b3000-0x000000008c23bfff] usable
Nov 1 02:20:54.556033 kernel: BIOS-e820: [mem 0x000000008c23c000-0x000000008c66dfff] ACPI NVS
Nov 1 02:20:54.556038 kernel: BIOS-e820: [mem 0x000000008c66e000-0x000000008eefefff] reserved
Nov 1 02:20:54.556043 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Nov 1 02:20:54.556047 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Nov 1 02:20:54.556051 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 02:20:54.556055 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Nov 1 02:20:54.556059 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Nov 1 02:20:54.556064 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Nov 1 02:20:54.556068 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Nov 1 02:20:54.556072 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Nov 1 02:20:54.556076 kernel: NX (Execute Disable) protection: active
Nov 1 02:20:54.556080 kernel: SMBIOS 3.2.1 present.
Nov 1 02:20:54.556085 kernel: DMI: Supermicro PIO-519C-MR-PH004/X11SCH-F, BIOS 2.6 12/05/2024
Nov 1 02:20:54.556090 kernel: tsc: Detected 3400.000 MHz processor
Nov 1 02:20:54.556094 kernel: tsc: Detected 3399.906 MHz TSC
Nov 1 02:20:54.556098 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 02:20:54.556103 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 02:20:54.556107 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Nov 1 02:20:54.556112 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 02:20:54.556116 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Nov 1 02:20:54.556121 kernel: Using GB pages for direct mapping
Nov 1 02:20:54.556125 kernel: ACPI: Early table checksum verification disabled
Nov 1 02:20:54.556130 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Nov 1 02:20:54.556134 kernel: ACPI: XSDT 0x000000008C54F0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Nov 1 02:20:54.556139 kernel: ACPI: FACP 0x000000008C58B5F0 000114 (v06 01072009 AMI 00010013)
Nov 1 02:20:54.556143 kernel: ACPI: DSDT 0x000000008C54F268 03C386 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Nov 1 02:20:54.556150 kernel: ACPI: FACS 0x000000008C66DF80 000040
Nov 1 02:20:54.556154 kernel: ACPI: APIC 0x000000008C58B708 00012C (v04 01072009 AMI 00010013)
Nov 1 02:20:54.556160 kernel: ACPI: FPDT 0x000000008C58B838 000044 (v01 01072009 AMI 00010013)
Nov 1 02:20:54.556165 kernel: ACPI: FIDT 0x000000008C58B880 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Nov 1 02:20:54.556169 kernel: ACPI: MCFG 0x000000008C58B920 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Nov 1 02:20:54.556174 kernel: ACPI: SPMI 0x000000008C58B960 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Nov 1 02:20:54.556179 kernel: ACPI: SSDT 0x000000008C58B9A8 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Nov 1 02:20:54.556184 kernel: ACPI: SSDT 0x000000008C58D4C8 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Nov 1 02:20:54.556188 kernel: ACPI: SSDT 0x000000008C590690 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Nov 1 02:20:54.556193 kernel: ACPI: HPET 0x000000008C5929C0 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 02:20:54.556198 kernel: ACPI: SSDT 0x000000008C5929F8 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Nov 1 02:20:54.556203 kernel: ACPI: SSDT 0x000000008C5939A8 0008F7 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Nov 1 02:20:54.556208 kernel: ACPI: UEFI 0x000000008C5942A0 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 02:20:54.556213 kernel: ACPI: LPIT 0x000000008C5942E8 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 02:20:54.556217 kernel: ACPI: SSDT 0x000000008C594380 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Nov 1 02:20:54.556222 kernel: ACPI: SSDT 0x000000008C596B60 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Nov 1 02:20:54.556227 kernel: ACPI: DBGP 0x000000008C598048 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Nov 1 02:20:54.556231 kernel: ACPI: DBG2 0x000000008C598080 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Nov 1 02:20:54.556237 kernel: ACPI: SSDT 0x000000008C5980D8 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Nov 1 02:20:54.556242 kernel: ACPI: DMAR 0x000000008C599C40 000070 (v01 INTEL EDK2 00000002 01000013)
Nov 1 02:20:54.556246 kernel: ACPI: SSDT 0x000000008C599CB0 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Nov 1 02:20:54.556251 kernel: ACPI: TPM2 0x000000008C599DF8 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Nov 1 02:20:54.556256 kernel: ACPI: SSDT 0x000000008C599E30 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Nov 1 02:20:54.556261 kernel: ACPI: WSMT 0x000000008C59ABC0 000028 (v01 SUPERM 01072009 AMI 00010013)
Nov 1 02:20:54.556265 kernel: ACPI: EINJ 0x000000008C59ABE8 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Nov 1 02:20:54.556270 kernel: ACPI: ERST 0x000000008C59AD18 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Nov 1 02:20:54.556275 kernel: ACPI: BERT 0x000000008C59AF48 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Nov 1 02:20:54.556280 kernel: ACPI: HEST 0x000000008C59AF78 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Nov 1 02:20:54.556285 kernel: ACPI: SSDT 0x000000008C59B1F8 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Nov 1 02:20:54.556290 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58b5f0-0x8c58b703]
Nov 1 02:20:54.556294 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54f268-0x8c58b5ed]
Nov 1 02:20:54.556299 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66df80-0x8c66dfbf]
Nov 1 02:20:54.556304 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58b708-0x8c58b833]
Nov 1 02:20:54.556308 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58b838-0x8c58b87b]
Nov 1 02:20:54.556313 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58b880-0x8c58b91b]
Nov 1 02:20:54.556318 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58b920-0x8c58b95b]
Nov 1 02:20:54.556323 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58b960-0x8c58b9a0]
Nov 1 02:20:54.556328 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58b9a8-0x8c58d4c3]
Nov 1 02:20:54.556333 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58d4c8-0x8c59068d]
Nov 1 02:20:54.556337 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c590690-0x8c5929ba]
Nov 1 02:20:54.556342 kernel: ACPI: Reserving HPET table memory at [mem 0x8c5929c0-0x8c5929f7]
Nov 1 02:20:54.556347 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5929f8-0x8c5939a5]
Nov 1 02:20:54.556351 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5939a8-0x8c59429e]
Nov 1 02:20:54.556359 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c5942a0-0x8c5942e1]
Nov 1 02:20:54.556364 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c5942e8-0x8c59437b]
Nov 1 02:20:54.556391 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c594380-0x8c596b5d]
Nov 1 02:20:54.556396 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c596b60-0x8c598041]
Nov 1 02:20:54.556401 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c598048-0x8c59807b]
Nov 1 02:20:54.556406 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c598080-0x8c5980d3]
Nov 1 02:20:54.556410 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c5980d8-0x8c599c3e]
Nov 1 02:20:54.556415 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c599c40-0x8c599caf]
Nov 1 02:20:54.556420 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599cb0-0x8c599df3]
Nov 1 02:20:54.556439 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c599df8-0x8c599e2b]
Nov 1 02:20:54.556445 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c599e30-0x8c59abbe]
Nov 1 02:20:54.556450 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c59abc0-0x8c59abe7]
Nov 1 02:20:54.556454 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c59abe8-0x8c59ad17]
Nov 1 02:20:54.556459 kernel: ACPI: Reserving ERST table memory at [mem 0x8c59ad18-0x8c59af47]
Nov 1 02:20:54.556464 kernel: ACPI: Reserving BERT table memory at [mem 0x8c59af48-0x8c59af77]
Nov 1 02:20:54.556468 kernel: ACPI: Reserving HEST table memory at [mem 0x8c59af78-0x8c59b1f3]
Nov 1 02:20:54.556473 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59b1f8-0x8c59b359]
Nov 1 02:20:54.556478 kernel: No NUMA configuration found
Nov 1 02:20:54.556482 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Nov 1 02:20:54.556488 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Nov 1 02:20:54.556493 kernel: Zone ranges:
Nov 1 02:20:54.556498 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 02:20:54.556502 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 02:20:54.556507 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Nov 1 02:20:54.556512 kernel: Movable zone start for each node
Nov 1 02:20:54.556516 kernel: Early memory node ranges
Nov 1 02:20:54.556521 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Nov 1 02:20:54.556526 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Nov 1 02:20:54.556531 kernel: node 0: [mem 0x0000000040400000-0x00000000819c3fff]
Nov 1 02:20:54.556536 kernel: node 0: [mem 0x00000000819c6000-0x000000008afcdfff]
Nov 1 02:20:54.556541 kernel: node 0: [mem 0x000000008c0b3000-0x000000008c23bfff]
Nov 1 02:20:54.556546 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Nov 1 02:20:54.556551 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Nov 1 02:20:54.556555 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Nov 1 02:20:54.556560 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 02:20:54.556568 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Nov 1 02:20:54.556574 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Nov 1 02:20:54.556579 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Nov 1 02:20:54.556584 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Nov 1 02:20:54.556590 kernel: On node 0, zone DMA32: 11459 pages in unavailable ranges
Nov 1 02:20:54.556596 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Nov 1 02:20:54.556601 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Nov 1 02:20:54.556606 kernel: ACPI: PM-Timer IO Port: 0x1808
Nov 1 02:20:54.556611 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Nov 1 02:20:54.556616 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Nov 1 02:20:54.556621 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Nov 1 02:20:54.556627 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Nov 1 02:20:54.556632 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Nov 1 02:20:54.556637 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Nov 1 02:20:54.556642 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Nov 1 02:20:54.556646 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Nov 1 02:20:54.556651 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Nov 1 02:20:54.556656 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Nov 1 02:20:54.556661 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Nov 1 02:20:54.556666 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Nov 1 02:20:54.556672 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Nov 1 02:20:54.556677 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Nov 1 02:20:54.556682 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Nov 1 02:20:54.556687 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Nov 1 02:20:54.556692 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Nov 1 02:20:54.556697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 02:20:54.556702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 02:20:54.556708 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 02:20:54.556713 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 02:20:54.556718 kernel: TSC deadline timer available
Nov 1 02:20:54.556724 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Nov 1 02:20:54.556729 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Nov 1 02:20:54.556734 kernel: Booting paravirtualized kernel on bare hardware
Nov 1 02:20:54.556739 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 02:20:54.556744 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Nov 1 02:20:54.556749 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Nov 1 02:20:54.556754 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Nov 1 02:20:54.556759 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 1 02:20:54.556765 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232416
Nov 1 02:20:54.556770 kernel: Policy zone: Normal
Nov 1 02:20:54.556776 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 02:20:54.556781 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 02:20:54.556786 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Nov 1 02:20:54.556791 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Nov 1 02:20:54.556796 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 02:20:54.556802 kernel: Memory: 32722608K/33452984K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 730116K reserved, 0K cma-reserved)
Nov 1 02:20:54.556807 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 1 02:20:54.556812 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 02:20:54.556817 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 02:20:54.556822 kernel: rcu: Hierarchical RCU implementation.
Nov 1 02:20:54.556828 kernel: rcu: RCU event tracing is enabled.
Nov 1 02:20:54.556833 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 1 02:20:54.556838 kernel: Rude variant of Tasks RCU enabled.
Nov 1 02:20:54.556843 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 02:20:54.556849 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 02:20:54.556854 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 1 02:20:54.556859 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Nov 1 02:20:54.556864 kernel: random: crng init done
Nov 1 02:20:54.556869 kernel: Console: colour dummy device 80x25
Nov 1 02:20:54.556874 kernel: printk: console [tty0] enabled
Nov 1 02:20:54.556879 kernel: printk: console [ttyS1] enabled
Nov 1 02:20:54.556884 kernel: ACPI: Core revision 20210730
Nov 1 02:20:54.556889 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 79635855245 ns
Nov 1 02:20:54.556895 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 02:20:54.556901 kernel: DMAR: Host address width 39
Nov 1 02:20:54.556906 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Nov 1 02:20:54.556911 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Nov 1 02:20:54.556916 kernel: DMAR: RMRR base: 0x0000008cf19000 end: 0x0000008d162fff
Nov 1 02:20:54.556921 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Nov 1 02:20:54.556926 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Nov 1 02:20:54.556931 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Nov 1 02:20:54.556936 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Nov 1 02:20:54.556941 kernel: x2apic enabled
Nov 1 02:20:54.556947 kernel: Switched APIC routing to cluster x2apic.
Nov 1 02:20:54.556952 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 02:20:54.556957 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Nov 1 02:20:54.556962 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Nov 1 02:20:54.556967 kernel: CPU0: Thermal monitoring enabled (TM1)
Nov 1 02:20:54.556972 kernel: process: using mwait in idle threads
Nov 1 02:20:54.556977 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 02:20:54.556982 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 02:20:54.556987 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 02:20:54.556993 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Nov 1 02:20:54.556999 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 02:20:54.557004 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 02:20:54.557009 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Nov 1 02:20:54.557014 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Nov 1 02:20:54.557019 kernel: RETBleed: Mitigation: Enhanced IBRS
Nov 1 02:20:54.557024 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 02:20:54.557029 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 02:20:54.557035 kernel: TAA: Mitigation: TSX disabled
Nov 1 02:20:54.557040 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Nov 1 02:20:54.557045 kernel: SRBDS: Mitigation: Microcode
Nov 1 02:20:54.557050 kernel: GDS: Mitigation: Microcode
Nov 1 02:20:54.557055 kernel: active return thunk: its_return_thunk
Nov 1 02:20:54.557060 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 02:20:54.557065 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 02:20:54.557070 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 02:20:54.557075 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 02:20:54.557081 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 1 02:20:54.557086 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 1 02:20:54.557091 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 02:20:54.557096 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 1 02:20:54.557101 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 1 02:20:54.557106 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Nov 1 02:20:54.557111 kernel: Freeing SMP alternatives memory: 32K
Nov 1 02:20:54.557116 kernel: pid_max: default: 32768 minimum: 301
Nov 1 02:20:54.557121 kernel: LSM: Security Framework initializing
Nov 1 02:20:54.557127 kernel: SELinux: Initializing.
Nov 1 02:20:54.557132 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 02:20:54.557137 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 02:20:54.557142 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Nov 1 02:20:54.557147 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Nov 1 02:20:54.557152 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Nov 1 02:20:54.557157 kernel: ... version: 4
Nov 1 02:20:54.557162 kernel: ... bit width: 48
Nov 1 02:20:54.557167 kernel: ... generic registers: 4
Nov 1 02:20:54.557173 kernel: ... value mask: 0000ffffffffffff
Nov 1 02:20:54.557178 kernel: ... max period: 00007fffffffffff
Nov 1 02:20:54.557183 kernel: ... fixed-purpose events: 3
Nov 1 02:20:54.557188 kernel: ... event mask: 000000070000000f
Nov 1 02:20:54.557193 kernel: signal: max sigframe size: 2032
Nov 1 02:20:54.557198 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 02:20:54.557203 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Nov 1 02:20:54.557208 kernel: smp: Bringing up secondary CPUs ...
Nov 1 02:20:54.557213 kernel: x86: Booting SMP configuration:
Nov 1 02:20:54.557219 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Nov 1 02:20:54.557224 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 02:20:54.557229 kernel: #9 #10 #11 #12 #13 #14 #15
Nov 1 02:20:54.557234 kernel: smp: Brought up 1 node, 16 CPUs
Nov 1 02:20:54.557239 kernel: smpboot: Max logical packages: 1
Nov 1 02:20:54.557244 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Nov 1 02:20:54.557249 kernel: devtmpfs: initialized
Nov 1 02:20:54.557255 kernel: x86/mm: Memory block size: 128MB
Nov 1 02:20:54.557260 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x819c4000-0x819c4fff] (4096 bytes)
Nov 1 02:20:54.557265 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23c000-0x8c66dfff] (4399104 bytes)
Nov 1 02:20:54.557270 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 02:20:54.557275 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 1 02:20:54.557281 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 02:20:54.557286 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 02:20:54.557291 kernel: audit: initializing netlink subsys (disabled)
Nov 1 02:20:54.557296 kernel: audit: type=2000 audit(1761963649.119:1): state=initialized audit_enabled=0 res=1
Nov 1 02:20:54.557300 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 02:20:54.557305 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 02:20:54.557311 kernel: cpuidle: using governor menu
Nov 1 02:20:54.557316 kernel: ACPI: bus type PCI registered
Nov 1 02:20:54.557321 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 02:20:54.557326 kernel: dca service started, version 1.12.1
Nov 1 02:20:54.557331 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Nov 1 02:20:54.557336 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Nov 1 02:20:54.557342 kernel: PCI: Using configuration type 1 for base access
Nov 1 02:20:54.557347 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Nov 1 02:20:54.557351 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 02:20:54.557359 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 02:20:54.557364 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 02:20:54.557369 kernel: ACPI: Added _OSI(Module Device)
Nov 1 02:20:54.557395 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 02:20:54.557401 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 02:20:54.557406 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 02:20:54.557411 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 02:20:54.557416 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 02:20:54.557421 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Nov 1 02:20:54.557427 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 02:20:54.557432 kernel: ACPI: SSDT 0xFFFF9B6D80219A00 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Nov 1 02:20:54.557437 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Nov 1 02:20:54.557443 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 02:20:54.557448 kernel: ACPI: SSDT 0xFFFF9B6D81AE4800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Nov 1 02:20:54.557453 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 02:20:54.557458 kernel: ACPI: SSDT 0xFFFF9B6D81A59000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Nov 1 02:20:54.557463 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 02:20:54.557468 kernel: ACPI: SSDT 0xFFFF9B6D81B4B000 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Nov 1 02:20:54.557474 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 02:20:54.557479 kernel: ACPI: SSDT 0xFFFF9B6D80149000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Nov 1 02:20:54.557484 kernel: ACPI: Dynamic OEM Table Load:
Nov 1 02:20:54.557489 kernel: ACPI: SSDT 0xFFFF9B6D81AE2400 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Nov 1 02:20:54.557494 kernel: ACPI: Interpreter enabled
Nov 1 02:20:54.557499 kernel: ACPI: PM: (supports S0 S5)
Nov 1 02:20:54.557505 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 02:20:54.557510 kernel: HEST: Enabling Firmware First mode for corrected errors.
Nov 1 02:20:54.557515 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Nov 1 02:20:54.557521 kernel: HEST: Table parsing has been initialized.
Nov 1 02:20:54.557526 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Nov 1 02:20:54.557531 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 02:20:54.557536 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Nov 1 02:20:54.557541 kernel: ACPI: PM: Power Resource [USBC]
Nov 1 02:20:54.557547 kernel: ACPI: PM: Power Resource [V0PR]
Nov 1 02:20:54.557552 kernel: ACPI: PM: Power Resource [V1PR]
Nov 1 02:20:54.557557 kernel: ACPI: PM: Power Resource [V2PR]
Nov 1 02:20:54.557562 kernel: ACPI: PM: Power Resource [WRST]
Nov 1 02:20:54.557567 kernel: ACPI: PM: Power Resource [FN00]
Nov 1 02:20:54.557573 kernel: ACPI: PM: Power Resource [FN01]
Nov 1 02:20:54.557578 kernel: ACPI: PM: Power Resource [FN02]
Nov 1 02:20:54.557583 kernel: ACPI: PM: Power Resource [FN03]
Nov 1 02:20:54.557588 kernel: ACPI: PM: Power Resource [FN04]
Nov 1 02:20:54.557593 kernel: ACPI: PM: Power Resource [PIN]
Nov 1 02:20:54.557598 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Nov 1 02:20:54.557667 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 02:20:54.557718 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Nov 1 02:20:54.557766 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Nov 1 02:20:54.557773 kernel: PCI host bridge to bus 0000:00
Nov 1 02:20:54.557821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 02:20:54.557864 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 02:20:54.557905 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 02:20:54.557946 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Nov 1 02:20:54.557986 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Nov 1 02:20:54.558029 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Nov 1 02:20:54.558084 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Nov 1 02:20:54.558140 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Nov 1 02:20:54.558189 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Nov 1 02:20:54.558240 kernel: pci 0000:00:01.1: [8086:1905] type 01 class 0x060400
Nov 1 02:20:54.558287 kernel: pci 0000:00:01.1: PME# supported from D0 D3hot D3cold
Nov 1 02:20:54.558340 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Nov 1 02:20:54.558391 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Nov 1 02:20:54.558441 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Nov 1 02:20:54.558489 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Nov 1 02:20:54.558542 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Nov 1 02:20:54.558590 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Nov 1 02:20:54.558638 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Nov 1 02:20:54.558687 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Nov 1 02:20:54.558734 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Nov 1 02:20:54.558780 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Nov 1 02:20:54.558833 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Nov 1 02:20:54.558879 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 02:20:54.558931 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Nov 1 02:20:54.558976 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 02:20:54.559026 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Nov 1 02:20:54.559073 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Nov 1 02:20:54.559118 kernel: pci 0000:00:16.0: PME# supported from D3hot
Nov 1 02:20:54.559169 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Nov 1 02:20:54.559216 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Nov 1 02:20:54.559263 kernel: pci 0000:00:16.1: PME# supported from D3hot
Nov 1 02:20:54.559313 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Nov 1 02:20:54.559362 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Nov 1 02:20:54.559411 kernel: pci 0000:00:16.4: PME# supported from D3hot
Nov 1 02:20:54.559468 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Nov 1 02:20:54.559517 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Nov 1 02:20:54.559563 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Nov 1 02:20:54.559610 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Nov 1 02:20:54.559656 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Nov 1 02:20:54.559702 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Nov 1 02:20:54.559747 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Nov 1 02:20:54.559793 kernel: pci 0000:00:17.0: PME# supported from D3hot
Nov 1 02:20:54.559844 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Nov 1 02:20:54.559893 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Nov 1 02:20:54.559946 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Nov 1 02:20:54.559993 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Nov 1 02:20:54.560044 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Nov 1 02:20:54.560091 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Nov 1 02:20:54.560145 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Nov 1 02:20:54.560192 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Nov 1 02:20:54.560242 kernel: pci 0000:00:1c.1: [8086:a339] type 01 class 0x060400
Nov 1 02:20:54.560290 kernel: pci 0000:00:1c.1: PME# supported from D0 D3hot D3cold
Nov 1 02:20:54.560340 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Nov 1 02:20:54.560391 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Nov 1 02:20:54.560443 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Nov 1 02:20:54.560496 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Nov 1 02:20:54.560543 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Nov 1 02:20:54.560589 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Nov 1 02:20:54.560639 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Nov 1 02:20:54.560687 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Nov 1 02:20:54.560735 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 1 02:20:54.560788 kernel: pci 0000:02:00.0: [15b3:1015] type 00 class 0x020000
Nov 1 02:20:54.560838 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Nov 1 02:20:54.560886 kernel: pci 0000:02:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Nov 1 02:20:54.560934 kernel: pci 0000:02:00.0: PME# supported from D3cold
Nov 1 02:20:54.560982 kernel: pci 0000:02:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 1 02:20:54.561032 kernel: pci 0000:02:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 1 02:20:54.561085 kernel: pci 0000:02:00.1: [15b3:1015] type 00 class 0x020000
Nov 1 02:20:54.561135 kernel: pci 0000:02:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Nov 1 02:20:54.561183 kernel: pci 0000:02:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Nov 1 02:20:54.561232 kernel: pci 0000:02:00.1: PME# supported from D3cold
Nov 1 02:20:54.561279 kernel: pci 0000:02:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Nov 1 02:20:54.561327 kernel: pci 0000:02:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Nov 1 02:20:54.561380 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Nov 1 02:20:54.561427 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff]
Nov 1 02:20:54.561474 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Nov 1 02:20:54.561520 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Nov 1 02:20:54.561575 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Nov 1 02:20:54.561644 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Nov 1 02:20:54.561692 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Nov 1 02:20:54.561742 kernel: pci 0000:04:00.0: reg 0x18: [io 0x5000-0x501f]
Nov 1 02:20:54.561789 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Nov 1 02:20:54.561836 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Nov 1 02:20:54.561883 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Nov 1 02:20:54.561929 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Nov 1 02:20:54.561974 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Nov 1 02:20:54.562027 kernel: pci 0000:05:00.0: working around ROM BAR overlap defect
Nov 1 02:20:54.562130 kernel: pci 0000:05:00.0: [8086:1533] type 00 class 0x020000
Nov 1 02:20:54.562202 kernel: pci 0000:05:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Nov 1 02:20:54.562249 kernel: pci 0000:05:00.0: reg 0x18: [io 0x4000-0x401f]
Nov 1 02:20:54.562297 kernel: pci 0000:05:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Nov 1 02:20:54.562344 kernel: pci 0000:05:00.0: PME# supported from D0 D3hot D3cold
Nov 1 02:20:54.562438 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Nov 1 02:20:54.562486 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Nov 1 02:20:54.562531 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Nov 1 02:20:54.562579 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Nov 1 02:20:54.562632 kernel: pci 0000:07:00.0: [1a03:1150] type 01 class 0x060400
Nov 1 02:20:54.562680 kernel: pci 0000:07:00.0: enabling Extended Tags
Nov 1 02:20:54.562727 kernel: pci 0000:07:00.0: supports D1 D2
Nov 1 02:20:54.562776 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 1 02:20:54.562823 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Nov 1 02:20:54.562868 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Nov 1 02:20:54.562915 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff]
Nov 1 02:20:54.562968 kernel: pci_bus 0000:08: extended config space not accessible
Nov 1 02:20:54.563023 kernel: pci 0000:08:00.0: [1a03:2000] type 00 class 0x030000
Nov 1 02:20:54.563073 kernel: pci 0000:08:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Nov 1 02:20:54.563125 kernel: pci 0000:08:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Nov 1 02:20:54.563174 kernel: pci 0000:08:00.0: reg 0x18: [io 0x3000-0x307f]
Nov 1 02:20:54.563225 kernel: pci 0000:08:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 02:20:54.563274 kernel: pci 0000:08:00.0: supports D1 D2
Nov 1 02:20:54.563327 kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 1 02:20:54.563400 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Nov 1 02:20:54.563468 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Nov 1 02:20:54.563516 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff]
Nov 1 02:20:54.563524 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Nov 1 02:20:54.563529 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Nov 1 02:20:54.563535 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Nov 1 02:20:54.563540 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Nov 1 02:20:54.563547 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Nov 1 02:20:54.563552 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Nov 1 02:20:54.563558 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Nov 1 02:20:54.563563 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Nov 1 02:20:54.563568 kernel: iommu: Default domain type: Translated
Nov 1 02:20:54.563574 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 02:20:54.563623 kernel: pci 0000:08:00.0: vgaarb: setting as boot VGA device
Nov 1 02:20:54.563673 kernel: pci 0000:08:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 02:20:54.563725 kernel: pci 0000:08:00.0: vgaarb: bridge control possible
Nov 1 02:20:54.563733 kernel: vgaarb: loaded
Nov 1 02:20:54.563739 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 02:20:54.563744 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 02:20:54.563750 kernel: PTP clock support registered
Nov 1 02:20:54.563755 kernel: PCI: Using ACPI for IRQ routing
Nov 1 02:20:54.563761 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 02:20:54.563766 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Nov 1 02:20:54.563771 kernel: e820: reserve RAM buffer [mem 0x819c4000-0x83ffffff]
Nov 1 02:20:54.563778 kernel: e820: reserve RAM buffer [mem 0x8afce000-0x8bffffff]
Nov 1 02:20:54.563783 kernel: e820: reserve RAM buffer [mem 0x8c23c000-0x8fffffff]
Nov 1 02:20:54.563788 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Nov 1 02:20:54.563793 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Nov 1 02:20:54.563799 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 1 02:20:54.563804 kernel: hpet0: 8 comparators, 64-bit 24.000000 MHz counter
Nov 1 02:20:54.563809 kernel: clocksource: Switched to clocksource tsc-early
Nov 1 02:20:54.563815 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 02:20:54.563820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 02:20:54.563827 kernel: pnp: PnP ACPI init
Nov 1 02:20:54.563875 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Nov 1 02:20:54.563921 kernel: pnp 00:02: [dma 0 disabled]
Nov 1 02:20:54.563969 kernel: pnp 00:03: [dma 0 disabled]
Nov 1 02:20:54.564014 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Nov 1 02:20:54.564056 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Nov 1 02:20:54.564100 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Nov 1 02:20:54.564148 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Nov 1 02:20:54.564189 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Nov 1 02:20:54.564231 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Nov 1 02:20:54.564272 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Nov 1 02:20:54.564313 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Nov 1 02:20:54.564354 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Nov 1 02:20:54.564443 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Nov 1 02:20:54.564484 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Nov 1 02:20:54.564528 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Nov 1 02:20:54.564571 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Nov 1 02:20:54.564611 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Nov 1 02:20:54.564652 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Nov 1 02:20:54.564694 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Nov 1 02:20:54.564736 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Nov 1 02:20:54.564779 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Nov 1 02:20:54.564825 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Nov 1 02:20:54.564833 kernel: pnp: PnP ACPI: found 10 devices
Nov 1 02:20:54.564838 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 02:20:54.564844 kernel: NET: Registered PF_INET protocol family
Nov 1 02:20:54.564849 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 02:20:54.564855 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Nov 1 02:20:54.564862 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 02:20:54.564867 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 02:20:54.564872 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 1 02:20:54.564878 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Nov 1 02:20:54.564883 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 02:20:54.564889 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 02:20:54.564894 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 02:20:54.564900 kernel: NET: Registered PF_XDP protocol family
Nov 1 02:20:54.564947 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Nov 1 02:20:54.564996 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Nov 1 02:20:54.565043 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Nov 1 02:20:54.565090 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Nov 1 02:20:54.565138 kernel: pci 0000:02:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Nov 1 02:20:54.565186 kernel: pci 0000:02:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Nov 1 02:20:54.565234 kernel: pci 0000:02:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Nov 1 02:20:54.565282 kernel: pci 0000:02:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Nov 1 02:20:54.565329 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Nov 1 02:20:54.565404 kernel: pci 0000:00:01.1: bridge window [mem 0x95100000-0x952fffff]
Nov 1 02:20:54.565451 kernel: pci 0000:00:01.1: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Nov 1 02:20:54.565500 kernel: pci 0000:00:1b.0: PCI bridge to [bus 03]
Nov 1 02:20:54.565546 kernel: pci 0000:00:1b.4: PCI bridge to [bus 04]
Nov 1 02:20:54.565596 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Nov 1 02:20:54.565643 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Nov 1 02:20:54.565689 kernel: pci 0000:00:1b.5: PCI bridge to [bus 05]
Nov 1 02:20:54.565737 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Nov 1 02:20:54.565784 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Nov 1 02:20:54.565831 kernel: pci 0000:00:1c.0: PCI bridge to [bus 06]
Nov 1 02:20:54.565880 kernel: pci 0000:07:00.0: PCI bridge to [bus 08]
Nov 1 02:20:54.565929 kernel: pci 0000:07:00.0: bridge window [io 0x3000-0x3fff]
Nov 1 02:20:54.565978 kernel: pci 0000:07:00.0: bridge window [mem 0x94000000-0x950fffff]
Nov 1 02:20:54.566026 kernel: pci 0000:00:1c.1: PCI bridge to [bus 07-08]
Nov 1 02:20:54.566074 kernel: pci 0000:00:1c.1: bridge window [io 0x3000-0x3fff]
Nov 1 02:20:54.566120 kernel: pci 0000:00:1c.1: bridge window [mem 0x94000000-0x950fffff]
Nov 1 02:20:54.566164 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Nov 1 02:20:54.566206 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 02:20:54.566248 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 02:20:54.566289 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 02:20:54.566330 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Nov 1 02:20:54.566376 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Nov 1 02:20:54.566427 kernel: pci_bus 0000:02: resource 1 [mem 0x95100000-0x952fffff]
Nov 1 02:20:54.566472 kernel: pci_bus 0000:02: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Nov 1 02:20:54.566519 kernel: pci_bus 0000:04: resource 0 [io 0x5000-0x5fff]
Nov 1 02:20:54.566564 kernel: pci_bus 0000:04: resource 1 [mem 0x95400000-0x954fffff]
Nov 1 02:20:54.566632 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 1 02:20:54.566676 kernel: pci_bus 0000:05: resource 1 [mem 0x95300000-0x953fffff]
Nov 1 02:20:54.566724 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Nov 1 02:20:54.566768 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Nov 1 02:20:54.566813 kernel: pci_bus 0000:08: resource 0 [io 0x3000-0x3fff]
Nov 1 02:20:54.566858 kernel: pci_bus 0000:08: resource 1 [mem 0x94000000-0x950fffff]
Nov 1 02:20:54.566866 kernel: PCI: CLS 64 bytes, default 64
Nov 1 02:20:54.566872 kernel: DMAR: No ATSR found
Nov 1 02:20:54.566877 kernel: DMAR: No SATC found
Nov 1 02:20:54.566884 kernel: DMAR: dmar0: Using Queued invalidation
Nov 1 02:20:54.566931 kernel: pci 0000:00:00.0: Adding to iommu group 0
Nov 1 02:20:54.566978 kernel: pci 0000:00:01.0: Adding to iommu group 1
Nov 1 02:20:54.567025 kernel: pci 0000:00:01.1: Adding to iommu group 1
Nov 1 02:20:54.567071 kernel: pci 0000:00:08.0: Adding to iommu group 2
Nov 1 02:20:54.567116 kernel: pci 0000:00:12.0: Adding to iommu group 3
Nov 1 02:20:54.567162 kernel: pci 0000:00:14.0: Adding to iommu group 4
Nov 1 02:20:54.567208 kernel: pci 0000:00:14.2: Adding to iommu group 4
Nov 1 02:20:54.567255 kernel: pci 0000:00:15.0: Adding to iommu group 5
Nov 1 02:20:54.567300 kernel: pci 0000:00:15.1: Adding to iommu group 5
Nov 1 02:20:54.567346 kernel: pci 0000:00:16.0: Adding to iommu group 6
Nov 1 02:20:54.567438 kernel: pci 0000:00:16.1: Adding to iommu group 6
Nov 1 02:20:54.567484 kernel: pci 0000:00:16.4: Adding to iommu group 6
Nov 1 02:20:54.567530 kernel: pci 0000:00:17.0: Adding to iommu group 7
Nov 1 02:20:54.567577 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Nov 1 02:20:54.567624 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Nov 1 02:20:54.567672 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Nov 1 02:20:54.567719 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Nov 1 02:20:54.567764 kernel: pci 0000:00:1c.1: Adding to iommu group 12
Nov 1 02:20:54.567810 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Nov 1 02:20:54.567856 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Nov 1 02:20:54.567901 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Nov 1 02:20:54.567948 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Nov 1 02:20:54.567995 kernel: pci 0000:02:00.0: Adding to iommu group 1
Nov 1 02:20:54.568045 kernel: pci 0000:02:00.1: Adding to iommu group 1
Nov 1 02:20:54.568093 kernel: pci 0000:04:00.0: Adding to iommu group 15
Nov 1 02:20:54.568141 kernel: pci 0000:05:00.0: Adding to iommu group 16
Nov 1 02:20:54.568189 kernel: pci 0000:07:00.0: Adding to iommu group 17
Nov 1 02:20:54.568240 kernel: pci 0000:08:00.0: Adding to iommu group 17
Nov 1 02:20:54.568247 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Nov 1 02:20:54.568253 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 02:20:54.568259 kernel: software IO TLB: mapped [mem 0x0000000086fce000-0x000000008afce000] (64MB)
Nov 1 02:20:54.568264 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Nov 1 02:20:54.568271 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Nov 1 02:20:54.568276 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Nov 1 02:20:54.568282 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Nov 1 02:20:54.568330 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Nov 1 02:20:54.568339 kernel: Initialise system trusted keyrings
Nov 1 02:20:54.568344 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Nov 1 02:20:54.568349 kernel: Key type asymmetric registered
Nov 1 02:20:54.568355 kernel: Asymmetric key parser 'x509' registered
Nov 1 02:20:54.568385 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 02:20:54.568391 kernel: io scheduler mq-deadline registered
Nov 1 02:20:54.568396 kernel: io scheduler kyber registered
Nov 1 02:20:54.568402 kernel: io scheduler bfq registered
Nov 1 02:20:54.568472 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Nov 1 02:20:54.568519 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 122
Nov 1 02:20:54.568565 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 123
Nov 1 02:20:54.568613 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 124
Nov 1 02:20:54.568661 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 125
Nov 1 02:20:54.568708 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 126
Nov 1 02:20:54.568754 kernel: pcieport 0000:00:1c.1: PME: Signaling with IRQ 127
Nov 1 02:20:54.568807 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Nov 1 02:20:54.568816 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Nov 1 02:20:54.568821 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Nov 1 02:20:54.568827 kernel: pstore: Registered erst as persistent store backend
Nov 1 02:20:54.568832 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 02:20:54.568839 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 02:20:54.568844 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 02:20:54.568850 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 02:20:54.568896 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Nov 1 02:20:54.568904 kernel: i8042: PNP: No PS/2 controller found.
Nov 1 02:20:54.568946 kernel: rtc_cmos rtc_cmos: RTC can wake from S4
Nov 1 02:20:54.568989 kernel: rtc_cmos rtc_cmos: registered as rtc0
Nov 1 02:20:54.569030 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-11-01T02:20:53 UTC (1761963653)
Nov 1 02:20:54.569075 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram
Nov 1 02:20:54.569083 kernel: intel_pstate: Intel P-state driver initializing
Nov 1 02:20:54.569088 kernel: intel_pstate: Disabling energy efficiency optimization
Nov 1 02:20:54.569094 kernel: intel_pstate: HWP enabled
Nov 1 02:20:54.569099 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0
Nov 1 02:20:54.569105 kernel: vesafb: scrolling: redraw
Nov 1 02:20:54.569110 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0
Nov 1 02:20:54.569115 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000056362da3, using 768k, total 768k
Nov 1 02:20:54.569122 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 02:20:54.569128 kernel: fb0: VESA VGA frame buffer device
Nov 1 02:20:54.569133 kernel: NET: Registered PF_INET6 protocol family
Nov 1 02:20:54.569138 kernel: Segment Routing with IPv6
Nov 1 02:20:54.569144 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 02:20:54.569149 kernel: NET: Registered PF_PACKET protocol family
Nov 1 02:20:54.569154 kernel: Key type dns_resolver registered
Nov 1 02:20:54.569160 kernel: microcode: sig=0x906ed, pf=0x2, revision=0x102
Nov 1 02:20:54.569166 kernel: microcode: Microcode Update Driver: v2.2.
Nov 1 02:20:54.569171 kernel: IPI shorthand broadcast: enabled
Nov 1 02:20:54.569177 kernel: sched_clock: Marking stable (1780248407, 1335464667)->(4537186316, -1421473242)
Nov 1 02:20:54.569183 kernel: registered taskstats version 1
Nov 1 02:20:54.569188 kernel: Loading compiled-in X.509 certificates
Nov 1 02:20:54.569193 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 02:20:54.569199 kernel: Key type .fscrypt registered
Nov 1 02:20:54.569204 kernel: Key type fscrypt-provisioning registered
Nov 1 02:20:54.569209 kernel: pstore: Using crash dump compression: deflate
Nov 1 02:20:54.569215 kernel: ima: Allocated hash algorithm: sha1
Nov 1 02:20:54.569221 kernel: ima: No architecture policies found
Nov 1 02:20:54.569226 kernel: clk: Disabling unused clocks
Nov 1 02:20:54.569232 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 02:20:54.569237 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 02:20:54.569243 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 02:20:54.569248 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 02:20:54.569254 kernel: Run /init as init process
Nov 1 02:20:54.569259 kernel: with arguments:
Nov 1 02:20:54.569264 kernel: /init
Nov 1 02:20:54.569270 kernel: with environment:
Nov 1 02:20:54.569276 kernel: HOME=/
Nov 1 02:20:54.569281 kernel: TERM=linux
Nov 1 02:20:54.569286 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 02:20:54.569293 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 02:20:54.569300 systemd[1]: Detected architecture x86-64.
Nov 1 02:20:54.569305 systemd[1]: Running in initrd.
Nov 1 02:20:54.569311 systemd[1]: No hostname configured, using default hostname.
Nov 1 02:20:54.569317 systemd[1]: Hostname set to .
Nov 1 02:20:54.569322 systemd[1]: Initializing machine ID from random generator.
Nov 1 02:20:54.569328 systemd[1]: Queued start job for default target initrd.target.
Nov 1 02:20:54.569334 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 02:20:54.569340 systemd[1]: Reached target cryptsetup.target.
Nov 1 02:20:54.569345 systemd[1]: Reached target paths.target.
Nov 1 02:20:54.569350 systemd[1]: Reached target slices.target.
Nov 1 02:20:54.569358 systemd[1]: Reached target swap.target.
Nov 1 02:20:54.569384 systemd[1]: Reached target timers.target.
Nov 1 02:20:54.569389 systemd[1]: Listening on iscsid.socket.
Nov 1 02:20:54.569395 systemd[1]: Listening on iscsiuio.socket.
Nov 1 02:20:54.569421 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 02:20:54.569427 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 02:20:54.569432 systemd[1]: Listening on systemd-journald.socket.
Nov 1 02:20:54.569438 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 02:20:54.569445 kernel: tsc: Refined TSC clocksource calibration: 3408.018 MHz
Nov 1 02:20:54.569450 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fe4e33bb, max_idle_ns: 440795249257 ns
Nov 1 02:20:54.569456 kernel: clocksource: Switched to clocksource tsc
Nov 1 02:20:54.569461 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 02:20:54.569467 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 02:20:54.569472 systemd[1]: Reached target sockets.target. Nov 1 02:20:54.569478 systemd[1]: Starting kmod-static-nodes.service... Nov 1 02:20:54.569483 systemd[1]: Finished network-cleanup.service. Nov 1 02:20:54.569489 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 02:20:54.569495 systemd[1]: Starting systemd-journald.service... Nov 1 02:20:54.569501 systemd[1]: Starting systemd-modules-load.service... Nov 1 02:20:54.569509 systemd-journald[269]: Journal started Nov 1 02:20:54.569535 systemd-journald[269]: Runtime Journal (/run/log/journal/2279bc2848cb4773b544af26347935be) is 8.0M, max 640.1M, 632.1M free. Nov 1 02:20:54.571689 systemd-modules-load[270]: Inserted module 'overlay' Nov 1 02:20:54.577000 audit: BPF prog-id=6 op=LOAD Nov 1 02:20:54.596421 kernel: audit: type=1334 audit(1761963654.577:2): prog-id=6 op=LOAD Nov 1 02:20:54.596436 systemd[1]: Starting systemd-resolved.service... Nov 1 02:20:54.646416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 02:20:54.646435 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 02:20:54.679362 kernel: Bridge firewalling registered Nov 1 02:20:54.679379 systemd[1]: Started systemd-journald.service. Nov 1 02:20:54.694029 systemd-modules-load[270]: Inserted module 'br_netfilter' Nov 1 02:20:54.742478 kernel: audit: type=1130 audit(1761963654.701:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:54.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:54.696820 systemd-resolved[272]: Positive Trust Anchors: Nov 1 02:20:54.820504 kernel: SCSI subsystem initialized Nov 1 02:20:54.820517 kernel: audit: type=1130 audit(1761963654.753:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:54.820525 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 02:20:54.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:54.696826 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 02:20:54.920276 kernel: device-mapper: uevent: version 1.0.3 Nov 1 02:20:54.920313 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 02:20:54.920321 kernel: audit: type=1130 audit(1761963654.876:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:54.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:20:54.696847 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 02:20:54.993611 kernel: audit: type=1130 audit(1761963654.927:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:54.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:54.698463 systemd-resolved[272]: Defaulting to hostname 'linux'. Nov 1 02:20:55.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:54.702584 systemd[1]: Started systemd-resolved.service. Nov 1 02:20:55.101140 kernel: audit: type=1130 audit(1761963655.001:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:55.101155 kernel: audit: type=1130 audit(1761963655.054:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:55.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:54.754542 systemd[1]: Finished kmod-static-nodes.service. Nov 1 02:20:54.878056 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 02:20:54.921266 systemd-modules-load[270]: Inserted module 'dm_multipath' Nov 1 02:20:54.928535 systemd[1]: Finished systemd-modules-load.service. Nov 1 02:20:55.002718 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 02:20:55.055651 systemd[1]: Reached target nss-lookup.target. Nov 1 02:20:55.109965 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 02:20:55.129872 systemd[1]: Starting systemd-sysctl.service... Nov 1 02:20:55.130181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 02:20:55.133047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 02:20:55.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:55.133798 systemd[1]: Finished systemd-sysctl.service. Nov 1 02:20:55.182570 kernel: audit: type=1130 audit(1761963655.131:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:55.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:20:55.194706 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 02:20:55.259465 kernel: audit: type=1130 audit(1761963655.193:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:55.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:55.250977 systemd[1]: Starting dracut-cmdline.service... Nov 1 02:20:55.273473 dracut-cmdline[294]: dracut-dracut-053 Nov 1 02:20:55.273473 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Nov 1 02:20:55.273473 dracut-cmdline[294]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 02:20:55.343450 kernel: Loading iSCSI transport class v2.0-870. Nov 1 02:20:55.343463 kernel: iscsi: registered transport (tcp) Nov 1 02:20:55.401545 kernel: iscsi: registered transport (qla4xxx) Nov 1 02:20:55.401565 kernel: QLogic iSCSI HBA Driver Nov 1 02:20:55.417625 systemd[1]: Finished dracut-cmdline.service. Nov 1 02:20:55.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:55.427080 systemd[1]: Starting dracut-pre-udev.service... Nov 1 02:20:55.483430 kernel: raid6: avx2x4 gen() 45561 MB/s Nov 1 02:20:55.518391 kernel: raid6: avx2x4 xor() 21986 MB/s Nov 1 02:20:55.553437 kernel: raid6: avx2x2 gen() 53487 MB/s Nov 1 02:20:55.588393 kernel: raid6: avx2x2 xor() 31935 MB/s Nov 1 02:20:55.623438 kernel: raid6: avx2x1 gen() 45114 MB/s Nov 1 02:20:55.657436 kernel: raid6: avx2x1 xor() 27839 MB/s Nov 1 02:20:55.691436 kernel: raid6: sse2x4 gen() 21301 MB/s Nov 1 02:20:55.725436 kernel: raid6: sse2x4 xor() 11980 MB/s Nov 1 02:20:55.759390 kernel: raid6: sse2x2 gen() 21618 MB/s Nov 1 02:20:55.793437 kernel: raid6: sse2x2 xor() 13352 MB/s Nov 1 02:20:55.827392 kernel: raid6: sse2x1 gen() 18281 MB/s Nov 1 02:20:55.879026 kernel: raid6: sse2x1 xor() 8930 MB/s Nov 1 02:20:55.879042 kernel: raid6: using algorithm avx2x2 gen() 53487 MB/s Nov 1 02:20:55.879052 kernel: raid6: .... xor() 31935 MB/s, rmw enabled Nov 1 02:20:55.897098 kernel: raid6: using avx2x2 recovery algorithm Nov 1 02:20:55.943454 kernel: xor: automatically using best checksumming function avx Nov 1 02:20:56.023409 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 02:20:56.028899 systemd[1]: Finished dracut-pre-udev.service. Nov 1 02:20:56.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:56.036000 audit: BPF prog-id=7 op=LOAD Nov 1 02:20:56.036000 audit: BPF prog-id=8 op=LOAD Nov 1 02:20:56.038319 systemd[1]: Starting systemd-udevd.service... Nov 1 02:20:56.045953 systemd-udevd[476]: Using default interface naming scheme 'v252'. 
Nov 1 02:20:56.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:56.051584 systemd[1]: Started systemd-udevd.service. Nov 1 02:20:56.068227 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 02:20:56.099759 dracut-pre-trigger[487]: rd.md=0: removing MD RAID activation Nov 1 02:20:56.159705 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 02:20:56.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:56.171183 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 02:20:56.262352 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 02:20:56.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:56.289366 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 02:20:56.292365 kernel: libata version 3.00 loaded. Nov 1 02:20:56.327430 kernel: ACPI: bus type USB registered Nov 1 02:20:56.327463 kernel: usbcore: registered new interface driver usbfs Nov 1 02:20:56.327473 kernel: usbcore: registered new interface driver hub Nov 1 02:20:56.362398 kernel: usbcore: registered new device driver usb Nov 1 02:20:56.370071 kernel: ahci 0000:00:17.0: version 3.0 Nov 1 02:20:56.688449 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 02:20:56.688465 kernel: mlx5_core 0000:02:00.0: firmware version: 14.31.1014 Nov 1 02:20:57.066204 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 8 ports 6 Gbps 0xff impl SATA mode Nov 1 02:20:57.066279 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Nov 1 02:20:57.066339 kernel: scsi host0: ahci Nov 1 02:20:57.066408 kernel: mlx5_core 0000:02:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 02:20:57.066465 kernel: scsi host1: ahci Nov 1 02:20:57.066523 kernel: scsi host2: ahci Nov 1 02:20:57.066578 kernel: scsi host3: ahci Nov 1 02:20:57.066633 kernel: scsi host4: ahci Nov 1 02:20:57.066691 kernel: scsi host5: ahci Nov 1 02:20:57.066747 kernel: scsi host6: ahci Nov 1 02:20:57.066800 kernel: scsi host7: ahci Nov 1 02:20:57.066857 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 128 Nov 1 02:20:57.066865 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 128 Nov 1 02:20:57.066872 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 128 Nov 1 02:20:57.066878 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 128 Nov 1 02:20:57.066885 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 128 Nov 1 02:20:57.066893 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 128 Nov 1 02:20:57.066900 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 128 Nov 1 02:20:57.066907 kernel: ata8: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516480 irq 128 Nov 1 02:20:57.066913 kernel: AES CTR mode by8 optimization enabled Nov 1 02:20:57.066920 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Nov 1 02:20:57.066927 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
Nov 1 02:20:57.066933 kernel: mlx5_core 0000:02:00.0: E-Switch: Total vports 10, per vport: max uc(128) max mc(2048) Nov 1 02:20:57.066987 kernel: igb 0000:04:00.0: added PHC on eth0 Nov 1 02:20:57.067044 kernel: mlx5_core 0000:02:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Nov 1 02:20:57.067099 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 02:20:57.067153 kernel: igb 0000:04:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d4 Nov 1 02:20:57.067205 kernel: igb 0000:04:00.0: eth0: PBA No: 010000-000 Nov 1 02:20:57.067258 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 02:20:57.067310 kernel: igb 0000:05:00.0: added PHC on eth1 Nov 1 02:20:57.067367 kernel: igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection Nov 1 02:20:57.067421 kernel: igb 0000:05:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:73:1d:d5 Nov 1 02:20:57.067473 kernel: igb 0000:05:00.0: eth1: PBA No: 010000-000 Nov 1 02:20:57.067527 kernel: igb 0000:05:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Nov 1 02:20:57.067579 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 02:20:57.067587 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 02:20:57.067593 kernel: ata7: SATA link down (SStatus 0 SControl 300) Nov 1 02:20:57.067600 kernel: mlx5_core 0000:02:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Nov 1 02:20:57.067652 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 02:20:57.067659 kernel: mlx5_core 0000:02:00.1: firmware version: 14.31.1014 Nov 1 02:20:57.912005 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 02:20:57.912019 kernel: mlx5_core 0000:02:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Nov 1 02:20:57.912088 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 02:20:57.912096 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Nov 1 02:20:57.912103 kernel: ata8: SATA link down (SStatus 0 SControl 300) Nov 1 02:20:57.912110 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 02:20:57.912117 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Nov 1 02:20:57.912124 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 02:20:57.912130 kernel: ata2.00: Features: NCQ-prio Nov 1 02:20:57.912139 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Nov 1 02:20:57.912146 kernel: ata1.00: Features: NCQ-prio Nov 1 02:20:57.912153 kernel: ata2.00: configured for UDMA/133 Nov 1 02:20:57.912159 kernel: ata1.00: configured for UDMA/133 Nov 1 02:20:57.912166 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 02:20:57.912234 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Nov 1 02:20:57.912296 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 02:20:57.912351 kernel: igb 0000:04:00.0 eno1: renamed from eth0 Nov 1 02:20:57.912416 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Nov 1 02:20:57.912469 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Nov 1 02:20:57.912520 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Nov 1 02:20:57.912571 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Nov 1 02:20:57.912622 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Nov 1 02:20:57.912673 kernel: mlx5_core 0000:02:00.1: E-Switch: Total vports 
10, per vport: max uc(128) max mc(2048) Nov 1 02:20:57.912726 kernel: hub 1-0:1.0: USB hub found Nov 1 02:20:57.912792 kernel: port_module: 9 callbacks suppressed Nov 1 02:20:57.912801 kernel: mlx5_core 0000:02:00.1: Port module event: module 1, Cable plugged Nov 1 02:20:57.912856 kernel: hub 1-0:1.0: 16 ports detected Nov 1 02:20:57.912913 kernel: igb 0000:05:00.0 eno2: renamed from eth1 Nov 1 02:20:57.912969 kernel: hub 2-0:1.0: USB hub found Nov 1 02:20:57.913033 kernel: hub 2-0:1.0: 10 ports detected Nov 1 02:20:57.913088 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 02:20:57.913096 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 02:20:57.913104 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 02:20:57.913165 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Nov 1 02:20:57.913224 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Nov 1 02:20:57.913281 kernel: mlx5_core 0000:02:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Nov 1 02:20:57.913336 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 1 02:20:57.913399 kernel: sd 1:0:0:0: [sdb] Write Protect is off Nov 1 02:20:57.913459 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 02:20:57.913517 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Nov 1 02:20:57.913576 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Nov 1 02:20:57.913632 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 02:20:57.913691 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 02:20:57.913748 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 02:20:57.913756 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 02:20:57.913763 kernel: ata1.00: Enabling discard_zeroes_data Nov 1 02:20:57.913769 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Nov 1 02:20:57.961322 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 02:20:57.961400 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 02:20:57.961408 kernel: GPT:9289727 != 937703087 Nov 1 02:20:57.961415 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 02:20:57.961424 kernel: GPT:9289727 != 937703087 Nov 1 02:20:57.961431 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 02:20:57.961438 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 02:20:57.961444 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 02:20:57.961453 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Nov 1 02:20:57.961517 kernel: mlx5_core 0000:02:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Nov 1 02:20:57.961574 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sdb6 scanned by (udev-worker) (519) Nov 1 02:20:57.961582 kernel: hub 1-14:1.0: USB hub found Nov 1 02:20:57.961647 kernel: hub 1-14:1.0: 4 ports detected Nov 1 02:20:57.961707 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: renamed from eth0 Nov 1 02:20:57.868775 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 02:20:57.954680 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 02:20:57.973872 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 02:20:58.023407 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: renamed from eth2 Nov 1 02:20:58.003396 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
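The GPT warnings for the Micron disk above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 937703087) are expected right after a disk image smaller than the physical device has been written: the backup GPT header still sits at the end of the roughly 4.4 GiB image rather than the end of the 480 GB disk, and the first-boot partition tooling relocates it. A minimal sketch of the equivalent manual repair is below; it assumes sgdisk (from the gdisk package) is available and that /dev/sdb is the affected disk, so treat it as illustrative rather than something this boot actually runs.

    #!/usr/bin/env python3
    # Illustrative sketch: relocate a backup GPT header that was left at the
    # end of a smaller disk image instead of the end of the physical disk.
    # Assumes sgdisk (from gdisk) is installed; DISK is an assumption and is
    # not taken from any tooling in this log.
    import subprocess

    DISK = "/dev/sdb"

    # Print the current table first so the sector-count mismatch can be reviewed.
    subprocess.run(["sgdisk", "--print", DISK], check=True)

    # --move-second-header (-e) rewrites the backup header at the true end of
    # the device, which clears the "Alt. header is not at the end" warning.
    subprocess.run(["sgdisk", "--move-second-header", DISK], check=True)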
Nov 1 02:20:58.051448 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 02:20:58.023193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 02:20:58.082461 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 02:20:58.082472 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 02:20:58.033192 systemd[1]: Starting disk-uuid.service... Nov 1 02:20:58.098458 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 02:20:58.098511 disk-uuid[692]: Primary Header is updated. Nov 1 02:20:58.098511 disk-uuid[692]: Secondary Entries is updated. Nov 1 02:20:58.098511 disk-uuid[692]: Secondary Header is updated. Nov 1 02:20:58.137459 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 02:20:58.137503 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 02:20:58.262373 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Nov 1 02:20:58.398413 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 02:20:58.428803 kernel: usbcore: registered new interface driver usbhid Nov 1 02:20:58.428820 kernel: usbhid: USB HID core driver Nov 1 02:20:58.460436 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Nov 1 02:20:58.583110 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Nov 1 02:20:58.583249 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Nov 1 02:20:58.583258 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Nov 1 02:20:59.115143 kernel: ata2.00: Enabling discard_zeroes_data Nov 1 02:20:59.133289 disk-uuid[693]: The operation has completed successfully. Nov 1 02:20:59.141480 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Nov 1 02:20:59.169143 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 02:20:59.262755 kernel: audit: type=1130 audit(1761963659.175:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.262770 kernel: audit: type=1131 audit(1761963659.175:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.169207 systemd[1]: Finished disk-uuid.service. Nov 1 02:20:59.291451 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 02:20:59.179922 systemd[1]: Starting verity-setup.service... Nov 1 02:20:59.322258 systemd[1]: Found device dev-mapper-usr.device. Nov 1 02:20:59.331520 systemd[1]: Mounting sysusr-usr.mount... Nov 1 02:20:59.345613 systemd[1]: Finished verity-setup.service. Nov 1 02:20:59.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:20:59.399362 kernel: audit: type=1130 audit(1761963659.351:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.457024 systemd[1]: Mounted sysusr-usr.mount. Nov 1 02:20:59.472538 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 02:20:59.464638 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 02:20:59.553999 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 02:20:59.554014 kernel: BTRFS info (device sdb6): using free space tree Nov 1 02:20:59.554022 kernel: BTRFS info (device sdb6): has skinny extents Nov 1 02:20:59.554028 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 02:20:59.465057 systemd[1]: Starting ignition-setup.service... Nov 1 02:20:59.487801 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 02:20:59.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.562840 systemd[1]: Finished ignition-setup.service. Nov 1 02:20:59.684230 kernel: audit: type=1130 audit(1761963659.578:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.684313 kernel: audit: type=1130 audit(1761963659.634:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.579707 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 02:20:59.715006 kernel: audit: type=1334 audit(1761963659.691:24): prog-id=9 op=LOAD Nov 1 02:20:59.691000 audit: BPF prog-id=9 op=LOAD Nov 1 02:20:59.636056 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 02:20:59.693300 systemd[1]: Starting systemd-networkd.service... Nov 1 02:20:59.730164 systemd-networkd[882]: lo: Link UP Nov 1 02:20:59.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.756956 ignition[870]: Ignition 2.14.0 Nov 1 02:20:59.808610 kernel: audit: type=1130 audit(1761963659.744:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.730166 systemd-networkd[882]: lo: Gained carrier Nov 1 02:20:59.756960 ignition[870]: Stage: fetch-offline Nov 1 02:20:59.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:20:59.730554 systemd-networkd[882]: Enumeration completed Nov 1 02:20:59.963971 kernel: audit: type=1130 audit(1761963659.829:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.963985 kernel: audit: type=1130 audit(1761963659.889:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.963993 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Nov 1 02:20:59.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.756987 ignition[870]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 02:21:00.002559 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f1np1: link becomes ready Nov 1 02:20:59.730623 systemd[1]: Started systemd-networkd.service. Nov 1 02:20:59.757001 ignition[870]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 02:20:59.731264 systemd-networkd[882]: enp2s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 02:21:00.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.765290 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 02:20:59.745536 systemd[1]: Reached target network.target. Nov 1 02:21:00.050493 iscsid[904]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 02:21:00.050493 iscsid[904]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 02:21:00.050493 iscsid[904]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 1 02:21:00.050493 iscsid[904]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 02:21:00.050493 iscsid[904]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 02:21:00.050493 iscsid[904]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 02:21:00.050493 iscsid[904]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 02:21:00.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:20:59.765361 ignition[870]: parsed url from cmdline: "" Nov 1 02:20:59.769734 unknown[870]: fetched base config from "system" Nov 1 02:21:00.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:20:59.765364 ignition[870]: no config URL provided Nov 1 02:20:59.769738 unknown[870]: fetched user config from "system" Nov 1 02:21:00.243557 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Nov 1 02:20:59.765367 ignition[870]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 02:20:59.803991 systemd[1]: Starting iscsiuio.service... Nov 1 02:20:59.765413 ignition[870]: parsing config with SHA512: b7bd4c885777ce91916ef7d4b6b765283264e4732130940ba61987a33eac58bc0f97c136fc9c56867fc482721058e8c1105b89d2bba5abfd7bf7cfd2baa81d3b Nov 1 02:20:59.815600 systemd[1]: Started iscsiuio.service. Nov 1 02:20:59.770027 ignition[870]: fetch-offline: fetch-offline passed Nov 1 02:20:59.850045 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 02:20:59.770029 ignition[870]: POST message to Packet Timeline Nov 1 02:20:59.890611 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 02:20:59.770034 ignition[870]: POST Status error: resource requires networking Nov 1 02:20:59.891064 systemd[1]: Starting ignition-kargs.service... Nov 1 02:20:59.770068 ignition[870]: Ignition finished successfully Nov 1 02:20:59.966919 systemd-networkd[882]: enp2s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 02:20:59.969413 ignition[892]: Ignition 2.14.0 Nov 1 02:20:59.978920 systemd[1]: Starting iscsid.service... Nov 1 02:20:59.969461 ignition[892]: Stage: kargs Nov 1 02:21:00.009741 systemd[1]: Started iscsid.service. Nov 1 02:20:59.969603 ignition[892]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 02:21:00.017049 systemd[1]: Starting dracut-initqueue.service... Nov 1 02:20:59.969630 ignition[892]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 02:21:00.040631 systemd[1]: Finished dracut-initqueue.service. Nov 1 02:20:59.972267 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 02:21:00.062474 systemd[1]: Reached target remote-fs-pre.target. Nov 1 02:20:59.974259 ignition[892]: kargs: kargs passed Nov 1 02:21:00.106502 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 02:20:59.974263 ignition[892]: POST message to Packet Timeline Nov 1 02:21:00.132589 systemd[1]: Reached target remote-fs.target. Nov 1 02:20:59.974274 ignition[892]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 02:21:00.150717 systemd[1]: Starting dracut-pre-mount.service... Nov 1 02:20:59.994956 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:49048->[::1]:53: read: connection refused Nov 1 02:21:00.161779 systemd[1]: Finished dracut-pre-mount.service. Nov 1 02:21:00.195460 ignition[892]: GET https://metadata.packet.net/metadata: attempt #2 Nov 1 02:21:00.241315 systemd-networkd[882]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 02:21:00.195968 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:37093->[::1]:53: read: connection refused Nov 1 02:21:00.269844 systemd-networkd[882]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
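The iscsid complaints about /etc/iscsi/initiatorname.iscsi a few lines up are harmless on this machine, which starts iscsid in the initrd without logging into any iSCSI targets. For completeness, a sketch of creating the file iscsid asks for is shown below; the IQN is a made-up example (real deployments usually generate one with iscsi-iname), and writing the path here is purely illustrative.

    #!/usr/bin/env python3
    # Sketch only: write a minimal /etc/iscsi/initiatorname.iscsi so iscsid
    # stops warning about a missing InitiatorName. The IQN below is a
    # hypothetical example following the iqn.yyyy-mm.<reversed domain>[:identifier]
    # format quoted in the log; it is not derived from this machine.
    import pathlib

    IQN = "iqn.2025-11.net.example:metal-node-01"

    path = pathlib.Path("/etc/iscsi/initiatorname.iscsi")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(f"InitiatorName={IQN}\n")
    print(f"wrote {path}")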
Nov 1 02:21:00.299180 systemd-networkd[882]: enp2s0f1np1: Link UP Nov 1 02:21:00.299509 systemd-networkd[882]: enp2s0f1np1: Gained carrier Nov 1 02:21:00.311883 systemd-networkd[882]: enp2s0f0np0: Link UP Nov 1 02:21:00.312289 systemd-networkd[882]: eno2: Link UP Nov 1 02:21:00.312700 systemd-networkd[882]: eno1: Link UP Nov 1 02:21:00.596259 ignition[892]: GET https://metadata.packet.net/metadata: attempt #3 Nov 1 02:21:00.597569 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:59396->[::1]:53: read: connection refused Nov 1 02:21:01.017908 systemd-networkd[882]: enp2s0f0np0: Gained carrier Nov 1 02:21:01.026605 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp2s0f0np0: link becomes ready Nov 1 02:21:01.054704 systemd-networkd[882]: enp2s0f0np0: DHCPv4 address 86.109.11.55/31, gateway 86.109.11.54 acquired from 145.40.83.140 Nov 1 02:21:01.314788 systemd-networkd[882]: enp2s0f1np1: Gained IPv6LL Nov 1 02:21:01.397916 ignition[892]: GET https://metadata.packet.net/metadata: attempt #4 Nov 1 02:21:01.399392 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34414->[::1]:53: read: connection refused Nov 1 02:21:02.338842 systemd-networkd[882]: enp2s0f0np0: Gained IPv6LL Nov 1 02:21:03.000658 ignition[892]: GET https://metadata.packet.net/metadata: attempt #5 Nov 1 02:21:03.001936 ignition[892]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44839->[::1]:53: read: connection refused Nov 1 02:21:06.205349 ignition[892]: GET https://metadata.packet.net/metadata: attempt #6 Nov 1 02:21:07.151747 ignition[892]: GET result: OK Nov 1 02:21:07.591167 ignition[892]: Ignition finished successfully Nov 1 02:21:07.595808 systemd[1]: Finished ignition-kargs.service. Nov 1 02:21:07.683771 kernel: kauditd_printk_skb: 3 callbacks suppressed Nov 1 02:21:07.683788 kernel: audit: type=1130 audit(1761963667.605:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:07.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:07.615344 ignition[920]: Ignition 2.14.0 Nov 1 02:21:07.608837 systemd[1]: Starting ignition-disks.service... Nov 1 02:21:07.615347 ignition[920]: Stage: disks Nov 1 02:21:07.615493 ignition[920]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 02:21:07.615502 ignition[920]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 02:21:07.616896 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 02:21:07.618310 ignition[920]: disks: disks passed Nov 1 02:21:07.618313 ignition[920]: POST message to Packet Timeline Nov 1 02:21:07.618324 ignition[920]: GET https://metadata.packet.net/metadata: attempt #1 Nov 1 02:21:08.698538 ignition[920]: GET result: OK Nov 1 02:21:09.117431 ignition[920]: Ignition finished successfully Nov 1 02:21:09.119288 systemd[1]: Finished ignition-disks.service. 
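The repeated "GET error ... lookup metadata.packet.net on [::1]:53" attempts above fail only because no resolver is reachable until enp2s0f0np0 gets link and a DHCP lease; attempt #6 then returns "GET result: OK". A rough way to reproduce Ignition's polling of the Packet (Equinix Metal) metadata endpoint from a booted host is sketched below; the retry count, backoff, and the JSON "hostname" field are assumptions for illustration, not what Ignition itself uses.

    #!/usr/bin/env python3
    # Sketch: poll https://metadata.packet.net/metadata the way the retrying
    # Ignition stages above do. Retry count and backoff are arbitrary choices;
    # the "hostname" field is assumed based on the coreos-metadata lines that
    # appear later in this log.
    import json
    import time
    import urllib.request

    URL = "https://metadata.packet.net/metadata"

    for attempt in range(1, 7):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                meta = json.load(resp)
            print("hostname:", meta.get("hostname"))
            break
        except OSError as exc:  # DNS failures surface as URLError, an OSError subclass
            print(f"GET attempt #{attempt} failed: {exc}")
            time.sleep(min(2 ** attempt, 30))  # back off before retrying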
Nov 1 02:21:09.190448 kernel: audit: type=1130 audit(1761963669.132:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:09.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:09.134103 systemd[1]: Reached target initrd-root-device.target. Nov 1 02:21:09.198616 systemd[1]: Reached target local-fs-pre.target. Nov 1 02:21:09.198765 systemd[1]: Reached target local-fs.target. Nov 1 02:21:09.213762 systemd[1]: Reached target sysinit.target. Nov 1 02:21:09.227668 systemd[1]: Reached target basic.target. Nov 1 02:21:09.249414 systemd[1]: Starting systemd-fsck-root.service... Nov 1 02:21:09.267981 systemd-fsck[937]: ROOT: clean, 637/553520 files, 56032/553472 blocks Nov 1 02:21:09.282863 systemd[1]: Finished systemd-fsck-root.service. Nov 1 02:21:09.378630 kernel: audit: type=1130 audit(1761963669.291:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:09.378646 kernel: EXT4-fs (sdb9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 02:21:09.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:09.298136 systemd[1]: Mounting sysroot.mount... Nov 1 02:21:09.386103 systemd[1]: Mounted sysroot.mount. Nov 1 02:21:09.399696 systemd[1]: Reached target initrd-root-fs.target. Nov 1 02:21:09.407325 systemd[1]: Mounting sysroot-usr.mount... Nov 1 02:21:09.432188 systemd[1]: Starting flatcar-metadata-hostname.service... Nov 1 02:21:09.440953 systemd[1]: Starting flatcar-static-network.service... Nov 1 02:21:09.456438 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 02:21:09.456550 systemd[1]: Reached target ignition-diskful.target. Nov 1 02:21:09.474643 systemd[1]: Mounted sysroot-usr.mount. Nov 1 02:21:09.499181 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 02:21:09.571475 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by mount (950) Nov 1 02:21:09.571491 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 02:21:09.512159 systemd[1]: Starting initrd-setup-root.service... Nov 1 02:21:09.648020 kernel: BTRFS info (device sdb6): using free space tree Nov 1 02:21:09.648035 kernel: BTRFS info (device sdb6): has skinny extents Nov 1 02:21:09.648043 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 02:21:09.648053 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 02:21:09.711573 kernel: audit: type=1130 audit(1761963669.655:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:09.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:21:09.711688 coreos-metadata[944]: Nov 01 02:21:09.577 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 02:21:09.574182 systemd[1]: Finished initrd-setup-root.service. Nov 1 02:21:09.732736 coreos-metadata[945]: Nov 01 02:21:09.577 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 02:21:09.759736 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory Nov 1 02:21:09.657664 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 02:21:09.786614 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 02:21:09.719956 systemd[1]: Starting ignition-mount.service... Nov 1 02:21:09.803624 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 02:21:09.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:09.746942 systemd[1]: Starting sysroot-boot.service... Nov 1 02:21:09.876605 kernel: audit: type=1130 audit(1761963669.810:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:09.876635 ignition[1018]: INFO : Ignition 2.14.0 Nov 1 02:21:09.876635 ignition[1018]: INFO : Stage: mount Nov 1 02:21:09.876635 ignition[1018]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 02:21:09.876635 ignition[1018]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 02:21:09.876635 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 02:21:09.876635 ignition[1018]: INFO : mount: mount passed Nov 1 02:21:09.876635 ignition[1018]: INFO : POST message to Packet Timeline Nov 1 02:21:09.876635 ignition[1018]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 02:21:09.769516 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 02:21:09.769727 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Nov 1 02:21:09.797823 systemd[1]: Finished sysroot-boot.service. Nov 1 02:21:10.617069 coreos-metadata[945]: Nov 01 02:21:10.616 INFO Fetch successful Nov 1 02:21:10.650150 systemd[1]: flatcar-static-network.service: Deactivated successfully. Nov 1 02:21:10.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:10.650199 systemd[1]: Finished flatcar-static-network.service. Nov 1 02:21:10.781581 kernel: audit: type=1130 audit(1761963670.658:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:10.781596 kernel: audit: type=1131 audit(1761963670.658:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:10.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:21:10.781625 coreos-metadata[944]: Nov 01 02:21:10.736 INFO Fetch successful Nov 1 02:21:10.781625 coreos-metadata[944]: Nov 01 02:21:10.764 INFO wrote hostname ci-3510.3.8-n-c654b621d4 to /sysroot/etc/hostname Nov 1 02:21:10.859569 kernel: audit: type=1130 audit(1761963670.788:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:10.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:10.764844 systemd[1]: Finished flatcar-metadata-hostname.service. Nov 1 02:21:10.917885 ignition[1018]: INFO : GET result: OK Nov 1 02:21:11.341083 ignition[1018]: INFO : Ignition finished successfully Nov 1 02:21:11.343887 systemd[1]: Finished ignition-mount.service. Nov 1 02:21:11.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:11.359558 systemd[1]: Starting ignition-files.service... Nov 1 02:21:11.430466 kernel: audit: type=1130 audit(1761963671.356:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:11.425190 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 02:21:11.479460 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sdb6 scanned by mount (1039) Nov 1 02:21:11.479472 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Nov 1 02:21:11.512686 kernel: BTRFS info (device sdb6): using free space tree Nov 1 02:21:11.512702 kernel: BTRFS info (device sdb6): has skinny extents Nov 1 02:21:11.561361 kernel: BTRFS info (device sdb6): enabling ssd optimizations Nov 1 02:21:11.562625 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Nov 1 02:21:11.578517 ignition[1058]: INFO : Ignition 2.14.0 Nov 1 02:21:11.578517 ignition[1058]: INFO : Stage: files Nov 1 02:21:11.578517 ignition[1058]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 02:21:11.578517 ignition[1058]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 02:21:11.578517 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 02:21:11.578517 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping Nov 1 02:21:11.581707 unknown[1058]: wrote ssh authorized keys file for user: core Nov 1 02:21:11.654606 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 02:21:11.654606 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 02:21:11.654606 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 02:21:11.654606 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 02:21:11.654606 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 02:21:11.654606 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 02:21:11.654606 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 02:21:11.654606 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 02:21:11.788280 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 02:21:11.805656 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 02:21:11.805656 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Nov 1 02:21:12.159662 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 02:21:12.210639 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Nov 1 02:21:12.225582 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1330867552" Nov 1 02:21:12.216839 systemd[1]: mnt-oem1330867552.mount: Deactivated successfully. Nov 1 02:21:12.485637 ignition[1058]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1330867552": device or resource busy Nov 1 02:21:12.485637 ignition[1058]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1330867552", trying btrfs: device or resource busy Nov 1 02:21:12.485637 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1330867552" Nov 1 02:21:12.485637 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1330867552" Nov 1 02:21:12.485637 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1330867552" Nov 1 02:21:12.485637 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1330867552" Nov 1 02:21:12.485637 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Nov 1 02:21:12.485637 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 02:21:12.485637 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 02:21:12.669030 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Nov 1 02:21:13.684195 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 02:21:13.684195 ignition[1058]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" 
Nov 1 02:21:13.684195 ignition[1058]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Nov 1 02:21:13.684195 ignition[1058]: INFO : files: op(11): [started] processing unit "packet-phone-home.service" Nov 1 02:21:13.684195 ignition[1058]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service" Nov 1 02:21:13.684195 ignition[1058]: INFO : files: op(12): [started] processing unit "prepare-helm.service" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: op(12): op(13): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: op(12): op(13): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: op(12): [finished] processing unit "prepare-helm.service" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: op(14): [started] setting preset to enabled for "packet-phone-home.service" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: op(14): [finished] setting preset to enabled for "packet-phone-home.service" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 02:21:13.765683 ignition[1058]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 02:21:13.765683 ignition[1058]: INFO : files: createResultFile: createFiles: op(17): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: createResultFile: createFiles: op(17): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 02:21:13.765683 ignition[1058]: INFO : files: files passed Nov 1 02:21:13.765683 ignition[1058]: INFO : POST message to Packet Timeline Nov 1 02:21:13.765683 ignition[1058]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 02:21:14.954651 ignition[1058]: INFO : GET result: OK Nov 1 02:21:15.562069 ignition[1058]: INFO : Ignition finished successfully Nov 1 02:21:15.565351 systemd[1]: Finished ignition-files.service. Nov 1 02:21:15.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.639461 kernel: audit: type=1130 audit(1761963675.579:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.585682 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 02:21:15.647559 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 02:21:15.682692 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 02:21:15.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:21:15.647882 systemd[1]: Starting ignition-quench.service... Nov 1 02:21:15.872873 kernel: audit: type=1130 audit(1761963675.692:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.872889 kernel: audit: type=1130 audit(1761963675.758:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.872897 kernel: audit: type=1131 audit(1761963675.758:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.665703 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 02:21:15.692877 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 02:21:15.692945 systemd[1]: Finished ignition-quench.service. Nov 1 02:21:16.028650 kernel: audit: type=1130 audit(1761963675.913:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.028736 kernel: audit: type=1131 audit(1761963675.913:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.759614 systemd[1]: Reached target ignition-complete.target. Nov 1 02:21:15.881932 systemd[1]: Starting initrd-parse-etc.service... Nov 1 02:21:15.895567 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 02:21:16.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.895608 systemd[1]: Finished initrd-parse-etc.service. Nov 1 02:21:16.150570 kernel: audit: type=1130 audit(1761963676.076:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:15.914628 systemd[1]: Reached target initrd-fs.target. Nov 1 02:21:16.037573 systemd[1]: Reached target initrd.target. Nov 1 02:21:16.037633 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 02:21:16.037983 systemd[1]: Starting dracut-pre-pivot.service... 
Nov 1 02:21:16.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.059695 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 02:21:16.284564 kernel: audit: type=1131 audit(1761963676.208:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.078161 systemd[1]: Starting initrd-cleanup.service... Nov 1 02:21:16.146446 systemd[1]: Stopped target nss-lookup.target. Nov 1 02:21:16.159618 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 02:21:16.175668 systemd[1]: Stopped target timers.target. Nov 1 02:21:16.190795 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 02:21:16.190902 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 02:21:16.209757 systemd[1]: Stopped target initrd.target. Nov 1 02:21:16.276609 systemd[1]: Stopped target basic.target. Nov 1 02:21:16.284664 systemd[1]: Stopped target ignition-complete.target. Nov 1 02:21:16.306661 systemd[1]: Stopped target ignition-diskful.target. Nov 1 02:21:16.322618 systemd[1]: Stopped target initrd-root-device.target. Nov 1 02:21:16.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.339656 systemd[1]: Stopped target remote-fs.target. Nov 1 02:21:16.540539 kernel: audit: type=1131 audit(1761963676.452:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.355885 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 02:21:16.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.373125 systemd[1]: Stopped target sysinit.target. Nov 1 02:21:16.617550 kernel: audit: type=1131 audit(1761963676.539:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.389116 systemd[1]: Stopped target local-fs.target. Nov 1 02:21:16.405101 systemd[1]: Stopped target local-fs-pre.target. Nov 1 02:21:16.420925 systemd[1]: Stopped target swap.target. Nov 1 02:21:16.436981 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 02:21:16.437356 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 02:21:16.454323 systemd[1]: Stopped target cryptsetup.target. Nov 1 02:21:16.532574 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 02:21:16.532649 systemd[1]: Stopped dracut-initqueue.service. Nov 1 02:21:16.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.540670 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Nov 1 02:21:16.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.540736 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 02:21:16.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.609691 systemd[1]: Stopped target paths.target. Nov 1 02:21:16.786486 ignition[1109]: INFO : Ignition 2.14.0 Nov 1 02:21:16.786486 ignition[1109]: INFO : Stage: umount Nov 1 02:21:16.786486 ignition[1109]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 02:21:16.786486 ignition[1109]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Nov 1 02:21:16.786486 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Nov 1 02:21:16.786486 ignition[1109]: INFO : umount: umount passed Nov 1 02:21:16.786486 ignition[1109]: INFO : POST message to Packet Timeline Nov 1 02:21:16.786486 ignition[1109]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Nov 1 02:21:16.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:16.624589 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 02:21:16.626590 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 02:21:16.646642 systemd[1]: Stopped target slices.target. Nov 1 02:21:16.661581 systemd[1]: Stopped target sockets.target. Nov 1 02:21:16.679660 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 02:21:16.679760 systemd[1]: Closed iscsid.socket. Nov 1 02:21:16.694750 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 02:21:16.694910 systemd[1]: Closed iscsiuio.socket. Nov 1 02:21:16.710162 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 02:21:16.710583 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 02:21:16.730051 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 02:21:16.730435 systemd[1]: Stopped ignition-files.service. 
Nov 1 02:21:16.746009 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 02:21:16.746402 systemd[1]: Stopped flatcar-metadata-hostname.service. Nov 1 02:21:16.767146 systemd[1]: Stopping ignition-mount.service... Nov 1 02:21:16.779589 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 02:21:16.779683 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 02:21:16.796156 systemd[1]: Stopping sysroot-boot.service... Nov 1 02:21:16.810527 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 02:21:16.810663 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 02:21:16.818764 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 02:21:16.818912 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 02:21:16.842516 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 02:21:16.842861 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 02:21:16.842903 systemd[1]: Stopped sysroot-boot.service. Nov 1 02:21:16.864696 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 02:21:16.864754 systemd[1]: Finished initrd-cleanup.service. Nov 1 02:21:17.863918 ignition[1109]: INFO : GET result: OK Nov 1 02:21:18.275783 ignition[1109]: INFO : Ignition finished successfully Nov 1 02:21:18.278791 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 02:21:18.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.279038 systemd[1]: Stopped ignition-mount.service. Nov 1 02:21:18.293873 systemd[1]: Stopped target network.target. Nov 1 02:21:18.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.309618 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 02:21:18.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.309785 systemd[1]: Stopped ignition-disks.service. Nov 1 02:21:18.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.324846 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 02:21:18.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.324993 systemd[1]: Stopped ignition-kargs.service. Nov 1 02:21:18.339886 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 02:21:18.340038 systemd[1]: Stopped ignition-setup.service. Nov 1 02:21:18.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.356880 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 02:21:18.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:21:18.436000 audit: BPF prog-id=6 op=UNLOAD Nov 1 02:21:18.357032 systemd[1]: Stopped initrd-setup-root.service. Nov 1 02:21:18.374158 systemd[1]: Stopping systemd-networkd.service... Nov 1 02:21:18.383519 systemd-networkd[882]: enp2s0f1np1: DHCPv6 lease lost Nov 1 02:21:18.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.388888 systemd[1]: Stopping systemd-resolved.service... Nov 1 02:21:18.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.391568 systemd-networkd[882]: enp2s0f0np0: DHCPv6 lease lost Nov 1 02:21:18.508000 audit: BPF prog-id=9 op=UNLOAD Nov 1 02:21:18.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.404204 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 02:21:18.404484 systemd[1]: Stopped systemd-resolved.service. Nov 1 02:21:18.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.421037 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 02:21:18.421433 systemd[1]: Stopped systemd-networkd.service. Nov 1 02:21:18.436045 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 02:21:18.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.436133 systemd[1]: Closed systemd-networkd.socket. Nov 1 02:21:18.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.456057 systemd[1]: Stopping network-cleanup.service... Nov 1 02:21:18.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.469571 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 02:21:18.469712 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 02:21:18.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.485778 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 02:21:18.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.485924 systemd[1]: Stopped systemd-sysctl.service. 
Nov 1 02:21:18.501858 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 02:21:18.501976 systemd[1]: Stopped systemd-modules-load.service. Nov 1 02:21:18.517989 systemd[1]: Stopping systemd-udevd.service... Nov 1 02:21:18.536224 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 02:21:18.537709 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 02:21:18.538040 systemd[1]: Stopped systemd-udevd.service. Nov 1 02:21:18.551116 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 02:21:18.551224 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 02:21:18.564709 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 02:21:18.564805 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 02:21:18.580606 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 02:21:18.580723 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 02:21:18.595780 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 02:21:18.595922 systemd[1]: Stopped dracut-cmdline.service. Nov 1 02:21:18.611709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 02:21:18.611827 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 02:21:18.628486 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 02:21:18.641543 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 02:21:18.641570 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 02:21:18.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:18.656625 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 02:21:18.656678 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 02:21:18.836752 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 02:21:18.836985 systemd[1]: Stopped network-cleanup.service. Nov 1 02:21:18.847963 systemd[1]: Reached target initrd-switch-root.target. Nov 1 02:21:18.865117 systemd[1]: Starting initrd-switch-root.service... Nov 1 02:21:18.903150 systemd[1]: Switching root. Nov 1 02:21:18.957039 iscsid[904]: iscsid shutting down. Nov 1 02:21:18.957122 systemd-journald[269]: Journal stopped Nov 1 02:21:23.017159 systemd-journald[269]: Received SIGTERM from PID 1 (n/a). Nov 1 02:21:23.017173 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 02:21:23.017182 kernel: SELinux: Class anon_inode not defined in policy. Nov 1 02:21:23.017188 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 02:21:23.017194 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 02:21:23.017199 kernel: SELinux: policy capability open_perms=1 Nov 1 02:21:23.017206 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 02:21:23.017212 kernel: SELinux: policy capability always_check_network=0 Nov 1 02:21:23.017217 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 02:21:23.017224 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 02:21:23.017229 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 02:21:23.017235 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 02:21:23.017241 systemd[1]: Successfully loaded SELinux policy in 319.924ms. Nov 1 02:21:23.017248 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.384ms. 
Nov 1 02:21:23.017256 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 02:21:23.017263 systemd[1]: Detected architecture x86-64. Nov 1 02:21:23.017270 systemd[1]: Detected first boot. Nov 1 02:21:23.017276 systemd[1]: Hostname set to . Nov 1 02:21:23.017282 systemd[1]: Initializing machine ID from random generator. Nov 1 02:21:23.017288 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 02:21:23.017294 systemd[1]: Populated /etc with preset unit settings. Nov 1 02:21:23.017302 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 02:21:23.017309 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 02:21:23.017316 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 02:21:23.017322 kernel: kauditd_printk_skb: 47 callbacks suppressed Nov 1 02:21:23.017328 kernel: audit: type=1334 audit(1761963681.272:90): prog-id=12 op=LOAD Nov 1 02:21:23.017334 kernel: audit: type=1334 audit(1761963681.272:91): prog-id=3 op=UNLOAD Nov 1 02:21:23.017341 kernel: audit: type=1334 audit(1761963681.340:92): prog-id=13 op=LOAD Nov 1 02:21:23.017347 kernel: audit: type=1334 audit(1761963681.362:93): prog-id=14 op=LOAD Nov 1 02:21:23.017353 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 02:21:23.017363 kernel: audit: type=1334 audit(1761963681.362:94): prog-id=4 op=UNLOAD Nov 1 02:21:23.017369 kernel: audit: type=1334 audit(1761963681.362:95): prog-id=5 op=UNLOAD Nov 1 02:21:23.017375 systemd[1]: Stopped iscsiuio.service. Nov 1 02:21:23.017382 kernel: audit: type=1131 audit(1761963681.363:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.017388 kernel: audit: type=1334 audit(1761963681.520:97): prog-id=12 op=UNLOAD Nov 1 02:21:23.017394 kernel: audit: type=1131 audit(1761963681.545:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.017401 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 02:21:23.017408 systemd[1]: Stopped iscsid.service. Nov 1 02:21:23.017414 kernel: audit: type=1131 audit(1761963681.640:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.017421 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 02:21:23.017429 systemd[1]: Stopped initrd-switch-root.service. Nov 1 02:21:23.017435 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 02:21:23.017442 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Nov 1 02:21:23.017450 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 02:21:23.017456 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Nov 1 02:21:23.017463 systemd[1]: Created slice system-getty.slice. Nov 1 02:21:23.017469 systemd[1]: Created slice system-modprobe.slice. Nov 1 02:21:23.017476 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 02:21:23.017483 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 02:21:23.017489 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 02:21:23.017496 systemd[1]: Created slice user.slice. Nov 1 02:21:23.017503 systemd[1]: Started systemd-ask-password-console.path. Nov 1 02:21:23.017510 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 02:21:23.017517 systemd[1]: Set up automount boot.automount. Nov 1 02:21:23.017523 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 02:21:23.017530 systemd[1]: Stopped target initrd-switch-root.target. Nov 1 02:21:23.017536 systemd[1]: Stopped target initrd-fs.target. Nov 1 02:21:23.017543 systemd[1]: Stopped target initrd-root-fs.target. Nov 1 02:21:23.017550 systemd[1]: Reached target integritysetup.target. Nov 1 02:21:23.017557 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 02:21:23.017564 systemd[1]: Reached target remote-fs.target. Nov 1 02:21:23.017570 systemd[1]: Reached target slices.target. Nov 1 02:21:23.017577 systemd[1]: Reached target swap.target. Nov 1 02:21:23.017583 systemd[1]: Reached target torcx.target. Nov 1 02:21:23.017590 systemd[1]: Reached target veritysetup.target. Nov 1 02:21:23.017598 systemd[1]: Listening on systemd-coredump.socket. Nov 1 02:21:23.017605 systemd[1]: Listening on systemd-initctl.socket. Nov 1 02:21:23.017612 systemd[1]: Listening on systemd-networkd.socket. Nov 1 02:21:23.017618 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 02:21:23.017625 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 02:21:23.017632 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 02:21:23.017639 systemd[1]: Mounting dev-hugepages.mount... Nov 1 02:21:23.017646 systemd[1]: Mounting dev-mqueue.mount... Nov 1 02:21:23.017653 systemd[1]: Mounting media.mount... Nov 1 02:21:23.017660 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 02:21:23.017667 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 02:21:23.017673 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 02:21:23.017680 systemd[1]: Mounting tmp.mount... Nov 1 02:21:23.017687 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 02:21:23.017693 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 02:21:23.017700 systemd[1]: Starting kmod-static-nodes.service... Nov 1 02:21:23.017707 systemd[1]: Starting modprobe@configfs.service... Nov 1 02:21:23.017714 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 02:21:23.017721 systemd[1]: Starting modprobe@drm.service... Nov 1 02:21:23.017728 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 02:21:23.017735 systemd[1]: Starting modprobe@fuse.service... Nov 1 02:21:23.017741 kernel: fuse: init (API version 7.34) Nov 1 02:21:23.017747 systemd[1]: Starting modprobe@loop.service... Nov 1 02:21:23.017754 kernel: loop: module loaded Nov 1 02:21:23.017760 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Nov 1 02:21:23.017767 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 02:21:23.017775 systemd[1]: Stopped systemd-fsck-root.service. Nov 1 02:21:23.017782 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 02:21:23.017789 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 02:21:23.017796 systemd[1]: Stopped systemd-journald.service. Nov 1 02:21:23.017802 systemd[1]: Starting systemd-journald.service... Nov 1 02:21:23.017809 systemd[1]: Starting systemd-modules-load.service... Nov 1 02:21:23.017818 systemd-journald[1263]: Journal started Nov 1 02:21:23.017845 systemd-journald[1263]: Runtime Journal (/run/log/journal/103f5d2213cf4ab8a6499f8d74bb6d20) is 8.0M, max 640.1M, 632.1M free. Nov 1 02:21:19.366000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 02:21:19.640000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 02:21:19.642000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 02:21:19.642000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 02:21:19.642000 audit: BPF prog-id=10 op=LOAD Nov 1 02:21:19.642000 audit: BPF prog-id=10 op=UNLOAD Nov 1 02:21:19.642000 audit: BPF prog-id=11 op=LOAD Nov 1 02:21:19.642000 audit: BPF prog-id=11 op=UNLOAD Nov 1 02:21:19.706000 audit[1151]: AVC avc: denied { associate } for pid=1151 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 02:21:19.706000 audit[1151]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001278d2 a1=c00002ce58 a2=c00002b100 a3=32 items=0 ppid=1134 pid=1151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:21:19.706000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 02:21:19.731000 audit[1151]: AVC avc: denied { associate } for pid=1151 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 02:21:19.731000 audit[1151]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001279a9 a2=1ed a3=0 items=2 ppid=1134 pid=1151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:21:19.731000 audit: CWD cwd="/" Nov 1 02:21:19.731000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 
02:21:19.731000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:19.731000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 02:21:21.272000 audit: BPF prog-id=12 op=LOAD Nov 1 02:21:21.272000 audit: BPF prog-id=3 op=UNLOAD Nov 1 02:21:21.340000 audit: BPF prog-id=13 op=LOAD Nov 1 02:21:21.362000 audit: BPF prog-id=14 op=LOAD Nov 1 02:21:21.362000 audit: BPF prog-id=4 op=UNLOAD Nov 1 02:21:21.362000 audit: BPF prog-id=5 op=UNLOAD Nov 1 02:21:21.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:21.520000 audit: BPF prog-id=12 op=UNLOAD Nov 1 02:21:21.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:21.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:21.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:21.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:22.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:22.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:22.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:22.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:21:22.988000 audit: BPF prog-id=15 op=LOAD Nov 1 02:21:22.989000 audit: BPF prog-id=16 op=LOAD Nov 1 02:21:22.989000 audit: BPF prog-id=17 op=LOAD Nov 1 02:21:22.989000 audit: BPF prog-id=13 op=UNLOAD Nov 1 02:21:22.989000 audit: BPF prog-id=14 op=UNLOAD Nov 1 02:21:23.014000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 02:21:23.014000 audit[1263]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc138a8640 a2=4000 a3=7ffc138a86dc items=0 ppid=1 pid=1263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:21:23.014000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 02:21:21.271924 systemd[1]: Queued start job for default target multi-user.target. Nov 1 02:21:19.705071 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 02:21:21.271930 systemd[1]: Unnecessary job was removed for dev-sdb6.device. Nov 1 02:21:19.705620 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 02:21:21.365099 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 02:21:19.705632 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 02:21:19.705650 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Nov 1 02:21:19.705656 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="skipped missing lower profile" missing profile=oem Nov 1 02:21:19.705674 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Nov 1 02:21:19.705681 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Nov 1 02:21:19.705800 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Nov 1 02:21:19.705822 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Nov 1 02:21:19.705829 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Nov 1 02:21:19.707026 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Nov 1 02:21:19.707048 
/usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Nov 1 02:21:19.707060 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Nov 1 02:21:19.707068 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Nov 1 02:21:19.707078 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Nov 1 02:21:19.707086 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Nov 1 02:21:20.910654 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:20Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 02:21:20.910798 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:20Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 02:21:20.910854 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:20Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 02:21:20.910953 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:20Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Nov 1 02:21:20.910983 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:20Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Nov 1 02:21:20.911018 /usr/lib/systemd/system-generators/torcx-generator[1151]: time="2025-11-01T02:21:20Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Nov 1 02:21:23.048543 systemd[1]: Starting systemd-network-generator.service... Nov 1 02:21:23.070423 systemd[1]: Starting systemd-remount-fs.service... Nov 1 02:21:23.092406 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 02:21:23.124910 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 02:21:23.125006 systemd[1]: Stopped verity-setup.service. 
Nov 1 02:21:23.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.159407 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 02:21:23.174541 systemd[1]: Started systemd-journald.service. Nov 1 02:21:23.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.181910 systemd[1]: Mounted dev-hugepages.mount. Nov 1 02:21:23.189626 systemd[1]: Mounted dev-mqueue.mount. Nov 1 02:21:23.196625 systemd[1]: Mounted media.mount. Nov 1 02:21:23.203616 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 02:21:23.212596 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 02:21:23.220581 systemd[1]: Mounted tmp.mount. Nov 1 02:21:23.227677 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 02:21:23.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.235703 systemd[1]: Finished kmod-static-nodes.service. Nov 1 02:21:23.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.243749 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 02:21:23.243877 systemd[1]: Finished modprobe@configfs.service. Nov 1 02:21:23.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.252854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 02:21:23.253038 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 02:21:23.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.260000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.262001 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 02:21:23.262258 systemd[1]: Finished modprobe@drm.service. Nov 1 02:21:23.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 02:21:23.271340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 02:21:23.271829 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 02:21:23.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.281331 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 02:21:23.281771 systemd[1]: Finished modprobe@fuse.service. Nov 1 02:21:23.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.291323 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 02:21:23.291892 systemd[1]: Finished modprobe@loop.service. Nov 1 02:21:23.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.301405 systemd[1]: Finished systemd-modules-load.service. Nov 1 02:21:23.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.310255 systemd[1]: Finished systemd-network-generator.service. Nov 1 02:21:23.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.319241 systemd[1]: Finished systemd-remount-fs.service. Nov 1 02:21:23.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.328244 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 02:21:23.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.337726 systemd[1]: Reached target network-pre.target. Nov 1 02:21:23.349224 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 02:21:23.360206 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 02:21:23.367627 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Nov 1 02:21:23.370957 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 02:21:23.379942 systemd[1]: Starting systemd-journal-flush.service... Nov 1 02:21:23.388599 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 02:21:23.389096 systemd[1]: Starting systemd-random-seed.service... Nov 1 02:21:23.389337 systemd-journald[1263]: Time spent on flushing to /var/log/journal/103f5d2213cf4ab8a6499f8d74bb6d20 is 15.570ms for 1594 entries. Nov 1 02:21:23.389337 systemd-journald[1263]: System Journal (/var/log/journal/103f5d2213cf4ab8a6499f8d74bb6d20) is 8.0M, max 195.6M, 187.6M free. Nov 1 02:21:23.424812 systemd-journald[1263]: Received client request to flush runtime journal. Nov 1 02:21:23.412484 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 02:21:23.412992 systemd[1]: Starting systemd-sysctl.service... Nov 1 02:21:23.419978 systemd[1]: Starting systemd-sysusers.service... Nov 1 02:21:23.426957 systemd[1]: Starting systemd-udev-settle.service... Nov 1 02:21:23.434577 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 02:21:23.443536 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 02:21:23.452586 systemd[1]: Finished systemd-journal-flush.service. Nov 1 02:21:23.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.461602 systemd[1]: Finished systemd-random-seed.service. Nov 1 02:21:23.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.469575 systemd[1]: Finished systemd-sysctl.service. Nov 1 02:21:23.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.477582 systemd[1]: Finished systemd-sysusers.service. Nov 1 02:21:23.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.486645 systemd[1]: Reached target first-boot-complete.target. Nov 1 02:21:23.494727 udevadm[1279]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 02:21:23.696911 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 02:21:23.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.704000 audit: BPF prog-id=18 op=LOAD Nov 1 02:21:23.704000 audit: BPF prog-id=19 op=LOAD Nov 1 02:21:23.705000 audit: BPF prog-id=7 op=UNLOAD Nov 1 02:21:23.705000 audit: BPF prog-id=8 op=UNLOAD Nov 1 02:21:23.706685 systemd[1]: Starting systemd-udevd.service... Nov 1 02:21:23.718821 systemd-udevd[1280]: Using default interface naming scheme 'v252'. Nov 1 02:21:23.738063 systemd[1]: Started systemd-udevd.service. 
Nov 1 02:21:23.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:23.748845 systemd[1]: Condition check resulted in dev-ttyS1.device being skipped. Nov 1 02:21:23.748000 audit: BPF prog-id=20 op=LOAD Nov 1 02:21:23.750209 systemd[1]: Starting systemd-networkd.service... Nov 1 02:21:23.770372 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Nov 1 02:21:23.774385 kernel: ACPI: button: Sleep Button [SLPB] Nov 1 02:21:23.774457 kernel: IPMI message handler: version 39.2 Nov 1 02:21:23.774480 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 02:21:23.808388 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 02:21:23.797000 audit: BPF prog-id=21 op=LOAD Nov 1 02:21:23.828000 audit: BPF prog-id=22 op=LOAD Nov 1 02:21:23.828000 audit: BPF prog-id=23 op=LOAD Nov 1 02:21:23.789000 audit[1347]: AVC avc: denied { confidentiality } for pid=1347 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 02:21:23.832570 systemd[1]: Starting systemd-userdbd.service... Nov 1 02:21:23.851373 kernel: ipmi device interface Nov 1 02:21:23.851444 kernel: ACPI: button: Power Button [PWRF] Nov 1 02:21:23.789000 audit[1347]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e7fa1e4930 a1=4d9cc a2=7f5b3a07dbc5 a3=5 items=42 ppid=1280 pid=1347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:21:23.789000 audit: CWD cwd="/" Nov 1 02:21:23.789000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=1 name=(null) inode=23733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=2 name=(null) inode=23733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=3 name=(null) inode=23734 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=4 name=(null) inode=23733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=5 name=(null) inode=23735 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=6 name=(null) inode=23733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=7 name=(null) inode=23736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=8 name=(null) inode=23736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=9 name=(null) inode=23737 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=10 name=(null) inode=23736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=11 name=(null) inode=23738 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=12 name=(null) inode=23736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=13 name=(null) inode=23739 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=14 name=(null) inode=23736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=15 name=(null) inode=23740 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=16 name=(null) inode=23736 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=17 name=(null) inode=23741 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=18 name=(null) inode=23733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=19 name=(null) inode=23742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=20 name=(null) inode=23742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=21 name=(null) inode=23743 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=22 name=(null) inode=23742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=23 name=(null) inode=23744 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=24 name=(null) inode=23742 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=25 name=(null) inode=23745 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=26 name=(null) inode=23742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=27 name=(null) inode=23746 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=28 name=(null) inode=23742 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=29 name=(null) inode=23747 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=30 name=(null) inode=23733 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=31 name=(null) inode=23748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=32 name=(null) inode=23748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=33 name=(null) inode=23749 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=34 name=(null) inode=23748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=35 name=(null) inode=23750 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=36 name=(null) inode=23748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=37 name=(null) inode=23751 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=38 name=(null) inode=23748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=39 name=(null) inode=23752 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=40 name=(null) inode=23748 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PATH item=41 name=(null) inode=23753 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 02:21:23.789000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 02:21:23.884666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 02:21:23.896368 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Nov 1 02:21:23.896565 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Nov 1 02:21:23.915464 kernel: ipmi_si: IPMI System Interface driver Nov 1 02:21:23.915486 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Nov 1 02:21:24.010127 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Nov 1 02:21:24.023897 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Nov 1 02:21:24.023998 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Nov 1 02:21:24.024013 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Nov 1 02:21:24.024104 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Nov 1 02:21:24.024120 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Nov 1 02:21:24.134078 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Nov 1 02:21:24.134170 kernel: iTCO_vendor_support: vendor-support=0 Nov 1 02:21:24.134184 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Nov 1 02:21:24.134259 kernel: ipmi_si: Adding ACPI-specified kcs state machine Nov 1 02:21:24.134273 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Nov 1 02:21:24.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.065801 systemd[1]: Started systemd-userdbd.service. Nov 1 02:21:24.175364 kernel: iTCO_wdt iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS Nov 1 02:21:24.210958 kernel: intel_rapl_common: Found RAPL domain package Nov 1 02:21:24.211002 kernel: intel_rapl_common: Found RAPL domain core Nov 1 02:21:24.211015 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Nov 1 02:21:24.211109 kernel: intel_rapl_common: Found RAPL domain dram Nov 1 02:21:24.261363 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b11, dev_id: 0x20) Nov 1 02:21:24.306118 systemd-networkd[1321]: bond0: netdev ready Nov 1 02:21:24.309065 systemd-networkd[1321]: lo: Link UP Nov 1 02:21:24.309068 systemd-networkd[1321]: lo: Gained carrier Nov 1 02:21:24.309632 systemd-networkd[1321]: Enumeration completed Nov 1 02:21:24.309712 systemd[1]: Started systemd-networkd.service. Nov 1 02:21:24.309949 systemd-networkd[1321]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Nov 1 02:21:24.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.321411 systemd-networkd[1321]: enp2s0f1np1: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:3b.network. 
Nov 1 02:21:24.353364 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Nov 1 02:21:24.371362 kernel: ipmi_ssif: IPMI SSIF Interface driver Nov 1 02:21:24.373597 systemd[1]: Finished systemd-udev-settle.service. Nov 1 02:21:24.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.382062 systemd[1]: Starting lvm2-activation-early.service... Nov 1 02:21:24.398385 lvm[1385]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 02:21:24.426713 systemd[1]: Finished lvm2-activation-early.service. Nov 1 02:21:24.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.434496 systemd[1]: Reached target cryptsetup.target. Nov 1 02:21:24.443006 systemd[1]: Starting lvm2-activation.service... Nov 1 02:21:24.445073 lvm[1386]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 02:21:24.482766 systemd[1]: Finished lvm2-activation.service. Nov 1 02:21:24.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.490500 systemd[1]: Reached target local-fs-pre.target. Nov 1 02:21:24.498449 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 02:21:24.498463 systemd[1]: Reached target local-fs.target. Nov 1 02:21:24.506450 systemd[1]: Reached target machines.target. Nov 1 02:21:24.515017 systemd[1]: Starting ldconfig.service... Nov 1 02:21:24.522080 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 02:21:24.522103 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 02:21:24.522636 systemd[1]: Starting systemd-boot-update.service... Nov 1 02:21:24.529903 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 02:21:24.540058 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 02:21:24.540708 systemd[1]: Starting systemd-sysext.service... Nov 1 02:21:24.540925 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1388 (bootctl) Nov 1 02:21:24.541640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 02:21:24.552735 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 02:21:24.561022 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 02:21:24.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.573099 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 02:21:24.573182 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 02:21:24.606363 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 02:21:24.685305 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Nov 1 02:21:24.685640 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 02:21:24.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.717398 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 02:21:24.731186 systemd-fsck[1397]: fsck.fat 4.2 (2021-01-31) Nov 1 02:21:24.731186 systemd-fsck[1397]: /dev/sdb1: 790 files, 120773/258078 clusters Nov 1 02:21:24.731997 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 02:21:24.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.743291 systemd[1]: Mounting boot.mount... Nov 1 02:21:24.760363 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 02:21:24.765203 systemd[1]: Mounted boot.mount. Nov 1 02:21:24.772855 (sd-sysext)[1401]: Using extensions 'kubernetes'. Nov 1 02:21:24.773048 (sd-sysext)[1401]: Merged extensions into '/usr'. Nov 1 02:21:24.783917 systemd[1]: Finished systemd-boot-update.service. Nov 1 02:21:24.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.792557 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 02:21:24.793337 systemd[1]: Mounting usr-share-oem.mount... Nov 1 02:21:24.800555 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 02:21:24.801181 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 02:21:24.807965 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 02:21:24.814954 systemd[1]: Starting modprobe@loop.service... Nov 1 02:21:24.821465 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 02:21:24.821534 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 02:21:24.821601 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 02:21:24.823224 systemd[1]: Mounted usr-share-oem.mount. Nov 1 02:21:24.830625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 02:21:24.830692 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 02:21:24.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.838657 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 02:21:24.838719 systemd[1]: Finished modprobe@efi_pstore.service. 
Nov 1 02:21:24.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.846640 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 02:21:24.846703 systemd[1]: Finished modprobe@loop.service. Nov 1 02:21:24.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.854671 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 02:21:24.854733 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 02:21:24.855255 systemd[1]: Finished systemd-sysext.service. Nov 1 02:21:24.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:24.863977 systemd[1]: Starting ensure-sysext.service... Nov 1 02:21:24.870931 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 02:21:24.880617 systemd[1]: Reloading. Nov 1 02:21:24.881460 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 02:21:24.886486 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 02:21:24.891544 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 02:21:24.894366 /usr/lib/systemd/system-generators/torcx-generator[1428]: time="2025-11-01T02:21:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 02:21:24.894392 /usr/lib/systemd/system-generators/torcx-generator[1428]: time="2025-11-01T02:21:24Z" level=info msg="torcx already run" Nov 1 02:21:24.928840 ldconfig[1387]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 02:21:24.948456 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 02:21:24.948465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 02:21:24.961014 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 02:21:25.007000 audit: BPF prog-id=24 op=LOAD Nov 1 02:21:25.007000 audit: BPF prog-id=21 op=UNLOAD Nov 1 02:21:25.007000 audit: BPF prog-id=25 op=LOAD Nov 1 02:21:25.007000 audit: BPF prog-id=26 op=LOAD Nov 1 02:21:25.007000 audit: BPF prog-id=22 op=UNLOAD Nov 1 02:21:25.007000 audit: BPF prog-id=23 op=UNLOAD Nov 1 02:21:25.007000 audit: BPF prog-id=27 op=LOAD Nov 1 02:21:25.007000 audit: BPF prog-id=28 op=LOAD Nov 1 02:21:25.007000 audit: BPF prog-id=18 op=UNLOAD Nov 1 02:21:25.007000 audit: BPF prog-id=19 op=UNLOAD Nov 1 02:21:25.009000 audit: BPF prog-id=29 op=LOAD Nov 1 02:21:25.009000 audit: BPF prog-id=15 op=UNLOAD Nov 1 02:21:25.009000 audit: BPF prog-id=30 op=LOAD Nov 1 02:21:25.009000 audit: BPF prog-id=31 op=LOAD Nov 1 02:21:25.009000 audit: BPF prog-id=16 op=UNLOAD Nov 1 02:21:25.009000 audit: BPF prog-id=17 op=UNLOAD Nov 1 02:21:25.009000 audit: BPF prog-id=32 op=LOAD Nov 1 02:21:25.009000 audit: BPF prog-id=20 op=UNLOAD Nov 1 02:21:25.012445 systemd[1]: Finished ldconfig.service. Nov 1 02:21:25.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:25.020020 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 02:21:25.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 02:21:25.031134 systemd[1]: Starting audit-rules.service... Nov 1 02:21:25.038038 systemd[1]: Starting clean-ca-certificates.service... Nov 1 02:21:25.047155 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 02:21:25.047000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 02:21:25.047000 audit[1507]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd171463a0 a2=420 a3=0 items=0 ppid=1490 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 02:21:25.047000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 02:21:25.048643 augenrules[1507]: No rules Nov 1 02:21:25.056526 systemd[1]: Starting systemd-resolved.service... Nov 1 02:21:25.064432 systemd[1]: Starting systemd-timesyncd.service... Nov 1 02:21:25.072012 systemd[1]: Starting systemd-update-utmp.service... Nov 1 02:21:25.078968 systemd[1]: Finished audit-rules.service. Nov 1 02:21:25.085609 systemd[1]: Finished clean-ca-certificates.service. Nov 1 02:21:25.093674 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 02:21:25.107003 systemd[1]: Finished systemd-update-utmp.service. Nov 1 02:21:25.116403 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 02:21:25.117410 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 02:21:25.125237 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 02:21:25.133008 systemd[1]: Starting modprobe@loop.service... Nov 1 02:21:25.137985 systemd-resolved[1512]: Positive Trust Anchors: Nov 1 02:21:25.137992 systemd-resolved[1512]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 02:21:25.138013 systemd-resolved[1512]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 02:21:25.139421 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 02:21:25.139495 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 02:21:25.140436 systemd[1]: Starting systemd-update-done.service... Nov 1 02:21:25.142483 systemd-resolved[1512]: Using system hostname 'ci-3510.3.8-n-c654b621d4'. Nov 1 02:21:25.147399 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 02:21:25.147927 systemd[1]: Started systemd-timesyncd.service. Nov 1 02:21:25.156641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 02:21:25.156712 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 02:21:25.164588 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 02:21:25.164652 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 02:21:25.172591 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 02:21:25.172670 systemd[1]: Finished modprobe@loop.service. Nov 1 02:21:25.180591 systemd[1]: Finished systemd-update-done.service. Nov 1 02:21:25.189768 systemd[1]: Reached target time-set.target. Nov 1 02:21:25.197554 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 02:21:25.198192 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 02:21:25.206927 systemd[1]: Starting modprobe@drm.service... Nov 1 02:21:25.214917 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 02:21:25.222910 systemd[1]: Starting modprobe@loop.service... Nov 1 02:21:25.230434 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 02:21:25.230497 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 02:21:25.231096 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 02:21:25.239412 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 02:21:25.240060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 02:21:25.240125 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 02:21:25.248583 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 02:21:25.248645 systemd[1]: Finished modprobe@drm.service. Nov 1 02:21:25.257602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 02:21:25.257670 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 02:21:25.266585 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 02:21:25.266652 systemd[1]: Finished modprobe@loop.service. 
Nov 1 02:21:25.274687 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 02:21:25.274750 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 02:21:25.275320 systemd[1]: Finished ensure-sysext.service. Nov 1 02:21:25.408392 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Nov 1 02:21:25.433423 kernel: bond0: (slave enp2s0f1np1): Enslaving as a backup interface with an up link Nov 1 02:21:25.434180 systemd-networkd[1321]: enp2s0f0np0: Configuring with /etc/systemd/network/10-b8:ce:f6:07:a6:3a.network. Nov 1 02:21:25.472398 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 02:21:25.574453 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 02:21:25.574478 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 02:21:25.602423 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 02:21:25.675411 kernel: mlx5_core 0000:02:00.0 enp2s0f0np0: Link up Nov 1 02:21:25.704381 kernel: bond0: (slave enp2s0f0np0): Enslaving as a backup interface with an up link Nov 1 02:21:25.704477 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Nov 1 02:21:25.704921 systemd[1]: Started systemd-resolved.service. Nov 1 02:21:25.723045 systemd-networkd[1321]: bond0: Link UP Nov 1 02:21:25.723381 systemd-networkd[1321]: enp2s0f1np1: Link UP Nov 1 02:21:25.737500 systemd[1]: Reached target network.target. Nov 1 02:21:25.746398 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Nov 1 02:21:25.746438 kernel: bond0: active interface up! Nov 1 02:21:25.777454 systemd[1]: Reached target nss-lookup.target. Nov 1 02:21:25.781429 kernel: bond0: (slave enp2s0f0np0): link status definitely up, 10000 Mbps full duplex Nov 1 02:21:25.781432 systemd-networkd[1321]: enp2s0f1np1: Gained carrier Nov 1 02:21:25.782444 systemd-networkd[1321]: enp2s0f1np1: Reconfiguring with /etc/systemd/network/10-b8:ce:f6:07:a6:3a.network. Nov 1 02:21:25.789476 systemd[1]: Reached target sysinit.target. Nov 1 02:21:25.797497 systemd[1]: Started motdgen.path. Nov 1 02:21:25.805452 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 02:21:25.824523 systemd[1]: Started logrotate.timer. Nov 1 02:21:25.832427 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Nov 1 02:21:25.839441 systemd[1]: Started mdadm.timer. Nov 1 02:21:25.847402 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 02:21:25.856399 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 02:21:25.856417 systemd[1]: Reached target paths.target. Nov 1 02:21:25.864400 systemd[1]: Reached target timers.target. Nov 1 02:21:25.872620 systemd[1]: Listening on dbus.socket. Nov 1 02:21:25.881332 systemd[1]: Starting docker.socket... Nov 1 02:21:25.897976 systemd[1]: Listening on sshd.socket. Nov 1 02:21:25.905425 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Nov 1 02:21:25.919506 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 02:21:25.919738 systemd[1]: Listening on docker.socket. Nov 1 02:21:25.927392 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Nov 1 02:21:25.942495 systemd[1]: Reached target sockets.target. Nov 1 02:21:25.948392 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Nov 1 02:21:25.965459 systemd[1]: Reached target basic.target. Nov 1 02:21:25.971364 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Nov 1 02:21:25.976493 systemd-networkd[1321]: enp2s0f0np0: Link UP Nov 1 02:21:25.976699 systemd-networkd[1321]: bond0: Gained carrier Nov 1 02:21:25.976804 systemd-networkd[1321]: enp2s0f0np0: Gained carrier Nov 1 02:21:25.976829 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:25.986468 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 02:21:25.986484 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 02:21:25.986973 systemd[1]: Starting containerd.service... Nov 1 02:21:25.994363 kernel: bond0: (slave enp2s0f1np1): link status down for interface, disabling it in 200 ms Nov 1 02:21:25.994387 kernel: bond0: (slave enp2s0f1np1): invalid new link 1 on slave Nov 1 02:21:26.017554 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:26.017615 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:26.017748 systemd-networkd[1321]: enp2s0f1np1: Link DOWN Nov 1 02:21:26.017751 systemd-networkd[1321]: enp2s0f1np1: Lost carrier Nov 1 02:21:26.017924 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 02:21:26.026965 systemd[1]: Starting coreos-metadata.service... Nov 1 02:21:26.028557 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:26.028621 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:26.028804 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:26.033990 systemd[1]: Starting dbus.service... Nov 1 02:21:26.040963 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 02:21:26.045479 jq[1535]: false Nov 1 02:21:26.047959 systemd[1]: Starting extend-filesystems.service... Nov 1 02:21:26.048431 coreos-metadata[1528]: Nov 01 02:21:26.048 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 02:21:26.051214 dbus-daemon[1534]: [system] SELinux support is enabled Nov 1 02:21:26.054445 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 02:21:26.055084 systemd[1]: Starting motdgen.service... 
Nov 1 02:21:26.055291 extend-filesystems[1536]: Found loop1 Nov 1 02:21:26.073503 extend-filesystems[1536]: Found sda Nov 1 02:21:26.073503 extend-filesystems[1536]: Found sdb Nov 1 02:21:26.073503 extend-filesystems[1536]: Found sdb1 Nov 1 02:21:26.073503 extend-filesystems[1536]: Found sdb2 Nov 1 02:21:26.073503 extend-filesystems[1536]: Found sdb3 Nov 1 02:21:26.073503 extend-filesystems[1536]: Found usr Nov 1 02:21:26.073503 extend-filesystems[1536]: Found sdb4 Nov 1 02:21:26.073503 extend-filesystems[1536]: Found sdb6 Nov 1 02:21:26.073503 extend-filesystems[1536]: Found sdb7 Nov 1 02:21:26.073503 extend-filesystems[1536]: Found sdb9 Nov 1 02:21:26.073503 extend-filesystems[1536]: Checking size of /dev/sdb9 Nov 1 02:21:26.073503 extend-filesystems[1536]: Resized partition /dev/sdb9 Nov 1 02:21:26.288274 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Nov 1 02:21:26.288297 kernel: mlx5_core 0000:02:00.1 enp2s0f1np1: Link up Nov 1 02:21:26.288408 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Nov 1 02:21:26.288421 kernel: bond0: (slave enp2s0f1np1): speed changed to 0 on port 1 Nov 1 02:21:26.288432 kernel: bond0: (slave enp2s0f1np1): link status up again after 200 ms Nov 1 02:21:26.062139 systemd[1]: Starting prepare-helm.service... Nov 1 02:21:26.288587 coreos-metadata[1531]: Nov 01 02:21:26.056 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Nov 1 02:21:26.232571 dbus-daemon[1534]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 02:21:26.288733 extend-filesystems[1552]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 02:21:26.316462 kernel: bond0: (slave enp2s0f1np1): link status definitely up, 10000 Mbps full duplex Nov 1 02:21:26.096213 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 02:21:26.115059 systemd[1]: Starting sshd-keygen.service... Nov 1 02:21:26.129851 systemd[1]: Starting systemd-logind.service... Nov 1 02:21:26.135480 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 02:21:26.136077 systemd[1]: Starting tcsd.service... Nov 1 02:21:26.316822 update_engine[1565]: I1101 02:21:26.198510 1565 main.cc:92] Flatcar Update Engine starting Nov 1 02:21:26.316822 update_engine[1565]: I1101 02:21:26.201577 1565 update_check_scheduler.cc:74] Next update check in 8m24s Nov 1 02:21:26.151674 systemd-logind[1563]: Watching system buttons on /dev/input/event3 (Power Button) Nov 1 02:21:26.317060 jq[1566]: true Nov 1 02:21:26.151686 systemd-logind[1563]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 1 02:21:26.151696 systemd-logind[1563]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Nov 1 02:21:26.317240 tar[1568]: linux-amd64/LICENSE Nov 1 02:21:26.317240 tar[1568]: linux-amd64/helm Nov 1 02:21:26.151807 systemd-logind[1563]: New seat seat0. Nov 1 02:21:26.317411 jq[1570]: true Nov 1 02:21:26.151822 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.243205669Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.253752243Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.254877114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.255520755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.1 Nov 1 02:21:26.317522 env[1571]: 92-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.255548556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.257066764Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter Nov 1 02:21:26.317522 env[1571]: : skip plugin" type=io.containerd.snapshotter.v1 Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.257078362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.257086138Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.257091649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.257138288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 02:21:26.317522 env[1571]: time="2025-11-01T02:21:26.257263272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 02:21:26.152338 systemd[1]: Starting update-engine.service... Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.257333955Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.257342872Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.259212318Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.259224793Z" level=info msg="metadata content store policy set" policy=shared Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.269663642Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.269677142Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.269684573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.269703862Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.269715173Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.269724600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.269731642Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.269738899Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 02:21:26.317840 env[1571]: time="2025-11-01T02:21:26.269745948Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 02:21:26.165961 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.269753194Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.269759692Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.269766385Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.269811611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.269859538Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.269986326Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.270000590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.270008781Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.270032911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.270041875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.270048698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.270054669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.270060777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Nov 1 02:21:26.318096 env[1571]: time="2025-11-01T02:21:26.270067808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.180988 systemd[1]: Started dbus.service. Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270073859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270079950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270087023Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270149554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270159613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270166219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270172103Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270179538Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270185282Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270195101Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 02:21:26.318369 env[1571]: time="2025-11-01T02:21:26.270215038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 02:21:26.318552 bash[1594]: Updated "/home/core/.ssh/authorized_keys" Nov 1 02:21:26.196279 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 02:21:26.196391 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270324057Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270353141Z" level=info msg="Connect containerd service" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270373677Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270643784Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270731112Z" level=info msg="Start subscribing containerd event" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270759171Z" level=info msg="Start recovering state" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270781117Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270795743Z" level=info msg="Start event monitor" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270804355Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270812598Z" level=info msg="Start snapshots syncer" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270820963Z" level=info msg="Start cni network conf syncer for default" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270827049Z" level=info msg="Start streaming server" Nov 1 02:21:26.318655 env[1571]: time="2025-11-01T02:21:26.270829137Z" level=info msg="containerd successfully booted in 0.028117s" Nov 1 02:21:26.196568 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 02:21:26.196662 systemd[1]: Finished motdgen.service. Nov 1 02:21:26.208895 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 02:21:26.208983 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 02:21:26.236870 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Nov 1 02:21:26.236966 systemd[1]: Condition check resulted in tcsd.service being skipped. Nov 1 02:21:26.239053 systemd[1]: Started systemd-logind.service. Nov 1 02:21:26.272010 systemd-networkd[1321]: enp2s0f1np1: Link UP Nov 1 02:21:26.272013 systemd-networkd[1321]: enp2s0f1np1: Gained carrier Nov 1 02:21:26.304579 systemd[1]: Started containerd.service. Nov 1 02:21:26.314602 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:26.314667 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:26.314719 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:26.314804 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:26.323813 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 02:21:26.337455 systemd[1]: Started update-engine.service. Nov 1 02:21:26.347177 systemd[1]: Started locksmithd.service. Nov 1 02:21:26.353473 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 02:21:26.353573 systemd[1]: Reached target system-config.target. Nov 1 02:21:26.361461 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 02:21:26.361555 systemd[1]: Reached target user-config.target. Nov 1 02:21:26.404318 locksmithd[1606]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 02:21:26.404479 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 02:21:26.416931 systemd[1]: Finished sshd-keygen.service. Nov 1 02:21:26.424438 systemd[1]: Starting issuegen.service... Nov 1 02:21:26.431698 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 02:21:26.431782 systemd[1]: Finished issuegen.service. Nov 1 02:21:26.439369 systemd[1]: Starting systemd-user-sessions.service... Nov 1 02:21:26.447739 systemd[1]: Finished systemd-user-sessions.service. Nov 1 02:21:26.456415 systemd[1]: Started getty@tty1.service. Nov 1 02:21:26.464265 systemd[1]: Started serial-getty@ttyS1.service. Nov 1 02:21:26.472595 systemd[1]: Reached target getty.target. Nov 1 02:21:26.528671 tar[1568]: linux-amd64/README.md Nov 1 02:21:26.531298 systemd[1]: Finished prepare-helm.service. 
Nov 1 02:21:26.614393 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Nov 1 02:21:26.642508 extend-filesystems[1552]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Nov 1 02:21:26.642508 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 56 Nov 1 02:21:26.642508 extend-filesystems[1552]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Nov 1 02:21:26.679433 extend-filesystems[1536]: Resized filesystem in /dev/sdb9 Nov 1 02:21:26.642985 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 02:21:26.643069 systemd[1]: Finished extend-filesystems.service. Nov 1 02:21:27.298456 systemd-networkd[1321]: bond0: Gained IPv6LL Nov 1 02:21:27.298759 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:27.426875 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:27.427083 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:27.428562 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 02:21:27.438630 systemd[1]: Reached target network-online.target. Nov 1 02:21:27.447641 systemd[1]: Starting kubelet.service... Nov 1 02:21:28.219060 systemd[1]: Started kubelet.service. Nov 1 02:21:28.625282 kubelet[1637]: E1101 02:21:28.625205 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 02:21:28.626264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 02:21:28.626343 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 02:21:29.665546 kernel: mlx5_core 0000:02:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Nov 1 02:21:31.556376 login[1627]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Nov 1 02:21:31.557720 login[1626]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 02:21:31.587407 systemd-logind[1563]: New session 2 of user core. Nov 1 02:21:31.591492 systemd[1]: Created slice user-500.slice. Nov 1 02:21:31.594671 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 02:21:31.606315 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 02:21:31.607087 systemd[1]: Starting user@500.service... Nov 1 02:21:31.609306 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:21:31.685339 systemd[1657]: Queued start job for default target default.target. Nov 1 02:21:31.685593 systemd[1657]: Reached target paths.target. Nov 1 02:21:31.685604 systemd[1657]: Reached target sockets.target. Nov 1 02:21:31.685613 systemd[1657]: Reached target timers.target. Nov 1 02:21:31.685620 systemd[1657]: Reached target basic.target. Nov 1 02:21:31.685640 systemd[1657]: Reached target default.target. Nov 1 02:21:31.685655 systemd[1657]: Startup finished in 73ms. Nov 1 02:21:31.685671 systemd[1]: Started user@500.service. Nov 1 02:21:31.686271 systemd[1]: Started session-2.scope. 
Nov 1 02:21:32.227565 coreos-metadata[1531]: Nov 01 02:21:32.227 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Nov 1 02:21:32.228410 coreos-metadata[1528]: Nov 01 02:21:32.227 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Nov 1 02:21:32.561707 login[1627]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 02:21:32.564557 systemd-logind[1563]: New session 1 of user core. Nov 1 02:21:32.565158 systemd[1]: Started session-1.scope. Nov 1 02:21:33.227779 coreos-metadata[1528]: Nov 01 02:21:33.227 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 1 02:21:33.228050 coreos-metadata[1531]: Nov 01 02:21:33.227 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Nov 1 02:21:33.801419 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:2 port 2:2 Nov 1 02:21:33.801593 kernel: mlx5_core 0000:02:00.0: modify lag map port 1:1 port 2:2 Nov 1 02:21:34.294742 systemd[1]: Created slice system-sshd.slice. Nov 1 02:21:34.295349 systemd[1]: Started sshd@0-86.109.11.55:22-147.75.109.163:49112.service. Nov 1 02:21:34.335601 sshd[1678]: Accepted publickey for core from 147.75.109.163 port 49112 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:21:34.337029 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:21:34.342063 systemd-logind[1563]: New session 3 of user core. Nov 1 02:21:34.343578 systemd[1]: Started session-3.scope. Nov 1 02:21:34.403518 systemd[1]: Started sshd@1-86.109.11.55:22-147.75.109.163:49120.service. Nov 1 02:21:34.429618 sshd[1683]: Accepted publickey for core from 147.75.109.163 port 49120 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:21:34.430339 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:21:34.432685 systemd-logind[1563]: New session 4 of user core. Nov 1 02:21:34.433248 systemd[1]: Started session-4.scope. Nov 1 02:21:34.484026 sshd[1683]: pam_unix(sshd:session): session closed for user core Nov 1 02:21:34.485559 systemd[1]: sshd@1-86.109.11.55:22-147.75.109.163:49120.service: Deactivated successfully. Nov 1 02:21:34.485900 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 02:21:34.486196 systemd-logind[1563]: Session 4 logged out. Waiting for processes to exit. Nov 1 02:21:34.486864 systemd[1]: Started sshd@2-86.109.11.55:22-147.75.109.163:49122.service. Nov 1 02:21:34.487259 systemd-logind[1563]: Removed session 4. Nov 1 02:21:34.514392 sshd[1689]: Accepted publickey for core from 147.75.109.163 port 49122 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:21:34.515524 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:21:34.519265 systemd-logind[1563]: New session 5 of user core. Nov 1 02:21:34.520435 systemd[1]: Started session-5.scope. Nov 1 02:21:34.576871 sshd[1689]: pam_unix(sshd:session): session closed for user core Nov 1 02:21:34.578174 systemd[1]: sshd@2-86.109.11.55:22-147.75.109.163:49122.service: Deactivated successfully. Nov 1 02:21:34.578602 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 02:21:34.578971 systemd-logind[1563]: Session 5 logged out. Waiting for processes to exit. 
Nov 1 02:21:34.579364 systemd-logind[1563]: Removed session 5. Nov 1 02:21:35.375697 coreos-metadata[1528]: Nov 01 02:21:35.375 INFO Fetch successful Nov 1 02:21:35.459592 unknown[1528]: wrote ssh authorized keys file for user: core Nov 1 02:21:35.472560 update-ssh-keys[1694]: Updated "/home/core/.ssh/authorized_keys" Nov 1 02:21:35.472815 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 02:21:35.474602 coreos-metadata[1531]: Nov 01 02:21:35.474 INFO Fetch successful Nov 1 02:21:35.507273 systemd[1]: Finished coreos-metadata.service. Nov 1 02:21:35.508091 systemd[1]: Started packet-phone-home.service. Nov 1 02:21:35.508236 systemd[1]: Reached target multi-user.target. Nov 1 02:21:35.508948 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 02:21:35.513291 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 02:21:35.513457 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 02:21:35.513550 curl[1697]: % Total % Received % Xferd Average Speed Time Time Time Current Nov 1 02:21:35.513782 curl[1697]: Dload Upload Total Spent Left Speed Nov 1 02:21:35.513618 systemd[1]: Startup finished in 1.961s (kernel) + 25.186s (initrd) + 16.488s (userspace) = 43.636s. Nov 1 02:21:35.888619 curl[1697]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Nov 1 02:21:35.891095 systemd[1]: packet-phone-home.service: Deactivated successfully. Nov 1 02:21:38.817016 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 02:21:38.817605 systemd[1]: Stopped kubelet.service. Nov 1 02:21:38.820759 systemd[1]: Starting kubelet.service... Nov 1 02:21:39.050545 systemd[1]: Started kubelet.service. Nov 1 02:21:39.098867 kubelet[1703]: E1101 02:21:39.098761 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 02:21:39.101229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 02:21:39.101313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 02:21:44.586050 systemd[1]: Started sshd@3-86.109.11.55:22-147.75.109.163:46438.service. Nov 1 02:21:44.618121 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 46438 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:21:44.618853 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:21:44.620970 systemd-logind[1563]: New session 6 of user core. Nov 1 02:21:44.621509 systemd[1]: Started session-6.scope. Nov 1 02:21:44.672804 sshd[1723]: pam_unix(sshd:session): session closed for user core Nov 1 02:21:44.674599 systemd[1]: sshd@3-86.109.11.55:22-147.75.109.163:46438.service: Deactivated successfully. Nov 1 02:21:44.674943 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 02:21:44.675245 systemd-logind[1563]: Session 6 logged out. Waiting for processes to exit. Nov 1 02:21:44.675954 systemd[1]: Started sshd@4-86.109.11.55:22-147.75.109.163:46448.service. Nov 1 02:21:44.676350 systemd-logind[1563]: Removed session 6. 
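[editor's note] The coreos-metadata entries above retry https://metadata.packet.net/metadata until the network is reachable and then write the fetched SSH keys for user core. A hedged Go sketch of that fetch-with-retry pattern (URL from the log; retry interval, output path, and error handling are assumptions; the real service parses the JSON rather than saving it whole):

```go
// Sketch: poll the Packet metadata endpoint until it answers, then persist the result.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	const url = "https://metadata.packet.net/metadata" // from the log
	var body []byte
	for attempt := 1; ; attempt++ {
		fmt.Printf("Fetching %s: Attempt #%d\n", url, attempt)
		resp, err := http.Get(url)
		if err == nil {
			body, err = io.ReadAll(resp.Body)
			resp.Body.Close()
			if err == nil {
				break
			}
		}
		fmt.Printf("Failed to fetch: %v\n", err)
		time.Sleep(time.Second) // assumed backoff
	}
	if err := os.WriteFile("/tmp/packet-metadata.json", body, 0o600); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```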
Nov 1 02:21:44.713757 sshd[1729]: Accepted publickey for core from 147.75.109.163 port 46448 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:21:44.714595 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:21:44.717414 systemd-logind[1563]: New session 7 of user core. Nov 1 02:21:44.718185 systemd[1]: Started session-7.scope. Nov 1 02:21:44.768464 sshd[1729]: pam_unix(sshd:session): session closed for user core Nov 1 02:21:44.775170 systemd[1]: sshd@4-86.109.11.55:22-147.75.109.163:46448.service: Deactivated successfully. Nov 1 02:21:44.776896 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 02:21:44.778653 systemd-logind[1563]: Session 7 logged out. Waiting for processes to exit. Nov 1 02:21:44.781967 systemd[1]: Started sshd@5-86.109.11.55:22-147.75.109.163:46456.service. Nov 1 02:21:44.784650 systemd-logind[1563]: Removed session 7. Nov 1 02:21:44.842857 sshd[1736]: Accepted publickey for core from 147.75.109.163 port 46456 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:21:44.843493 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:21:44.845630 systemd-logind[1563]: New session 8 of user core. Nov 1 02:21:44.846186 systemd[1]: Started session-8.scope. Nov 1 02:21:44.899068 sshd[1736]: pam_unix(sshd:session): session closed for user core Nov 1 02:21:44.900562 systemd[1]: sshd@5-86.109.11.55:22-147.75.109.163:46456.service: Deactivated successfully. Nov 1 02:21:44.900901 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 02:21:44.901212 systemd-logind[1563]: Session 8 logged out. Waiting for processes to exit. Nov 1 02:21:44.901886 systemd[1]: Started sshd@6-86.109.11.55:22-147.75.109.163:46464.service. Nov 1 02:21:44.902287 systemd-logind[1563]: Removed session 8. Nov 1 02:21:44.983482 sshd[1742]: Accepted publickey for core from 147.75.109.163 port 46464 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:21:44.985093 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:21:44.990059 systemd-logind[1563]: New session 9 of user core. Nov 1 02:21:44.991378 systemd[1]: Started session-9.scope. Nov 1 02:21:45.074926 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 02:21:45.075645 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 02:21:45.131884 systemd[1]: Starting docker.service... 
Nov 1 02:21:45.168550 env[1758]: time="2025-11-01T02:21:45.168480923Z" level=info msg="Starting up" Nov 1 02:21:45.169518 env[1758]: time="2025-11-01T02:21:45.169465241Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 02:21:45.169518 env[1758]: time="2025-11-01T02:21:45.169481857Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 02:21:45.169518 env[1758]: time="2025-11-01T02:21:45.169500131Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 02:21:45.169518 env[1758]: time="2025-11-01T02:21:45.169516786Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 02:21:45.171832 env[1758]: time="2025-11-01T02:21:45.171785825Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 02:21:45.171832 env[1758]: time="2025-11-01T02:21:45.171801040Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 02:21:45.171832 env[1758]: time="2025-11-01T02:21:45.171815732Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 02:21:45.171832 env[1758]: time="2025-11-01T02:21:45.171825038Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 02:21:45.189609 env[1758]: time="2025-11-01T02:21:45.189563752Z" level=info msg="Loading containers: start." Nov 1 02:21:45.314390 kernel: Initializing XFRM netlink socket Nov 1 02:21:45.387631 env[1758]: time="2025-11-01T02:21:45.387548634Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 02:21:45.388265 systemd-timesyncd[1513]: Network configuration changed, trying to establish connection. Nov 1 02:21:45.436015 systemd-networkd[1321]: docker0: Link UP Nov 1 02:21:45.451737 env[1758]: time="2025-11-01T02:21:45.451636747Z" level=info msg="Loading containers: done." Nov 1 02:21:45.471552 env[1758]: time="2025-11-01T02:21:45.471473068Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 02:21:45.471631 env[1758]: time="2025-11-01T02:21:45.471609180Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 02:21:45.471669 env[1758]: time="2025-11-01T02:21:45.471656775Z" level=info msg="Daemon has completed initialization" Nov 1 02:21:45.473157 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2641859440-merged.mount: Deactivated successfully. Nov 1 02:21:45.478259 systemd[1]: Started docker.service. Nov 1 02:21:45.481078 env[1758]: time="2025-11-01T02:21:45.481029011Z" level=info msg="API listen on /run/docker.sock" Nov 1 02:21:45.789935 systemd-timesyncd[1513]: Contacted time server [2604:9a00:1:106:1c00:84ff:fe00:349]:123 (2.flatcar.pool.ntp.org). Nov 1 02:21:45.789969 systemd-timesyncd[1513]: Initial clock synchronization to Sat 2025-11-01 02:21:45.734689 UTC. Nov 1 02:21:46.430176 env[1571]: time="2025-11-01T02:21:46.430021938Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 02:21:46.991508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180745149.mount: Deactivated successfully. 
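[editor's note] dockerd notes above that docker0 defaults to 172.17.0.0/16 and that --bip can override it; the file-based equivalent is a "bip" entry in /etc/docker/daemon.json. A hedged Go sketch that writes such a file (the 10.200.0.1/24 subnet is an arbitrary example, not taken from this host):

```go
// Sketch: write a daemon.json that pins the docker0 bridge address, the
// file equivalent of the --bip option mentioned in the log.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	cfg := map[string]any{"bip": "10.200.0.1/24"} // example value only
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```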
Nov 1 02:21:48.223287 env[1571]: time="2025-11-01T02:21:48.223237174Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:48.224022 env[1571]: time="2025-11-01T02:21:48.223980279Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:48.225071 env[1571]: time="2025-11-01T02:21:48.225031663Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:48.226457 env[1571]: time="2025-11-01T02:21:48.226399819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:48.226896 env[1571]: time="2025-11-01T02:21:48.226841069Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 02:21:48.227314 env[1571]: time="2025-11-01T02:21:48.227300552Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 02:21:49.315623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 02:21:49.315773 systemd[1]: Stopped kubelet.service. Nov 1 02:21:49.316671 systemd[1]: Starting kubelet.service... Nov 1 02:21:49.531134 systemd[1]: Started kubelet.service. Nov 1 02:21:49.568345 env[1571]: time="2025-11-01T02:21:49.568263984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:49.569596 env[1571]: time="2025-11-01T02:21:49.569574140Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:49.571211 env[1571]: time="2025-11-01T02:21:49.571194129Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:49.572677 env[1571]: time="2025-11-01T02:21:49.572661674Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:49.573132 env[1571]: time="2025-11-01T02:21:49.573117644Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 02:21:49.573453 env[1571]: time="2025-11-01T02:21:49.573413940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 02:21:49.576307 kubelet[1915]: E1101 02:21:49.576291 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file 
\"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 02:21:49.577469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 02:21:49.577540 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 02:21:50.721063 env[1571]: time="2025-11-01T02:21:50.721005106Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:50.722145 env[1571]: time="2025-11-01T02:21:50.722110719Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:50.723283 env[1571]: time="2025-11-01T02:21:50.723241230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:50.724198 env[1571]: time="2025-11-01T02:21:50.724150058Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:50.724715 env[1571]: time="2025-11-01T02:21:50.724674210Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 02:21:50.725128 env[1571]: time="2025-11-01T02:21:50.725113569Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 02:21:51.712662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646293900.mount: Deactivated successfully. Nov 1 02:21:52.112600 env[1571]: time="2025-11-01T02:21:52.112577387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:52.113199 env[1571]: time="2025-11-01T02:21:52.113188101Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:52.113757 env[1571]: time="2025-11-01T02:21:52.113721044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:52.114631 env[1571]: time="2025-11-01T02:21:52.114590625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:52.114786 env[1571]: time="2025-11-01T02:21:52.114737179Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 02:21:52.115170 env[1571]: time="2025-11-01T02:21:52.115141459Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 02:21:52.787602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525306377.mount: Deactivated successfully. 
Nov 1 02:21:53.492780 env[1571]: time="2025-11-01T02:21:53.492750314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:53.493450 env[1571]: time="2025-11-01T02:21:53.493436475Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:53.494957 env[1571]: time="2025-11-01T02:21:53.494903051Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:53.495836 env[1571]: time="2025-11-01T02:21:53.495811499Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:53.496299 env[1571]: time="2025-11-01T02:21:53.496283077Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 02:21:53.496830 env[1571]: time="2025-11-01T02:21:53.496797694Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 02:21:54.018834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470995611.mount: Deactivated successfully. Nov 1 02:21:54.020070 env[1571]: time="2025-11-01T02:21:54.020047698Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:54.020669 env[1571]: time="2025-11-01T02:21:54.020655512Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:54.021321 env[1571]: time="2025-11-01T02:21:54.021310156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:54.022156 env[1571]: time="2025-11-01T02:21:54.022144076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:54.022400 env[1571]: time="2025-11-01T02:21:54.022366914Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 02:21:54.022859 env[1571]: time="2025-11-01T02:21:54.022834704Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 02:21:54.523263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3672146039.mount: Deactivated successfully. 
Nov 1 02:21:56.190635 env[1571]: time="2025-11-01T02:21:56.190578225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:56.191246 env[1571]: time="2025-11-01T02:21:56.191212690Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:56.192505 env[1571]: time="2025-11-01T02:21:56.192460985Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:56.193493 env[1571]: time="2025-11-01T02:21:56.193449987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:21:56.193958 env[1571]: time="2025-11-01T02:21:56.193922957Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 02:21:57.776770 systemd[1]: Stopped kubelet.service. Nov 1 02:21:57.778052 systemd[1]: Starting kubelet.service... Nov 1 02:21:57.790614 systemd[1]: Reloading. Nov 1 02:21:57.829479 /usr/lib/systemd/system-generators/torcx-generator[2004]: time="2025-11-01T02:21:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 02:21:57.829503 /usr/lib/systemd/system-generators/torcx-generator[2004]: time="2025-11-01T02:21:57Z" level=info msg="torcx already run" Nov 1 02:21:57.887986 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 02:21:57.887996 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 02:21:57.901737 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 02:21:57.967192 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 02:21:57.967230 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 02:21:57.967331 systemd[1]: Stopped kubelet.service. Nov 1 02:21:57.968162 systemd[1]: Starting kubelet.service... Nov 1 02:21:58.221225 systemd[1]: Started kubelet.service. Nov 1 02:21:58.266229 kubelet[2068]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 02:21:58.266229 kubelet[2068]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 02:21:58.266229 kubelet[2068]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 02:21:58.266617 kubelet[2068]: I1101 02:21:58.266281 2068 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 02:21:58.826666 kubelet[2068]: I1101 02:21:58.826616 2068 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 02:21:58.826666 kubelet[2068]: I1101 02:21:58.826631 2068 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 02:21:58.826792 kubelet[2068]: I1101 02:21:58.826786 2068 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 02:21:58.919917 kubelet[2068]: I1101 02:21:58.919813 2068 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 02:21:58.934211 kubelet[2068]: E1101 02:21:58.934142 2068 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://86.109.11.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 86.109.11.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 02:21:58.953847 kubelet[2068]: E1101 02:21:58.953724 2068 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 02:21:58.953847 kubelet[2068]: I1101 02:21:58.953813 2068 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 02:21:58.994895 kubelet[2068]: I1101 02:21:58.994816 2068 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 02:21:58.998445 kubelet[2068]: I1101 02:21:58.998322 2068 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 02:21:58.998872 kubelet[2068]: I1101 02:21:58.998448 2068 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-c654b621d4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 02:21:58.998872 kubelet[2068]: I1101 02:21:58.998839 2068 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 02:21:58.998872 kubelet[2068]: I1101 02:21:58.998862 2068 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 02:21:58.999229 kubelet[2068]: I1101 02:21:58.999077 2068 state_mem.go:36] "Initialized new in-memory state store" Nov 1 02:21:59.005612 kubelet[2068]: I1101 02:21:59.005563 2068 kubelet.go:446] "Attempting to sync node with API server" Nov 1 02:21:59.005612 kubelet[2068]: I1101 02:21:59.005599 2068 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 02:21:59.005765 kubelet[2068]: I1101 02:21:59.005623 2068 kubelet.go:352] "Adding apiserver pod source" Nov 1 02:21:59.005765 kubelet[2068]: I1101 02:21:59.005636 2068 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 02:21:59.025794 kubelet[2068]: I1101 02:21:59.025734 2068 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 02:21:59.026254 kubelet[2068]: I1101 02:21:59.026242 2068 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 02:21:59.040536 kubelet[2068]: W1101 02:21:59.040512 2068 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 02:21:59.041527 kubelet[2068]: W1101 02:21:59.041464 2068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://86.109.11.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-c654b621d4&limit=500&resourceVersion=0": dial tcp 86.109.11.55:6443: connect: connection refused Nov 1 02:21:59.041597 kubelet[2068]: E1101 02:21:59.041526 2068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://86.109.11.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-c654b621d4&limit=500&resourceVersion=0\": dial tcp 86.109.11.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 02:21:59.044652 kubelet[2068]: W1101 02:21:59.044611 2068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://86.109.11.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 86.109.11.55:6443: connect: connection refused Nov 1 02:21:59.044722 kubelet[2068]: E1101 02:21:59.044660 2068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://86.109.11.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 86.109.11.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 02:21:59.046049 kubelet[2068]: I1101 02:21:59.046004 2068 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 02:21:59.046049 kubelet[2068]: I1101 02:21:59.046035 2068 server.go:1287] "Started kubelet" Nov 1 02:21:59.046268 kubelet[2068]: I1101 02:21:59.046233 2068 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 02:21:59.061127 kubelet[2068]: I1101 02:21:59.061092 2068 server.go:479] "Adding debug handlers to kubelet server" Nov 1 02:21:59.066266 kubelet[2068]: E1101 02:21:59.066255 2068 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 02:21:59.069071 kubelet[2068]: E1101 02:21:59.067913 2068 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://86.109.11.55:6443/api/v1/namespaces/default/events\": dial tcp 86.109.11.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-c654b621d4.1873c0aaccdd2c42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-c654b621d4,UID:ci-3510.3.8-n-c654b621d4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-c654b621d4,},FirstTimestamp:2025-11-01 02:21:59.046016066 +0000 UTC m=+0.821789883,LastTimestamp:2025-11-01 02:21:59.046016066 +0000 UTC m=+0.821789883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-c654b621d4,}" Nov 1 02:21:59.069659 kubelet[2068]: I1101 02:21:59.069540 2068 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 02:21:59.069856 kubelet[2068]: I1101 02:21:59.069820 2068 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 02:21:59.070617 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Nov 1 02:21:59.070668 kubelet[2068]: I1101 02:21:59.070654 2068 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 02:21:59.070804 kubelet[2068]: I1101 02:21:59.070734 2068 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 02:21:59.070804 kubelet[2068]: I1101 02:21:59.070770 2068 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 02:21:59.070804 kubelet[2068]: E1101 02:21:59.070775 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:21:59.070916 kubelet[2068]: I1101 02:21:59.070821 2068 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 02:21:59.070916 kubelet[2068]: I1101 02:21:59.070868 2068 reconciler.go:26] "Reconciler: start to sync state" Nov 1 02:21:59.071880 kubelet[2068]: I1101 02:21:59.071867 2068 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 02:21:59.071935 kubelet[2068]: E1101 02:21:59.071918 2068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-c654b621d4?timeout=10s\": dial tcp 86.109.11.55:6443: connect: connection refused" interval="200ms" Nov 1 02:21:59.072017 kubelet[2068]: W1101 02:21:59.071994 2068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://86.109.11.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.11.55:6443: connect: connection refused Nov 1 02:21:59.072060 kubelet[2068]: E1101 02:21:59.072027 2068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://86.109.11.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 86.109.11.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 02:21:59.072328 kubelet[2068]: I1101 02:21:59.072320 2068 factory.go:221] Registration of the containerd container factory successfully Nov 1 02:21:59.072328 kubelet[2068]: I1101 02:21:59.072327 2068 factory.go:221] Registration of the systemd container factory successfully Nov 1 02:21:59.079387 kubelet[2068]: I1101 02:21:59.079333 2068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 02:21:59.079852 kubelet[2068]: I1101 02:21:59.079841 2068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 02:21:59.079852 kubelet[2068]: I1101 02:21:59.079853 2068 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 02:21:59.079928 kubelet[2068]: I1101 02:21:59.079865 2068 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 02:21:59.079928 kubelet[2068]: I1101 02:21:59.079870 2068 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 02:21:59.079928 kubelet[2068]: E1101 02:21:59.079893 2068 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 02:21:59.080106 kubelet[2068]: W1101 02:21:59.080092 2068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://86.109.11.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.55:6443: connect: connection refused Nov 1 02:21:59.080134 kubelet[2068]: E1101 02:21:59.080117 2068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://86.109.11.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 86.109.11.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 02:21:59.172127 kubelet[2068]: E1101 02:21:59.171998 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:21:59.180629 kubelet[2068]: E1101 02:21:59.180509 2068 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 02:21:59.215282 kubelet[2068]: I1101 02:21:59.215184 2068 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 02:21:59.215282 kubelet[2068]: I1101 02:21:59.215224 2068 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 02:21:59.215282 kubelet[2068]: I1101 02:21:59.215268 2068 state_mem.go:36] "Initialized new in-memory state store" Nov 1 02:21:59.228719 kubelet[2068]: I1101 02:21:59.228641 2068 policy_none.go:49] "None policy: Start" Nov 1 02:21:59.228719 kubelet[2068]: I1101 02:21:59.228682 2068 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 02:21:59.228719 kubelet[2068]: I1101 02:21:59.228710 2068 state_mem.go:35] "Initializing new in-memory state store" Nov 1 02:21:59.238251 systemd[1]: Created slice kubepods.slice. Nov 1 02:21:59.249130 systemd[1]: Created slice kubepods-burstable.slice. Nov 1 02:21:59.256584 systemd[1]: Created slice kubepods-besteffort.slice. 
Nov 1 02:21:59.268241 kubelet[2068]: I1101 02:21:59.268159 2068 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 02:21:59.268937 kubelet[2068]: I1101 02:21:59.268475 2068 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 02:21:59.268937 kubelet[2068]: I1101 02:21:59.268503 2068 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 02:21:59.268937 kubelet[2068]: I1101 02:21:59.268841 2068 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 02:21:59.270117 kubelet[2068]: E1101 02:21:59.270063 2068 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 02:21:59.270299 kubelet[2068]: E1101 02:21:59.270170 2068 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:21:59.272626 kubelet[2068]: E1101 02:21:59.272530 2068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-c654b621d4?timeout=10s\": dial tcp 86.109.11.55:6443: connect: connection refused" interval="400ms" Nov 1 02:21:59.373641 kubelet[2068]: I1101 02:21:59.373417 2068 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.374386 kubelet[2068]: E1101 02:21:59.374299 2068 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://86.109.11.55:6443/api/v1/nodes\": dial tcp 86.109.11.55:6443: connect: connection refused" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.402271 systemd[1]: Created slice kubepods-burstable-podbe37defbf09df10a377636d360eb0421.slice. Nov 1 02:21:59.419153 kubelet[2068]: E1101 02:21:59.419067 2068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c654b621d4\" not found" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.427187 systemd[1]: Created slice kubepods-burstable-pod55f704c7f41fec07aaa172f1fa964c1e.slice. Nov 1 02:21:59.431275 kubelet[2068]: E1101 02:21:59.431193 2068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c654b621d4\" not found" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.435083 systemd[1]: Created slice kubepods-burstable-pod0444c6a96fbc1f9b9986dbcb7796347f.slice. 
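[editor's note] With CgroupDriver "systemd" (see the NodeConfig dump earlier), each static pod gets its own slice under kubepods-<qos>, which is why the journal shows units like kubepods-burstable-podbe37defbf09df10a377636d360eb0421.slice. A sketch of how such a unit name can be derived from the QoS class and pod UID; the dash-to-underscore escaping is my reading of the kubelet's naming scheme and should be treated as an assumption:

```go
// Sketch: derive the per-pod systemd slice name matching the units above.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_") // assumed escaping rule
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UID taken from the kube-apiserver static pod entry in the log.
	fmt.Println(podSlice("burstable", "be37defbf09df10a377636d360eb0421"))
	// => kubepods-burstable-podbe37defbf09df10a377636d360eb0421.slice
}
```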
Nov 1 02:21:59.438841 kubelet[2068]: E1101 02:21:59.438749 2068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c654b621d4\" not found" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.473674 kubelet[2068]: I1101 02:21:59.473593 2068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be37defbf09df10a377636d360eb0421-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-c654b621d4\" (UID: \"be37defbf09df10a377636d360eb0421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.473981 kubelet[2068]: I1101 02:21:59.473686 2068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be37defbf09df10a377636d360eb0421-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-c654b621d4\" (UID: \"be37defbf09df10a377636d360eb0421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.473981 kubelet[2068]: I1101 02:21:59.473762 2068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.473981 kubelet[2068]: I1101 02:21:59.473820 2068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0444c6a96fbc1f9b9986dbcb7796347f-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-c654b621d4\" (UID: \"0444c6a96fbc1f9b9986dbcb7796347f\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.473981 kubelet[2068]: I1101 02:21:59.473935 2068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.474576 kubelet[2068]: I1101 02:21:59.474015 2068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be37defbf09df10a377636d360eb0421-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-c654b621d4\" (UID: \"be37defbf09df10a377636d360eb0421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.474576 kubelet[2068]: I1101 02:21:59.474075 2068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.474576 kubelet[2068]: I1101 02:21:59.474128 2068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.474576 kubelet[2068]: I1101 02:21:59.474177 2068 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.578670 kubelet[2068]: I1101 02:21:59.578573 2068 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.579439 kubelet[2068]: E1101 02:21:59.579314 2068 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://86.109.11.55:6443/api/v1/nodes\": dial tcp 86.109.11.55:6443: connect: connection refused" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.673872 kubelet[2068]: E1101 02:21:59.673633 2068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.11.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-c654b621d4?timeout=10s\": dial tcp 86.109.11.55:6443: connect: connection refused" interval="800ms" Nov 1 02:21:59.722017 env[1571]: time="2025-11-01T02:21:59.721858679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-c654b621d4,Uid:be37defbf09df10a377636d360eb0421,Namespace:kube-system,Attempt:0,}" Nov 1 02:21:59.733080 env[1571]: time="2025-11-01T02:21:59.732958296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-c654b621d4,Uid:55f704c7f41fec07aaa172f1fa964c1e,Namespace:kube-system,Attempt:0,}" Nov 1 02:21:59.744349 env[1571]: time="2025-11-01T02:21:59.744272036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-c654b621d4,Uid:0444c6a96fbc1f9b9986dbcb7796347f,Namespace:kube-system,Attempt:0,}" Nov 1 02:21:59.953645 kubelet[2068]: W1101 02:21:59.953349 2068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://86.109.11.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.11.55:6443: connect: connection refused Nov 1 02:21:59.953645 kubelet[2068]: E1101 02:21:59.953519 2068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://86.109.11.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 86.109.11.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 02:21:59.983701 kubelet[2068]: I1101 02:21:59.983597 2068 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:21:59.984407 kubelet[2068]: E1101 02:21:59.984299 2068 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://86.109.11.55:6443/api/v1/nodes\": dial tcp 86.109.11.55:6443: connect: connection refused" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:00.045151 kubelet[2068]: W1101 02:22:00.044984 2068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://86.109.11.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 86.109.11.55:6443: connect: connection 
refused Nov 1 02:22:00.045151 kubelet[2068]: E1101 02:22:00.045134 2068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://86.109.11.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 86.109.11.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 02:22:00.251893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368465566.mount: Deactivated successfully. Nov 1 02:22:00.253349 env[1571]: time="2025-11-01T02:22:00.253329630Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.254498 env[1571]: time="2025-11-01T02:22:00.254482015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.255046 env[1571]: time="2025-11-01T02:22:00.255032705Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.255718 env[1571]: time="2025-11-01T02:22:00.255706916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.256078 env[1571]: time="2025-11-01T02:22:00.256064918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.257177 env[1571]: time="2025-11-01T02:22:00.257160169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.257922 env[1571]: time="2025-11-01T02:22:00.257876751Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.259558 env[1571]: time="2025-11-01T02:22:00.259545526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.259932 env[1571]: time="2025-11-01T02:22:00.259915642Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.261017 env[1571]: time="2025-11-01T02:22:00.261004987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.261444 env[1571]: time="2025-11-01T02:22:00.261425988Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.262208 env[1571]: time="2025-11-01T02:22:00.262195734Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:00.265018 env[1571]: time="2025-11-01T02:22:00.264971903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:22:00.265018 env[1571]: time="2025-11-01T02:22:00.265004475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:22:00.265124 env[1571]: time="2025-11-01T02:22:00.265016647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:22:00.265124 env[1571]: time="2025-11-01T02:22:00.265102728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b883cdc12a3db85d20ac353ff4ee5f81050789c891482bcfd992650ad77a1480 pid=2117 runtime=io.containerd.runc.v2 Nov 1 02:22:00.267438 env[1571]: time="2025-11-01T02:22:00.267399085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:22:00.267438 env[1571]: time="2025-11-01T02:22:00.267422445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:22:00.267438 env[1571]: time="2025-11-01T02:22:00.267432156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:22:00.267598 env[1571]: time="2025-11-01T02:22:00.267499643Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f69034b7a7f615fc039ed8a0976f62394c70ff219d486234d472113b462a91e8 pid=2136 runtime=io.containerd.runc.v2 Nov 1 02:22:00.268670 env[1571]: time="2025-11-01T02:22:00.268638985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:22:00.268670 env[1571]: time="2025-11-01T02:22:00.268656817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:22:00.268670 env[1571]: time="2025-11-01T02:22:00.268664067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:22:00.268789 env[1571]: time="2025-11-01T02:22:00.268740207Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/feac2d2d9864ceb64384a76cd3ea1e1dc5b0db3a4d1bf2983ab26b210da25817 pid=2156 runtime=io.containerd.runc.v2 Nov 1 02:22:00.272105 systemd[1]: Started cri-containerd-b883cdc12a3db85d20ac353ff4ee5f81050789c891482bcfd992650ad77a1480.scope. Nov 1 02:22:00.274012 systemd[1]: Started cri-containerd-f69034b7a7f615fc039ed8a0976f62394c70ff219d486234d472113b462a91e8.scope. Nov 1 02:22:00.275360 systemd[1]: Started cri-containerd-feac2d2d9864ceb64384a76cd3ea1e1dc5b0db3a4d1bf2983ab26b210da25817.scope. 
Nov 1 02:22:00.295565 env[1571]: time="2025-11-01T02:22:00.295532840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-c654b621d4,Uid:be37defbf09df10a377636d360eb0421,Namespace:kube-system,Attempt:0,} returns sandbox id \"b883cdc12a3db85d20ac353ff4ee5f81050789c891482bcfd992650ad77a1480\"" Nov 1 02:22:00.296178 env[1571]: time="2025-11-01T02:22:00.296158337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-c654b621d4,Uid:55f704c7f41fec07aaa172f1fa964c1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f69034b7a7f615fc039ed8a0976f62394c70ff219d486234d472113b462a91e8\"" Nov 1 02:22:00.296233 env[1571]: time="2025-11-01T02:22:00.296216379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-c654b621d4,Uid:0444c6a96fbc1f9b9986dbcb7796347f,Namespace:kube-system,Attempt:0,} returns sandbox id \"feac2d2d9864ceb64384a76cd3ea1e1dc5b0db3a4d1bf2983ab26b210da25817\"" Nov 1 02:22:00.297177 env[1571]: time="2025-11-01T02:22:00.297161179Z" level=info msg="CreateContainer within sandbox \"f69034b7a7f615fc039ed8a0976f62394c70ff219d486234d472113b462a91e8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 02:22:00.297230 env[1571]: time="2025-11-01T02:22:00.297177846Z" level=info msg="CreateContainer within sandbox \"feac2d2d9864ceb64384a76cd3ea1e1dc5b0db3a4d1bf2983ab26b210da25817\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 02:22:00.297230 env[1571]: time="2025-11-01T02:22:00.297215538Z" level=info msg="CreateContainer within sandbox \"b883cdc12a3db85d20ac353ff4ee5f81050789c891482bcfd992650ad77a1480\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 02:22:00.303198 env[1571]: time="2025-11-01T02:22:00.303178458Z" level=info msg="CreateContainer within sandbox \"f69034b7a7f615fc039ed8a0976f62394c70ff219d486234d472113b462a91e8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3477d12b15b06dbe54ee472e2c9db5e7e8155eb196e354c15e7decefbdfbf9cf\"" Nov 1 02:22:00.303473 env[1571]: time="2025-11-01T02:22:00.303461135Z" level=info msg="StartContainer for \"3477d12b15b06dbe54ee472e2c9db5e7e8155eb196e354c15e7decefbdfbf9cf\"" Nov 1 02:22:00.304202 env[1571]: time="2025-11-01T02:22:00.304182142Z" level=info msg="CreateContainer within sandbox \"feac2d2d9864ceb64384a76cd3ea1e1dc5b0db3a4d1bf2983ab26b210da25817\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"368509bee5540fe11bc4ca8d536c6dee0426cd15bf38965869e559a9a1eb38c5\"" Nov 1 02:22:00.304621 env[1571]: time="2025-11-01T02:22:00.304513038Z" level=info msg="StartContainer for \"368509bee5540fe11bc4ca8d536c6dee0426cd15bf38965869e559a9a1eb38c5\"" Nov 1 02:22:00.305700 env[1571]: time="2025-11-01T02:22:00.305680672Z" level=info msg="CreateContainer within sandbox \"b883cdc12a3db85d20ac353ff4ee5f81050789c891482bcfd992650ad77a1480\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"76df854e529467ff685175ffd3e3119d37c5669976c6c1d9fa8ebdb8ead033ea\"" Nov 1 02:22:00.305913 env[1571]: time="2025-11-01T02:22:00.305901065Z" level=info msg="StartContainer for \"76df854e529467ff685175ffd3e3119d37c5669976c6c1d9fa8ebdb8ead033ea\"" Nov 1 02:22:00.311347 kubelet[2068]: W1101 02:22:00.311310 2068 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://86.109.11.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-c654b621d4&limit=500&resourceVersion=0": dial tcp 86.109.11.55:6443: connect: connection refused Nov 1 02:22:00.311583 kubelet[2068]: E1101 02:22:00.311363 2068 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://86.109.11.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-c654b621d4&limit=500&resourceVersion=0\": dial tcp 86.109.11.55:6443: connect: connection refused" logger="UnhandledError" Nov 1 02:22:00.311832 systemd[1]: Started cri-containerd-3477d12b15b06dbe54ee472e2c9db5e7e8155eb196e354c15e7decefbdfbf9cf.scope. Nov 1 02:22:00.314016 systemd[1]: Started cri-containerd-368509bee5540fe11bc4ca8d536c6dee0426cd15bf38965869e559a9a1eb38c5.scope. Nov 1 02:22:00.314589 systemd[1]: Started cri-containerd-76df854e529467ff685175ffd3e3119d37c5669976c6c1d9fa8ebdb8ead033ea.scope. Nov 1 02:22:00.337334 env[1571]: time="2025-11-01T02:22:00.337299122Z" level=info msg="StartContainer for \"3477d12b15b06dbe54ee472e2c9db5e7e8155eb196e354c15e7decefbdfbf9cf\" returns successfully" Nov 1 02:22:00.337471 env[1571]: time="2025-11-01T02:22:00.337431911Z" level=info msg="StartContainer for \"368509bee5540fe11bc4ca8d536c6dee0426cd15bf38965869e559a9a1eb38c5\" returns successfully" Nov 1 02:22:00.338052 env[1571]: time="2025-11-01T02:22:00.338036653Z" level=info msg="StartContainer for \"76df854e529467ff685175ffd3e3119d37c5669976c6c1d9fa8ebdb8ead033ea\" returns successfully" Nov 1 02:22:00.785980 kubelet[2068]: I1101 02:22:00.785958 2068 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:01.006789 kubelet[2068]: E1101 02:22:01.006769 2068 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-c654b621d4\" not found" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:01.085678 kubelet[2068]: E1101 02:22:01.085665 2068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c654b621d4\" not found" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:01.085782 kubelet[2068]: E1101 02:22:01.085773 2068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c654b621d4\" not found" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:01.086514 kubelet[2068]: E1101 02:22:01.086504 2068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c654b621d4\" not found" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:01.124477 kubelet[2068]: I1101 02:22:01.124428 2068 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:01.124477 kubelet[2068]: E1101 02:22:01.124446 2068 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-3510.3.8-n-c654b621d4\": node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:01.130618 kubelet[2068]: E1101 02:22:01.130572 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:01.231111 kubelet[2068]: E1101 02:22:01.231092 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:01.332332 kubelet[2068]: E1101 02:22:01.332258 2068 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:01.433180 kubelet[2068]: E1101 02:22:01.432949 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:01.534284 kubelet[2068]: E1101 02:22:01.534160 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:01.634854 kubelet[2068]: E1101 02:22:01.634771 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:01.735203 kubelet[2068]: E1101 02:22:01.735006 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:01.835371 kubelet[2068]: E1101 02:22:01.835233 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:01.936510 kubelet[2068]: E1101 02:22:01.936442 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.037648 kubelet[2068]: E1101 02:22:02.037446 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.092167 kubelet[2068]: E1101 02:22:02.092086 2068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c654b621d4\" not found" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:02.092413 kubelet[2068]: E1101 02:22:02.092271 2068 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c654b621d4\" not found" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:02.138537 kubelet[2068]: E1101 02:22:02.138444 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.239201 kubelet[2068]: E1101 02:22:02.239104 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.340229 kubelet[2068]: E1101 02:22:02.340131 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.440568 kubelet[2068]: E1101 02:22:02.440496 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.541721 kubelet[2068]: E1101 02:22:02.541624 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.642845 kubelet[2068]: E1101 02:22:02.642612 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.743536 kubelet[2068]: E1101 02:22:02.743477 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.844778 kubelet[2068]: E1101 02:22:02.844665 2068 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:02.971757 kubelet[2068]: I1101 02:22:02.971541 2068 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 
1 02:22:02.986241 kubelet[2068]: W1101 02:22:02.986192 2068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 02:22:02.986506 kubelet[2068]: I1101 02:22:02.986446 2068 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:02.993280 kubelet[2068]: W1101 02:22:02.993231 2068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 02:22:02.993498 kubelet[2068]: I1101 02:22:02.993441 2068 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:03.000091 kubelet[2068]: W1101 02:22:03.000046 2068 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 02:22:03.008211 kubelet[2068]: I1101 02:22:03.008163 2068 apiserver.go:52] "Watching apiserver" Nov 1 02:22:03.074741 kubelet[2068]: I1101 02:22:03.074640 2068 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 02:22:03.695671 systemd[1]: Reloading. Nov 1 02:22:03.725823 /usr/lib/systemd/system-generators/torcx-generator[2405]: time="2025-11-01T02:22:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 02:22:03.725850 /usr/lib/systemd/system-generators/torcx-generator[2405]: time="2025-11-01T02:22:03Z" level=info msg="torcx already run" Nov 1 02:22:03.786012 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 02:22:03.786022 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 02:22:03.799928 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 02:22:03.871787 systemd[1]: Stopping kubelet.service... Nov 1 02:22:03.902250 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 02:22:03.902740 systemd[1]: Stopped kubelet.service. Nov 1 02:22:03.902843 systemd[1]: kubelet.service: Consumed 1.351s CPU time. Nov 1 02:22:03.906559 systemd[1]: Starting kubelet.service... Nov 1 02:22:04.174432 systemd[1]: Started kubelet.service. Nov 1 02:22:04.199051 kubelet[2468]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 02:22:04.199051 kubelet[2468]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 02:22:04.199051 kubelet[2468]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 02:22:04.199283 kubelet[2468]: I1101 02:22:04.199091 2468 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 02:22:04.202533 kubelet[2468]: I1101 02:22:04.202521 2468 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 02:22:04.202533 kubelet[2468]: I1101 02:22:04.202531 2468 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 02:22:04.202684 kubelet[2468]: I1101 02:22:04.202678 2468 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 02:22:04.203367 kubelet[2468]: I1101 02:22:04.203352 2468 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 02:22:04.204507 kubelet[2468]: I1101 02:22:04.204492 2468 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 02:22:04.206546 kubelet[2468]: E1101 02:22:04.206523 2468 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 02:22:04.206546 kubelet[2468]: I1101 02:22:04.206546 2468 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 02:22:04.238763 kubelet[2468]: I1101 02:22:04.238669 2468 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 02:22:04.239266 kubelet[2468]: I1101 02:22:04.239153 2468 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 02:22:04.239706 kubelet[2468]: I1101 02:22:04.239224 2468 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-c654b621d4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} 
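
The container_manager_linux line above dumps the effective node configuration as a single JSON object, including the hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A small sketch for pulling those thresholds back out of the logged JSON instead of reading the one-line blob by eye; the literal below is abbreviated and stands in for the full object copied from the journal:

# Print the kubelet's hard eviction thresholds from the NodeConfig JSON it logs at startup.
# The string below is abbreviated; paste the full {...} object from the journal in its place.
import json

node_config_json = """
{"NodeName":"ci-3510.3.8-n-c654b621d4","CgroupDriver":"systemd",
 "HardEvictionThresholds":[
  {"Signal":"memory.available","Operator":"LessThan",
   "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
  {"Signal":"nodefs.available","Operator":"LessThan",
   "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}]}
"""

for threshold in json.loads(node_config_json)["HardEvictionThresholds"]:
    value = threshold["Value"]["Quantity"] or f'{threshold["Value"]["Percentage"]:.0%}'
    print(f'{threshold["Signal"]:<20} {threshold["Operator"]} {value}')
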
Nov 1 02:22:04.239706 kubelet[2468]: I1101 02:22:04.239686 2468 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 02:22:04.239706 kubelet[2468]: I1101 02:22:04.239716 2468 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 02:22:04.240261 kubelet[2468]: I1101 02:22:04.239829 2468 state_mem.go:36] "Initialized new in-memory state store" Nov 1 02:22:04.240261 kubelet[2468]: I1101 02:22:04.240210 2468 kubelet.go:446] "Attempting to sync node with API server" Nov 1 02:22:04.240261 kubelet[2468]: I1101 02:22:04.240254 2468 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 02:22:04.240619 kubelet[2468]: I1101 02:22:04.240298 2468 kubelet.go:352] "Adding apiserver pod source" Nov 1 02:22:04.240619 kubelet[2468]: I1101 02:22:04.240321 2468 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 02:22:04.241713 kubelet[2468]: I1101 02:22:04.241655 2468 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 02:22:04.242883 kubelet[2468]: I1101 02:22:04.242847 2468 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 02:22:04.244141 kubelet[2468]: I1101 02:22:04.244112 2468 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 02:22:04.244285 kubelet[2468]: I1101 02:22:04.244243 2468 server.go:1287] "Started kubelet" Nov 1 02:22:04.245781 kubelet[2468]: I1101 02:22:04.245285 2468 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 02:22:04.247416 kubelet[2468]: I1101 02:22:04.247200 2468 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 02:22:04.249860 kubelet[2468]: I1101 02:22:04.249777 2468 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 02:22:04.250668 kubelet[2468]: E1101 02:22:04.250657 2468 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 02:22:04.250727 kubelet[2468]: I1101 02:22:04.250720 2468 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 02:22:04.250757 kubelet[2468]: I1101 02:22:04.250730 2468 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 02:22:04.250757 kubelet[2468]: I1101 02:22:04.250750 2468 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 02:22:04.250827 kubelet[2468]: E1101 02:22:04.250763 2468 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c654b621d4\" not found" Nov 1 02:22:04.250827 kubelet[2468]: I1101 02:22:04.250779 2468 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 02:22:04.250890 kubelet[2468]: I1101 02:22:04.250849 2468 reconciler.go:26] "Reconciler: start to sync state" Nov 1 02:22:04.251007 kubelet[2468]: I1101 02:22:04.250997 2468 server.go:479] "Adding debug handlers to kubelet server" Nov 1 02:22:04.251131 kubelet[2468]: I1101 02:22:04.251116 2468 factory.go:221] Registration of the systemd container factory successfully Nov 1 02:22:04.251232 kubelet[2468]: I1101 02:22:04.251211 2468 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 02:22:04.251809 kubelet[2468]: I1101 02:22:04.251798 2468 factory.go:221] Registration of the containerd container factory successfully Nov 1 02:22:04.257390 kubelet[2468]: I1101 02:22:04.257359 2468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 02:22:04.257935 kubelet[2468]: I1101 02:22:04.257923 2468 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 02:22:04.257982 kubelet[2468]: I1101 02:22:04.257938 2468 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 02:22:04.257982 kubelet[2468]: I1101 02:22:04.257950 2468 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
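
cAdvisor's containerd factory registers successfully while the crio factory fails because /var/run/crio/crio.sock does not exist, which is what you would expect on a containerd-only Flatcar node. A quick sketch for checking which runtime sockets are actually present; only the crio path is taken from the log, the containerd path is the conventional default and may differ elsewhere:

# Report which container-runtime sockets exist on this node.
import os

candidate_sockets = [
    "/run/containerd/containerd.sock",  # assumed containerd default, not from the log
    "/var/run/crio/crio.sock",          # absent here, hence the crio factory error
]

for path in candidate_sockets:
    print(f"{path}: {'present' if os.path.exists(path) else 'absent'}")
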
Nov 1 02:22:04.257982 kubelet[2468]: I1101 02:22:04.257954 2468 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 02:22:04.257982 kubelet[2468]: E1101 02:22:04.257979 2468 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 02:22:04.267561 kubelet[2468]: I1101 02:22:04.267545 2468 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 02:22:04.267561 kubelet[2468]: I1101 02:22:04.267555 2468 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 02:22:04.267561 kubelet[2468]: I1101 02:22:04.267565 2468 state_mem.go:36] "Initialized new in-memory state store" Nov 1 02:22:04.267681 kubelet[2468]: I1101 02:22:04.267651 2468 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 02:22:04.267681 kubelet[2468]: I1101 02:22:04.267657 2468 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 02:22:04.267681 kubelet[2468]: I1101 02:22:04.267668 2468 policy_none.go:49] "None policy: Start" Nov 1 02:22:04.267681 kubelet[2468]: I1101 02:22:04.267672 2468 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 02:22:04.267681 kubelet[2468]: I1101 02:22:04.267678 2468 state_mem.go:35] "Initializing new in-memory state store" Nov 1 02:22:04.267772 kubelet[2468]: I1101 02:22:04.267733 2468 state_mem.go:75] "Updated machine memory state" Nov 1 02:22:04.269349 kubelet[2468]: I1101 02:22:04.269339 2468 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 02:22:04.269464 kubelet[2468]: I1101 02:22:04.269456 2468 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 02:22:04.269504 kubelet[2468]: I1101 02:22:04.269465 2468 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 02:22:04.269556 kubelet[2468]: I1101 02:22:04.269549 2468 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 02:22:04.269839 kubelet[2468]: E1101 02:22:04.269828 2468 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 02:22:04.359853 kubelet[2468]: I1101 02:22:04.359766 2468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.360187 kubelet[2468]: I1101 02:22:04.359850 2468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.360187 kubelet[2468]: I1101 02:22:04.360106 2468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.370224 kubelet[2468]: W1101 02:22:04.370130 2468 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 02:22:04.370224 kubelet[2468]: W1101 02:22:04.370138 2468 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 02:22:04.370224 kubelet[2468]: W1101 02:22:04.370206 2468 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 02:22:04.370751 kubelet[2468]: E1101 02:22:04.370237 2468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-c654b621d4\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.370751 kubelet[2468]: E1101 02:22:04.370280 2468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.370751 kubelet[2468]: E1101 02:22:04.370323 2468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-c654b621d4\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.376548 kubelet[2468]: I1101 02:22:04.376505 2468 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.387584 kubelet[2468]: I1101 02:22:04.387535 2468 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.387791 kubelet[2468]: I1101 02:22:04.387679 2468 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.552813 kubelet[2468]: I1101 02:22:04.552537 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.552813 kubelet[2468]: I1101 02:22:04.552747 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.553189 kubelet[2468]: I1101 02:22:04.552860 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.553189 kubelet[2468]: I1101 02:22:04.552957 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be37defbf09df10a377636d360eb0421-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-c654b621d4\" (UID: \"be37defbf09df10a377636d360eb0421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.553189 kubelet[2468]: I1101 02:22:04.553028 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be37defbf09df10a377636d360eb0421-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-c654b621d4\" (UID: \"be37defbf09df10a377636d360eb0421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.553189 kubelet[2468]: I1101 02:22:04.553097 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.553189 kubelet[2468]: I1101 02:22:04.553168 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55f704c7f41fec07aaa172f1fa964c1e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" (UID: \"55f704c7f41fec07aaa172f1fa964c1e\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.553875 kubelet[2468]: I1101 02:22:04.553256 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0444c6a96fbc1f9b9986dbcb7796347f-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-c654b621d4\" (UID: \"0444c6a96fbc1f9b9986dbcb7796347f\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.553875 kubelet[2468]: I1101 02:22:04.553444 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be37defbf09df10a377636d360eb0421-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-c654b621d4\" (UID: \"be37defbf09df10a377636d360eb0421\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:04.705606 sudo[2514]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 02:22:04.706243 sudo[2514]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 02:22:05.081527 sudo[2514]: pam_unix(sudo:session): session closed for user root Nov 1 02:22:05.241493 kubelet[2468]: I1101 02:22:05.241445 2468 apiserver.go:52] "Watching apiserver" Nov 1 02:22:05.251717 kubelet[2468]: I1101 02:22:05.251680 2468 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 02:22:05.261557 kubelet[2468]: I1101 02:22:05.261519 2468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c654b621d4" 
Nov 1 02:22:05.261624 kubelet[2468]: I1101 02:22:05.261571 2468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:05.261661 kubelet[2468]: I1101 02:22:05.261629 2468 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:05.265697 kubelet[2468]: W1101 02:22:05.265685 2468 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 02:22:05.265802 kubelet[2468]: E1101 02:22:05.265731 2468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-c654b621d4\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:05.266323 kubelet[2468]: W1101 02:22:05.266312 2468 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 02:22:05.266323 kubelet[2468]: W1101 02:22:05.266320 2468 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 02:22:05.266426 kubelet[2468]: E1101 02:22:05.266338 2468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-c654b621d4\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:05.266426 kubelet[2468]: E1101 02:22:05.266339 2468 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-c654b621d4\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" Nov 1 02:22:05.279228 kubelet[2468]: I1101 02:22:05.279197 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c654b621d4" podStartSLOduration=3.27917148 podStartE2EDuration="3.27917148s" podCreationTimestamp="2025-11-01 02:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:22:05.27881161 +0000 UTC m=+1.098532503" watchObservedRunningTime="2025-11-01 02:22:05.27917148 +0000 UTC m=+1.098892372" Nov 1 02:22:05.279331 kubelet[2468]: I1101 02:22:05.279254 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c654b621d4" podStartSLOduration=3.279250088 podStartE2EDuration="3.279250088s" podCreationTimestamp="2025-11-01 02:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:22:05.273410956 +0000 UTC m=+1.093131848" watchObservedRunningTime="2025-11-01 02:22:05.279250088 +0000 UTC m=+1.098970976" Nov 1 02:22:05.283874 kubelet[2468]: I1101 02:22:05.283812 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c654b621d4" podStartSLOduration=3.283787212 podStartE2EDuration="3.283787212s" podCreationTimestamp="2025-11-01 02:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:22:05.283758104 +0000 UTC m=+1.103479003" watchObservedRunningTime="2025-11-01 02:22:05.283787212 +0000 UTC m=+1.103508103" Nov 1 02:22:06.565598 sudo[1745]: 
pam_unix(sudo:session): session closed for user root Nov 1 02:22:06.567030 sshd[1742]: pam_unix(sshd:session): session closed for user core Nov 1 02:22:06.569468 systemd[1]: sshd@6-86.109.11.55:22-147.75.109.163:46464.service: Deactivated successfully. Nov 1 02:22:06.570172 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 02:22:06.570313 systemd[1]: session-9.scope: Consumed 3.377s CPU time. Nov 1 02:22:06.570912 systemd-logind[1563]: Session 9 logged out. Waiting for processes to exit. Nov 1 02:22:06.572048 systemd-logind[1563]: Removed session 9. Nov 1 02:22:07.216733 systemd[1]: Started sshd@7-86.109.11.55:22-193.46.255.99:12490.service. Nov 1 02:22:08.343321 sshd[2611]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99 user=root Nov 1 02:22:09.214642 kubelet[2468]: I1101 02:22:09.214542 2468 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 02:22:09.215475 env[1571]: time="2025-11-01T02:22:09.215276633Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 02:22:09.216138 kubelet[2468]: I1101 02:22:09.215777 2468 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 02:22:09.777875 systemd[1]: Created slice kubepods-besteffort-pod43daf6b4_c66a_4a11_a75c_b6219be7b774.slice. Nov 1 02:22:09.788833 systemd[1]: Created slice kubepods-burstable-pod3b234b4f_d95d_4efc_a022_b02a12cf7819.slice. Nov 1 02:22:09.793064 kubelet[2468]: I1101 02:22:09.793022 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-bpf-maps\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793064 kubelet[2468]: I1101 02:22:09.793041 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43daf6b4-c66a-4a11-a75c-b6219be7b774-kube-proxy\") pod \"kube-proxy-4ntqh\" (UID: \"43daf6b4-c66a-4a11-a75c-b6219be7b774\") " pod="kube-system/kube-proxy-4ntqh" Nov 1 02:22:09.793064 kubelet[2468]: I1101 02:22:09.793053 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-cgroup\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793064 kubelet[2468]: I1101 02:22:09.793061 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-etc-cni-netd\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793218 kubelet[2468]: I1101 02:22:09.793070 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-xtables-lock\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793218 kubelet[2468]: I1101 02:22:09.793078 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43daf6b4-c66a-4a11-a75c-b6219be7b774-xtables-lock\") pod \"kube-proxy-4ntqh\" (UID: \"43daf6b4-c66a-4a11-a75c-b6219be7b774\") " pod="kube-system/kube-proxy-4ntqh" Nov 1 02:22:09.793218 kubelet[2468]: I1101 02:22:09.793087 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b234b4f-d95d-4efc-a022-b02a12cf7819-hubble-tls\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793218 kubelet[2468]: I1101 02:22:09.793097 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43daf6b4-c66a-4a11-a75c-b6219be7b774-lib-modules\") pod \"kube-proxy-4ntqh\" (UID: \"43daf6b4-c66a-4a11-a75c-b6219be7b774\") " pod="kube-system/kube-proxy-4ntqh" Nov 1 02:22:09.793218 kubelet[2468]: I1101 02:22:09.793106 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-host-proc-sys-kernel\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793305 kubelet[2468]: I1101 02:22:09.793115 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw8zb\" (UniqueName: \"kubernetes.io/projected/3b234b4f-d95d-4efc-a022-b02a12cf7819-kube-api-access-kw8zb\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793305 kubelet[2468]: I1101 02:22:09.793142 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-hostproc\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793305 kubelet[2468]: I1101 02:22:09.793159 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-host-proc-sys-net\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793305 kubelet[2468]: I1101 02:22:09.793175 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwq2g\" (UniqueName: \"kubernetes.io/projected/43daf6b4-c66a-4a11-a75c-b6219be7b774-kube-api-access-dwq2g\") pod \"kube-proxy-4ntqh\" (UID: \"43daf6b4-c66a-4a11-a75c-b6219be7b774\") " pod="kube-system/kube-proxy-4ntqh" Nov 1 02:22:09.793305 kubelet[2468]: I1101 02:22:09.793223 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b234b4f-d95d-4efc-a022-b02a12cf7819-clustermesh-secrets\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793425 kubelet[2468]: I1101 02:22:09.793242 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-run\") pod \"cilium-6g76p\" (UID: 
\"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793425 kubelet[2468]: I1101 02:22:09.793252 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-lib-modules\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793425 kubelet[2468]: I1101 02:22:09.793261 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-config-path\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.793425 kubelet[2468]: I1101 02:22:09.793270 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cni-path\") pod \"cilium-6g76p\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " pod="kube-system/cilium-6g76p" Nov 1 02:22:09.895559 kubelet[2468]: I1101 02:22:09.895473 2468 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 02:22:09.985546 sshd[2611]: Failed password for root from 193.46.255.99 port 12490 ssh2 Nov 1 02:22:10.088773 env[1571]: time="2025-11-01T02:22:10.088679829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ntqh,Uid:43daf6b4-c66a-4a11-a75c-b6219be7b774,Namespace:kube-system,Attempt:0,}" Nov 1 02:22:10.090924 env[1571]: time="2025-11-01T02:22:10.090835195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6g76p,Uid:3b234b4f-d95d-4efc-a022-b02a12cf7819,Namespace:kube-system,Attempt:0,}" Nov 1 02:22:10.115778 env[1571]: time="2025-11-01T02:22:10.115605011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:22:10.115778 env[1571]: time="2025-11-01T02:22:10.115708127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:22:10.116316 env[1571]: time="2025-11-01T02:22:10.115764286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:22:10.116316 env[1571]: time="2025-11-01T02:22:10.116201963Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b811206871c08494cc35b307207b611bc9b173ae4371a655792f11a7a3bac3a pid=2628 runtime=io.containerd.runc.v2 Nov 1 02:22:10.120204 env[1571]: time="2025-11-01T02:22:10.120005455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:22:10.120204 env[1571]: time="2025-11-01T02:22:10.120144473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:22:10.120569 env[1571]: time="2025-11-01T02:22:10.120216101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:22:10.120798 env[1571]: time="2025-11-01T02:22:10.120646648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c pid=2636 runtime=io.containerd.runc.v2 Nov 1 02:22:10.140766 systemd[1]: Started cri-containerd-4b811206871c08494cc35b307207b611bc9b173ae4371a655792f11a7a3bac3a.scope. Nov 1 02:22:10.144168 systemd[1]: Started cri-containerd-a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c.scope. Nov 1 02:22:10.159968 env[1571]: time="2025-11-01T02:22:10.159936110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4ntqh,Uid:43daf6b4-c66a-4a11-a75c-b6219be7b774,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b811206871c08494cc35b307207b611bc9b173ae4371a655792f11a7a3bac3a\"" Nov 1 02:22:10.161405 env[1571]: time="2025-11-01T02:22:10.161374379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6g76p,Uid:3b234b4f-d95d-4efc-a022-b02a12cf7819,Namespace:kube-system,Attempt:0,} returns sandbox id \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\"" Nov 1 02:22:10.161862 env[1571]: time="2025-11-01T02:22:10.161841078Z" level=info msg="CreateContainer within sandbox \"4b811206871c08494cc35b307207b611bc9b173ae4371a655792f11a7a3bac3a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 02:22:10.162288 env[1571]: time="2025-11-01T02:22:10.162270013Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 02:22:10.169285 env[1571]: time="2025-11-01T02:22:10.169254389Z" level=info msg="CreateContainer within sandbox \"4b811206871c08494cc35b307207b611bc9b173ae4371a655792f11a7a3bac3a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2533c89d9097f3608b04485758eb7ef0f66849faa98e10a44f6f2e07151cadf5\"" Nov 1 02:22:10.169645 env[1571]: time="2025-11-01T02:22:10.169629926Z" level=info msg="StartContainer for \"2533c89d9097f3608b04485758eb7ef0f66849faa98e10a44f6f2e07151cadf5\"" Nov 1 02:22:10.179578 systemd[1]: Started cri-containerd-2533c89d9097f3608b04485758eb7ef0f66849faa98e10a44f6f2e07151cadf5.scope. Nov 1 02:22:10.196268 env[1571]: time="2025-11-01T02:22:10.196234234Z" level=info msg="StartContainer for \"2533c89d9097f3608b04485758eb7ef0f66849faa98e10a44f6f2e07151cadf5\" returns successfully" Nov 1 02:22:10.285673 systemd[1]: Created slice kubepods-besteffort-pode6342996_fdc2_49d5_944f_d53ae6386f15.slice. 
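
Both cilium images in this boot are pulled by tag pinned to a digest, e.g. quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5. A small sketch that splits that shape of reference into registry, repository, tag, and digest; it only handles this common name:tag@sha256:digest form, not every valid OCI reference (registries with ports, for instance, would need more care):

# Split a registry/repo:tag@sha256:digest image reference into its parts.
# Only the simple shape seen in the log is handled.
def split_image_ref(ref: str) -> dict:
    name, _, digest = ref.partition("@")
    base, _, tag = name.rpartition(":")
    if not base:                      # no tag present at all
        base, tag = name, ""
    registry, _, repository = base.partition("/")
    return {"registry": registry, "repository": repository, "tag": tag, "digest": digest}

ref = ("quay.io/cilium/cilium:v1.12.5"
       "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
print(split_image_ref(ref))
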
Nov 1 02:22:10.298468 kubelet[2468]: I1101 02:22:10.298436 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8db8b\" (UniqueName: \"kubernetes.io/projected/e6342996-fdc2-49d5-944f-d53ae6386f15-kube-api-access-8db8b\") pod \"cilium-operator-6c4d7847fc-h6p9n\" (UID: \"e6342996-fdc2-49d5-944f-d53ae6386f15\") " pod="kube-system/cilium-operator-6c4d7847fc-h6p9n" Nov 1 02:22:10.298939 kubelet[2468]: I1101 02:22:10.298490 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6342996-fdc2-49d5-944f-d53ae6386f15-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-h6p9n\" (UID: \"e6342996-fdc2-49d5-944f-d53ae6386f15\") " pod="kube-system/cilium-operator-6c4d7847fc-h6p9n" Nov 1 02:22:10.301011 kubelet[2468]: I1101 02:22:10.300961 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4ntqh" podStartSLOduration=1.300945227 podStartE2EDuration="1.300945227s" podCreationTimestamp="2025-11-01 02:22:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:22:10.300539585 +0000 UTC m=+6.120260492" watchObservedRunningTime="2025-11-01 02:22:10.300945227 +0000 UTC m=+6.120666126" Nov 1 02:22:10.590966 env[1571]: time="2025-11-01T02:22:10.590845363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h6p9n,Uid:e6342996-fdc2-49d5-944f-d53ae6386f15,Namespace:kube-system,Attempt:0,}" Nov 1 02:22:10.606807 env[1571]: time="2025-11-01T02:22:10.606732768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:22:10.606807 env[1571]: time="2025-11-01T02:22:10.606784261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:22:10.606807 env[1571]: time="2025-11-01T02:22:10.606803429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:22:10.607067 env[1571]: time="2025-11-01T02:22:10.606999321Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed pid=2836 runtime=io.containerd.runc.v2 Nov 1 02:22:10.618077 systemd[1]: Started cri-containerd-b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed.scope. Nov 1 02:22:10.661414 env[1571]: time="2025-11-01T02:22:10.661337218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-h6p9n,Uid:e6342996-fdc2-49d5-944f-d53ae6386f15,Namespace:kube-system,Attempt:0,} returns sandbox id \"b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed\"" Nov 1 02:22:11.878473 update_engine[1565]: I1101 02:22:11.878412 1565 update_attempter.cc:509] Updating boot flags... Nov 1 02:22:12.694255 sshd[2611]: Failed password for root from 193.46.255.99 port 12490 ssh2 Nov 1 02:22:14.004117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316463976.mount: Deactivated successfully. 
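
Earlier in this boot the kubelet pushed PodCIDR 192.168.0.0/24 to the runtime, which bounds how many pod IPs this node can hand out. A one-line-sized sketch with the standard library to look at that capacity (how many addresses a CNI actually makes usable varies by plugin):

# Inspect the node PodCIDR reported by the kubelet (192.168.0.0/24 in this log).
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.0.0/24")
usable = pod_cidr.num_addresses - 2   # conventionally excluding network/broadcast
print(f"{pod_cidr}: {pod_cidr.num_addresses} addresses, roughly {usable} usable for pods")
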
Nov 1 02:22:15.537069 sshd[2611]: Failed password for root from 193.46.255.99 port 12490 ssh2 Nov 1 02:22:15.716855 env[1571]: time="2025-11-01T02:22:15.716800039Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:15.717325 env[1571]: time="2025-11-01T02:22:15.717271643Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:15.718125 env[1571]: time="2025-11-01T02:22:15.718089759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:15.718587 env[1571]: time="2025-11-01T02:22:15.718542729Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Nov 1 02:22:15.719562 env[1571]: time="2025-11-01T02:22:15.719519033Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 02:22:15.720328 env[1571]: time="2025-11-01T02:22:15.720314963Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 02:22:15.725914 env[1571]: time="2025-11-01T02:22:15.725891821Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\"" Nov 1 02:22:15.726277 env[1571]: time="2025-11-01T02:22:15.726262830Z" level=info msg="StartContainer for \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\"" Nov 1 02:22:15.750355 systemd[1]: Started cri-containerd-eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe.scope. Nov 1 02:22:15.761726 env[1571]: time="2025-11-01T02:22:15.761697064Z" level=info msg="StartContainer for \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\" returns successfully" Nov 1 02:22:15.767451 systemd[1]: cri-containerd-eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe.scope: Deactivated successfully. Nov 1 02:22:16.729977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe-rootfs.mount: Deactivated successfully. 
Nov 1 02:22:16.867312 env[1571]: time="2025-11-01T02:22:16.867158229Z" level=info msg="shim disconnected" id=eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe Nov 1 02:22:16.867312 env[1571]: time="2025-11-01T02:22:16.867268185Z" level=warning msg="cleaning up after shim disconnected" id=eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe namespace=k8s.io Nov 1 02:22:16.867312 env[1571]: time="2025-11-01T02:22:16.867296338Z" level=info msg="cleaning up dead shim" Nov 1 02:22:16.882790 env[1571]: time="2025-11-01T02:22:16.882680355Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:22:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2980 runtime=io.containerd.runc.v2\n" Nov 1 02:22:17.293100 env[1571]: time="2025-11-01T02:22:17.293063675Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 02:22:17.298639 env[1571]: time="2025-11-01T02:22:17.298575755Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\"" Nov 1 02:22:17.298848 env[1571]: time="2025-11-01T02:22:17.298823330Z" level=info msg="StartContainer for \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\"" Nov 1 02:22:17.308538 systemd[1]: Started cri-containerd-522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a.scope. Nov 1 02:22:17.322517 env[1571]: time="2025-11-01T02:22:17.322435214Z" level=info msg="StartContainer for \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\" returns successfully" Nov 1 02:22:17.329527 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 02:22:17.329732 systemd[1]: Stopped systemd-sysctl.service. Nov 1 02:22:17.329880 systemd[1]: Stopping systemd-sysctl.service... Nov 1 02:22:17.330826 systemd[1]: Starting systemd-sysctl.service... Nov 1 02:22:17.331019 systemd[1]: cri-containerd-522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a.scope: Deactivated successfully. Nov 1 02:22:17.335210 systemd[1]: Finished systemd-sysctl.service. Nov 1 02:22:17.380191 env[1571]: time="2025-11-01T02:22:17.380039076Z" level=info msg="shim disconnected" id=522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a Nov 1 02:22:17.380191 env[1571]: time="2025-11-01T02:22:17.380138865Z" level=warning msg="cleaning up after shim disconnected" id=522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a namespace=k8s.io Nov 1 02:22:17.380191 env[1571]: time="2025-11-01T02:22:17.380166487Z" level=info msg="cleaning up dead shim" Nov 1 02:22:17.395809 env[1571]: time="2025-11-01T02:22:17.395696975Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:22:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3043 runtime=io.containerd.runc.v2\n" Nov 1 02:22:17.725620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a-rootfs.mount: Deactivated successfully. 
Nov 1 02:22:17.834894 env[1571]: time="2025-11-01T02:22:17.834844150Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:17.835505 env[1571]: time="2025-11-01T02:22:17.835460736Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:17.836154 env[1571]: time="2025-11-01T02:22:17.836137389Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 02:22:17.836775 env[1571]: time="2025-11-01T02:22:17.836762003Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Nov 1 02:22:17.837816 env[1571]: time="2025-11-01T02:22:17.837800756Z" level=info msg="CreateContainer within sandbox \"b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 02:22:17.842585 env[1571]: time="2025-11-01T02:22:17.842548166Z" level=info msg="CreateContainer within sandbox \"b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\"" Nov 1 02:22:17.842842 env[1571]: time="2025-11-01T02:22:17.842779528Z" level=info msg="StartContainer for \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\"" Nov 1 02:22:17.843871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094977637.mount: Deactivated successfully. Nov 1 02:22:17.852225 systemd[1]: Started cri-containerd-92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf.scope. Nov 1 02:22:17.863230 env[1571]: time="2025-11-01T02:22:17.863202397Z" level=info msg="StartContainer for \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\" returns successfully" Nov 1 02:22:17.899745 sshd[2611]: Received disconnect from 193.46.255.99 port 12490:11: [preauth] Nov 1 02:22:17.899745 sshd[2611]: Disconnected from authenticating user root 193.46.255.99 port 12490 [preauth] Nov 1 02:22:17.899915 sshd[2611]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99 user=root Nov 1 02:22:17.900502 systemd[1]: sshd@7-86.109.11.55:22-193.46.255.99:12490.service: Deactivated successfully. Nov 1 02:22:18.081066 systemd[1]: Started sshd@8-86.109.11.55:22-193.46.255.99:31626.service. 
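
Interleaved with the CNI bring-up, sshd records repeated root password failures from 193.46.255.99, a preauth disconnect, and an immediate reconnect from a new source port: a routine brute-force attempt against the node's public address. A small sketch for tallying such failures per source host from saved journal text (the filename is a placeholder); hardening steps such as disabling password logins for root in sshd_config are general practice, not something this log prescribes:

# Count sshd password failures per (source host, user) from saved journal text.
# 'auth.log' is a placeholder filename.
import re
from collections import Counter

failed = re.compile(r"Failed password for (?:invalid user )?(?P<user>\S+) "
                    r"from (?P<host>\S+) port \d+")

attempts = Counter()
with open("auth.log", encoding="utf-8") as log:
    for line in log:
        for m in failed.finditer(line):
            attempts[(m.group("host"), m.group("user"))] += 1

for (host, user), count in attempts.most_common():
    print(f"{count:4d} failed logins as {user!r} from {host}")
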
Nov 1 02:22:18.303381 env[1571]: time="2025-11-01T02:22:18.303271190Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 02:22:18.323299 env[1571]: time="2025-11-01T02:22:18.323170507Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\"" Nov 1 02:22:18.324207 env[1571]: time="2025-11-01T02:22:18.324145817Z" level=info msg="StartContainer for \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\"" Nov 1 02:22:18.340611 kubelet[2468]: I1101 02:22:18.340495 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-h6p9n" podStartSLOduration=1.165497499 podStartE2EDuration="8.340472999s" podCreationTimestamp="2025-11-01 02:22:10 +0000 UTC" firstStartedPulling="2025-11-01 02:22:10.662081746 +0000 UTC m=+6.481802648" lastFinishedPulling="2025-11-01 02:22:17.837057259 +0000 UTC m=+13.656778148" observedRunningTime="2025-11-01 02:22:18.340153457 +0000 UTC m=+14.159874372" watchObservedRunningTime="2025-11-01 02:22:18.340472999 +0000 UTC m=+14.160193899" Nov 1 02:22:18.344520 systemd[1]: Started cri-containerd-617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f.scope. Nov 1 02:22:18.362082 env[1571]: time="2025-11-01T02:22:18.362047512Z" level=info msg="StartContainer for \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\" returns successfully" Nov 1 02:22:18.363716 systemd[1]: cri-containerd-617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f.scope: Deactivated successfully. 
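
The cilium-operator startup-latency entry above reports podStartSLOduration=1.165s against podStartE2EDuration=8.340s, and the logged pulling timestamps account for almost all of the gap: the tracker appears to exclude image pull time from the SLO figure. A short sketch reproducing that arithmetic from the timestamps printed in the log:

# Rough check of the cilium-operator startup-latency figures using the logged timestamps.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f %z"
created   = datetime.strptime("2025-11-01 02:22:10.000000 +0000", FMT)
pull_from = datetime.strptime("2025-11-01 02:22:10.662081 +0000", FMT)
pull_to   = datetime.strptime("2025-11-01 02:22:17.837057 +0000", FMT)
running   = datetime.strptime("2025-11-01 02:22:18.340153 +0000", FMT)

e2e  = (running - created).total_seconds()     # ~8.34s, matches podStartE2EDuration
pull = (pull_to - pull_from).total_seconds()   # ~7.17s spent pulling operator-generic
slo  = e2e - pull                              # ~1.17s, close to podStartSLOduration
print(f"e2e={e2e:.3f}s  pull={pull:.3f}s  slo~{slo:.3f}s")
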
Nov 1 02:22:18.503188 env[1571]: time="2025-11-01T02:22:18.503124638Z" level=info msg="shim disconnected" id=617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f Nov 1 02:22:18.503188 env[1571]: time="2025-11-01T02:22:18.503157078Z" level=warning msg="cleaning up after shim disconnected" id=617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f namespace=k8s.io Nov 1 02:22:18.503188 env[1571]: time="2025-11-01T02:22:18.503164709Z" level=info msg="cleaning up dead shim" Nov 1 02:22:18.507840 env[1571]: time="2025-11-01T02:22:18.507816093Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:22:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3153 runtime=io.containerd.runc.v2\n" Nov 1 02:22:19.220086 sshd[3106]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99 user=root Nov 1 02:22:19.305089 env[1571]: time="2025-11-01T02:22:19.305049264Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 02:22:19.310517 env[1571]: time="2025-11-01T02:22:19.310455262Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\"" Nov 1 02:22:19.310826 env[1571]: time="2025-11-01T02:22:19.310766108Z" level=info msg="StartContainer for \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\"" Nov 1 02:22:19.320895 systemd[1]: Started cri-containerd-a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052.scope. Nov 1 02:22:19.334911 env[1571]: time="2025-11-01T02:22:19.334875544Z" level=info msg="StartContainer for \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\" returns successfully" Nov 1 02:22:19.335492 systemd[1]: cri-containerd-a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052.scope: Deactivated successfully. Nov 1 02:22:19.349819 env[1571]: time="2025-11-01T02:22:19.349768349Z" level=info msg="shim disconnected" id=a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052 Nov 1 02:22:19.349819 env[1571]: time="2025-11-01T02:22:19.349816423Z" level=warning msg="cleaning up after shim disconnected" id=a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052 namespace=k8s.io Nov 1 02:22:19.350009 env[1571]: time="2025-11-01T02:22:19.349826863Z" level=info msg="cleaning up dead shim" Nov 1 02:22:19.356353 env[1571]: time="2025-11-01T02:22:19.356319353Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:22:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3207 runtime=io.containerd.runc.v2\n" Nov 1 02:22:19.730036 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052-rootfs.mount: Deactivated successfully. 
Nov 1 02:22:20.307193 env[1571]: time="2025-11-01T02:22:20.307150334Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 02:22:20.312923 env[1571]: time="2025-11-01T02:22:20.312893678Z" level=info msg="CreateContainer within sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\"" Nov 1 02:22:20.313207 env[1571]: time="2025-11-01T02:22:20.313176195Z" level=info msg="StartContainer for \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\"" Nov 1 02:22:20.324139 systemd[1]: Started cri-containerd-7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979.scope. Nov 1 02:22:20.340205 env[1571]: time="2025-11-01T02:22:20.340146335Z" level=info msg="StartContainer for \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\" returns successfully" Nov 1 02:22:20.408419 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Nov 1 02:22:20.420727 kubelet[2468]: I1101 02:22:20.420712 2468 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 02:22:20.436468 systemd[1]: Created slice kubepods-burstable-podcb1240e5_8012_4fd5_b3db_448a1c36b4f3.slice. Nov 1 02:22:20.444110 systemd[1]: Created slice kubepods-burstable-pod2c3fe7bd_3c8f_4c71_ad03_6e99be66f5c2.slice. Nov 1 02:22:20.468711 kubelet[2468]: I1101 02:22:20.468689 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhfdb\" (UniqueName: \"kubernetes.io/projected/2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2-kube-api-access-vhfdb\") pod \"coredns-668d6bf9bc-hbrt6\" (UID: \"2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2\") " pod="kube-system/coredns-668d6bf9bc-hbrt6" Nov 1 02:22:20.468711 kubelet[2468]: I1101 02:22:20.468714 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqx8\" (UniqueName: \"kubernetes.io/projected/cb1240e5-8012-4fd5-b3db-448a1c36b4f3-kube-api-access-vqqx8\") pod \"coredns-668d6bf9bc-56v2w\" (UID: \"cb1240e5-8012-4fd5-b3db-448a1c36b4f3\") " pod="kube-system/coredns-668d6bf9bc-56v2w" Nov 1 02:22:20.468857 kubelet[2468]: I1101 02:22:20.468725 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2-config-volume\") pod \"coredns-668d6bf9bc-hbrt6\" (UID: \"2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2\") " pod="kube-system/coredns-668d6bf9bc-hbrt6" Nov 1 02:22:20.468857 kubelet[2468]: I1101 02:22:20.468747 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb1240e5-8012-4fd5-b3db-448a1c36b4f3-config-volume\") pod \"coredns-668d6bf9bc-56v2w\" (UID: \"cb1240e5-8012-4fd5-b3db-448a1c36b4f3\") " pod="kube-system/coredns-668d6bf9bc-56v2w" Nov 1 02:22:20.570409 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
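The Spectre V2 warning above fires because unprivileged eBPF is enabled while eIBRS mitigations are active; the relevant knob is the kernel.unprivileged_bpf_disabled sysctl. A minimal sketch, assuming a Linux host with /proc mounted and my reading of the documented sysctl values (not anything in this log), that reports the setting the warning refers to:

```python
# Minimal sketch: report the kernel.unprivileged_bpf_disabled setting that the
# Spectre V2 warning above is complaining about. Assumes a Linux host with
# /proc mounted; run it on the node itself.
from pathlib import Path

MEANINGS = {  # my reading of the documented sysctl values, not from this log
    "0": "unprivileged BPF allowed (the state the warning above describes)",
    "1": "unprivileged BPF disabled until reboot",
    "2": "unprivileged BPF disabled, but a privileged user may re-enable it",
}

value = Path("/proc/sys/kernel/unprivileged_bpf_disabled").read_text().strip()
print(f"kernel.unprivileged_bpf_disabled = {value}: {MEANINGS.get(value, 'unknown value')}")
```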
Nov 1 02:22:20.707271 sshd[3106]: Failed password for root from 193.46.255.99 port 31626 ssh2 Nov 1 02:22:20.740386 env[1571]: time="2025-11-01T02:22:20.740282976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-56v2w,Uid:cb1240e5-8012-4fd5-b3db-448a1c36b4f3,Namespace:kube-system,Attempt:0,}" Nov 1 02:22:20.747119 env[1571]: time="2025-11-01T02:22:20.747037684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbrt6,Uid:2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2,Namespace:kube-system,Attempt:0,}" Nov 1 02:22:21.353138 kubelet[2468]: I1101 02:22:21.353014 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6g76p" podStartSLOduration=6.795686611 podStartE2EDuration="12.352977203s" podCreationTimestamp="2025-11-01 02:22:09 +0000 UTC" firstStartedPulling="2025-11-01 02:22:10.162010021 +0000 UTC m=+5.981730914" lastFinishedPulling="2025-11-01 02:22:15.719300618 +0000 UTC m=+11.539021506" observedRunningTime="2025-11-01 02:22:21.351778733 +0000 UTC m=+17.171499800" watchObservedRunningTime="2025-11-01 02:22:21.352977203 +0000 UTC m=+17.172698149" Nov 1 02:22:21.651816 sshd[3106]: pam_faillock(sshd:auth): Consecutive login failures for user root account temporarily locked Nov 1 02:22:22.186039 systemd-networkd[1321]: cilium_host: Link UP Nov 1 02:22:22.186235 systemd-networkd[1321]: cilium_net: Link UP Nov 1 02:22:22.193445 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Nov 1 02:22:22.193784 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Nov 1 02:22:22.201155 systemd-networkd[1321]: cilium_net: Gained carrier Nov 1 02:22:22.201408 systemd-networkd[1321]: cilium_host: Gained carrier Nov 1 02:22:22.287638 systemd-networkd[1321]: cilium_vxlan: Link UP Nov 1 02:22:22.287643 systemd-networkd[1321]: cilium_vxlan: Gained carrier Nov 1 02:22:22.431419 kernel: NET: Registered PF_ALG protocol family Nov 1 02:22:22.467460 systemd-networkd[1321]: cilium_host: Gained IPv6LL Nov 1 02:22:22.627452 systemd-networkd[1321]: cilium_net: Gained IPv6LL Nov 1 02:22:22.924025 systemd-networkd[1321]: lxc_health: Link UP Nov 1 02:22:22.953377 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 02:22:22.953595 systemd-networkd[1321]: lxc_health: Gained carrier Nov 1 02:22:23.294979 systemd-networkd[1321]: lxc6ea7e303aa43: Link UP Nov 1 02:22:23.312373 kernel: eth0: renamed from tmp1edfa Nov 1 02:22:23.348882 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 02:22:23.349006 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6ea7e303aa43: link becomes ready Nov 1 02:22:23.349042 kernel: eth0: renamed from tmp323d7 Nov 1 02:22:23.371407 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc24b42abab74c: link becomes ready Nov 1 02:22:23.371354 systemd-networkd[1321]: lxc24b42abab74c: Link UP Nov 1 02:22:23.371547 systemd-networkd[1321]: lxc6ea7e303aa43: Gained carrier Nov 1 02:22:23.371767 systemd-networkd[1321]: lxc24b42abab74c: Gained carrier Nov 1 02:22:23.745486 sshd[3106]: Failed password for root from 193.46.255.99 port 31626 ssh2 Nov 1 02:22:23.810503 systemd-networkd[1321]: cilium_vxlan: Gained IPv6LL Nov 1 02:22:24.194538 systemd-networkd[1321]: lxc_health: Gained IPv6LL Nov 1 02:22:24.450469 systemd-networkd[1321]: lxc24b42abab74c: Gained IPv6LL Nov 1 02:22:25.218486 systemd-networkd[1321]: lxc6ea7e303aa43: Gained IPv6LL Nov 1 02:22:25.652031 env[1571]: time="2025-11-01T02:22:25.651990776Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:22:25.652031 env[1571]: time="2025-11-01T02:22:25.652016797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:22:25.652031 env[1571]: time="2025-11-01T02:22:25.652025938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:22:25.652320 env[1571]: time="2025-11-01T02:22:25.652108758Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/323d7773df609edd913432a47bae90489cbccb56e84da08971fdfbf443f17336 pid=3892 runtime=io.containerd.runc.v2 Nov 1 02:22:25.652391 env[1571]: time="2025-11-01T02:22:25.652352712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:22:25.652424 env[1571]: time="2025-11-01T02:22:25.652384422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:22:25.652424 env[1571]: time="2025-11-01T02:22:25.652399644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:22:25.652510 env[1571]: time="2025-11-01T02:22:25.652487064Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1edfab444a4178290f7526e6166ab01337ba2fcad509c1f635240a67bbb3271a pid=3893 runtime=io.containerd.runc.v2 Nov 1 02:22:25.662278 systemd[1]: Started cri-containerd-1edfab444a4178290f7526e6166ab01337ba2fcad509c1f635240a67bbb3271a.scope. Nov 1 02:22:25.663096 systemd[1]: Started cri-containerd-323d7773df609edd913432a47bae90489cbccb56e84da08971fdfbf443f17336.scope. 
Nov 1 02:22:25.688946 env[1571]: time="2025-11-01T02:22:25.688911763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-56v2w,Uid:cb1240e5-8012-4fd5-b3db-448a1c36b4f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"1edfab444a4178290f7526e6166ab01337ba2fcad509c1f635240a67bbb3271a\"" Nov 1 02:22:25.689482 env[1571]: time="2025-11-01T02:22:25.689459575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbrt6,Uid:2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"323d7773df609edd913432a47bae90489cbccb56e84da08971fdfbf443f17336\"" Nov 1 02:22:25.690413 env[1571]: time="2025-11-01T02:22:25.690395578Z" level=info msg="CreateContainer within sandbox \"1edfab444a4178290f7526e6166ab01337ba2fcad509c1f635240a67bbb3271a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 02:22:25.690618 env[1571]: time="2025-11-01T02:22:25.690600288Z" level=info msg="CreateContainer within sandbox \"323d7773df609edd913432a47bae90489cbccb56e84da08971fdfbf443f17336\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 02:22:25.702633 env[1571]: time="2025-11-01T02:22:25.702580369Z" level=info msg="CreateContainer within sandbox \"1edfab444a4178290f7526e6166ab01337ba2fcad509c1f635240a67bbb3271a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7470c94931ad2982454fbe309253d9661ebda3bf61a4e5f1c8d91f49a5e98223\"" Nov 1 02:22:25.702882 env[1571]: time="2025-11-01T02:22:25.702862535Z" level=info msg="StartContainer for \"7470c94931ad2982454fbe309253d9661ebda3bf61a4e5f1c8d91f49a5e98223\"" Nov 1 02:22:25.704545 env[1571]: time="2025-11-01T02:22:25.704525246Z" level=info msg="CreateContainer within sandbox \"323d7773df609edd913432a47bae90489cbccb56e84da08971fdfbf443f17336\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b02ec949545d171d8ac4a12f683081487feae389944778f247ed85a4f306dd9\"" Nov 1 02:22:25.704757 env[1571]: time="2025-11-01T02:22:25.704736306Z" level=info msg="StartContainer for \"5b02ec949545d171d8ac4a12f683081487feae389944778f247ed85a4f306dd9\"" Nov 1 02:22:25.711924 systemd[1]: Started cri-containerd-7470c94931ad2982454fbe309253d9661ebda3bf61a4e5f1c8d91f49a5e98223.scope. Nov 1 02:22:25.713969 systemd[1]: Started cri-containerd-5b02ec949545d171d8ac4a12f683081487feae389944778f247ed85a4f306dd9.scope. 
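With both coredns containers created in their sandboxes above, one quick check is an in-cluster lookup. A minimal sketch under stated assumptions (executed from a pod inside this cluster, default cluster domain cluster.local, /etc/resolv.conf pointing at the coredns Service; none of this is taken from the log):

```python
# Minimal sketch: confirm coredns answers queries once the containers above are
# running. Assumptions: run from a pod inside this cluster, default cluster
# domain "cluster.local", /etc/resolv.conf pointing at the coredns Service.
import socket

name = "kubernetes.default.svc.cluster.local"
try:
    addrs = sorted({info[4][0] for info in socket.getaddrinfo(name, 443)})
    print(f"{name} -> {addrs}: coredns is resolving")
except socket.gaierror as exc:
    print(f"lookup of {name} failed ({exc}); coredns may not be ready yet")
```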
Nov 1 02:22:25.728032 env[1571]: time="2025-11-01T02:22:25.727997335Z" level=info msg="StartContainer for \"7470c94931ad2982454fbe309253d9661ebda3bf61a4e5f1c8d91f49a5e98223\" returns successfully" Nov 1 02:22:25.728459 env[1571]: time="2025-11-01T02:22:25.728442173Z" level=info msg="StartContainer for \"5b02ec949545d171d8ac4a12f683081487feae389944778f247ed85a4f306dd9\" returns successfully" Nov 1 02:22:25.919703 sshd[3106]: Failed password for root from 193.46.255.99 port 31626 ssh2 Nov 1 02:22:26.347657 kubelet[2468]: I1101 02:22:26.347533 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hbrt6" podStartSLOduration=16.347498264 podStartE2EDuration="16.347498264s" podCreationTimestamp="2025-11-01 02:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:22:26.346781267 +0000 UTC m=+22.166502218" watchObservedRunningTime="2025-11-01 02:22:26.347498264 +0000 UTC m=+22.167219198" Nov 1 02:22:26.361282 kubelet[2468]: I1101 02:22:26.361247 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-56v2w" podStartSLOduration=16.361237075 podStartE2EDuration="16.361237075s" podCreationTimestamp="2025-11-01 02:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:22:26.360996945 +0000 UTC m=+22.180717837" watchObservedRunningTime="2025-11-01 02:22:26.361237075 +0000 UTC m=+22.180957966" Nov 1 02:22:26.513680 sshd[3106]: Received disconnect from 193.46.255.99 port 31626:11: [preauth] Nov 1 02:22:26.513680 sshd[3106]: Disconnected from authenticating user root 193.46.255.99 port 31626 [preauth] Nov 1 02:22:26.514206 sshd[3106]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99 user=root Nov 1 02:22:26.516405 systemd[1]: sshd@8-86.109.11.55:22-193.46.255.99:31626.service: Deactivated successfully. Nov 1 02:22:26.689741 systemd[1]: Started sshd@9-86.109.11.55:22-193.46.255.99:13362.service. Nov 1 02:22:27.822544 sshd[4064]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99 user=root Nov 1 02:22:29.740928 sshd[4064]: Failed password for root from 193.46.255.99 port 13362 ssh2 Nov 1 02:22:31.586716 sshd[4064]: Failed password for root from 193.46.255.99 port 13362 ssh2 Nov 1 02:22:34.627135 sshd[4064]: Failed password for root from 193.46.255.99 port 13362 ssh2 Nov 1 02:22:35.120954 sshd[4064]: Received disconnect from 193.46.255.99 port 13362:11: [preauth] Nov 1 02:22:35.120954 sshd[4064]: Disconnected from authenticating user root 193.46.255.99 port 13362 [preauth] Nov 1 02:22:35.121524 sshd[4064]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.99 user=root Nov 1 02:22:35.123797 systemd[1]: sshd@9-86.109.11.55:22-193.46.255.99:13362.service: Deactivated successfully. Nov 1 02:22:37.621915 kubelet[2468]: I1101 02:22:37.621778 2468 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 02:27:51.157731 systemd[1]: Started sshd@10-86.109.11.55:22-147.75.109.163:40400.service. 
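Interleaved with the Cilium and coredns activity, sshd records a brute-force attempt against root from 193.46.255.99 (ports 12490, 31626 and 13362, each connection ending with a [preauth] disconnect) before the legitimate key-based sessions for the core user begin at 02:27:51. A minimal sketch that tallies such attempts from a plain-text journal export; the regex mirrors the "Failed password for root from <addr> port <port>" format seen above, and the input path is just a placeholder:

```python
# Minimal sketch: tally failed SSH password attempts per source address and
# user in a plain-text journal export. The regex mirrors the sshd lines above;
# the input path is whatever file the journal was dumped to (placeholder).
import re
import sys
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+) port (\d+)")

def tally(path: str) -> Counter:
    hits: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = FAILED.search(line)
            if m:
                user, addr, _port = m.groups()
                hits[(addr, user)] += 1
    return hits

if __name__ == "__main__":
    for (addr, user), count in tally(sys.argv[1]).most_common():
        print(f"{addr} -> {user}: {count} failed attempts")
```

On this excerpt it would count the six "Failed password" entries from 193.46.255.99 against root; the "PAM 2 more authentication failures" lines reflect the same three-attempts-per-connection pattern.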
Nov 1 02:27:51.195349 sshd[4111]: Accepted publickey for core from 147.75.109.163 port 40400 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:27:51.196167 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:27:51.199054 systemd-logind[1563]: New session 10 of user core. Nov 1 02:27:51.199628 systemd[1]: Started session-10.scope. Nov 1 02:27:51.287329 sshd[4111]: pam_unix(sshd:session): session closed for user core Nov 1 02:27:51.288673 systemd[1]: sshd@10-86.109.11.55:22-147.75.109.163:40400.service: Deactivated successfully. Nov 1 02:27:51.289095 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 02:27:51.289373 systemd-logind[1563]: Session 10 logged out. Waiting for processes to exit. Nov 1 02:27:51.289834 systemd-logind[1563]: Removed session 10. Nov 1 02:27:56.290590 systemd[1]: Started sshd@11-86.109.11.55:22-147.75.109.163:40410.service. Nov 1 02:27:56.328616 sshd[4144]: Accepted publickey for core from 147.75.109.163 port 40410 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:27:56.329353 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:27:56.331898 systemd-logind[1563]: New session 11 of user core. Nov 1 02:27:56.332496 systemd[1]: Started session-11.scope. Nov 1 02:27:56.481248 sshd[4144]: pam_unix(sshd:session): session closed for user core Nov 1 02:27:56.482674 systemd[1]: sshd@11-86.109.11.55:22-147.75.109.163:40410.service: Deactivated successfully. Nov 1 02:27:56.483126 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 02:27:56.483506 systemd-logind[1563]: Session 11 logged out. Waiting for processes to exit. Nov 1 02:27:56.484007 systemd-logind[1563]: Removed session 11. Nov 1 02:28:01.490699 systemd[1]: Started sshd@12-86.109.11.55:22-147.75.109.163:46292.service. Nov 1 02:28:01.561175 sshd[4173]: Accepted publickey for core from 147.75.109.163 port 46292 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:01.563560 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:01.571500 systemd-logind[1563]: New session 12 of user core. Nov 1 02:28:01.573422 systemd[1]: Started session-12.scope. Nov 1 02:28:01.673000 sshd[4173]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:01.674548 systemd[1]: sshd@12-86.109.11.55:22-147.75.109.163:46292.service: Deactivated successfully. Nov 1 02:28:01.674984 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 02:28:01.675329 systemd-logind[1563]: Session 12 logged out. Waiting for processes to exit. Nov 1 02:28:01.675948 systemd-logind[1563]: Removed session 12. Nov 1 02:28:06.682438 systemd[1]: Started sshd@13-86.109.11.55:22-147.75.109.163:46298.service. Nov 1 02:28:06.720117 sshd[4202]: Accepted publickey for core from 147.75.109.163 port 46298 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:06.720955 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:06.723831 systemd-logind[1563]: New session 13 of user core. Nov 1 02:28:06.724443 systemd[1]: Started session-13.scope. Nov 1 02:28:06.870541 sshd[4202]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:06.872538 systemd[1]: sshd@13-86.109.11.55:22-147.75.109.163:46298.service: Deactivated successfully. Nov 1 02:28:06.872921 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 02:28:06.873295 systemd-logind[1563]: Session 13 logged out. 
Waiting for processes to exit. Nov 1 02:28:06.873980 systemd[1]: Started sshd@14-86.109.11.55:22-147.75.109.163:46312.service. Nov 1 02:28:06.874452 systemd-logind[1563]: Removed session 13. Nov 1 02:28:06.912724 sshd[4227]: Accepted publickey for core from 147.75.109.163 port 46312 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:06.913498 sshd[4227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:06.916083 systemd-logind[1563]: New session 14 of user core. Nov 1 02:28:06.916701 systemd[1]: Started session-14.scope. Nov 1 02:28:07.038520 sshd[4227]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:07.040449 systemd[1]: sshd@14-86.109.11.55:22-147.75.109.163:46312.service: Deactivated successfully. Nov 1 02:28:07.040836 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 02:28:07.041188 systemd-logind[1563]: Session 14 logged out. Waiting for processes to exit. Nov 1 02:28:07.041864 systemd[1]: Started sshd@15-86.109.11.55:22-147.75.109.163:46322.service. Nov 1 02:28:07.042299 systemd-logind[1563]: Removed session 14. Nov 1 02:28:07.080171 sshd[4252]: Accepted publickey for core from 147.75.109.163 port 46322 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:07.081086 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:07.083834 systemd-logind[1563]: New session 15 of user core. Nov 1 02:28:07.084370 systemd[1]: Started session-15.scope. Nov 1 02:28:07.199211 sshd[4252]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:07.200755 systemd[1]: sshd@15-86.109.11.55:22-147.75.109.163:46322.service: Deactivated successfully. Nov 1 02:28:07.201206 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 02:28:07.201619 systemd-logind[1563]: Session 15 logged out. Waiting for processes to exit. Nov 1 02:28:07.202195 systemd-logind[1563]: Removed session 15. Nov 1 02:28:12.208965 systemd[1]: Started sshd@16-86.109.11.55:22-147.75.109.163:33876.service. Nov 1 02:28:12.247460 sshd[4280]: Accepted publickey for core from 147.75.109.163 port 33876 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:12.250858 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:12.262450 systemd-logind[1563]: New session 16 of user core. Nov 1 02:28:12.265201 systemd[1]: Started session-16.scope. Nov 1 02:28:12.426037 sshd[4280]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:12.427977 systemd[1]: sshd@16-86.109.11.55:22-147.75.109.163:33876.service: Deactivated successfully. Nov 1 02:28:12.428548 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 02:28:12.429094 systemd-logind[1563]: Session 16 logged out. Waiting for processes to exit. Nov 1 02:28:12.429909 systemd-logind[1563]: Removed session 16. Nov 1 02:28:17.435765 systemd[1]: Started sshd@17-86.109.11.55:22-147.75.109.163:33880.service. Nov 1 02:28:17.474136 sshd[4305]: Accepted publickey for core from 147.75.109.163 port 33880 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:17.477569 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:17.488606 systemd-logind[1563]: New session 17 of user core. Nov 1 02:28:17.491302 systemd[1]: Started session-17.scope. 
Nov 1 02:28:17.595289 sshd[4305]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:17.597198 systemd[1]: sshd@17-86.109.11.55:22-147.75.109.163:33880.service: Deactivated successfully. Nov 1 02:28:17.597654 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 02:28:17.598130 systemd-logind[1563]: Session 17 logged out. Waiting for processes to exit. Nov 1 02:28:17.598894 systemd[1]: Started sshd@18-86.109.11.55:22-147.75.109.163:33890.service. Nov 1 02:28:17.599423 systemd-logind[1563]: Removed session 17. Nov 1 02:28:17.662562 sshd[4329]: Accepted publickey for core from 147.75.109.163 port 33890 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:17.663958 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:17.668413 systemd-logind[1563]: New session 18 of user core. Nov 1 02:28:17.669552 systemd[1]: Started session-18.scope. Nov 1 02:28:17.773361 sshd[4329]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:17.775495 systemd[1]: sshd@18-86.109.11.55:22-147.75.109.163:33890.service: Deactivated successfully. Nov 1 02:28:17.775942 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 02:28:17.776342 systemd-logind[1563]: Session 18 logged out. Waiting for processes to exit. Nov 1 02:28:17.777108 systemd[1]: Started sshd@19-86.109.11.55:22-147.75.109.163:33902.service. Nov 1 02:28:17.777707 systemd-logind[1563]: Removed session 18. Nov 1 02:28:17.818214 sshd[4349]: Accepted publickey for core from 147.75.109.163 port 33902 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:17.819191 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:17.822305 systemd-logind[1563]: New session 19 of user core. Nov 1 02:28:17.823040 systemd[1]: Started session-19.scope. Nov 1 02:28:18.615631 sshd[4349]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:18.623145 systemd[1]: sshd@19-86.109.11.55:22-147.75.109.163:33902.service: Deactivated successfully. Nov 1 02:28:18.624531 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 02:28:18.625699 systemd-logind[1563]: Session 19 logged out. Waiting for processes to exit. Nov 1 02:28:18.628320 systemd[1]: Started sshd@20-86.109.11.55:22-147.75.109.163:33910.service. Nov 1 02:28:18.630671 systemd-logind[1563]: Removed session 19. Nov 1 02:28:18.695341 sshd[4380]: Accepted publickey for core from 147.75.109.163 port 33910 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:18.696450 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:18.699927 systemd-logind[1563]: New session 20 of user core. Nov 1 02:28:18.700680 systemd[1]: Started session-20.scope. Nov 1 02:28:18.878342 sshd[4380]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:18.880062 systemd[1]: sshd@20-86.109.11.55:22-147.75.109.163:33910.service: Deactivated successfully. Nov 1 02:28:18.880434 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 02:28:18.880919 systemd-logind[1563]: Session 20 logged out. Waiting for processes to exit. Nov 1 02:28:18.881499 systemd[1]: Started sshd@21-86.109.11.55:22-147.75.109.163:33914.service. Nov 1 02:28:18.881990 systemd-logind[1563]: Removed session 20. 
Nov 1 02:28:18.918865 sshd[4406]: Accepted publickey for core from 147.75.109.163 port 33914 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:18.919765 sshd[4406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:18.922802 systemd-logind[1563]: New session 21 of user core. Nov 1 02:28:18.923454 systemd[1]: Started session-21.scope. Nov 1 02:28:19.079721 sshd[4406]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:19.081443 systemd[1]: sshd@21-86.109.11.55:22-147.75.109.163:33914.service: Deactivated successfully. Nov 1 02:28:19.081940 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 02:28:19.082336 systemd-logind[1563]: Session 21 logged out. Waiting for processes to exit. Nov 1 02:28:19.082933 systemd-logind[1563]: Removed session 21. Nov 1 02:28:24.089918 systemd[1]: Started sshd@22-86.109.11.55:22-147.75.109.163:58698.service. Nov 1 02:28:24.127631 sshd[4433]: Accepted publickey for core from 147.75.109.163 port 58698 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:24.128451 sshd[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:24.131066 systemd-logind[1563]: New session 22 of user core. Nov 1 02:28:24.131656 systemd[1]: Started session-22.scope. Nov 1 02:28:24.215643 sshd[4433]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:24.216987 systemd[1]: sshd@22-86.109.11.55:22-147.75.109.163:58698.service: Deactivated successfully. Nov 1 02:28:24.217405 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 02:28:24.217783 systemd-logind[1563]: Session 22 logged out. Waiting for processes to exit. Nov 1 02:28:24.218242 systemd-logind[1563]: Removed session 22. Nov 1 02:28:29.227372 systemd[1]: Started sshd@23-86.109.11.55:22-147.75.109.163:58712.service. Nov 1 02:28:29.269790 sshd[4459]: Accepted publickey for core from 147.75.109.163 port 58712 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:29.270644 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:29.272999 systemd-logind[1563]: New session 23 of user core. Nov 1 02:28:29.273558 systemd[1]: Started session-23.scope. Nov 1 02:28:29.355846 sshd[4459]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:29.357107 systemd[1]: sshd@23-86.109.11.55:22-147.75.109.163:58712.service: Deactivated successfully. Nov 1 02:28:29.357602 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 02:28:29.357985 systemd-logind[1563]: Session 23 logged out. Waiting for processes to exit. Nov 1 02:28:29.358355 systemd-logind[1563]: Removed session 23. Nov 1 02:28:34.361306 systemd[1]: Started sshd@24-86.109.11.55:22-147.75.109.163:60208.service. Nov 1 02:28:34.434022 sshd[4482]: Accepted publickey for core from 147.75.109.163 port 60208 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:34.435667 sshd[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:34.440514 systemd-logind[1563]: New session 24 of user core. Nov 1 02:28:34.441629 systemd[1]: Started session-24.scope. Nov 1 02:28:34.533346 sshd[4482]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:34.535088 systemd[1]: sshd@24-86.109.11.55:22-147.75.109.163:60208.service: Deactivated successfully. Nov 1 02:28:34.535460 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 02:28:34.535814 systemd-logind[1563]: Session 24 logged out. 
Waiting for processes to exit. Nov 1 02:28:34.536404 systemd[1]: Started sshd@25-86.109.11.55:22-147.75.109.163:60214.service. Nov 1 02:28:34.536848 systemd-logind[1563]: Removed session 24. Nov 1 02:28:34.573735 sshd[4506]: Accepted publickey for core from 147.75.109.163 port 60214 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:34.574573 sshd[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:34.577296 systemd-logind[1563]: New session 25 of user core. Nov 1 02:28:34.577917 systemd[1]: Started session-25.scope. Nov 1 02:28:35.968460 env[1571]: time="2025-11-01T02:28:35.968318333Z" level=info msg="StopContainer for \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\" with timeout 30 (s)" Nov 1 02:28:35.969380 env[1571]: time="2025-11-01T02:28:35.969076772Z" level=info msg="Stop container \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\" with signal terminated" Nov 1 02:28:35.987437 systemd[1]: cri-containerd-92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf.scope: Deactivated successfully. Nov 1 02:28:36.003435 env[1571]: time="2025-11-01T02:28:36.003385187Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 02:28:36.007741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf-rootfs.mount: Deactivated successfully. Nov 1 02:28:36.008154 env[1571]: time="2025-11-01T02:28:36.008130755Z" level=info msg="StopContainer for \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\" with timeout 2 (s)" Nov 1 02:28:36.008323 env[1571]: time="2025-11-01T02:28:36.008300248Z" level=info msg="Stop container \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\" with signal terminated" Nov 1 02:28:36.012840 systemd-networkd[1321]: lxc_health: Link DOWN Nov 1 02:28:36.012845 systemd-networkd[1321]: lxc_health: Lost carrier Nov 1 02:28:36.026856 env[1571]: time="2025-11-01T02:28:36.026827306Z" level=info msg="shim disconnected" id=92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf Nov 1 02:28:36.026925 env[1571]: time="2025-11-01T02:28:36.026858104Z" level=warning msg="cleaning up after shim disconnected" id=92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf namespace=k8s.io Nov 1 02:28:36.026925 env[1571]: time="2025-11-01T02:28:36.026866543Z" level=info msg="cleaning up dead shim" Nov 1 02:28:36.031780 env[1571]: time="2025-11-01T02:28:36.031755654Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:28:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4571 runtime=io.containerd.runc.v2\n" Nov 1 02:28:36.032664 env[1571]: time="2025-11-01T02:28:36.032615224Z" level=info msg="StopContainer for \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\" returns successfully" Nov 1 02:28:36.033077 env[1571]: time="2025-11-01T02:28:36.033029064Z" level=info msg="StopPodSandbox for \"b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed\"" Nov 1 02:28:36.033139 env[1571]: time="2025-11-01T02:28:36.033075080Z" level=info msg="Container to stop \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 02:28:36.034799 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed-shm.mount: Deactivated successfully. Nov 1 02:28:36.037517 systemd[1]: cri-containerd-b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed.scope: Deactivated successfully. Nov 1 02:28:36.049280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed-rootfs.mount: Deactivated successfully. Nov 1 02:28:36.081565 env[1571]: time="2025-11-01T02:28:36.081446542Z" level=info msg="shim disconnected" id=b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed Nov 1 02:28:36.081853 env[1571]: time="2025-11-01T02:28:36.081578069Z" level=warning msg="cleaning up after shim disconnected" id=b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed namespace=k8s.io Nov 1 02:28:36.081853 env[1571]: time="2025-11-01T02:28:36.081626206Z" level=info msg="cleaning up dead shim" Nov 1 02:28:36.097180 systemd[1]: cri-containerd-7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979.scope: Deactivated successfully. Nov 1 02:28:36.097940 systemd[1]: cri-containerd-7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979.scope: Consumed 6.487s CPU time. Nov 1 02:28:36.098913 env[1571]: time="2025-11-01T02:28:36.098812172Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:28:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4605 runtime=io.containerd.runc.v2\n" Nov 1 02:28:36.099723 env[1571]: time="2025-11-01T02:28:36.099643873Z" level=info msg="TearDown network for sandbox \"b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed\" successfully" Nov 1 02:28:36.099723 env[1571]: time="2025-11-01T02:28:36.099700802Z" level=info msg="StopPodSandbox for \"b30976051e956c8ad60d66e4cc969f2b552a5ff2dc9722fe83288cba70f90bed\" returns successfully" Nov 1 02:28:36.135861 env[1571]: time="2025-11-01T02:28:36.135760945Z" level=info msg="shim disconnected" id=7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979 Nov 1 02:28:36.136264 env[1571]: time="2025-11-01T02:28:36.135869137Z" level=warning msg="cleaning up after shim disconnected" id=7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979 namespace=k8s.io Nov 1 02:28:36.136264 env[1571]: time="2025-11-01T02:28:36.135899671Z" level=info msg="cleaning up dead shim" Nov 1 02:28:36.137021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979-rootfs.mount: Deactivated successfully. 
Nov 1 02:28:36.152018 env[1571]: time="2025-11-01T02:28:36.151938260Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:28:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4629 runtime=io.containerd.runc.v2\n" Nov 1 02:28:36.154150 env[1571]: time="2025-11-01T02:28:36.154076117Z" level=info msg="StopContainer for \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\" returns successfully" Nov 1 02:28:36.155031 env[1571]: time="2025-11-01T02:28:36.154970374Z" level=info msg="StopPodSandbox for \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\"" Nov 1 02:28:36.155198 env[1571]: time="2025-11-01T02:28:36.155103837Z" level=info msg="Container to stop \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 02:28:36.155198 env[1571]: time="2025-11-01T02:28:36.155146376Z" level=info msg="Container to stop \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 02:28:36.155198 env[1571]: time="2025-11-01T02:28:36.155176596Z" level=info msg="Container to stop \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 02:28:36.155550 env[1571]: time="2025-11-01T02:28:36.155206254Z" level=info msg="Container to stop \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 02:28:36.155550 env[1571]: time="2025-11-01T02:28:36.155234740Z" level=info msg="Container to stop \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 02:28:36.167693 systemd[1]: cri-containerd-a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c.scope: Deactivated successfully. 
Nov 1 02:28:36.215615 env[1571]: time="2025-11-01T02:28:36.215508161Z" level=info msg="shim disconnected" id=a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c Nov 1 02:28:36.215615 env[1571]: time="2025-11-01T02:28:36.215618388Z" level=warning msg="cleaning up after shim disconnected" id=a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c namespace=k8s.io Nov 1 02:28:36.216110 env[1571]: time="2025-11-01T02:28:36.215647752Z" level=info msg="cleaning up dead shim" Nov 1 02:28:36.231257 env[1571]: time="2025-11-01T02:28:36.231045643Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:28:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4659 runtime=io.containerd.runc.v2\n" Nov 1 02:28:36.231841 env[1571]: time="2025-11-01T02:28:36.231740355Z" level=info msg="TearDown network for sandbox \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" successfully" Nov 1 02:28:36.231841 env[1571]: time="2025-11-01T02:28:36.231801161Z" level=info msg="StopPodSandbox for \"a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c\" returns successfully" Nov 1 02:28:36.236664 kubelet[2468]: I1101 02:28:36.236566 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6342996-fdc2-49d5-944f-d53ae6386f15-cilium-config-path\") pod \"e6342996-fdc2-49d5-944f-d53ae6386f15\" (UID: \"e6342996-fdc2-49d5-944f-d53ae6386f15\") " Nov 1 02:28:36.237694 kubelet[2468]: I1101 02:28:36.236677 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8db8b\" (UniqueName: \"kubernetes.io/projected/e6342996-fdc2-49d5-944f-d53ae6386f15-kube-api-access-8db8b\") pod \"e6342996-fdc2-49d5-944f-d53ae6386f15\" (UID: \"e6342996-fdc2-49d5-944f-d53ae6386f15\") " Nov 1 02:28:36.241701 kubelet[2468]: I1101 02:28:36.241594 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6342996-fdc2-49d5-944f-d53ae6386f15-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e6342996-fdc2-49d5-944f-d53ae6386f15" (UID: "e6342996-fdc2-49d5-944f-d53ae6386f15"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 02:28:36.243347 kubelet[2468]: I1101 02:28:36.243274 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6342996-fdc2-49d5-944f-d53ae6386f15-kube-api-access-8db8b" (OuterVolumeSpecName: "kube-api-access-8db8b") pod "e6342996-fdc2-49d5-944f-d53ae6386f15" (UID: "e6342996-fdc2-49d5-944f-d53ae6386f15"). InnerVolumeSpecName "kube-api-access-8db8b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 02:28:36.273467 systemd[1]: Removed slice kubepods-besteffort-pode6342996_fdc2_49d5_944f_d53ae6386f15.slice. 
Nov 1 02:28:36.337997 kubelet[2468]: I1101 02:28:36.337929 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-xtables-lock\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.338305 kubelet[2468]: I1101 02:28:36.338018 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-cgroup\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.338305 kubelet[2468]: I1101 02:28:36.338068 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-host-proc-sys-net\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.338305 kubelet[2468]: I1101 02:28:36.338048 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.338305 kubelet[2468]: I1101 02:28:36.338108 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-run\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.338305 kubelet[2468]: I1101 02:28:36.338157 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cni-path\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.338305 kubelet[2468]: I1101 02:28:36.338143 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.338958 kubelet[2468]: I1101 02:28:36.338204 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-host-proc-sys-kernel\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.338958 kubelet[2468]: I1101 02:28:36.338247 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-etc-cni-netd\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.338958 kubelet[2468]: I1101 02:28:36.338199 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.338958 kubelet[2468]: I1101 02:28:36.338284 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.338958 kubelet[2468]: I1101 02:28:36.338206 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.339481 kubelet[2468]: I1101 02:28:36.338323 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b234b4f-d95d-4efc-a022-b02a12cf7819-hubble-tls\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.339481 kubelet[2468]: I1101 02:28:36.338235 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cni-path" (OuterVolumeSpecName: "cni-path") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.339481 kubelet[2468]: I1101 02:28:36.338243 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.339481 kubelet[2468]: I1101 02:28:36.338410 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-hostproc\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.339481 kubelet[2468]: I1101 02:28:36.338475 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-lib-modules\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.339954 kubelet[2468]: I1101 02:28:36.338531 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-hostproc" (OuterVolumeSpecName: "hostproc") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.339954 kubelet[2468]: I1101 02:28:36.338563 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-config-path\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.339954 kubelet[2468]: I1101 02:28:36.338587 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.339954 kubelet[2468]: I1101 02:28:36.338711 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b234b4f-d95d-4efc-a022-b02a12cf7819-clustermesh-secrets\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.339954 kubelet[2468]: I1101 02:28:36.338802 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-bpf-maps\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.340470 kubelet[2468]: I1101 02:28:36.338872 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:36.340470 kubelet[2468]: I1101 02:28:36.338905 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw8zb\" (UniqueName: \"kubernetes.io/projected/3b234b4f-d95d-4efc-a022-b02a12cf7819-kube-api-access-kw8zb\") pod \"3b234b4f-d95d-4efc-a022-b02a12cf7819\" (UID: \"3b234b4f-d95d-4efc-a022-b02a12cf7819\") " Nov 1 02:28:36.340470 kubelet[2468]: I1101 02:28:36.339060 2468 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-hostproc\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.340470 kubelet[2468]: I1101 02:28:36.339115 2468 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-lib-modules\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.340470 kubelet[2468]: I1101 02:28:36.339171 2468 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6342996-fdc2-49d5-944f-d53ae6386f15-cilium-config-path\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.340470 kubelet[2468]: I1101 02:28:36.339215 2468 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-bpf-maps\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.340470 kubelet[2468]: I1101 02:28:36.339258 2468 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-xtables-lock\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.341116 kubelet[2468]: I1101 02:28:36.339300 2468 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-cgroup\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.341116 kubelet[2468]: I1101 02:28:36.339345 2468 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-host-proc-sys-net\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.341116 kubelet[2468]: I1101 02:28:36.339412 2468 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-run\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.341116 kubelet[2468]: I1101 02:28:36.339460 2468 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-cni-path\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.341116 kubelet[2468]: I1101 02:28:36.339507 2468 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.341116 kubelet[2468]: I1101 02:28:36.339555 2468 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8db8b\" (UniqueName: \"kubernetes.io/projected/e6342996-fdc2-49d5-944f-d53ae6386f15-kube-api-access-8db8b\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 
02:28:36.341116 kubelet[2468]: I1101 02:28:36.339603 2468 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3b234b4f-d95d-4efc-a022-b02a12cf7819-etc-cni-netd\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.343979 kubelet[2468]: I1101 02:28:36.343903 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 02:28:36.344729 kubelet[2468]: I1101 02:28:36.344622 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b234b4f-d95d-4efc-a022-b02a12cf7819-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 02:28:36.344960 kubelet[2468]: I1101 02:28:36.344902 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b234b4f-d95d-4efc-a022-b02a12cf7819-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 02:28:36.345135 kubelet[2468]: I1101 02:28:36.345071 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b234b4f-d95d-4efc-a022-b02a12cf7819-kube-api-access-kw8zb" (OuterVolumeSpecName: "kube-api-access-kw8zb") pod "3b234b4f-d95d-4efc-a022-b02a12cf7819" (UID: "3b234b4f-d95d-4efc-a022-b02a12cf7819"). InnerVolumeSpecName "kube-api-access-kw8zb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 02:28:36.383824 kubelet[2468]: I1101 02:28:36.383761 2468 scope.go:117] "RemoveContainer" containerID="7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979" Nov 1 02:28:36.386505 env[1571]: time="2025-11-01T02:28:36.386405392Z" level=info msg="RemoveContainer for \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\"" Nov 1 02:28:36.392991 env[1571]: time="2025-11-01T02:28:36.392899751Z" level=info msg="RemoveContainer for \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\" returns successfully" Nov 1 02:28:36.393416 systemd[1]: Removed slice kubepods-burstable-pod3b234b4f_d95d_4efc_a022_b02a12cf7819.slice. Nov 1 02:28:36.393662 systemd[1]: kubepods-burstable-pod3b234b4f_d95d_4efc_a022_b02a12cf7819.slice: Consumed 6.547s CPU time. 
Nov 1 02:28:36.394020 kubelet[2468]: I1101 02:28:36.393411 2468 scope.go:117] "RemoveContainer" containerID="a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052" Nov 1 02:28:36.395839 env[1571]: time="2025-11-01T02:28:36.395759738Z" level=info msg="RemoveContainer for \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\"" Nov 1 02:28:36.399691 env[1571]: time="2025-11-01T02:28:36.399578707Z" level=info msg="RemoveContainer for \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\" returns successfully" Nov 1 02:28:36.400009 kubelet[2468]: I1101 02:28:36.399942 2468 scope.go:117] "RemoveContainer" containerID="617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f" Nov 1 02:28:36.402420 env[1571]: time="2025-11-01T02:28:36.402328321Z" level=info msg="RemoveContainer for \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\"" Nov 1 02:28:36.406245 env[1571]: time="2025-11-01T02:28:36.406180335Z" level=info msg="RemoveContainer for \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\" returns successfully" Nov 1 02:28:36.406628 kubelet[2468]: I1101 02:28:36.406551 2468 scope.go:117] "RemoveContainer" containerID="522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a" Nov 1 02:28:36.409230 env[1571]: time="2025-11-01T02:28:36.409139853Z" level=info msg="RemoveContainer for \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\"" Nov 1 02:28:36.413157 env[1571]: time="2025-11-01T02:28:36.413066228Z" level=info msg="RemoveContainer for \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\" returns successfully" Nov 1 02:28:36.413467 kubelet[2468]: I1101 02:28:36.413381 2468 scope.go:117] "RemoveContainer" containerID="eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe" Nov 1 02:28:36.415881 env[1571]: time="2025-11-01T02:28:36.415779471Z" level=info msg="RemoveContainer for \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\"" Nov 1 02:28:36.419867 env[1571]: time="2025-11-01T02:28:36.419748033Z" level=info msg="RemoveContainer for \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\" returns successfully" Nov 1 02:28:36.420217 kubelet[2468]: I1101 02:28:36.420152 2468 scope.go:117] "RemoveContainer" containerID="7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979" Nov 1 02:28:36.420843 env[1571]: time="2025-11-01T02:28:36.420664612Z" level=error msg="ContainerStatus for \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\": not found" Nov 1 02:28:36.421198 kubelet[2468]: E1101 02:28:36.421123 2468 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\": not found" containerID="7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979" Nov 1 02:28:36.421385 kubelet[2468]: I1101 02:28:36.421204 2468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979"} err="failed to get container status \"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"7856e90fa6c80cb129748c63f8eb878e88e2570440c7e9283a34d41a18446979\": not found" Nov 1 02:28:36.421521 kubelet[2468]: I1101 02:28:36.421393 2468 scope.go:117] "RemoveContainer" containerID="a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052" Nov 1 02:28:36.421981 env[1571]: time="2025-11-01T02:28:36.421794796Z" level=error msg="ContainerStatus for \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\": not found" Nov 1 02:28:36.422221 kubelet[2468]: E1101 02:28:36.422168 2468 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\": not found" containerID="a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052" Nov 1 02:28:36.422388 kubelet[2468]: I1101 02:28:36.422239 2468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052"} err="failed to get container status \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\": rpc error: code = NotFound desc = an error occurred when try to find container \"a02df8b7c6bd123ebbe991b3c091aa482a0bc58d2a47877d2c324ce19ad24052\": not found" Nov 1 02:28:36.422388 kubelet[2468]: I1101 02:28:36.422284 2468 scope.go:117] "RemoveContainer" containerID="617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f" Nov 1 02:28:36.422838 env[1571]: time="2025-11-01T02:28:36.422678995Z" level=error msg="ContainerStatus for \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\": not found" Nov 1 02:28:36.423101 kubelet[2468]: E1101 02:28:36.423027 2468 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\": not found" containerID="617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f" Nov 1 02:28:36.423315 kubelet[2468]: I1101 02:28:36.423084 2468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f"} err="failed to get container status \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\": rpc error: code = NotFound desc = an error occurred when try to find container \"617261b8fb9f7cbd15b24902fd95f80e4ee6bbbd62ff5de3e6a1146a871e8a8f\": not found" Nov 1 02:28:36.423315 kubelet[2468]: I1101 02:28:36.423139 2468 scope.go:117] "RemoveContainer" containerID="522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a" Nov 1 02:28:36.423729 env[1571]: time="2025-11-01T02:28:36.423609970Z" level=error msg="ContainerStatus for \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\": not found" Nov 1 02:28:36.423943 kubelet[2468]: E1101 02:28:36.423898 2468 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\": not found" containerID="522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a" Nov 1 02:28:36.424150 kubelet[2468]: I1101 02:28:36.423948 2468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a"} err="failed to get container status \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\": rpc error: code = NotFound desc = an error occurred when try to find container \"522817f491e81fc4299d3b70b474e49672adabf3dd4f2cab0bace0d0c2cd510a\": not found" Nov 1 02:28:36.424150 kubelet[2468]: I1101 02:28:36.423988 2468 scope.go:117] "RemoveContainer" containerID="eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe" Nov 1 02:28:36.424545 env[1571]: time="2025-11-01T02:28:36.424337427Z" level=error msg="ContainerStatus for \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\": not found" Nov 1 02:28:36.424778 kubelet[2468]: E1101 02:28:36.424676 2468 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\": not found" containerID="eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe" Nov 1 02:28:36.424991 kubelet[2468]: I1101 02:28:36.424754 2468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe"} err="failed to get container status \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\": rpc error: code = NotFound desc = an error occurred when try to find container \"eff7e0559d2817e38f2d1b55dac0aaf443b444580daff94af6aea6046e7dfebe\": not found" Nov 1 02:28:36.424991 kubelet[2468]: I1101 02:28:36.424820 2468 scope.go:117] "RemoveContainer" containerID="92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf" Nov 1 02:28:36.428352 env[1571]: time="2025-11-01T02:28:36.428262399Z" level=info msg="RemoveContainer for \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\"" Nov 1 02:28:36.435571 env[1571]: time="2025-11-01T02:28:36.435463843Z" level=info msg="RemoveContainer for \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\" returns successfully" Nov 1 02:28:36.435897 kubelet[2468]: I1101 02:28:36.435842 2468 scope.go:117] "RemoveContainer" containerID="92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf" Nov 1 02:28:36.436532 env[1571]: time="2025-11-01T02:28:36.436374636Z" level=error msg="ContainerStatus for \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\": not found" Nov 1 02:28:36.436772 kubelet[2468]: E1101 02:28:36.436720 2468 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\": not found" containerID="92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf" Nov 1 02:28:36.436902 
kubelet[2468]: I1101 02:28:36.436796 2468 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf"} err="failed to get container status \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\": rpc error: code = NotFound desc = an error occurred when try to find container \"92953a9acf2d556ed6d3a71bcbdaf66809de2efbe245ef6f62e452fc2f955caf\": not found" Nov 1 02:28:36.440132 kubelet[2468]: I1101 02:28:36.440032 2468 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kw8zb\" (UniqueName: \"kubernetes.io/projected/3b234b4f-d95d-4efc-a022-b02a12cf7819-kube-api-access-kw8zb\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.440132 kubelet[2468]: I1101 02:28:36.440088 2468 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b234b4f-d95d-4efc-a022-b02a12cf7819-cilium-config-path\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.440132 kubelet[2468]: I1101 02:28:36.440118 2468 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3b234b4f-d95d-4efc-a022-b02a12cf7819-clustermesh-secrets\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.440567 kubelet[2468]: I1101 02:28:36.440146 2468 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3b234b4f-d95d-4efc-a022-b02a12cf7819-hubble-tls\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:36.988212 systemd[1]: var-lib-kubelet-pods-e6342996\x2dfdc2\x2d49d5\x2d944f\x2dd53ae6386f15-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8db8b.mount: Deactivated successfully. Nov 1 02:28:36.988265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c-rootfs.mount: Deactivated successfully. Nov 1 02:28:36.988299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a49d025f8ca388443f3d20c11391caff13e1eed377bb7e6864028ec596f78b1c-shm.mount: Deactivated successfully. Nov 1 02:28:36.988333 systemd[1]: var-lib-kubelet-pods-3b234b4f\x2dd95d\x2d4efc\x2da022\x2db02a12cf7819-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkw8zb.mount: Deactivated successfully. Nov 1 02:28:36.988370 systemd[1]: var-lib-kubelet-pods-3b234b4f\x2dd95d\x2d4efc\x2da022\x2db02a12cf7819-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 02:28:36.988404 systemd[1]: var-lib-kubelet-pods-3b234b4f\x2dd95d\x2d4efc\x2da022\x2db02a12cf7819-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 02:28:37.914512 sshd[4506]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:37.916350 systemd[1]: sshd@25-86.109.11.55:22-147.75.109.163:60214.service: Deactivated successfully. Nov 1 02:28:37.916708 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 02:28:37.917070 systemd-logind[1563]: Session 25 logged out. Waiting for processes to exit. Nov 1 02:28:37.917677 systemd[1]: Started sshd@26-86.109.11.55:22-147.75.109.163:60224.service. Nov 1 02:28:37.918132 systemd-logind[1563]: Removed session 25. 
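In the sequence above, each `RemoveContainer ... returns successfully` is followed by a later `ContainerStatus` call that fails with gRPC `NotFound`; the kubelet logs the error but treats it as confirmation that the container is already gone. A minimal Go sketch of that idempotent-deletion pattern; the `statusFn` parameter is a hypothetical stand-in for the CRI `ContainerStatus` call, and only the error handling is the point:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// confirmRemoved treats a gRPC NotFound from a container-status lookup as
// proof that the container is already gone, mirroring the log above where
// ContainerStatus fails with NotFound right after RemoveContainer succeeded.
func confirmRemoved(containerID string, statusFn func(id string) error) error {
	err := statusFn(containerID)
	if err == nil {
		return fmt.Errorf("container %s still present", containerID)
	}
	if status.Code(err) == codes.NotFound {
		return nil // already removed: not an error for cleanup purposes
	}
	return fmt.Errorf("failed to get container status %q: %w", containerID, err)
}

func main() {
	notFound := status.Error(codes.NotFound, "an error occurred when try to find container: not found")
	fmt.Println(confirmRemoved("7856e90fa6c8", func(string) error { return notFound }))            // <nil>
	fmt.Println(confirmRemoved("7856e90fa6c8", func(string) error { return errors.New("rpc timeout") })) // surfaced
}
```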
Nov 1 02:28:37.987808 sshd[4677]: Accepted publickey for core from 147.75.109.163 port 60224 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:37.989782 sshd[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:37.995192 systemd-logind[1563]: New session 26 of user core. Nov 1 02:28:37.996494 systemd[1]: Started session-26.scope. Nov 1 02:28:38.260253 kubelet[2468]: I1101 02:28:38.260181 2468 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b234b4f-d95d-4efc-a022-b02a12cf7819" path="/var/lib/kubelet/pods/3b234b4f-d95d-4efc-a022-b02a12cf7819/volumes" Nov 1 02:28:38.260579 kubelet[2468]: I1101 02:28:38.260545 2468 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6342996-fdc2-49d5-944f-d53ae6386f15" path="/var/lib/kubelet/pods/e6342996-fdc2-49d5-944f-d53ae6386f15/volumes" Nov 1 02:28:38.676907 sshd[4677]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:38.679292 systemd[1]: sshd@26-86.109.11.55:22-147.75.109.163:60224.service: Deactivated successfully. Nov 1 02:28:38.679723 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 02:28:38.680126 systemd-logind[1563]: Session 26 logged out. Waiting for processes to exit. Nov 1 02:28:38.680872 systemd[1]: Started sshd@27-86.109.11.55:22-147.75.109.163:60234.service. Nov 1 02:28:38.681327 systemd-logind[1563]: Removed session 26. Nov 1 02:28:38.685687 kubelet[2468]: I1101 02:28:38.685666 2468 memory_manager.go:355] "RemoveStaleState removing state" podUID="3b234b4f-d95d-4efc-a022-b02a12cf7819" containerName="cilium-agent" Nov 1 02:28:38.685687 kubelet[2468]: I1101 02:28:38.685683 2468 memory_manager.go:355] "RemoveStaleState removing state" podUID="e6342996-fdc2-49d5-944f-d53ae6386f15" containerName="cilium-operator" Nov 1 02:28:38.696152 systemd[1]: Created slice kubepods-burstable-podd3a0b652_d559_4bd8_8d1c_bcbbdee9b267.slice. Nov 1 02:28:38.727807 sshd[4700]: Accepted publickey for core from 147.75.109.163 port 60234 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:38.728647 sshd[4700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:38.731226 systemd-logind[1563]: New session 27 of user core. Nov 1 02:28:38.731732 systemd[1]: Started session-27.scope. 
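The `kubelet_volumes.go` entries above record the orphaned-volume cleanup removing `/var/lib/kubelet/pods/<uid>/volumes` for the two deleted pods. A simplified Go sketch of the state that cleanup looks for (an empty `volumes/` directory under a pod UID); the real kubelet additionally checks that no active pod still owns the UID:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// listEmptyVolumeDirs scans a kubelet pods directory (normally
// /var/lib/kubelet/pods) and reports pod UIDs whose volumes/ subdirectory
// is empty -- the precondition for the "Cleaned up orphaned pod volumes dir"
// messages logged above. Simplified sketch only.
func listEmptyVolumeDirs(podsRoot string) ([]string, error) {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return nil, err
	}
	var orphans []string
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		volDir := filepath.Join(podsRoot, e.Name(), "volumes")
		contents, err := os.ReadDir(volDir)
		if err != nil || len(contents) > 0 {
			continue // missing volumes dir or still-mounted volumes: skip
		}
		orphans = append(orphans, e.Name())
	}
	return orphans, nil
}

func main() {
	orphans, err := listEmptyVolumeDirs("/var/lib/kubelet/pods")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, uid := range orphans {
		fmt.Println("candidate for cleanup:", uid)
	}
}
```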
Nov 1 02:28:38.857649 kubelet[2468]: I1101 02:28:38.857590 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-etc-cni-netd\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857649 kubelet[2468]: I1101 02:28:38.857613 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-ipsec-secrets\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857649 kubelet[2468]: I1101 02:28:38.857627 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-cgroup\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857649 kubelet[2468]: I1101 02:28:38.857637 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-host-proc-sys-kernel\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857649 kubelet[2468]: I1101 02:28:38.857646 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hhcq\" (UniqueName: \"kubernetes.io/projected/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-kube-api-access-9hhcq\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857826 kubelet[2468]: I1101 02:28:38.857657 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-lib-modules\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857826 kubelet[2468]: I1101 02:28:38.857666 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-config-path\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857826 kubelet[2468]: I1101 02:28:38.857675 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-hostproc\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857826 kubelet[2468]: I1101 02:28:38.857684 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-xtables-lock\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857826 kubelet[2468]: I1101 02:28:38.857693 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-host-proc-sys-net\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857826 kubelet[2468]: I1101 02:28:38.857702 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-run\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857952 kubelet[2468]: I1101 02:28:38.857711 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cni-path\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857952 kubelet[2468]: I1101 02:28:38.857720 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-bpf-maps\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857952 kubelet[2468]: I1101 02:28:38.857728 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-clustermesh-secrets\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.857952 kubelet[2468]: I1101 02:28:38.857737 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-hubble-tls\") pod \"cilium-ff8pt\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " pod="kube-system/cilium-ff8pt" Nov 1 02:28:38.861775 sshd[4700]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:38.863428 systemd[1]: sshd@27-86.109.11.55:22-147.75.109.163:60234.service: Deactivated successfully. Nov 1 02:28:38.863769 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 02:28:38.864119 systemd-logind[1563]: Session 27 logged out. Waiting for processes to exit. Nov 1 02:28:38.864718 systemd[1]: Started sshd@28-86.109.11.55:22-147.75.109.163:60246.service. Nov 1 02:28:38.865176 systemd-logind[1563]: Removed session 27. Nov 1 02:28:38.872527 kubelet[2468]: E1101 02:28:38.872488 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-9hhcq lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-ff8pt" podUID="d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" Nov 1 02:28:38.935670 sshd[4727]: Accepted publickey for core from 147.75.109.163 port 60246 ssh2: RSA SHA256:LGZ+c0Hq+wiF6pI4hwBSHaiZcbAeE7k627fjfDIAcNQ Nov 1 02:28:38.940020 sshd[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 02:28:38.949704 systemd-logind[1563]: New session 28 of user core. Nov 1 02:28:38.952195 systemd[1]: Started session-28.scope. 
Nov 1 02:28:39.373929 kubelet[2468]: E1101 02:28:39.373794 2468 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 02:28:39.562539 kubelet[2468]: I1101 02:28:39.562408 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-run\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.562539 kubelet[2468]: I1101 02:28:39.562505 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-xtables-lock\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.562990 kubelet[2468]: I1101 02:28:39.562572 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-clustermesh-secrets\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.562990 kubelet[2468]: I1101 02:28:39.562587 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.562990 kubelet[2468]: I1101 02:28:39.562602 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.562990 kubelet[2468]: I1101 02:28:39.562623 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-cgroup\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.562990 kubelet[2468]: I1101 02:28:39.562682 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.563986 kubelet[2468]: I1101 02:28:39.562769 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cni-path\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.563986 kubelet[2468]: I1101 02:28:39.562848 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cni-path" (OuterVolumeSpecName: "cni-path") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.563986 kubelet[2468]: I1101 02:28:39.562867 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-ipsec-secrets\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.563986 kubelet[2468]: I1101 02:28:39.562936 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-etc-cni-netd\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.563986 kubelet[2468]: I1101 02:28:39.563024 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hhcq\" (UniqueName: \"kubernetes.io/projected/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-kube-api-access-9hhcq\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.563986 kubelet[2468]: I1101 02:28:39.563067 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.564896 kubelet[2468]: I1101 02:28:39.563123 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-config-path\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.564896 kubelet[2468]: I1101 02:28:39.563208 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-host-proc-sys-net\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.564896 kubelet[2468]: I1101 02:28:39.563310 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-hubble-tls\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.564896 kubelet[2468]: I1101 02:28:39.563352 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.564896 kubelet[2468]: I1101 02:28:39.563423 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-bpf-maps\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.565439 kubelet[2468]: I1101 02:28:39.563495 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.565439 kubelet[2468]: I1101 02:28:39.563596 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-lib-modules\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.565439 kubelet[2468]: I1101 02:28:39.563681 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.565439 kubelet[2468]: I1101 02:28:39.563710 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-host-proc-sys-kernel\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.565439 kubelet[2468]: I1101 02:28:39.563804 2468 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-hostproc\") pod \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\" (UID: \"d3a0b652-d559-4bd8-8d1c-bcbbdee9b267\") " Nov 1 02:28:39.565983 kubelet[2468]: I1101 02:28:39.563868 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.565983 kubelet[2468]: I1101 02:28:39.563935 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-hostproc" (OuterVolumeSpecName: "hostproc") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 02:28:39.565983 kubelet[2468]: I1101 02:28:39.564030 2468 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-cgroup\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.565983 kubelet[2468]: I1101 02:28:39.564097 2468 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cni-path\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.565983 kubelet[2468]: I1101 02:28:39.564152 2468 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-etc-cni-netd\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.565983 kubelet[2468]: I1101 02:28:39.564203 2468 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-host-proc-sys-net\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.565983 kubelet[2468]: I1101 02:28:39.564252 2468 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-bpf-maps\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.566711 kubelet[2468]: I1101 02:28:39.564304 2468 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-lib-modules\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.566711 kubelet[2468]: I1101 02:28:39.564377 2468 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-run\") on node 
\"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.566711 kubelet[2468]: I1101 02:28:39.564426 2468 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-xtables-lock\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.567693 kubelet[2468]: I1101 02:28:39.567658 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 02:28:39.568189 kubelet[2468]: I1101 02:28:39.568121 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 02:28:39.568189 kubelet[2468]: I1101 02:28:39.568160 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 02:28:39.568264 kubelet[2468]: I1101 02:28:39.568216 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-kube-api-access-9hhcq" (OuterVolumeSpecName: "kube-api-access-9hhcq") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "kube-api-access-9hhcq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 02:28:39.568264 kubelet[2468]: I1101 02:28:39.568230 2468 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" (UID: "d3a0b652-d559-4bd8-8d1c-bcbbdee9b267"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 02:28:39.569737 systemd[1]: var-lib-kubelet-pods-d3a0b652\x2dd559\x2d4bd8\x2d8d1c\x2dbcbbdee9b267-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9hhcq.mount: Deactivated successfully. Nov 1 02:28:39.569804 systemd[1]: var-lib-kubelet-pods-d3a0b652\x2dd559\x2d4bd8\x2d8d1c\x2dbcbbdee9b267-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 02:28:39.569857 systemd[1]: var-lib-kubelet-pods-d3a0b652\x2dd559\x2d4bd8\x2d8d1c\x2dbcbbdee9b267-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Nov 1 02:28:39.569907 systemd[1]: var-lib-kubelet-pods-d3a0b652\x2dd559\x2d4bd8\x2d8d1c\x2dbcbbdee9b267-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Nov 1 02:28:39.665047 kubelet[2468]: I1101 02:28:39.664803 2468 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-clustermesh-secrets\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.665047 kubelet[2468]: I1101 02:28:39.664877 2468 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-ipsec-secrets\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.665047 kubelet[2468]: I1101 02:28:39.664916 2468 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9hhcq\" (UniqueName: \"kubernetes.io/projected/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-kube-api-access-9hhcq\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.665047 kubelet[2468]: I1101 02:28:39.664946 2468 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-cilium-config-path\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.665047 kubelet[2468]: I1101 02:28:39.664977 2468 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-hubble-tls\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.665047 kubelet[2468]: I1101 02:28:39.665009 2468 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-host-proc-sys-kernel\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:39.665047 kubelet[2468]: I1101 02:28:39.665038 2468 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267-hostproc\") on node \"ci-3510.3.8-n-c654b621d4\" DevicePath \"\"" Nov 1 02:28:40.273245 systemd[1]: Removed slice kubepods-burstable-podd3a0b652_d559_4bd8_8d1c_bcbbdee9b267.slice. Nov 1 02:28:40.434912 systemd[1]: Created slice kubepods-burstable-poda2844445_9802_49af_bb10_bbe2239aebc8.slice. 
Nov 1 02:28:40.572884 kubelet[2468]: I1101 02:28:40.572810 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-lib-modules\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.573691 kubelet[2468]: I1101 02:28:40.572909 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-host-proc-sys-kernel\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.573691 kubelet[2468]: I1101 02:28:40.572964 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v9dv\" (UniqueName: \"kubernetes.io/projected/a2844445-9802-49af-bb10-bbe2239aebc8-kube-api-access-7v9dv\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.573691 kubelet[2468]: I1101 02:28:40.573017 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-cilium-run\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.573691 kubelet[2468]: I1101 02:28:40.573164 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-etc-cni-netd\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.573691 kubelet[2468]: I1101 02:28:40.573249 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-xtables-lock\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.574270 kubelet[2468]: I1101 02:28:40.573307 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2844445-9802-49af-bb10-bbe2239aebc8-cilium-config-path\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.574270 kubelet[2468]: I1101 02:28:40.573355 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2844445-9802-49af-bb10-bbe2239aebc8-clustermesh-secrets\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.574270 kubelet[2468]: I1101 02:28:40.573423 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-cilium-cgroup\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.574270 kubelet[2468]: I1101 02:28:40.573469 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-cni-path\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.574270 kubelet[2468]: I1101 02:28:40.573586 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-hostproc\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.574270 kubelet[2468]: I1101 02:28:40.573675 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-bpf-maps\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.574903 kubelet[2468]: I1101 02:28:40.573727 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2844445-9802-49af-bb10-bbe2239aebc8-cilium-ipsec-secrets\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.574903 kubelet[2468]: I1101 02:28:40.573774 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2844445-9802-49af-bb10-bbe2239aebc8-hubble-tls\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.574903 kubelet[2468]: I1101 02:28:40.573852 2468 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2844445-9802-49af-bb10-bbe2239aebc8-host-proc-sys-net\") pod \"cilium-zk2wx\" (UID: \"a2844445-9802-49af-bb10-bbe2239aebc8\") " pod="kube-system/cilium-zk2wx" Nov 1 02:28:40.737717 env[1571]: time="2025-11-01T02:28:40.737581995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zk2wx,Uid:a2844445-9802-49af-bb10-bbe2239aebc8,Namespace:kube-system,Attempt:0,}" Nov 1 02:28:40.754162 env[1571]: time="2025-11-01T02:28:40.754089824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 02:28:40.754162 env[1571]: time="2025-11-01T02:28:40.754146966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 02:28:40.754162 env[1571]: time="2025-11-01T02:28:40.754154102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 02:28:40.754281 env[1571]: time="2025-11-01T02:28:40.754235219Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606 pid=4769 runtime=io.containerd.runc.v2 Nov 1 02:28:40.761960 systemd[1]: Started cri-containerd-554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606.scope. 
Nov 1 02:28:40.771518 env[1571]: time="2025-11-01T02:28:40.771466824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zk2wx,Uid:a2844445-9802-49af-bb10-bbe2239aebc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\"" Nov 1 02:28:40.772581 env[1571]: time="2025-11-01T02:28:40.772565452Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 02:28:40.798820 env[1571]: time="2025-11-01T02:28:40.798696776Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5298ff5acab0c612af48ac151c4d2768e7b00d3c6cd4e969e6e92189da68b2d5\"" Nov 1 02:28:40.799629 env[1571]: time="2025-11-01T02:28:40.799513849Z" level=info msg="StartContainer for \"5298ff5acab0c612af48ac151c4d2768e7b00d3c6cd4e969e6e92189da68b2d5\"" Nov 1 02:28:40.832548 systemd[1]: Started cri-containerd-5298ff5acab0c612af48ac151c4d2768e7b00d3c6cd4e969e6e92189da68b2d5.scope. Nov 1 02:28:40.864594 env[1571]: time="2025-11-01T02:28:40.864542145Z" level=info msg="StartContainer for \"5298ff5acab0c612af48ac151c4d2768e7b00d3c6cd4e969e6e92189da68b2d5\" returns successfully" Nov 1 02:28:40.876373 systemd[1]: cri-containerd-5298ff5acab0c612af48ac151c4d2768e7b00d3c6cd4e969e6e92189da68b2d5.scope: Deactivated successfully. Nov 1 02:28:40.903923 env[1571]: time="2025-11-01T02:28:40.903832697Z" level=info msg="shim disconnected" id=5298ff5acab0c612af48ac151c4d2768e7b00d3c6cd4e969e6e92189da68b2d5 Nov 1 02:28:40.903923 env[1571]: time="2025-11-01T02:28:40.903893206Z" level=warning msg="cleaning up after shim disconnected" id=5298ff5acab0c612af48ac151c4d2768e7b00d3c6cd4e969e6e92189da68b2d5 namespace=k8s.io Nov 1 02:28:40.903923 env[1571]: time="2025-11-01T02:28:40.903908431Z" level=info msg="cleaning up dead shim" Nov 1 02:28:40.912589 env[1571]: time="2025-11-01T02:28:40.912510318Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:28:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4857 runtime=io.containerd.runc.v2\n" Nov 1 02:28:41.411061 env[1571]: time="2025-11-01T02:28:41.410953647Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 02:28:41.427017 env[1571]: time="2025-11-01T02:28:41.426921405Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d1c8cab18ef9d76bafb955f96ccab092660d548d0c2329311ac087a0e1491f18\"" Nov 1 02:28:41.427999 env[1571]: time="2025-11-01T02:28:41.427921329Z" level=info msg="StartContainer for \"d1c8cab18ef9d76bafb955f96ccab092660d548d0c2329311ac087a0e1491f18\"" Nov 1 02:28:41.460902 systemd[1]: Started cri-containerd-d1c8cab18ef9d76bafb955f96ccab092660d548d0c2329311ac087a0e1491f18.scope. Nov 1 02:28:41.491060 env[1571]: time="2025-11-01T02:28:41.491007040Z" level=info msg="StartContainer for \"d1c8cab18ef9d76bafb955f96ccab092660d548d0c2329311ac087a0e1491f18\" returns successfully" Nov 1 02:28:41.501332 systemd[1]: cri-containerd-d1c8cab18ef9d76bafb955f96ccab092660d548d0c2329311ac087a0e1491f18.scope: Deactivated successfully. 
Nov 1 02:28:41.542145 env[1571]: time="2025-11-01T02:28:41.542033646Z" level=info msg="shim disconnected" id=d1c8cab18ef9d76bafb955f96ccab092660d548d0c2329311ac087a0e1491f18 Nov 1 02:28:41.542145 env[1571]: time="2025-11-01T02:28:41.542110844Z" level=warning msg="cleaning up after shim disconnected" id=d1c8cab18ef9d76bafb955f96ccab092660d548d0c2329311ac087a0e1491f18 namespace=k8s.io Nov 1 02:28:41.542145 env[1571]: time="2025-11-01T02:28:41.542143959Z" level=info msg="cleaning up dead shim" Nov 1 02:28:41.555954 env[1571]: time="2025-11-01T02:28:41.555834308Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:28:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4919 runtime=io.containerd.runc.v2\n" Nov 1 02:28:42.259180 kubelet[2468]: E1101 02:28:42.259038 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-hbrt6" podUID="2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2" Nov 1 02:28:42.264576 kubelet[2468]: I1101 02:28:42.264504 2468 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a0b652-d559-4bd8-8d1c-bcbbdee9b267" path="/var/lib/kubelet/pods/d3a0b652-d559-4bd8-8d1c-bcbbdee9b267/volumes" Nov 1 02:28:42.418193 env[1571]: time="2025-11-01T02:28:42.418103462Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 02:28:42.430654 env[1571]: time="2025-11-01T02:28:42.430631598Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b99ff49e4daa56a233ed121bd60d1d654fc6c2d596521150cedfc08b045d0684\"" Nov 1 02:28:42.431215 env[1571]: time="2025-11-01T02:28:42.431133296Z" level=info msg="StartContainer for \"b99ff49e4daa56a233ed121bd60d1d654fc6c2d596521150cedfc08b045d0684\"" Nov 1 02:28:42.431809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426252766.mount: Deactivated successfully. Nov 1 02:28:42.441106 systemd[1]: Started cri-containerd-b99ff49e4daa56a233ed121bd60d1d654fc6c2d596521150cedfc08b045d0684.scope. Nov 1 02:28:42.454100 env[1571]: time="2025-11-01T02:28:42.454073572Z" level=info msg="StartContainer for \"b99ff49e4daa56a233ed121bd60d1d654fc6c2d596521150cedfc08b045d0684\" returns successfully" Nov 1 02:28:42.455599 systemd[1]: cri-containerd-b99ff49e4daa56a233ed121bd60d1d654fc6c2d596521150cedfc08b045d0684.scope: Deactivated successfully. 
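The short-lived `mount-bpf-fs` container that just ran and exited above mounts the BPF filesystem so pinned maps and programs outlive agent restarts. A minimal Go equivalent of that init step, assuming the conventional `/sys/fs/bpf` mount point and that bpffs is not already mounted (a real init container checks `/proc/mounts` first, which is skipped here):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// mountBPFFS does the essential work of a "mount-bpf-fs" style init step:
// mount the BPF filesystem at the given target so pinned maps survive
// agent restarts. Minimal sketch, no idempotency check.
func mountBPFFS(target string) error {
	if err := os.MkdirAll(target, 0o755); err != nil {
		return err
	}
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		return fmt.Errorf("mount bpffs on %s: %w", target, err)
	}
	return nil
}

func main() {
	if err := mountBPFFS("/sys/fs/bpf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf")
}
```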
Nov 1 02:28:42.482316 env[1571]: time="2025-11-01T02:28:42.482219328Z" level=info msg="shim disconnected" id=b99ff49e4daa56a233ed121bd60d1d654fc6c2d596521150cedfc08b045d0684 Nov 1 02:28:42.482316 env[1571]: time="2025-11-01T02:28:42.482308220Z" level=warning msg="cleaning up after shim disconnected" id=b99ff49e4daa56a233ed121bd60d1d654fc6c2d596521150cedfc08b045d0684 namespace=k8s.io Nov 1 02:28:42.482913 env[1571]: time="2025-11-01T02:28:42.482335203Z" level=info msg="cleaning up dead shim" Nov 1 02:28:42.499664 env[1571]: time="2025-11-01T02:28:42.499543577Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:28:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4975 runtime=io.containerd.runc.v2\n" Nov 1 02:28:42.686750 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b99ff49e4daa56a233ed121bd60d1d654fc6c2d596521150cedfc08b045d0684-rootfs.mount: Deactivated successfully. Nov 1 02:28:42.892141 kubelet[2468]: I1101 02:28:42.892000 2468 setters.go:602] "Node became not ready" node="ci-3510.3.8-n-c654b621d4" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T02:28:42Z","lastTransitionTime":"2025-11-01T02:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 02:28:43.425219 env[1571]: time="2025-11-01T02:28:43.425092641Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 02:28:43.444022 env[1571]: time="2025-11-01T02:28:43.443853113Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e06902e68e4dad7cac6f3eccdcaf4a0546b00c8c49d6c48f8fd036b8b62f832c\"" Nov 1 02:28:43.445154 env[1571]: time="2025-11-01T02:28:43.445030687Z" level=info msg="StartContainer for \"e06902e68e4dad7cac6f3eccdcaf4a0546b00c8c49d6c48f8fd036b8b62f832c\"" Nov 1 02:28:43.459146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512117706.mount: Deactivated successfully. Nov 1 02:28:43.470995 systemd[1]: Started cri-containerd-e06902e68e4dad7cac6f3eccdcaf4a0546b00c8c49d6c48f8fd036b8b62f832c.scope. Nov 1 02:28:43.486867 env[1571]: time="2025-11-01T02:28:43.486837266Z" level=info msg="StartContainer for \"e06902e68e4dad7cac6f3eccdcaf4a0546b00c8c49d6c48f8fd036b8b62f832c\" returns successfully" Nov 1 02:28:43.487572 systemd[1]: cri-containerd-e06902e68e4dad7cac6f3eccdcaf4a0546b00c8c49d6c48f8fd036b8b62f832c.scope: Deactivated successfully. 
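The `setters.go` entry above records the node flipping to NotReady with reason `KubeletNotReady` while the Cilium CNI config is still missing. A hedged client-go sketch for reading that Ready condition from the API; it assumes it runs in-cluster with RBAC to get nodes, and the node name is taken from the log:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Prints the Ready condition of the node from the log, which went NotReady
// at 02:28:42 while the CNI plugin was not yet initialized. Sketch only.
func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "ci-3510.3.8-n-c654b621d4", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}
```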
Nov 1 02:28:43.515935 env[1571]: time="2025-11-01T02:28:43.515870956Z" level=info msg="shim disconnected" id=e06902e68e4dad7cac6f3eccdcaf4a0546b00c8c49d6c48f8fd036b8b62f832c Nov 1 02:28:43.515935 env[1571]: time="2025-11-01T02:28:43.515908673Z" level=warning msg="cleaning up after shim disconnected" id=e06902e68e4dad7cac6f3eccdcaf4a0546b00c8c49d6c48f8fd036b8b62f832c namespace=k8s.io Nov 1 02:28:43.515935 env[1571]: time="2025-11-01T02:28:43.515917987Z" level=info msg="cleaning up dead shim" Nov 1 02:28:43.521736 env[1571]: time="2025-11-01T02:28:43.521673432Z" level=warning msg="cleanup warnings time=\"2025-11-01T02:28:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5029 runtime=io.containerd.runc.v2\n" Nov 1 02:28:43.687371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e06902e68e4dad7cac6f3eccdcaf4a0546b00c8c49d6c48f8fd036b8b62f832c-rootfs.mount: Deactivated successfully. Nov 1 02:28:44.259998 kubelet[2468]: E1101 02:28:44.259893 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-hbrt6" podUID="2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2" Nov 1 02:28:44.375250 kubelet[2468]: E1101 02:28:44.375128 2468 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 02:28:44.434398 env[1571]: time="2025-11-01T02:28:44.434277773Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 02:28:44.451881 env[1571]: time="2025-11-01T02:28:44.451831263Z" level=info msg="CreateContainer within sandbox \"554dd866908e8cee1a4b630277a44497a17ec97bc325b7cfe9b4666238639606\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2deb74c7386b86783b2f6a9f22cad9278d5c357553cb2d2a6137e90cdcf6a74\"" Nov 1 02:28:44.452311 env[1571]: time="2025-11-01T02:28:44.452292156Z" level=info msg="StartContainer for \"b2deb74c7386b86783b2f6a9f22cad9278d5c357553cb2d2a6137e90cdcf6a74\"" Nov 1 02:28:44.461319 systemd[1]: Started cri-containerd-b2deb74c7386b86783b2f6a9f22cad9278d5c357553cb2d2a6137e90cdcf6a74.scope. 
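Between the sandbox start at 02:28:40 and this point, the log shows five containers started in order inside the cilium-zk2wx sandbox. A small Go sketch summarizing that sequence; the container names come straight from the log, while the one-line purposes describe what these Cilium steps conventionally do and are assumptions, not taken from this node:

```go
package main

import "fmt"

// The container sequence the log shows for cilium-zk2wx, in start order.
type step struct{ name, purpose string }

func main() {
	sequence := []step{
		{"mount-cgroup", "make the cgroup hierarchy visible where the agent expects it"},
		{"apply-sysctl-overwrites", "set networking sysctls needed by the datapath"},
		{"mount-bpf-fs", "mount bpffs at /sys/fs/bpf for pinned maps"},
		{"clean-cilium-state", "clear stale state left by the previous agent pod"},
		{"cilium-agent", "long-running agent; the four steps above exit immediately"},
	}
	for i, s := range sequence {
		fmt.Printf("%d. %-24s %s\n", i+1, s.name, s.purpose)
	}
}
```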
Nov 1 02:28:44.475055 env[1571]: time="2025-11-01T02:28:44.475024393Z" level=info msg="StartContainer for \"b2deb74c7386b86783b2f6a9f22cad9278d5c357553cb2d2a6137e90cdcf6a74\" returns successfully" Nov 1 02:28:44.624371 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Nov 1 02:28:45.477517 kubelet[2468]: I1101 02:28:45.477396 2468 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zk2wx" podStartSLOduration=5.477333808 podStartE2EDuration="5.477333808s" podCreationTimestamp="2025-11-01 02:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 02:28:45.477169909 +0000 UTC m=+401.296890870" watchObservedRunningTime="2025-11-01 02:28:45.477333808 +0000 UTC m=+401.297054747" Nov 1 02:28:46.259079 kubelet[2468]: E1101 02:28:46.259050 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-hbrt6" podUID="2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2" Nov 1 02:28:47.725659 systemd-networkd[1321]: lxc_health: Link UP Nov 1 02:28:47.745257 systemd-networkd[1321]: lxc_health: Gained carrier Nov 1 02:28:47.745401 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 02:28:48.258376 kubelet[2468]: E1101 02:28:48.258316 2468 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-hbrt6" podUID="2c3fe7bd-3c8f-4c71-ad03-6e99be66f5c2" Nov 1 02:28:49.538533 systemd-networkd[1321]: lxc_health: Gained IPv6LL Nov 1 02:28:53.620143 sshd[4727]: pam_unix(sshd:session): session closed for user core Nov 1 02:28:53.622076 systemd[1]: sshd@28-86.109.11.55:22-147.75.109.163:60246.service: Deactivated successfully. Nov 1 02:28:53.622691 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 02:28:53.623218 systemd-logind[1563]: Session 28 logged out. Waiting for processes to exit. Nov 1 02:28:53.623947 systemd-logind[1563]: Removed session 28.
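The final `pod_startup_latency_tracker` entry reports a startup SLO duration of about 5.48s for cilium-zk2wx. A tiny Go sketch recomputing it from the two timestamps in that entry; the result lands within a fraction of a millisecond of the logged `podStartSLOduration`, which the kubelet samples an instant later:

```go
package main

import (
	"fmt"
	"time"
)

// Recomputes the pod startup latency logged for cilium-zk2wx:
// observedRunningTime minus podCreationTimestamp.
func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // Go's default Time.String() layout, fraction optional on parse
	created, err := time.Parse(layout, "2025-11-01 02:28:40 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-11-01 02:28:45.477169909 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Printf("startup latency: %s\n", running.Sub(created)) // ~5.477169909s
}
```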