Sep 5 00:20:19.505125 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:03:18 -00 2025 Sep 5 00:20:19.505141 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 5 00:20:19.505147 kernel: BIOS-provided physical RAM map: Sep 5 00:20:19.505153 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable Sep 5 00:20:19.505157 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved Sep 5 00:20:19.505161 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved Sep 5 00:20:19.505166 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable Sep 5 00:20:19.505170 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved Sep 5 00:20:19.505174 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b24fff] usable Sep 5 00:20:19.505179 kernel: BIOS-e820: [mem 0x0000000081b25000-0x0000000081b25fff] ACPI NVS Sep 5 00:20:19.505183 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] reserved Sep 5 00:20:19.505187 kernel: BIOS-e820: [mem 0x0000000081b27000-0x000000008afccfff] usable Sep 5 00:20:19.505193 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved Sep 5 00:20:19.505197 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable Sep 5 00:20:19.505202 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS Sep 5 00:20:19.505207 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved Sep 5 00:20:19.505213 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable Sep 5 00:20:19.505218 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved Sep 5 00:20:19.505223 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Sep 5 00:20:19.505228 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved Sep 5 00:20:19.505232 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved Sep 5 00:20:19.505237 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Sep 5 00:20:19.505242 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved Sep 5 00:20:19.505247 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable Sep 5 00:20:19.505252 kernel: NX (Execute Disable) protection: active Sep 5 00:20:19.505256 kernel: APIC: Static calls initialized Sep 5 00:20:19.505261 kernel: SMBIOS 3.2.1 present. 
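The BIOS-e820 entries above are the firmware's physical memory map; the kernel carves its zones out of the ranges marked "usable". A minimal Python sketch for totalling those ranges from a saved copy of this log (the file name boot.log and the parsing helper are illustrative, not part of the boot output):

```python
import re

# Hypothetical path to a capture of the journal text shown above.
LOG_PATH = "boot.log"

# Matches entries like: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (usable|reserved|ACPI NVS|ACPI data)")

def usable_bytes(dmesg_text: str) -> int:
    total = 0
    for start, end, kind in E820.findall(dmesg_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # end address is inclusive
    return total

if __name__ == "__main__":
    with open(LOG_PATH) as log:
        print(f"{usable_bytes(log.read()) / 2**30:.2f} GiB usable")
```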
Sep 5 00:20:19.505266 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022 Sep 5 00:20:19.505272 kernel: tsc: Detected 3400.000 MHz processor Sep 5 00:20:19.505277 kernel: tsc: Detected 3399.906 MHz TSC Sep 5 00:20:19.505282 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 5 00:20:19.505287 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 5 00:20:19.505292 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000 Sep 5 00:20:19.505297 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 23), built from 10 variable MTRRs Sep 5 00:20:19.505302 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 5 00:20:19.505307 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000 Sep 5 00:20:19.505312 kernel: Using GB pages for direct mapping Sep 5 00:20:19.505317 kernel: ACPI: Early table checksum verification disabled Sep 5 00:20:19.505323 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM) Sep 5 00:20:19.505328 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013) Sep 5 00:20:19.505335 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013) Sep 5 00:20:19.505341 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527) Sep 5 00:20:19.505346 kernel: ACPI: FACS 0x000000008C66CF80 000040 Sep 5 00:20:19.505351 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013) Sep 5 00:20:19.505357 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013) Sep 5 00:20:19.505362 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013) Sep 5 00:20:19.505368 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097) Sep 5 00:20:19.505373 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 
00000000) Sep 5 00:20:19.505378 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527) Sep 5 00:20:19.505383 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527) Sep 5 00:20:19.505388 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527) Sep 5 00:20:19.505394 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:20:19.505400 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527) Sep 5 00:20:19.505405 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527) Sep 5 00:20:19.505410 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:20:19.505415 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:20:19.505420 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527) Sep 5 00:20:19.505426 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527) Sep 5 00:20:19.505431 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:20:19.505436 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013) Sep 5 00:20:19.505466 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527) Sep 5 00:20:19.505472 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013) Sep 5 00:20:19.505477 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527) Sep 5 00:20:19.505482 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000) Sep 5 00:20:19.505488 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527) Sep 5 00:20:19.505507 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013) Sep 5 00:20:19.505512 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000) Sep 5 00:20:19.505517 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000) Sep 5 00:20:19.505524 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000) Sep 5 00:20:19.505529 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 
00000000) Sep 5 00:20:19.505534 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221) Sep 5 00:20:19.505539 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783] Sep 5 00:20:19.505544 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b] Sep 5 00:20:19.505549 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf] Sep 5 00:20:19.505554 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3] Sep 5 00:20:19.505560 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb] Sep 5 00:20:19.505565 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b] Sep 5 00:20:19.505571 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db] Sep 5 00:20:19.505576 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20] Sep 5 00:20:19.505581 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543] Sep 5 00:20:19.505586 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d] Sep 5 00:20:19.505591 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a] Sep 5 00:20:19.505596 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77] Sep 5 00:20:19.505601 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25] Sep 5 00:20:19.505606 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b] Sep 5 00:20:19.505611 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361] Sep 5 00:20:19.505617 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb] Sep 5 00:20:19.505623 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd] Sep 5 00:20:19.505628 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1] Sep 5 00:20:19.505633 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb] Sep 5 00:20:19.505638 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153] Sep 5 00:20:19.505643 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe] Sep 5 00:20:19.505648 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f] Sep 5 00:20:19.505653 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73] Sep 5 00:20:19.505659 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab] Sep 5 00:20:19.505664 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e] Sep 5 00:20:19.505670 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67] Sep 5 00:20:19.505675 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97] Sep 5 00:20:19.505680 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7] Sep 5 00:20:19.505685 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7] Sep 5 00:20:19.505690 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273] Sep 5 00:20:19.505695 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9] Sep 5 00:20:19.505700 kernel: No NUMA configuration found Sep 5 00:20:19.505706 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff] Sep 5 00:20:19.505711 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff] Sep 5 00:20:19.505717 kernel: Zone ranges: Sep 5 00:20:19.505722 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 5 00:20:19.505727 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Sep 5 00:20:19.505733 kernel: Normal [mem 
0x0000000100000000-0x000000086effffff] Sep 5 00:20:19.505738 kernel: Movable zone start for each node Sep 5 00:20:19.505743 kernel: Early memory node ranges Sep 5 00:20:19.505748 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff] Sep 5 00:20:19.505753 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff] Sep 5 00:20:19.505758 kernel: node 0: [mem 0x0000000040400000-0x0000000081b24fff] Sep 5 00:20:19.505764 kernel: node 0: [mem 0x0000000081b27000-0x000000008afccfff] Sep 5 00:20:19.505770 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff] Sep 5 00:20:19.505775 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff] Sep 5 00:20:19.505780 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff] Sep 5 00:20:19.505788 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff] Sep 5 00:20:19.505795 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:20:19.505800 kernel: On node 0, zone DMA: 103 pages in unavailable ranges Sep 5 00:20:19.505806 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Sep 5 00:20:19.505812 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges Sep 5 00:20:19.505818 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges Sep 5 00:20:19.505823 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges Sep 5 00:20:19.505829 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges Sep 5 00:20:19.505835 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges Sep 5 00:20:19.505840 kernel: ACPI: PM-Timer IO Port: 0x1808 Sep 5 00:20:19.505845 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Sep 5 00:20:19.505851 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Sep 5 00:20:19.505856 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Sep 5 00:20:19.505863 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Sep 5 00:20:19.505868 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Sep 5 00:20:19.505874 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Sep 5 00:20:19.505879 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Sep 5 00:20:19.505885 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Sep 5 00:20:19.505890 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Sep 5 00:20:19.505895 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Sep 5 00:20:19.505901 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Sep 5 00:20:19.505906 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Sep 5 00:20:19.505913 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Sep 5 00:20:19.505918 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Sep 5 00:20:19.505923 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Sep 5 00:20:19.505929 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Sep 5 00:20:19.505934 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119 Sep 5 00:20:19.505940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 5 00:20:19.505945 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 5 00:20:19.505951 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 5 00:20:19.505956 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 5 00:20:19.505963 kernel: TSC deadline timer available Sep 5 00:20:19.505968 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs Sep 5 00:20:19.505974 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices Sep 5 00:20:19.505979 
kernel: Booting paravirtualized kernel on bare hardware Sep 5 00:20:19.505985 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 5 00:20:19.505991 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 5 00:20:19.505996 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144 Sep 5 00:20:19.506002 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152 Sep 5 00:20:19.506007 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 5 00:20:19.506014 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 5 00:20:19.506020 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 00:20:19.506025 kernel: random: crng init done Sep 5 00:20:19.506031 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear) Sep 5 00:20:19.506036 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Sep 5 00:20:19.506042 kernel: Fallback order for Node 0: 0 Sep 5 00:20:19.506047 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415 Sep 5 00:20:19.506053 kernel: Policy zone: Normal Sep 5 00:20:19.506059 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 00:20:19.506065 kernel: software IO TLB: area num 16. Sep 5 00:20:19.506070 kernel: Memory: 32718256K/33452980K available (14336K kernel code, 2293K rwdata, 22868K rodata, 43508K init, 1568K bss, 734464K reserved, 0K cma-reserved) Sep 5 00:20:19.506076 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 5 00:20:19.506082 kernel: ftrace: allocating 37943 entries in 149 pages Sep 5 00:20:19.506087 kernel: ftrace: allocated 149 pages with 4 groups Sep 5 00:20:19.506093 kernel: Dynamic Preempt: voluntary Sep 5 00:20:19.506098 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 00:20:19.506104 kernel: rcu: RCU event tracing is enabled. Sep 5 00:20:19.506111 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 5 00:20:19.506117 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 00:20:19.506122 kernel: Rude variant of Tasks RCU enabled. Sep 5 00:20:19.506128 kernel: Tracing variant of Tasks RCU enabled. Sep 5 00:20:19.506133 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 00:20:19.506139 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 5 00:20:19.506144 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16 Sep 5 00:20:19.506150 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 00:20:19.506155 kernel: Console: colour VGA+ 80x25 Sep 5 00:20:19.506161 kernel: printk: console [tty0] enabled Sep 5 00:20:19.506167 kernel: printk: console [ttyS1] enabled Sep 5 00:20:19.506172 kernel: ACPI: Core revision 20230628 Sep 5 00:20:19.506178 kernel: hpet: HPET dysfunctional in PC10. Force disabled. 
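The "Kernel command line:" entry above is what the kernel actually parsed, including the dm-verity parameters (mount.usr=, verity.usrhash=) that Flatcar uses for the /usr partition. A small sketch, assuming the plain key[=value] token format shown, that folds such a command line into a dictionary; it reads /proc/cmdline at runtime, and for repeated keys such as rootflags= and console= the last occurrence wins:

```python
def parse_cmdline(cmdline: str) -> dict[str, str | None]:
    params: dict[str, str | None] = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None  # bare flags keep None
    return params

if __name__ == "__main__":
    with open("/proc/cmdline") as f:   # same text the kernel logs at boot
        params = parse_cmdline(f.read())
    print(params.get("root"), params.get("flatcar.oem.id"), params.get("verity.usrhash"))
```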
Sep 5 00:20:19.506183 kernel: APIC: Switch to symmetric I/O mode setup Sep 5 00:20:19.506189 kernel: DMAR: Host address width 39 Sep 5 00:20:19.506194 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1 Sep 5 00:20:19.506200 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da Sep 5 00:20:19.506206 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff Sep 5 00:20:19.506211 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0 Sep 5 00:20:19.506218 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000 Sep 5 00:20:19.506223 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. Sep 5 00:20:19.506229 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode Sep 5 00:20:19.506234 kernel: x2apic enabled Sep 5 00:20:19.506240 kernel: APIC: Switched APIC routing to: cluster x2apic Sep 5 00:20:19.506246 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns Sep 5 00:20:19.506251 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906) Sep 5 00:20:19.506257 kernel: CPU0: Thermal monitoring enabled (TM1) Sep 5 00:20:19.506262 kernel: process: using mwait in idle threads Sep 5 00:20:19.506269 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Sep 5 00:20:19.506274 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Sep 5 00:20:19.506279 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 5 00:20:19.506285 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Sep 5 00:20:19.506290 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Sep 5 00:20:19.506296 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Sep 5 00:20:19.506301 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Sep 5 00:20:19.506307 kernel: RETBleed: Mitigation: Enhanced IBRS Sep 5 00:20:19.506312 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 5 00:20:19.506318 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 5 00:20:19.506323 kernel: TAA: Mitigation: TSX disabled Sep 5 00:20:19.506329 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers Sep 5 00:20:19.506335 kernel: SRBDS: Mitigation: Microcode Sep 5 00:20:19.506340 kernel: GDS: Mitigation: Microcode Sep 5 00:20:19.506346 kernel: active return thunk: its_return_thunk Sep 5 00:20:19.506351 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 5 00:20:19.506357 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 5 00:20:19.506362 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 5 00:20:19.506367 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 5 00:20:19.506373 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Sep 5 00:20:19.506378 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Sep 5 00:20:19.506384 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 5 00:20:19.506390 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Sep 5 00:20:19.506395 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Sep 5 00:20:19.506401 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format. 
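The Spectre/TAA/MMIO Stale Data/SRBDS/GDS lines above summarise the mitigation state chosen at boot. The same information is exported while the system runs under the standard sysfs directory /sys/devices/system/cpu/vulnerabilities; this illustrative sketch simply prints that view for comparison with the log:

```python
from pathlib import Path

def cpu_vulnerabilities() -> dict[str, str]:
    base = Path("/sys/devices/system/cpu/vulnerabilities")
    return {p.name: p.read_text().strip() for p in sorted(base.iterdir())}

if __name__ == "__main__":
    for name, state in cpu_vulnerabilities().items():
        print(f"{name}: {state}")
```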
Sep 5 00:20:19.506406 kernel: Freeing SMP alternatives memory: 32K Sep 5 00:20:19.506412 kernel: pid_max: default: 32768 minimum: 301 Sep 5 00:20:19.506417 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 00:20:19.506423 kernel: landlock: Up and running. Sep 5 00:20:19.506428 kernel: SELinux: Initializing. Sep 5 00:20:19.506433 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:20:19.506439 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:20:19.506462 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Sep 5 00:20:19.506469 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 5 00:20:19.506474 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 5 00:20:19.506480 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 5 00:20:19.506501 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver. Sep 5 00:20:19.506507 kernel: ... version: 4 Sep 5 00:20:19.506512 kernel: ... bit width: 48 Sep 5 00:20:19.506518 kernel: ... generic registers: 4 Sep 5 00:20:19.506523 kernel: ... value mask: 0000ffffffffffff Sep 5 00:20:19.506529 kernel: ... max period: 00007fffffffffff Sep 5 00:20:19.506535 kernel: ... fixed-purpose events: 3 Sep 5 00:20:19.506541 kernel: ... event mask: 000000070000000f Sep 5 00:20:19.506546 kernel: signal: max sigframe size: 2032 Sep 5 00:20:19.506552 kernel: Estimated ratio of average max frequency by base frequency (times 1024): 1445 Sep 5 00:20:19.506557 kernel: rcu: Hierarchical SRCU implementation. Sep 5 00:20:19.506563 kernel: rcu: Max phase no-delay instances is 400. Sep 5 00:20:19.506568 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter. Sep 5 00:20:19.506574 kernel: smp: Bringing up secondary CPUs ... Sep 5 00:20:19.506579 kernel: smpboot: x86: Booting SMP configuration: Sep 5 00:20:19.506586 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14 #15 Sep 5 00:20:19.506592 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
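The Transient Scheduler Attacks / MMIO Stale Data notice above is qualified by "SMT on". As a hedged aside, the runtime SMT state it refers to can be read from the standard sysfs files under /sys/devices/system/cpu/smt (present on current x86 kernels); a minimal sketch:

```python
from pathlib import Path

def smt_state() -> tuple[str, str]:
    base = Path("/sys/devices/system/cpu/smt")
    control = (base / "control").read_text().strip()   # e.g. "on", "off", "forceoff"
    active = (base / "active").read_text().strip()     # "1" when sibling threads run
    return control, active

if __name__ == "__main__":
    control, active = smt_state()
    print(f"SMT control={control} active={active}")
```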
Sep 5 00:20:19.506597 kernel: smp: Brought up 1 node, 16 CPUs Sep 5 00:20:19.506603 kernel: smpboot: Max logical packages: 1 Sep 5 00:20:19.506608 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS) Sep 5 00:20:19.506614 kernel: devtmpfs: initialized Sep 5 00:20:19.506619 kernel: x86/mm: Memory block size: 128MB Sep 5 00:20:19.506625 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b25000-0x81b25fff] (4096 bytes) Sep 5 00:20:19.506631 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes) Sep 5 00:20:19.506637 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 00:20:19.506643 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 5 00:20:19.506648 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 00:20:19.506654 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 00:20:19.506659 kernel: audit: initializing netlink subsys (disabled) Sep 5 00:20:19.506665 kernel: audit: type=2000 audit(1757031614.042:1): state=initialized audit_enabled=0 res=1 Sep 5 00:20:19.506670 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 00:20:19.506675 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 5 00:20:19.506681 kernel: cpuidle: using governor menu Sep 5 00:20:19.506687 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 00:20:19.506693 kernel: dca service started, version 1.12.1 Sep 5 00:20:19.506698 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Sep 5 00:20:19.506704 kernel: PCI: Using configuration type 1 for base access Sep 5 00:20:19.506709 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance' Sep 5 00:20:19.506715 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
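The audit record above embeds a Unix timestamp, audit(1757031614.042:1), which decodes to 2025-09-05 00:20:14 UTC and lines up with the RTC time set later in this log. A tiny sketch of that conversion (the helper name is ours):

```python
from datetime import datetime, timezone

def audit_time(stamp: str) -> datetime:
    # "1757031614.042:1" is <epoch seconds>.<millis>:<serial>
    seconds = float(stamp.split(":")[0])
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

if __name__ == "__main__":
    # Value taken from the audit line above; prints 2025-09-05 00:20:14.042000+00:00
    print(audit_time("1757031614.042:1"))
```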
Sep 5 00:20:19.506720 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 00:20:19.506726 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 00:20:19.506731 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 00:20:19.506738 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 00:20:19.506743 kernel: ACPI: Added _OSI(Module Device) Sep 5 00:20:19.506749 kernel: ACPI: Added _OSI(Processor Device) Sep 5 00:20:19.506754 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 00:20:19.506760 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded Sep 5 00:20:19.506765 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:20:19.506771 kernel: ACPI: SSDT 0xFFFF8B1DC1E76800 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527) Sep 5 00:20:19.506776 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:20:19.506782 kernel: ACPI: SSDT 0xFFFF8B1DC10B0000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527) Sep 5 00:20:19.506788 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:20:19.506794 kernel: ACPI: SSDT 0xFFFF8B1DC109B800 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527) Sep 5 00:20:19.506799 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:20:19.506805 kernel: ACPI: SSDT 0xFFFF8B1DC10B5800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527) Sep 5 00:20:19.506810 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:20:19.506815 kernel: ACPI: SSDT 0xFFFF8B1DC10BB000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527) Sep 5 00:20:19.506821 kernel: ACPI: Dynamic OEM Table Load: Sep 5 00:20:19.506826 kernel: ACPI: SSDT 0xFFFF8B1DC178E400 00030A (v02 PmRef ApCst 00003000 INTL 20160527) Sep 5 00:20:19.506832 kernel: ACPI: _OSC evaluated successfully for all CPUs Sep 5 00:20:19.506839 kernel: ACPI: Interpreter enabled Sep 5 00:20:19.506844 kernel: ACPI: PM: (supports S0 S5) Sep 5 00:20:19.506850 kernel: ACPI: Using IOAPIC for interrupt routing Sep 5 00:20:19.506855 kernel: HEST: Enabling Firmware First mode for corrected errors. Sep 5 00:20:19.506861 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14. Sep 5 00:20:19.506866 kernel: HEST: Table parsing has been initialized. Sep 5 00:20:19.506872 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC. 
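The HugeTLB lines above register the 1.00 GiB and 2.00 MiB pools with zero pages pre-allocated. Their runtime counterparts live under the standard sysfs tree /sys/kernel/mm/hugepages; this illustrative sketch lists each pool with its current nr_hugepages value:

```python
from pathlib import Path

def hugepage_pools() -> dict[str, int]:
    base = Path("/sys/kernel/mm/hugepages")
    return {pool.name: int((pool / "nr_hugepages").read_text())
            for pool in sorted(base.iterdir())}

if __name__ == "__main__":
    for pool, pages in hugepage_pools().items():
        print(pool, pages)   # e.g. hugepages-2048kB 0
```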
Sep 5 00:20:19.506877 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 5 00:20:19.506883 kernel: PCI: Using E820 reservations for host bridge windows Sep 5 00:20:19.506889 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F Sep 5 00:20:19.506895 kernel: ACPI: \_SB_.PCI0.XDCI.USBC: New power resource Sep 5 00:20:19.506901 kernel: ACPI: \_SB_.PCI0.SAT0.VOL0.V0PR: New power resource Sep 5 00:20:19.506906 kernel: ACPI: \_SB_.PCI0.SAT0.VOL1.V1PR: New power resource Sep 5 00:20:19.506911 kernel: ACPI: \_SB_.PCI0.SAT0.VOL2.V2PR: New power resource Sep 5 00:20:19.506917 kernel: ACPI: \_SB_.PCI0.CNVW.WRST: New power resource Sep 5 00:20:19.506923 kernel: ACPI: \_TZ_.FN00: New power resource Sep 5 00:20:19.506928 kernel: ACPI: \_TZ_.FN01: New power resource Sep 5 00:20:19.506934 kernel: ACPI: \_TZ_.FN02: New power resource Sep 5 00:20:19.506939 kernel: ACPI: \_TZ_.FN03: New power resource Sep 5 00:20:19.506945 kernel: ACPI: \_TZ_.FN04: New power resource Sep 5 00:20:19.506951 kernel: ACPI: \PIN_: New power resource Sep 5 00:20:19.506956 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe]) Sep 5 00:20:19.507035 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 00:20:19.507088 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER] Sep 5 00:20:19.507138 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR] Sep 5 00:20:19.507146 kernel: PCI host bridge to bus 0000:00 Sep 5 00:20:19.507199 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 5 00:20:19.507244 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 5 00:20:19.507289 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 5 00:20:19.507332 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window] Sep 5 00:20:19.507376 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window] Sep 5 00:20:19.507418 kernel: pci_bus 0000:00: root bus resource [bus 00-fe] Sep 5 00:20:19.507501 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000 Sep 5 00:20:19.507581 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400 Sep 5 00:20:19.507635 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold Sep 5 00:20:19.507689 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000 Sep 5 00:20:19.507741 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit] Sep 5 00:20:19.507797 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000 Sep 5 00:20:19.507849 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit] Sep 5 00:20:19.507903 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330 Sep 5 00:20:19.507953 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit] Sep 5 00:20:19.508002 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold Sep 5 00:20:19.508055 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000 Sep 5 00:20:19.508106 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit] Sep 5 00:20:19.508156 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit] Sep 5 00:20:19.508212 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000 Sep 5 00:20:19.508263 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 5 00:20:19.508319 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000 Sep 5 00:20:19.508369 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit] 
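The PCI enumeration that starts above prints one "[vendor:device] type .. class 0x......" entry per function. A minimal sketch, assuming that line format and a hypothetical boot.log capture, that pulls those triples out of the text:

```python
import re

# Matches entries like: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
PCI_NEW = re.compile(
    r"pci ([0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-7]): \[([0-9a-f]{4}:[0-9a-f]{4})\] "
    r"type (\d{2}) class 0x([0-9a-f]{6})"
)

def pci_functions(dmesg_text: str) -> list[tuple[str, str, str]]:
    return [(bdf, ids, cls) for bdf, ids, _type, cls in PCI_NEW.findall(dmesg_text)]

if __name__ == "__main__":
    with open("boot.log") as log:      # hypothetical capture of the log above
        for bdf, ids, cls in pci_functions(log.read()):
            print(bdf, ids, cls)
```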
Sep 5 00:20:19.508423 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000 Sep 5 00:20:19.508499 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit] Sep 5 00:20:19.508565 kernel: pci 0000:00:16.0: PME# supported from D3hot Sep 5 00:20:19.508622 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000 Sep 5 00:20:19.508681 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit] Sep 5 00:20:19.508733 kernel: pci 0000:00:16.1: PME# supported from D3hot Sep 5 00:20:19.508787 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000 Sep 5 00:20:19.508836 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit] Sep 5 00:20:19.508885 kernel: pci 0000:00:16.4: PME# supported from D3hot Sep 5 00:20:19.508941 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601 Sep 5 00:20:19.508992 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff] Sep 5 00:20:19.509042 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff] Sep 5 00:20:19.509090 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057] Sep 5 00:20:19.509140 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043] Sep 5 00:20:19.509189 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f] Sep 5 00:20:19.509238 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff] Sep 5 00:20:19.509290 kernel: pci 0000:00:17.0: PME# supported from D3hot Sep 5 00:20:19.509347 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400 Sep 5 00:20:19.509398 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold Sep 5 00:20:19.509457 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400 Sep 5 00:20:19.509543 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold Sep 5 00:20:19.509597 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400 Sep 5 00:20:19.509648 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold Sep 5 00:20:19.509703 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400 Sep 5 00:20:19.509754 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold Sep 5 00:20:19.509810 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400 Sep 5 00:20:19.509861 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold Sep 5 00:20:19.509915 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000 Sep 5 00:20:19.509965 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit] Sep 5 00:20:19.510019 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100 Sep 5 00:20:19.510072 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500 Sep 5 00:20:19.510124 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit] Sep 5 00:20:19.510173 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf] Sep 5 00:20:19.510229 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000 Sep 5 00:20:19.510278 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff] Sep 5 00:20:19.510335 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000 Sep 5 00:20:19.510388 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref] Sep 5 00:20:19.510439 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref] Sep 5 00:20:19.510532 kernel: pci 0000:01:00.0: PME# supported from D3cold Sep 5 00:20:19.510583 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 5 00:20:19.510635 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 5 00:20:19.510692 kernel: pci 0000:01:00.1: [15b3:1015] type 00 
class 0x020000 Sep 5 00:20:19.510743 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref] Sep 5 00:20:19.510794 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref] Sep 5 00:20:19.510843 kernel: pci 0000:01:00.1: PME# supported from D3cold Sep 5 00:20:19.510897 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref] Sep 5 00:20:19.510949 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs) Sep 5 00:20:19.510999 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 5 00:20:19.511049 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 5 00:20:19.511098 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 00:20:19.511148 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 5 00:20:19.511203 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect Sep 5 00:20:19.511258 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000 Sep 5 00:20:19.511309 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff] Sep 5 00:20:19.511359 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f] Sep 5 00:20:19.511413 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff] Sep 5 00:20:19.511486 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Sep 5 00:20:19.511551 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 5 00:20:19.511601 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 5 00:20:19.511650 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 5 00:20:19.511709 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect Sep 5 00:20:19.511760 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000 Sep 5 00:20:19.511812 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff] Sep 5 00:20:19.511862 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f] Sep 5 00:20:19.511913 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff] Sep 5 00:20:19.511964 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold Sep 5 00:20:19.512015 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 5 00:20:19.512068 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 5 00:20:19.512118 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 5 00:20:19.512169 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 5 00:20:19.512240 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400 Sep 5 00:20:19.512307 kernel: pci 0000:06:00.0: enabling Extended Tags Sep 5 00:20:19.512373 kernel: pci 0000:06:00.0: supports D1 D2 Sep 5 00:20:19.512454 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 5 00:20:19.512507 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 5 00:20:19.512560 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 5 00:20:19.512611 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:20:19.512669 kernel: pci_bus 0000:07: extended config space not accessible Sep 5 00:20:19.512731 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000 Sep 5 00:20:19.512787 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff] Sep 5 00:20:19.512840 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff] Sep 5 00:20:19.512895 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f] Sep 5 00:20:19.512952 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 5 00:20:19.513005 kernel: pci 0000:07:00.0: supports D1 D2 Sep 5 
00:20:19.513059 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 5 00:20:19.513111 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 5 00:20:19.513162 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff] Sep 5 00:20:19.513215 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:20:19.513223 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0 Sep 5 00:20:19.513231 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1 Sep 5 00:20:19.513238 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0 Sep 5 00:20:19.513243 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0 Sep 5 00:20:19.513249 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0 Sep 5 00:20:19.513255 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0 Sep 5 00:20:19.513261 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0 Sep 5 00:20:19.513267 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0 Sep 5 00:20:19.513273 kernel: iommu: Default domain type: Translated Sep 5 00:20:19.513279 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 5 00:20:19.513285 kernel: PCI: Using ACPI for IRQ routing Sep 5 00:20:19.513292 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 5 00:20:19.513298 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff] Sep 5 00:20:19.513304 kernel: e820: reserve RAM buffer [mem 0x81b25000-0x83ffffff] Sep 5 00:20:19.513309 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff] Sep 5 00:20:19.513315 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff] Sep 5 00:20:19.513321 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff] Sep 5 00:20:19.513327 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff] Sep 5 00:20:19.513378 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device Sep 5 00:20:19.513435 kernel: pci 0000:07:00.0: vgaarb: bridge control possible Sep 5 00:20:19.513523 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 5 00:20:19.513533 kernel: vgaarb: loaded Sep 5 00:20:19.513539 kernel: clocksource: Switched to clocksource tsc-early Sep 5 00:20:19.513545 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 00:20:19.513551 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 00:20:19.513557 kernel: pnp: PnP ACPI init Sep 5 00:20:19.513608 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved Sep 5 00:20:19.513663 kernel: pnp 00:02: [dma 0 disabled] Sep 5 00:20:19.513714 kernel: pnp 00:03: [dma 0 disabled] Sep 5 00:20:19.513766 kernel: system 00:04: [io 0x0680-0x069f] has been reserved Sep 5 00:20:19.513813 kernel: system 00:04: [io 0x164e-0x164f] has been reserved Sep 5 00:20:19.513863 kernel: system 00:05: [mem 0xfed10000-0xfed17fff] has been reserved Sep 5 00:20:19.513910 kernel: system 00:05: [mem 0xfed18000-0xfed18fff] has been reserved Sep 5 00:20:19.513961 kernel: system 00:05: [mem 0xfed19000-0xfed19fff] has been reserved Sep 5 00:20:19.514009 kernel: system 00:05: [mem 0xe0000000-0xefffffff] has been reserved Sep 5 00:20:19.514056 kernel: system 00:05: [mem 0xfed20000-0xfed3ffff] has been reserved Sep 5 00:20:19.514101 kernel: system 00:05: [mem 0xfed90000-0xfed93fff] could not be reserved Sep 5 00:20:19.514147 kernel: system 00:05: [mem 0xfed45000-0xfed8ffff] has been reserved Sep 5 00:20:19.514193 kernel: system 00:05: [mem 0xfee00000-0xfeefffff] could not be reserved Sep 5 00:20:19.514244 kernel: system 00:06: [io 0x1800-0x18fe] could not be reserved Sep 5 
00:20:19.514292 kernel: system 00:06: [mem 0xfd000000-0xfd69ffff] has been reserved Sep 5 00:20:19.514341 kernel: system 00:06: [mem 0xfd6c0000-0xfd6cffff] has been reserved Sep 5 00:20:19.514387 kernel: system 00:06: [mem 0xfd6f0000-0xfdffffff] has been reserved Sep 5 00:20:19.514432 kernel: system 00:06: [mem 0xfe000000-0xfe01ffff] could not be reserved Sep 5 00:20:19.514506 kernel: system 00:06: [mem 0xfe200000-0xfe7fffff] has been reserved Sep 5 00:20:19.514568 kernel: system 00:06: [mem 0xff000000-0xffffffff] has been reserved Sep 5 00:20:19.514618 kernel: system 00:07: [io 0x2000-0x20fe] has been reserved Sep 5 00:20:19.514629 kernel: pnp: PnP ACPI: found 9 devices Sep 5 00:20:19.514635 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 5 00:20:19.514641 kernel: NET: Registered PF_INET protocol family Sep 5 00:20:19.514647 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:20:19.514653 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear) Sep 5 00:20:19.514659 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 00:20:19.514665 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:20:19.514671 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Sep 5 00:20:19.514677 kernel: TCP: Hash tables configured (established 262144 bind 65536) Sep 5 00:20:19.514684 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 5 00:20:19.514690 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 5 00:20:19.514696 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 00:20:19.514702 kernel: NET: Registered PF_XDP protocol family Sep 5 00:20:19.514753 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit] Sep 5 00:20:19.514805 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit] Sep 5 00:20:19.514857 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit] Sep 5 00:20:19.514909 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 5 00:20:19.514962 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 5 00:20:19.515017 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref] Sep 5 00:20:19.515070 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref] Sep 5 00:20:19.515121 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Sep 5 00:20:19.515171 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff] Sep 5 00:20:19.515222 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 00:20:19.515273 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02] Sep 5 00:20:19.515327 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03] Sep 5 00:20:19.515377 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff] Sep 5 00:20:19.515429 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff] Sep 5 00:20:19.515482 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04] Sep 5 00:20:19.515533 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff] Sep 5 00:20:19.515584 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff] Sep 5 00:20:19.515638 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05] Sep 5 00:20:19.515690 kernel: pci 0000:06:00.0: PCI bridge to [bus 07] Sep 5 00:20:19.515741 kernel: pci 0000:06:00.0: bridge window [io 
0x3000-0x3fff] Sep 5 00:20:19.515793 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:20:19.515845 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07] Sep 5 00:20:19.515895 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff] Sep 5 00:20:19.515946 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff] Sep 5 00:20:19.515992 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc Sep 5 00:20:19.516037 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 5 00:20:19.516084 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 5 00:20:19.516129 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 5 00:20:19.516174 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window] Sep 5 00:20:19.516219 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window] Sep 5 00:20:19.516270 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff] Sep 5 00:20:19.516317 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref] Sep 5 00:20:19.516368 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff] Sep 5 00:20:19.516418 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff] Sep 5 00:20:19.516474 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Sep 5 00:20:19.516521 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff] Sep 5 00:20:19.516574 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff] Sep 5 00:20:19.516620 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff] Sep 5 00:20:19.516669 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff] Sep 5 00:20:19.516721 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff] Sep 5 00:20:19.516729 kernel: PCI: CLS 64 bytes, default 64 Sep 5 00:20:19.516735 kernel: DMAR: No ATSR found Sep 5 00:20:19.516742 kernel: DMAR: No SATC found Sep 5 00:20:19.516748 kernel: DMAR: dmar0: Using Queued invalidation Sep 5 00:20:19.516798 kernel: pci 0000:00:00.0: Adding to iommu group 0 Sep 5 00:20:19.516850 kernel: pci 0000:00:01.0: Adding to iommu group 1 Sep 5 00:20:19.516901 kernel: pci 0000:00:08.0: Adding to iommu group 2 Sep 5 00:20:19.516952 kernel: pci 0000:00:12.0: Adding to iommu group 3 Sep 5 00:20:19.517004 kernel: pci 0000:00:14.0: Adding to iommu group 4 Sep 5 00:20:19.517056 kernel: pci 0000:00:14.2: Adding to iommu group 4 Sep 5 00:20:19.517106 kernel: pci 0000:00:15.0: Adding to iommu group 5 Sep 5 00:20:19.517158 kernel: pci 0000:00:15.1: Adding to iommu group 5 Sep 5 00:20:19.517208 kernel: pci 0000:00:16.0: Adding to iommu group 6 Sep 5 00:20:19.517259 kernel: pci 0000:00:16.1: Adding to iommu group 6 Sep 5 00:20:19.517309 kernel: pci 0000:00:16.4: Adding to iommu group 6 Sep 5 00:20:19.517359 kernel: pci 0000:00:17.0: Adding to iommu group 7 Sep 5 00:20:19.517426 kernel: pci 0000:00:1b.0: Adding to iommu group 8 Sep 5 00:20:19.517515 kernel: pci 0000:00:1b.4: Adding to iommu group 9 Sep 5 00:20:19.517566 kernel: pci 0000:00:1b.5: Adding to iommu group 10 Sep 5 00:20:19.517616 kernel: pci 0000:00:1c.0: Adding to iommu group 11 Sep 5 00:20:19.517666 kernel: pci 0000:00:1c.3: Adding to iommu group 12 Sep 5 00:20:19.517716 kernel: pci 0000:00:1e.0: Adding to iommu group 13 Sep 5 00:20:19.517765 kernel: pci 0000:00:1f.0: Adding to iommu group 14 Sep 5 00:20:19.517844 kernel: pci 0000:00:1f.4: Adding to iommu group 14 Sep 5 00:20:19.517896 kernel: pci 0000:00:1f.5: Adding to iommu group 14 Sep 5 00:20:19.517948 kernel: pci 0000:01:00.0: Adding 
to iommu group 1 Sep 5 00:20:19.517999 kernel: pci 0000:01:00.1: Adding to iommu group 1 Sep 5 00:20:19.518051 kernel: pci 0000:03:00.0: Adding to iommu group 15 Sep 5 00:20:19.518102 kernel: pci 0000:04:00.0: Adding to iommu group 16 Sep 5 00:20:19.518152 kernel: pci 0000:06:00.0: Adding to iommu group 17 Sep 5 00:20:19.518206 kernel: pci 0000:07:00.0: Adding to iommu group 17 Sep 5 00:20:19.518214 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O Sep 5 00:20:19.518222 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 5 00:20:19.518229 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB) Sep 5 00:20:19.518234 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer Sep 5 00:20:19.518240 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules Sep 5 00:20:19.518246 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules Sep 5 00:20:19.518252 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules Sep 5 00:20:19.518304 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found) Sep 5 00:20:19.518313 kernel: Initialise system trusted keyrings Sep 5 00:20:19.518320 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0 Sep 5 00:20:19.518326 kernel: Key type asymmetric registered Sep 5 00:20:19.518332 kernel: Asymmetric key parser 'x509' registered Sep 5 00:20:19.518338 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 5 00:20:19.518344 kernel: io scheduler mq-deadline registered Sep 5 00:20:19.518349 kernel: io scheduler kyber registered Sep 5 00:20:19.518355 kernel: io scheduler bfq registered Sep 5 00:20:19.518405 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121 Sep 5 00:20:19.518460 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122 Sep 5 00:20:19.518549 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123 Sep 5 00:20:19.518600 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124 Sep 5 00:20:19.518650 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125 Sep 5 00:20:19.518700 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126 Sep 5 00:20:19.518754 kernel: thermal LNXTHERM:00: registered as thermal_zone0 Sep 5 00:20:19.518763 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C) Sep 5 00:20:19.518769 kernel: ERST: Error Record Serialization Table (ERST) support is initialized. Sep 5 00:20:19.518775 kernel: pstore: Using crash dump compression: deflate Sep 5 00:20:19.518782 kernel: pstore: Registered erst as persistent store backend Sep 5 00:20:19.518788 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 5 00:20:19.518794 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 00:20:19.518800 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 5 00:20:19.518806 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Sep 5 00:20:19.518812 kernel: hpet_acpi_add: no address or irqs in _CRS Sep 5 00:20:19.518862 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16) Sep 5 00:20:19.518871 kernel: i8042: PNP: No PS/2 controller found. 
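The "Adding to iommu group N" lines above (note that 0000:01:00.0 and 0000:01:00.1 share group 1 with their root port) have a runtime mirror under /sys/kernel/iommu_groups. This illustrative sketch prints each group with its member devices from that standard sysfs tree:

```python
from pathlib import Path

def iommu_groups() -> dict[int, list[str]]:
    groups: dict[int, list[str]] = {}
    for group in Path("/sys/kernel/iommu_groups").iterdir():
        groups[int(group.name)] = sorted(p.name for p in (group / "devices").iterdir())
    return groups

if __name__ == "__main__":
    for gid, devices in sorted(iommu_groups().items()):
        print(gid, *devices)
```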
Sep 5 00:20:19.518919 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Sep 5 00:20:19.518965 kernel: rtc_cmos rtc_cmos: registered as rtc0 Sep 5 00:20:19.519012 kernel: rtc_cmos rtc_cmos: setting system clock to 2025-09-05T00:20:18 UTC (1757031618) Sep 5 00:20:19.519058 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Sep 5 00:20:19.519066 kernel: intel_pstate: Intel P-state driver initializing Sep 5 00:20:19.519072 kernel: intel_pstate: Disabling energy efficiency optimization Sep 5 00:20:19.519078 kernel: intel_pstate: HWP enabled Sep 5 00:20:19.519084 kernel: NET: Registered PF_INET6 protocol family Sep 5 00:20:19.519092 kernel: Segment Routing with IPv6 Sep 5 00:20:19.519098 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 00:20:19.519104 kernel: NET: Registered PF_PACKET protocol family Sep 5 00:20:19.519110 kernel: Key type dns_resolver registered Sep 5 00:20:19.519115 kernel: microcode: Current revision: 0x00000100 Sep 5 00:20:19.519121 kernel: microcode: Updated early from: 0x000000f4 Sep 5 00:20:19.519127 kernel: microcode: Microcode Update Driver: v2.2. Sep 5 00:20:19.519151 kernel: IPI shorthand broadcast: enabled Sep 5 00:20:19.519157 kernel: sched_clock: Marking stable (2497251610, 1442268580)->(4549100252, -609580062) Sep 5 00:20:19.519165 kernel: registered taskstats version 1 Sep 5 00:20:19.519184 kernel: Loading compiled-in X.509 certificates Sep 5 00:20:19.519190 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: f395d469db1520f53594f6c4948c5f8002e6cc8b' Sep 5 00:20:19.519196 kernel: Key type .fscrypt registered Sep 5 00:20:19.519201 kernel: Key type fscrypt-provisioning registered Sep 5 00:20:19.519207 kernel: ima: Allocated hash algorithm: sha1 Sep 5 00:20:19.519213 kernel: ima: No architecture policies found Sep 5 00:20:19.519219 kernel: clk: Disabling unused clocks Sep 5 00:20:19.519224 kernel: Freeing unused kernel image (initmem) memory: 43508K Sep 5 00:20:19.519231 kernel: Write protecting the kernel read-only data: 38912k Sep 5 00:20:19.519237 kernel: Freeing unused kernel image (rodata/data gap) memory: 1708K Sep 5 00:20:19.519243 kernel: Run /init as init process Sep 5 00:20:19.519249 kernel: with arguments: Sep 5 00:20:19.519255 kernel: /init Sep 5 00:20:19.519261 kernel: with environment: Sep 5 00:20:19.519266 kernel: HOME=/ Sep 5 00:20:19.519272 kernel: TERM=linux Sep 5 00:20:19.519278 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 00:20:19.519285 systemd[1]: Successfully made /usr/ read-only. Sep 5 00:20:19.519293 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 00:20:19.519299 systemd[1]: Detected architecture x86-64. Sep 5 00:20:19.519305 systemd[1]: Running in initrd. Sep 5 00:20:19.519311 systemd[1]: No hostname configured, using default hostname. Sep 5 00:20:19.519317 systemd[1]: Hostname set to . Sep 5 00:20:19.519323 systemd[1]: Initializing machine ID from random generator. Sep 5 00:20:19.519330 systemd[1]: Queued start job for default target initrd.target. Sep 5 00:20:19.519336 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
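The "systemd 256.8 running in system mode" entry above lists compile-time features as +NAME/-NAME tokens. A small sketch, using an excerpt of that list, that splits the tokens into enabled and disabled sets:

```python
def systemd_features(flags: str) -> tuple[set[str], set[str]]:
    tokens = flags.split()
    enabled = {t[1:] for t in tokens if t.startswith("+")}
    disabled = {t[1:] for t in tokens if t.startswith("-")}
    return enabled, disabled

if __name__ == "__main__":
    # Excerpt of the flag list from the "systemd 256.8 running" entry above.
    flags = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL"
    enabled, disabled = systemd_features(flags)
    print("enabled: ", sorted(enabled))
    print("disabled:", sorted(disabled))
```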
Sep 5 00:20:19.519342 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:20:19.519349 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 00:20:19.519355 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:20:19.519361 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 00:20:19.519367 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 00:20:19.519375 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 00:20:19.519381 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 00:20:19.519387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:20:19.519393 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:20:19.519399 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:20:19.519409 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:20:19.519415 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:20:19.519451 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:20:19.519459 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:20:19.519480 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:20:19.519488 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 00:20:19.519495 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 5 00:20:19.519501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:20:19.519537 kernel: tsc: Refined TSC clocksource calibration: 3407.998 MHz Sep 5 00:20:19.519546 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd208cfc, max_idle_ns: 440795283699 ns Sep 5 00:20:19.519554 kernel: clocksource: Switched to clocksource tsc Sep 5 00:20:19.519561 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:20:19.519582 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:20:19.519608 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:20:19.519617 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 00:20:19.519623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:20:19.519654 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 00:20:19.519676 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:20:19.519683 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:20:19.519737 systemd-journald[266]: Collecting audit messages is disabled. Sep 5 00:20:19.519816 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:20:19.519841 systemd-journald[266]: Journal started Sep 5 00:20:19.519894 systemd-journald[266]: Runtime Journal (/run/log/journal/0ec89340bda046369807a0036480a961) is 8M, max 639.9M, 631.9M free. Sep 5 00:20:19.531366 systemd-modules-load[268]: Inserted module 'overlay' Sep 5 00:20:19.568141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:20:19.568153 systemd[1]: Started systemd-journald.service - Journal Service. 
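The .device units being waited for above use systemd's unit-name escaping: "/" becomes "-" and characters such as "-" are hex-escaped to \x2d. A rough sketch of that mapping for device paths (simplified; the authoritative rules are implemented by systemd-escape, and details such as leading-dot handling are omitted here):

```python
def device_unit_name(path: str) -> str:
    escaped = []
    for ch in path.lstrip("/"):
        if ch.isalnum() or ch in "_.":
            escaped.append(ch)
        elif ch == "/":
            escaped.append("-")
        else:
            escaped.append(f"\\x{ord(ch):02x}")   # e.g. "-" -> \x2d
    return "".join(escaped) + ".device"

if __name__ == "__main__":
    # Matches the unit name in the log above: dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
    print(device_unit_name("/dev/disk/by-label/EFI-SYSTEM"))
```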
Sep 5 00:20:19.568161 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:20:19.569038 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:20:19.569183 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:20:19.569272 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:20:19.570222 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:20:19.574821 systemd-modules-load[268]: Inserted module 'br_netfilter' Sep 5 00:20:19.575482 kernel: Bridge firewalling registered Sep 5 00:20:19.575646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:20:19.595182 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:20:19.676431 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:20:19.714018 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:20:19.727050 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:20:19.769671 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:20:19.781082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:20:19.805767 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:20:19.820842 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:20:19.843204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:20:19.864179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:20:19.899810 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 00:20:19.912730 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:20:19.939755 dracut-cmdline[308]: dracut-dracut-053 Sep 5 00:20:19.939755 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=564344e0ae537bb1f195be96fecdd60e9e7ec1fe4e3ba9f8a7a8da5d9135455e Sep 5 00:20:19.972954 systemd-resolved[312]: Positive Trust Anchors: Sep 5 00:20:20.012541 kernel: SCSI subsystem initialized Sep 5 00:20:19.972962 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:20:20.046483 kernel: Loading iSCSI transport class v2.0-870. 
Sep 5 00:20:20.046497 kernel: iscsi: registered transport (tcp) Sep 5 00:20:19.972995 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:20:20.106875 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:20:20.106894 kernel: QLogic iSCSI HBA Driver Sep 5 00:20:19.975113 systemd-resolved[312]: Defaulting to hostname 'linux'. Sep 5 00:20:19.975863 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:20:20.001586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:20:20.083073 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:20:20.128665 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:20:20.236476 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 00:20:20.236496 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:20:20.245293 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 00:20:20.281512 kernel: raid6: avx2x4 gen() 47163 MB/s Sep 5 00:20:20.302513 kernel: raid6: avx2x2 gen() 53819 MB/s Sep 5 00:20:20.328578 kernel: raid6: avx2x1 gen() 45218 MB/s Sep 5 00:20:20.328596 kernel: raid6: using algorithm avx2x2 gen() 53819 MB/s Sep 5 00:20:20.355674 kernel: raid6: .... xor() 32443 MB/s, rmw enabled Sep 5 00:20:20.355690 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:20:20.376480 kernel: xor: automatically using best checksumming function avx Sep 5 00:20:20.475479 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:20:20.481436 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:20:20.489887 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:20:20.537119 systemd-udevd[496]: Using default interface naming scheme 'v255'. Sep 5 00:20:20.540727 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:20:20.558697 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:20:20.602756 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Sep 5 00:20:20.619626 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:20:20.643779 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:20:20.705846 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:20:20.744565 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 5 00:20:20.744581 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 5 00:20:20.744588 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:20:20.744599 kernel: libata version 3.00 loaded. Sep 5 00:20:20.724619 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 5 00:20:20.776598 kernel: ACPI: bus type USB registered Sep 5 00:20:20.776614 kernel: usbcore: registered new interface driver usbfs Sep 5 00:20:20.776625 kernel: usbcore: registered new interface driver hub Sep 5 00:20:20.776636 kernel: usbcore: registered new device driver usb Sep 5 00:20:20.776651 kernel: PTP clock support registered Sep 5 00:20:20.746263 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:20:21.048505 kernel: AVX2 version of gcm_enc/dec engaged. Sep 5 00:20:21.048522 kernel: AES CTR mode by8 optimization enabled Sep 5 00:20:21.048531 kernel: ahci 0000:00:17.0: version 3.0 Sep 5 00:20:21.048626 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Sep 5 00:20:21.048636 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 00:20:21.048706 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Sep 5 00:20:21.048774 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Sep 5 00:20:21.048844 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. Sep 5 00:20:21.048853 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Sep 5 00:20:21.048919 kernel: scsi host0: ahci Sep 5 00:20:21.048987 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Sep 5 00:20:21.049055 kernel: scsi host1: ahci Sep 5 00:20:21.049121 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Sep 5 00:20:21.049186 kernel: scsi host2: ahci Sep 5 00:20:21.049251 kernel: igb 0000:03:00.0: added PHC on eth0 Sep 5 00:20:21.049322 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 00:20:21.049389 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:1e Sep 5 00:20:21.049461 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Sep 5 00:20:21.049530 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Sep 5 00:20:21.049596 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Sep 5 00:20:21.049659 kernel: scsi host3: ahci Sep 5 00:20:21.049725 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Sep 5 00:20:21.049791 kernel: scsi host4: ahci Sep 5 00:20:21.049854 kernel: hub 1-0:1.0: USB hub found Sep 5 00:20:21.049929 kernel: scsi host5: ahci Sep 5 00:20:21.049994 kernel: hub 1-0:1.0: 16 ports detected Sep 5 00:20:21.050063 kernel: igb 0000:04:00.0: added PHC on eth1 Sep 5 00:20:21.050131 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Sep 5 00:20:21.050198 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:70:d2:1f Sep 5 00:20:21.050265 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Sep 5 00:20:21.050329 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Sep 5 00:20:21.050393 kernel: scsi host6: ahci Sep 5 00:20:21.050460 kernel: hub 2-0:1.0: USB hub found Sep 5 00:20:21.050535 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 127 Sep 5 00:20:21.050543 kernel: hub 2-0:1.0: 10 ports detected Sep 5 00:20:21.050614 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 127 Sep 5 00:20:21.050623 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Sep 5 00:20:21.050689 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 127 Sep 5 00:20:21.050698 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 127 Sep 5 00:20:21.050705 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 127 Sep 5 00:20:21.050713 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 127 Sep 5 00:20:21.050720 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 127 Sep 5 00:20:20.746334 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:20:20.929529 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:20:21.059552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:20:21.059655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:20:21.129494 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Sep 5 00:20:21.129584 kernel: mlx5_core 0000:01:00.0: firmware version: 14.28.2006 Sep 5 00:20:21.129661 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 00:20:21.106076 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:20:21.156574 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Sep 5 00:20:21.158679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:20:21.159322 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:20:21.193130 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:20:21.206455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:20:21.237482 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:20:21.243560 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:20:21.298567 kernel: hub 1-14:1.0: USB hub found Sep 5 00:20:21.298671 kernel: hub 1-14:1.0: 4 ports detected Sep 5 00:20:21.255776 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:20:21.288607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:20:21.323810 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Sep 5 00:20:21.433521 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:20:21.433534 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 5 00:20:21.433541 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:20:21.433548 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:20:21.433555 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 00:20:21.433562 kernel: ata7: SATA link down (SStatus 0 SControl 300) Sep 5 00:20:21.433573 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Sep 5 00:20:21.433580 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 00:20:21.433587 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 00:20:21.433681 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Sep 5 00:20:21.433690 kernel: mlx5_core 0000:01:00.0: Port module event: module 0, Cable plugged Sep 5 00:20:21.433766 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 00:20:21.433774 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Sep 5 00:20:21.440647 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:20:21.491498 kernel: ata1.00: Features: NCQ-prio Sep 5 00:20:21.491510 kernel: ata2.00: Features: NCQ-prio Sep 5 00:20:21.491518 kernel: ata2.00: configured for UDMA/133 Sep 5 00:20:21.491526 kernel: ata1.00: configured for UDMA/133 Sep 5 00:20:21.491533 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 00:20:21.491617 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Sep 5 00:20:21.497844 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 00:20:21.497862 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 00:20:21.497947 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 00:20:21.510046 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 5 00:20:21.510133 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Sep 5 00:20:21.515452 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 5 00:20:21.527563 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Sep 5 00:20:21.527651 kernel: sd 1:0:0:0: [sdb] Write Protect is off Sep 5 00:20:21.527719 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Sep 5 00:20:21.532795 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Sep 5 00:20:21.532883 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 00:20:21.538517 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Sep 5 00:20:21.538604 kernel: sd 1:0:0:0: [sdb] Preferred minimum I/O size 4096 bytes Sep 5 00:20:21.538672 kernel: sd 0:0:0:0: [sda] Preferred minimum I/O size 4096 bytes Sep 5 00:20:21.562787 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 00:20:21.562803 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Sep 5 00:20:21.562823 kernel: ata1.00: Enabling discard_zeroes_data Sep 5 00:20:21.593446 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 5 00:20:21.593535 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 00:20:21.604748 kernel: GPT:9289727 != 937703087 Sep 5 00:20:21.611025 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 00:20:21.614880 kernel: GPT:9289727 != 937703087 Sep 5 00:20:21.620292 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 5 00:20:21.625545 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 5 00:20:21.630715 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Sep 5 00:20:21.630804 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 00:20:21.657341 kernel: mlx5_core 0000:01:00.1: firmware version: 14.28.2006 Sep 5 00:20:21.657458 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Sep 5 00:20:21.678449 kernel: BTRFS: device fsid 185ffa67-4184-4488-b7c8-7c0711a63b2d devid 1 transid 38 /dev/sdb3 scanned by (udev-worker) (565) Sep 5 00:20:21.678480 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sdb6 scanned by (udev-worker) (551) Sep 5 00:20:21.682897 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Micron_5300_MTFDDAK480TDT ROOT. Sep 5 00:20:21.694449 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 00:20:21.708453 kernel: usbcore: registered new interface driver usbhid Sep 5 00:20:21.708473 kernel: usbhid: USB HID core driver Sep 5 00:20:21.723512 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Sep 5 00:20:21.740428 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Micron_5300_MTFDDAK480TDT EFI-SYSTEM. Sep 5 00:20:21.757638 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 00:20:21.800892 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Sep 5 00:20:21.800993 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Sep 5 00:20:21.813804 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Sep 5 00:20:21.823570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Micron_5300_MTFDDAK480TDT USR-A. Sep 5 00:20:21.841822 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 5 00:20:21.883594 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:20:21.923527 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 00:20:21.923539 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 5 00:20:21.923547 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 00:20:21.923554 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 5 00:20:21.923603 disk-uuid[707]: Primary Header is updated. Sep 5 00:20:21.923603 disk-uuid[707]: Secondary Entries is updated. Sep 5 00:20:21.923603 disk-uuid[707]: Secondary Header is updated. Sep 5 00:20:21.961526 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Sep 5 00:20:21.961628 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Sep 5 00:20:22.180542 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Sep 5 00:20:22.192447 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth1 Sep 5 00:20:22.205652 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth0 Sep 5 00:20:22.907910 kernel: ata2.00: Enabling discard_zeroes_data Sep 5 00:20:22.916001 disk-uuid[708]: The operation has completed successfully. Sep 5 00:20:22.925564 kernel: sdb: sdb1 sdb2 sdb3 sdb4 sdb6 sdb7 sdb9 Sep 5 00:20:22.962551 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 5 00:20:22.962601 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:20:22.993779 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:20:23.018558 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 5 00:20:23.018617 sh[732]: Success Sep 5 00:20:23.057760 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:20:23.091580 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 5 00:20:23.099741 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 00:20:23.154089 kernel: BTRFS info (device dm-0): first mount of filesystem 185ffa67-4184-4488-b7c8-7c0711a63b2d Sep 5 00:20:23.154104 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:20:23.154112 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 00:20:23.161120 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:20:23.166966 kernel: BTRFS info (device dm-0): using free space tree Sep 5 00:20:23.180502 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 5 00:20:23.183148 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:20:23.192929 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 00:20:23.204626 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 00:20:23.229514 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 00:20:23.265479 kernel: BTRFS info (device sdb6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 5 00:20:23.265500 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:20:23.272448 kernel: BTRFS info (device sdb6): using free space tree Sep 5 00:20:23.282447 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 5 00:20:23.282463 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 5 00:20:23.301449 kernel: BTRFS info (device sdb6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 5 00:20:23.301992 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 00:20:23.302608 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 00:20:23.356195 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:20:23.365654 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:20:23.392775 systemd-networkd[912]: lo: Link UP Sep 5 00:20:23.392779 systemd-networkd[912]: lo: Gained carrier Sep 5 00:20:23.395604 systemd-networkd[912]: Enumeration completed Sep 5 00:20:23.420765 ignition[791]: Ignition 2.20.0 Sep 5 00:20:23.395678 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:20:23.420769 ignition[791]: Stage: fetch-offline Sep 5 00:20:23.396272 systemd-networkd[912]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:20:23.420792 ignition[791]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:20:23.416590 systemd[1]: Reached target network.target - Network. 
Sep 5 00:20:23.420797 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 5 00:20:23.423183 unknown[791]: fetched base config from "system" Sep 5 00:20:23.420850 ignition[791]: parsed url from cmdline: "" Sep 5 00:20:23.423187 unknown[791]: fetched user config from "system" Sep 5 00:20:23.420852 ignition[791]: no config URL provided Sep 5 00:20:23.423457 systemd-networkd[912]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:20:23.420854 ignition[791]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 00:20:23.432812 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:20:23.420877 ignition[791]: parsing config with SHA512: 5446650ba44f16f48d9f353db4ec0c869a7384722f5242fa1091e48bc8f5d0050a1b3a8157d2d4d1d2e2f45993d886d6179b6acd3be4e5fd982a3a7c233813ef Sep 5 00:20:23.451677 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 5 00:20:23.423395 ignition[791]: fetch-offline: fetch-offline passed Sep 5 00:20:23.453702 systemd-networkd[912]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:20:23.423398 ignition[791]: POST message to Packet Timeline Sep 5 00:20:23.471982 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 00:20:23.423400 ignition[791]: POST Status error: resource requires networking Sep 5 00:20:23.636690 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Sep 5 00:20:23.627868 systemd-networkd[912]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:20:23.423440 ignition[791]: Ignition finished successfully Sep 5 00:20:23.498337 ignition[925]: Ignition 2.20.0 Sep 5 00:20:23.498341 ignition[925]: Stage: kargs Sep 5 00:20:23.498442 ignition[925]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:20:23.498452 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 5 00:20:23.498945 ignition[925]: kargs: kargs passed Sep 5 00:20:23.498947 ignition[925]: POST message to Packet Timeline Sep 5 00:20:23.498958 ignition[925]: GET https://metadata.packet.net/metadata: attempt #1 Sep 5 00:20:23.499314 ignition[925]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38458->[::1]:53: read: connection refused Sep 5 00:20:23.699936 ignition[925]: GET https://metadata.packet.net/metadata: attempt #2 Sep 5 00:20:23.700338 ignition[925]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:50879->[::1]:53: read: connection refused Sep 5 00:20:23.817484 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Sep 5 00:20:23.818999 systemd-networkd[912]: eno1: Link UP Sep 5 00:20:23.819140 systemd-networkd[912]: eno2: Link UP Sep 5 00:20:23.819265 systemd-networkd[912]: enp1s0f0np0: Link UP Sep 5 00:20:23.819418 systemd-networkd[912]: enp1s0f0np0: Gained carrier Sep 5 00:20:23.828690 systemd-networkd[912]: enp1s0f1np1: Link UP Sep 5 00:20:23.860646 systemd-networkd[912]: enp1s0f0np0: DHCPv4 address 139.178.90.135/31, gateway 139.178.90.134 acquired from 145.40.83.140 Sep 5 00:20:24.100522 ignition[925]: GET https://metadata.packet.net/metadata: attempt #3 Sep 5 00:20:24.102122 ignition[925]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38806->[::1]:53: read: connection refused Sep 5 
00:20:24.679236 systemd-networkd[912]: enp1s0f1np1: Gained carrier Sep 5 00:20:24.902777 ignition[925]: GET https://metadata.packet.net/metadata: attempt #4 Sep 5 00:20:24.903973 ignition[925]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:55354->[::1]:53: read: connection refused Sep 5 00:20:25.126911 systemd-networkd[912]: enp1s0f0np0: Gained IPv6LL Sep 5 00:20:26.086942 systemd-networkd[912]: enp1s0f1np1: Gained IPv6LL Sep 5 00:20:26.505718 ignition[925]: GET https://metadata.packet.net/metadata: attempt #5 Sep 5 00:20:26.506790 ignition[925]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:36622->[::1]:53: read: connection refused Sep 5 00:20:29.709303 ignition[925]: GET https://metadata.packet.net/metadata: attempt #6 Sep 5 00:20:30.782767 ignition[925]: GET result: OK Sep 5 00:20:31.167842 ignition[925]: Ignition finished successfully Sep 5 00:20:31.173236 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 00:20:31.196742 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 5 00:20:31.202883 ignition[944]: Ignition 2.20.0 Sep 5 00:20:31.202888 ignition[944]: Stage: disks Sep 5 00:20:31.202992 ignition[944]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:20:31.202999 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 5 00:20:31.203542 ignition[944]: disks: disks passed Sep 5 00:20:31.203546 ignition[944]: POST message to Packet Timeline Sep 5 00:20:31.203558 ignition[944]: GET https://metadata.packet.net/metadata: attempt #1 Sep 5 00:20:32.242883 ignition[944]: GET result: OK Sep 5 00:20:33.129273 ignition[944]: Ignition finished successfully Sep 5 00:20:33.133021 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 00:20:33.148773 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:20:33.167830 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:20:33.188701 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:20:33.209778 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:20:33.230752 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:20:33.260722 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:20:33.297937 systemd-fsck[966]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 5 00:20:33.307900 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:20:33.336663 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:20:33.411369 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:20:33.428676 kernel: EXT4-fs (sdb9): mounted filesystem 86dd2c20-900e-43ec-8fda-e9f0f484a013 r/w with ordered data mode. Quota mode: none. Sep 5 00:20:33.420897 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 5 00:20:33.443651 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 5 00:20:33.467489 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sdb6 scanned by mount (975) Sep 5 00:20:33.485047 kernel: BTRFS info (device sdb6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 5 00:20:33.485063 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:20:33.490930 kernel: BTRFS info (device sdb6): using free space tree Sep 5 00:20:33.506091 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 5 00:20:33.506107 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 5 00:20:33.507730 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 5 00:20:33.517228 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 5 00:20:33.527287 systemd[1]: Starting flatcar-static-network.service - Flatcar Static Network Agent... Sep 5 00:20:33.557677 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 5 00:20:33.557701 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:20:33.619772 coreos-metadata[992]: Sep 05 00:20:33.595 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 5 00:20:33.582560 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:20:33.658528 coreos-metadata[993]: Sep 05 00:20:33.595 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 5 00:20:33.608782 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 5 00:20:33.638719 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 5 00:20:33.689499 initrd-setup-root[1007]: cut: /sysroot/etc/passwd: No such file or directory Sep 5 00:20:33.699569 initrd-setup-root[1014]: cut: /sysroot/etc/group: No such file or directory Sep 5 00:20:33.709549 initrd-setup-root[1021]: cut: /sysroot/etc/shadow: No such file or directory Sep 5 00:20:33.719524 initrd-setup-root[1028]: cut: /sysroot/etc/gshadow: No such file or directory Sep 5 00:20:33.749580 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 5 00:20:33.766697 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 5 00:20:33.771384 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 5 00:20:33.802730 kernel: BTRFS info (device sdb6): last unmount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 5 00:20:33.803151 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 5 00:20:33.825551 ignition[1095]: INFO : Ignition 2.20.0 Sep 5 00:20:33.825551 ignition[1095]: INFO : Stage: mount Sep 5 00:20:33.839647 ignition[1095]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:20:33.839647 ignition[1095]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 5 00:20:33.839647 ignition[1095]: INFO : mount: mount passed Sep 5 00:20:33.839647 ignition[1095]: INFO : POST message to Packet Timeline Sep 5 00:20:33.839647 ignition[1095]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 5 00:20:33.833212 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 5 00:20:34.617336 coreos-metadata[992]: Sep 05 00:20:34.617 INFO Fetch successful Sep 5 00:20:34.651372 coreos-metadata[992]: Sep 05 00:20:34.651 INFO wrote hostname ci-4230.2.2-n-de5468c6d2 to /sysroot/etc/hostname Sep 5 00:20:34.652569 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Sep 5 00:20:34.828246 ignition[1095]: INFO : GET result: OK Sep 5 00:20:35.287166 ignition[1095]: INFO : Ignition finished successfully Sep 5 00:20:35.289275 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 5 00:20:35.950257 coreos-metadata[993]: Sep 05 00:20:35.950 INFO Fetch successful Sep 5 00:20:35.989752 systemd[1]: flatcar-static-network.service: Deactivated successfully. Sep 5 00:20:35.989807 systemd[1]: Finished flatcar-static-network.service - Flatcar Static Network Agent. Sep 5 00:20:36.016651 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 5 00:20:36.027645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 5 00:20:36.087526 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sdb6 scanned by mount (1122) Sep 5 00:20:36.087555 kernel: BTRFS info (device sdb6): first mount of filesystem 66b85247-a711-4bbf-a14c-62367abde12c Sep 5 00:20:36.095616 kernel: BTRFS info (device sdb6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:20:36.101501 kernel: BTRFS info (device sdb6): using free space tree Sep 5 00:20:36.116660 kernel: BTRFS info (device sdb6): enabling ssd optimizations Sep 5 00:20:36.116676 kernel: BTRFS info (device sdb6): auto enabling async discard Sep 5 00:20:36.118553 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 5 00:20:36.143413 ignition[1139]: INFO : Ignition 2.20.0 Sep 5 00:20:36.143413 ignition[1139]: INFO : Stage: files Sep 5 00:20:36.159715 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:20:36.159715 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 5 00:20:36.159715 ignition[1139]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:20:36.159715 ignition[1139]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:20:36.159715 ignition[1139]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:20:36.159715 ignition[1139]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:20:36.159715 ignition[1139]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:20:36.159715 ignition[1139]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:20:36.159715 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:20:36.159715 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 5 00:20:36.147081 unknown[1139]: wrote ssh authorized keys file for user: core Sep 5 00:20:36.293678 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:20:36.364947 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:20:36.364947 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 00:20:36.397758 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 5 00:20:36.665805 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 5 00:20:36.817251 ignition[1139]: INFO : files: createFilesystemsFiles: 
createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 5 00:20:36.817251 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:20:36.848751 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 5 00:20:37.532726 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 5 00:20:38.331711 ignition[1139]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:20:38.331711 ignition[1139]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 5 00:20:38.360768 ignition[1139]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:20:38.360768 ignition[1139]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:20:38.360768 ignition[1139]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 5 00:20:38.360768 ignition[1139]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:20:38.360768 ignition[1139]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:20:38.360768 ignition[1139]: 
INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:20:38.360768 ignition[1139]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:20:38.360768 ignition[1139]: INFO : files: files passed Sep 5 00:20:38.360768 ignition[1139]: INFO : POST message to Packet Timeline Sep 5 00:20:38.360768 ignition[1139]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 5 00:20:39.296138 ignition[1139]: INFO : GET result: OK Sep 5 00:20:39.827407 ignition[1139]: INFO : Ignition finished successfully Sep 5 00:20:39.830868 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 00:20:39.860738 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:20:39.872167 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:20:39.882956 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 00:20:39.883013 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:20:39.922650 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:20:39.940962 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 00:20:39.971718 initrd-setup-root-after-ignition[1177]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:20:39.971718 initrd-setup-root-after-ignition[1177]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:20:39.985826 initrd-setup-root-after-ignition[1181]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:20:39.975920 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:20:40.050668 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:20:40.050756 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:20:40.059108 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:20:40.089830 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:20:40.111036 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:20:40.120817 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:20:40.199852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:20:40.225820 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:20:40.241224 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:20:40.269761 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:20:40.269910 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:20:40.299826 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:20:40.299998 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:20:40.329204 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:20:40.351175 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:20:40.370191 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:20:40.391169 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Sep 5 00:20:40.413176 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:20:40.434080 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:20:40.454038 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:20:40.475114 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:20:40.496203 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:20:40.517175 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:20:40.535972 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:20:40.536398 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:20:40.564171 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:20:40.585195 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:20:40.606954 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:20:40.607425 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:20:40.630074 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:20:40.630508 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:20:40.662089 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:20:40.662570 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:20:40.682289 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:20:40.701936 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:20:40.705683 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:20:40.723068 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:20:40.743197 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:20:40.763160 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:20:40.763495 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:20:40.784228 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:20:40.784552 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:20:40.807210 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:20:40.807665 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:20:40.826169 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:20:40.922741 ignition[1202]: INFO : Ignition 2.20.0 Sep 5 00:20:40.922741 ignition[1202]: INFO : Stage: umount Sep 5 00:20:40.922741 ignition[1202]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:20:40.922741 ignition[1202]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Sep 5 00:20:40.922741 ignition[1202]: INFO : umount: umount passed Sep 5 00:20:40.922741 ignition[1202]: INFO : POST message to Packet Timeline Sep 5 00:20:40.922741 ignition[1202]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Sep 5 00:20:40.826599 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:20:40.844184 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 5 00:20:40.844610 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 5 00:20:40.877745 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 5 00:20:40.896397 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:20:40.904872 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:20:40.904971 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:20:40.941825 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:20:40.941896 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:20:40.977909 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:20:40.978825 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:20:40.978910 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:20:41.000068 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:20:41.000233 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:20:41.937658 ignition[1202]: INFO : GET result: OK Sep 5 00:20:42.844289 ignition[1202]: INFO : Ignition finished successfully Sep 5 00:20:42.847704 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:20:42.848002 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:20:42.863803 systemd[1]: Stopped target network.target - Network. Sep 5 00:20:42.879707 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:20:42.879886 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 00:20:42.897858 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:20:42.898035 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:20:42.916850 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:20:42.917024 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:20:42.934887 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:20:42.935060 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:20:42.953854 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:20:42.954038 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:20:42.973209 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:20:42.991923 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:20:43.010595 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:20:43.010885 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:20:43.033256 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 5 00:20:43.033374 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:20:43.033426 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:20:43.050365 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 5 00:20:43.050818 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:20:43.050849 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:20:43.077626 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:20:43.079688 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 00:20:43.079750 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:20:43.108898 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Sep 5 00:20:43.109073 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:20:43.130231 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:20:43.130397 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:20:43.149843 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:20:43.150017 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:20:43.170290 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:20:43.194063 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 5 00:20:43.194269 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 5 00:20:43.195392 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:20:43.195776 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:20:43.222972 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:20:43.223004 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:20:43.241770 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:20:43.241805 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:20:43.268663 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:20:43.268735 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:20:43.300160 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:20:43.300340 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:20:43.339650 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:20:43.339821 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:20:43.400658 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:20:43.671617 systemd-journald[266]: Received SIGTERM from PID 1 (systemd). Sep 5 00:20:43.438532 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:20:43.438573 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:20:43.461704 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 00:20:43.461770 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:20:43.480843 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:20:43.480994 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:20:43.502963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:20:43.503143 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:20:43.527722 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 5 00:20:43.527908 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 5 00:20:43.529231 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:20:43.529499 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 00:20:43.545646 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Sep 5 00:20:43.545888 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:20:43.568826 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:20:43.599685 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:20:43.624216 systemd[1]: Switching root. Sep 5 00:20:43.816640 systemd-journald[266]: Journal stopped Sep 5 00:20:45.576625 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:20:45.576641 kernel: SELinux: policy capability open_perms=1 Sep 5 00:20:45.576649 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:20:45.576655 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:20:45.576662 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:20:45.576668 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:20:45.576675 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:20:45.576681 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:20:45.576687 kernel: audit: type=1403 audit(1757031643.907:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:20:45.576694 systemd[1]: Successfully loaded SELinux policy in 75.307ms. Sep 5 00:20:45.576703 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.566ms. Sep 5 00:20:45.576710 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 00:20:45.576717 systemd[1]: Detected architecture x86-64. Sep 5 00:20:45.576724 systemd[1]: Detected first boot. Sep 5 00:20:45.576731 systemd[1]: Hostname set to . Sep 5 00:20:45.576739 systemd[1]: Initializing machine ID from random generator. Sep 5 00:20:45.576746 zram_generator::config[1259]: No configuration found. Sep 5 00:20:45.576754 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:20:45.576761 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 5 00:20:45.576768 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:20:45.576775 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 00:20:45.576782 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:20:45.576790 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:20:45.576797 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:20:45.576804 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:20:45.576811 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:20:45.576818 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:20:45.576825 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:20:45.576833 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:20:45.576842 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:20:45.576849 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:20:45.576856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Sep 5 00:20:45.576863 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:20:45.576870 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:20:45.576877 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:20:45.576884 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:20:45.576891 systemd[1]: Expecting device dev-ttyS1.device - /dev/ttyS1... Sep 5 00:20:45.576899 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:20:45.576906 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:20:45.576913 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:20:45.576922 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:20:45.576930 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:20:45.576937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:20:45.576944 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:20:45.576951 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:20:45.576959 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:20:45.576967 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:20:45.576974 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:20:45.576981 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 5 00:20:45.576988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:20:45.576997 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:20:45.577004 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:20:45.577011 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:20:45.577019 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:20:45.577026 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:20:45.577033 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 00:20:45.577040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:20:45.577048 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:20:45.577056 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:20:45.577064 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:20:45.577071 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:20:45.577079 systemd[1]: Reached target machines.target - Containers. Sep 5 00:20:45.577086 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:20:45.577093 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:20:45.577101 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:20:45.577108 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Sep 5 00:20:45.577116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:20:45.577124 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:20:45.577131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:20:45.577138 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:20:45.577146 kernel: ACPI: bus type drm_connector registered Sep 5 00:20:45.577153 kernel: fuse: init (API version 7.39) Sep 5 00:20:45.577160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:20:45.577167 kernel: loop: module loaded Sep 5 00:20:45.577173 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:20:45.577182 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:20:45.577190 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:20:45.577197 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:20:45.577204 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:20:45.577212 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:20:45.577219 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:20:45.577238 systemd-journald[1363]: Collecting audit messages is disabled. Sep 5 00:20:45.577256 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:20:45.577264 systemd-journald[1363]: Journal started Sep 5 00:20:45.577279 systemd-journald[1363]: Runtime Journal (/run/log/journal/dfa4bbf53ea14b34a5088528411058ac) is 8M, max 639.9M, 631.9M free. Sep 5 00:20:44.399415 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:20:44.411344 systemd[1]: Unnecessary job was removed for dev-sdb6.device - /dev/sdb6. Sep 5 00:20:44.411599 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:20:45.604496 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:20:45.615498 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:20:45.647503 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 5 00:20:45.667479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:20:45.688664 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:20:45.688693 systemd[1]: Stopped verity-setup.service. Sep 5 00:20:45.714485 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:20:45.722484 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:20:45.731928 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:20:45.741631 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:20:45.752720 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 00:20:45.763725 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:20:45.773715 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Sep 5 00:20:45.783694 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:20:45.793804 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:20:45.804838 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:20:45.815878 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:20:45.816066 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:20:45.828083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:20:45.828383 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:20:45.840440 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:20:45.840968 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:20:45.852418 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:20:45.852972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:20:45.866410 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:20:45.866955 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:20:45.878413 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:20:45.879035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:20:45.890686 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:20:45.901491 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:20:45.913585 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:20:45.925490 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 5 00:20:45.937491 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:20:45.973935 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:20:46.004758 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:20:46.017587 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:20:46.027673 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:20:46.027693 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:20:46.028349 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 5 00:20:46.049371 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:20:46.061636 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:20:46.071722 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:20:46.072996 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:20:46.083085 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:20:46.093568 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:20:46.094247 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 5 00:20:46.096910 systemd-journald[1363]: Time spent on flushing to /var/log/journal/dfa4bbf53ea14b34a5088528411058ac is 12.857ms for 1375 entries. Sep 5 00:20:46.096910 systemd-journald[1363]: System Journal (/var/log/journal/dfa4bbf53ea14b34a5088528411058ac) is 8M, max 195.6M, 187.6M free. Sep 5 00:20:46.119939 systemd-journald[1363]: Received client request to flush runtime journal. Sep 5 00:20:46.111581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:20:46.112317 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:20:46.122257 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:20:46.135245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:20:46.147450 kernel: loop0: detected capacity change from 0 to 229808 Sep 5 00:20:46.153347 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 00:20:46.167023 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 5 00:20:46.167057 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Sep 5 00:20:46.170479 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:20:46.171169 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:20:46.182599 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:20:46.193630 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:20:46.204707 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:20:46.212449 kernel: loop1: detected capacity change from 0 to 8 Sep 5 00:20:46.220714 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:20:46.231805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:20:46.241664 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:20:46.255772 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:20:46.265515 kernel: loop2: detected capacity change from 0 to 147912 Sep 5 00:20:46.282655 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 5 00:20:46.294234 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:20:46.304324 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:20:46.304897 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 5 00:20:46.316671 udevadm[1404]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 00:20:46.325581 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:20:46.336522 kernel: loop3: detected capacity change from 0 to 138176 Sep 5 00:20:46.354600 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:20:46.362140 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Sep 5 00:20:46.362150 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Sep 5 00:20:46.366337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
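Note: the journald messages above record the runtime journal (/run/log/journal) being flushed into the persistent system journal under /var/log/journal, each capped at the sizes printed (8M in use, ~195.6M maximum for the system journal). As orientation only, the sketch below shows how such limits are usually inspected and tuned with standard systemd tooling; the commands and option values are assumptions for a comparable host, not taken from this log.

    # Illustrative only; not part of this boot log.
    journalctl --disk-usage            # space used by active + archived journal files
    # /etc/systemd/journald.conf (assumed example values)
    [Journal]
    Storage=persistent                 # keep logs under /var/log/journal across reboots
    SystemMaxUse=200M                  # explicit cap, comparable to the ~195.6M limit computed above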
Sep 5 00:20:46.404491 kernel: loop4: detected capacity change from 0 to 229808 Sep 5 00:20:46.429921 kernel: loop5: detected capacity change from 0 to 8 Sep 5 00:20:46.429962 kernel: loop6: detected capacity change from 0 to 147912 Sep 5 00:20:46.458496 kernel: loop7: detected capacity change from 0 to 138176 Sep 5 00:20:46.460301 ldconfig[1394]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:20:46.461777 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:20:46.474397 (sd-merge)[1428]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-packet'. Sep 5 00:20:46.474672 (sd-merge)[1428]: Merged extensions into '/usr'. Sep 5 00:20:46.506663 systemd[1]: Reload requested from client PID 1400 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:20:46.506675 systemd[1]: Reloading... Sep 5 00:20:46.536535 zram_generator::config[1456]: No configuration found. Sep 5 00:20:46.611607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:20:46.665267 systemd[1]: Reloading finished in 158 ms. Sep 5 00:20:46.681598 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:20:46.692828 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:20:46.716320 systemd[1]: Starting ensure-sysext.service... Sep 5 00:20:46.726922 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:20:46.740272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:20:46.757608 systemd-tmpfiles[1513]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:20:46.757778 systemd-tmpfiles[1513]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:20:46.758305 systemd-tmpfiles[1513]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:20:46.758536 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Sep 5 00:20:46.758602 systemd-tmpfiles[1513]: ACLs are not supported, ignoring. Sep 5 00:20:46.759703 systemd[1]: Reload requested from client PID 1512 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:20:46.759725 systemd[1]: Reloading... Sep 5 00:20:46.761084 systemd-tmpfiles[1513]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:20:46.761088 systemd-tmpfiles[1513]: Skipping /boot Sep 5 00:20:46.766956 systemd-tmpfiles[1513]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:20:46.766960 systemd-tmpfiles[1513]: Skipping /boot Sep 5 00:20:46.772287 systemd-udevd[1514]: Using default interface naming scheme 'v255'. Sep 5 00:20:46.790502 zram_generator::config[1543]: No configuration found. 
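Note: the (sd-merge) messages above show systemd-sysext overlaying the extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes' and 'oem-packet' onto /usr (backed by the loop devices the kernel just registered), after which systemd reloads its unit database. A hedged sketch of how that merged state can typically be inspected on a comparable Flatcar host follows; these are standard systemd-sysext verbs and are not shown in this log.

    # Illustrative only; not part of this boot log.
    systemd-sysext status              # which hierarchies (/usr, /opt) currently carry overlays
    systemd-sysext list                # extension images found in /etc/extensions, /run/extensions, /var/lib/extensions
    systemd-sysext refresh             # re-merge after adding or removing an image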
Sep 5 00:20:46.837463 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Sep 5 00:20:46.837531 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1572) Sep 5 00:20:46.837567 kernel: ACPI: button: Sleep Button [SLPB] Sep 5 00:20:46.849136 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 00:20:46.856521 kernel: IPMI message handler: version 39.2 Sep 5 00:20:46.856651 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:20:46.856671 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:20:46.872655 kernel: ipmi device interface Sep 5 00:20:46.891452 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Sep 5 00:20:46.891682 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Sep 5 00:20:46.891810 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Sep 5 00:20:46.891934 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Sep 5 00:20:46.897453 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Sep 5 00:20:46.908923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:20:46.926968 kernel: ipmi_si: IPMI System Interface driver Sep 5 00:20:46.927019 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Sep 5 00:20:46.934478 kernel: ipmi_platform: ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Sep 5 00:20:46.940605 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Sep 5 00:20:46.946873 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Sep 5 00:20:46.955106 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Sep 5 00:20:46.970349 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Sep 5 00:20:46.970446 kernel: ipmi_si: Adding ACPI-specified kcs state machine Sep 5 00:20:46.980512 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Sep 5 00:20:46.992948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Micron_5300_MTFDDAK480TDT OEM. Sep 5 00:20:47.005460 kernel: iTCO_vendor_support: vendor-support=0 Sep 5 00:20:47.008531 systemd[1]: Condition check resulted in dev-ttyS1.device - /dev/ttyS1 being skipped. Sep 5 00:20:47.008992 systemd[1]: Reloading finished in 249 ms. Sep 5 00:20:47.040326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:20:47.040716 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Sep 5 00:20:47.040911 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Sep 5 00:20:47.040985 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Sep 5 00:20:47.072656 kernel: intel_rapl_common: Found RAPL domain package Sep 5 00:20:47.072705 kernel: intel_rapl_common: Found RAPL domain core Sep 5 00:20:47.072725 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Sep 5 00:20:47.072894 kernel: intel_rapl_common: Found RAPL domain dram Sep 5 00:20:47.112554 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:20:47.135426 systemd[1]: Finished ensure-sysext.service. 
Sep 5 00:20:47.166453 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Sep 5 00:20:47.167316 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Sep 5 00:20:47.174451 kernel: ipmi_ssif: IPMI SSIF Interface driver Sep 5 00:20:47.182511 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:20:47.193546 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:20:47.203532 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:20:47.214534 augenrules[1715]: No rules Sep 5 00:20:47.214630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:20:47.215304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:20:47.225048 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:20:47.235026 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:20:47.246009 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:20:47.255556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:20:47.256122 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:20:47.266483 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 00:20:47.267241 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:20:47.279856 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:20:47.281315 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:20:47.282840 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:20:47.297158 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:20:47.307091 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:20:47.326482 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:20:47.327180 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 00:20:47.339592 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:20:47.339723 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:20:47.339923 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:20:47.340079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:20:47.340187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:20:47.340349 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:20:47.340457 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:20:47.340617 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:20:47.340722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:20:47.340884 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 5 00:20:47.340990 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:20:47.341159 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:20:47.341340 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:20:47.354645 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 00:20:47.354686 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:20:47.354718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:20:47.355301 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:20:47.356113 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:20:47.356137 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:20:47.356411 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:20:47.360328 lvm[1743]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:20:47.363240 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:20:47.379474 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:20:47.400005 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 00:20:47.426179 systemd-resolved[1728]: Positive Trust Anchors: Sep 5 00:20:47.426186 systemd-resolved[1728]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:20:47.426211 systemd-resolved[1728]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:20:47.429219 systemd-networkd[1727]: lo: Link UP Sep 5 00:20:47.429226 systemd-networkd[1727]: lo: Gained carrier Sep 5 00:20:47.429419 systemd-resolved[1728]: Using system hostname 'ci-4230.2.2-n-de5468c6d2'. Sep 5 00:20:47.432099 systemd-networkd[1727]: bond0: netdev ready Sep 5 00:20:47.433157 systemd-networkd[1727]: Enumeration completed Sep 5 00:20:47.436858 systemd-networkd[1727]: enp1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:97:fa:a8.network. Sep 5 00:20:47.469656 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:20:47.480751 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:20:47.490575 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:20:47.500673 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:20:47.513763 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:20:47.524480 systemd[1]: Reached target network.target - Network. 
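Note: systemd-networkd above finishes enumeration, reports bond0 as "netdev ready", and configures enp1s0f0np0 from /etc/systemd/network/10-0c:42:a1:97:fa:a8.network (a matching 05-bond0.network for the bond itself is referenced further down). The unit contents are not in the log; the following is a minimal sketch of what a bonded layout of this shape commonly looks like, with every value an assumption except the file names and MAC address that appear in the surrounding entries.

    # Hypothetical contents, for illustration only.
    # /etc/systemd/network/10-0c:42:a1:97:fa:a8.network  -- binds one physical NIC into the bond
    [Match]
    MACAddress=0c:42:a1:97:fa:a8
    [Network]
    Bond=bond0

    # /etc/systemd/network/05-bond0.network  -- configures the bond device itself
    [Match]
    Name=bond0
    [Network]
    DHCP=no                            # assumption; Packet/Equinix images typically inject static addresses

A separate .netdev unit (not named in the log) would declare Kind=bond and the bonding mode; the later "No 802.3ad response from the link partner" warning suggests an LACP (802.3ad) mode is in use.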
Sep 5 00:20:47.533481 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:20:47.545491 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:20:47.556524 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:20:47.568493 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:20:47.580488 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:20:47.592472 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:20:47.592487 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:20:47.601475 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:20:47.611565 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:20:47.622544 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:20:47.634522 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:20:47.644238 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:20:47.655294 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:20:47.665432 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 5 00:20:47.687869 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:20:47.698843 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 5 00:20:47.727653 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 00:20:47.729914 lvm[1771]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:20:47.739330 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 5 00:20:47.752388 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:20:47.759482 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Sep 5 00:20:47.773481 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Sep 5 00:20:47.774581 systemd-networkd[1727]: enp1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:97:fa:a9.network. Sep 5 00:20:47.777866 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:20:47.788701 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 00:20:47.801119 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:20:47.811524 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:20:47.820613 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:20:47.820652 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:20:47.822221 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:20:47.835556 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 00:20:47.847230 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:20:47.857201 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 5 00:20:47.861302 coreos-metadata[1776]: Sep 05 00:20:47.861 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 5 00:20:47.862114 coreos-metadata[1776]: Sep 05 00:20:47.862 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata) Sep 5 00:20:47.867173 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:20:47.868978 jq[1780]: false Sep 5 00:20:47.870022 dbus-daemon[1777]: [system] SELinux support is enabled Sep 5 00:20:47.877674 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:20:47.878326 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:20:47.886763 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:20:47.889268 extend-filesystems[1782]: Found loop4 Sep 5 00:20:47.889268 extend-filesystems[1782]: Found loop5 Sep 5 00:20:47.889268 extend-filesystems[1782]: Found loop6 Sep 5 00:20:47.957598 kernel: EXT4-fs (sdb9): resizing filesystem from 553472 to 116605649 blocks Sep 5 00:20:47.957616 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (1665) Sep 5 00:20:47.957626 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Sep 5 00:20:47.957751 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Sep 5 00:20:47.957762 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Sep 5 00:20:47.957808 extend-filesystems[1782]: Found loop7 Sep 5 00:20:47.957808 extend-filesystems[1782]: Found sda Sep 5 00:20:47.957808 extend-filesystems[1782]: Found sdb Sep 5 00:20:47.957808 extend-filesystems[1782]: Found sdb1 Sep 5 00:20:47.957808 extend-filesystems[1782]: Found sdb2 Sep 5 00:20:47.957808 extend-filesystems[1782]: Found sdb3 Sep 5 00:20:47.957808 extend-filesystems[1782]: Found usr Sep 5 00:20:47.957808 extend-filesystems[1782]: Found sdb4 Sep 5 00:20:47.957808 extend-filesystems[1782]: Found sdb6 Sep 5 00:20:47.957808 extend-filesystems[1782]: Found sdb7 Sep 5 00:20:47.957808 extend-filesystems[1782]: Found sdb9 Sep 5 00:20:47.957808 extend-filesystems[1782]: Checking size of /dev/sdb9 Sep 5 00:20:47.957808 extend-filesystems[1782]: Resized partition /dev/sdb9 Sep 5 00:20:48.122491 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Sep 5 00:20:48.122508 kernel: bond0: active interface up! Sep 5 00:20:47.898273 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:20:48.122636 extend-filesystems[1796]: resize2fs 1.47.1 (20-May-2024) Sep 5 00:20:47.938146 systemd-networkd[1727]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Sep 5 00:20:47.939577 systemd-networkd[1727]: enp1s0f0np0: Link UP Sep 5 00:20:47.939724 systemd-networkd[1727]: enp1s0f0np0: Gained carrier Sep 5 00:20:48.142665 sshd_keygen[1805]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:20:47.942080 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Sep 5 00:20:48.142797 update_engine[1807]: I20250905 00:20:48.051745 1807 main.cc:92] Flatcar Update Engine starting Sep 5 00:20:48.142797 update_engine[1807]: I20250905 00:20:48.052492 1807 update_check_scheduler.cc:74] Next update check in 7m39s Sep 5 00:20:47.956673 systemd-networkd[1727]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:97:fa:a8.network. Sep 5 00:20:48.152681 jq[1808]: true Sep 5 00:20:47.956831 systemd-networkd[1727]: enp1s0f1np1: Link UP Sep 5 00:20:47.956977 systemd-networkd[1727]: enp1s0f1np1: Gained carrier Sep 5 00:20:47.973039 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:20:47.975630 systemd-networkd[1727]: bond0: Link UP Sep 5 00:20:47.975805 systemd-networkd[1727]: bond0: Gained carrier Sep 5 00:20:47.975925 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 5 00:20:47.976342 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 5 00:20:47.976470 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 5 00:20:47.976607 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 5 00:20:47.980661 systemd[1]: Starting tcsd.service - TCG Core Services Daemon... Sep 5 00:20:48.000806 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:20:48.001156 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:20:48.022178 systemd-logind[1802]: Watching system buttons on /dev/input/event3 (Power Button) Sep 5 00:20:48.022191 systemd-logind[1802]: Watching system buttons on /dev/input/event2 (Sleep Button) Sep 5 00:20:48.022201 systemd-logind[1802]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Sep 5 00:20:48.022464 systemd-logind[1802]: New seat seat0. Sep 5 00:20:48.044800 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:20:48.061741 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:20:48.093088 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:20:48.113733 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 5 00:20:48.152801 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:20:48.152904 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:20:48.153109 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:20:48.153207 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 00:20:48.169448 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Sep 5 00:20:48.172036 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:20:48.172139 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:20:48.184694 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Sep 5 00:20:48.197466 (ntainerd)[1820]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:20:48.198764 jq[1819]: true Sep 5 00:20:48.201982 tar[1817]: linux-amd64/LICENSE Sep 5 00:20:48.202099 tar[1817]: linux-amd64/helm Sep 5 00:20:48.202640 dbus-daemon[1777]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 00:20:48.206104 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Sep 5 00:20:48.206224 systemd[1]: Condition check resulted in tcsd.service - TCG Core Services Daemon being skipped. Sep 5 00:20:48.212302 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:20:48.244632 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:20:48.253520 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:20:48.253622 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:20:48.264613 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:20:48.264750 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:20:48.281529 bash[1849]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:20:48.285619 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:20:48.300415 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:20:48.310428 locksmithd[1856]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:20:48.312849 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:20:48.312965 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:20:48.334694 systemd[1]: Starting sshkeys.service... Sep 5 00:20:48.342307 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:20:48.356401 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 00:20:48.362512 containerd[1820]: time="2025-09-05T00:20:48.362475370Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 5 00:20:48.375009 containerd[1820]: time="2025-09-05T00:20:48.374962828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:20:48.375709 containerd[1820]: time="2025-09-05T00:20:48.375691462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:20:48.375709 containerd[1820]: time="2025-09-05T00:20:48.375708375Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 00:20:48.375766 containerd[1820]: time="2025-09-05T00:20:48.375718054Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 00:20:48.375810 containerd[1820]: time="2025-09-05T00:20:48.375801836Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 5 00:20:48.375827 containerd[1820]: time="2025-09-05T00:20:48.375812991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 00:20:48.375859 containerd[1820]: time="2025-09-05T00:20:48.375850891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:20:48.375878 containerd[1820]: time="2025-09-05T00:20:48.375859399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:20:48.375978 containerd[1820]: time="2025-09-05T00:20:48.375969639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:20:48.375994 containerd[1820]: time="2025-09-05T00:20:48.375978828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 00:20:48.375994 containerd[1820]: time="2025-09-05T00:20:48.375986312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:20:48.375994 containerd[1820]: time="2025-09-05T00:20:48.375992124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 00:20:48.376042 containerd[1820]: time="2025-09-05T00:20:48.376034932Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:20:48.376156 containerd[1820]: time="2025-09-05T00:20:48.376149167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:20:48.376227 containerd[1820]: time="2025-09-05T00:20:48.376220066Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:20:48.376243 containerd[1820]: time="2025-09-05T00:20:48.376228483Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 00:20:48.376278 containerd[1820]: time="2025-09-05T00:20:48.376271910Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 00:20:48.376306 containerd[1820]: time="2025-09-05T00:20:48.376300259Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:20:48.377633 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 5 00:20:48.388380 coreos-metadata[1878]: Sep 05 00:20:48.388 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Sep 5 00:20:48.389060 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:20:48.391906 containerd[1820]: time="2025-09-05T00:20:48.391890848Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 00:20:48.391940 containerd[1820]: time="2025-09-05T00:20:48.391919943Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 5 00:20:48.391940 containerd[1820]: time="2025-09-05T00:20:48.391929999Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 00:20:48.391971 containerd[1820]: time="2025-09-05T00:20:48.391955216Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 00:20:48.391971 containerd[1820]: time="2025-09-05T00:20:48.391964861Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 00:20:48.392051 containerd[1820]: time="2025-09-05T00:20:48.392042160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 00:20:48.392174 containerd[1820]: time="2025-09-05T00:20:48.392165620Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 00:20:48.392228 containerd[1820]: time="2025-09-05T00:20:48.392221180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 00:20:48.392243 containerd[1820]: time="2025-09-05T00:20:48.392231633Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 00:20:48.392243 containerd[1820]: time="2025-09-05T00:20:48.392239449Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 00:20:48.392270 containerd[1820]: time="2025-09-05T00:20:48.392247159Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 00:20:48.392270 containerd[1820]: time="2025-09-05T00:20:48.392254430Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 00:20:48.392270 containerd[1820]: time="2025-09-05T00:20:48.392260952Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 00:20:48.392270 containerd[1820]: time="2025-09-05T00:20:48.392268011Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 00:20:48.392329 containerd[1820]: time="2025-09-05T00:20:48.392276180Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 00:20:48.392329 containerd[1820]: time="2025-09-05T00:20:48.392283106Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 00:20:48.392329 containerd[1820]: time="2025-09-05T00:20:48.392289768Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 00:20:48.392329 containerd[1820]: time="2025-09-05T00:20:48.392295672Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 00:20:48.392329 containerd[1820]: time="2025-09-05T00:20:48.392306725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392329 containerd[1820]: time="2025-09-05T00:20:48.392314128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392329 containerd[1820]: time="2025-09-05T00:20:48.392321397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 5 00:20:48.392329 containerd[1820]: time="2025-09-05T00:20:48.392328376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392337475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392344716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392351387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392358193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392365034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392373878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392380345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392387590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392394503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392402520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392414239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392421787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.392488 containerd[1820]: time="2025-09-05T00:20:48.392428770Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 00:20:48.392847 containerd[1820]: time="2025-09-05T00:20:48.392836511Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 00:20:48.392865 containerd[1820]: time="2025-09-05T00:20:48.392853859Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 00:20:48.392865 containerd[1820]: time="2025-09-05T00:20:48.392861217Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 00:20:48.392896 containerd[1820]: time="2025-09-05T00:20:48.392868196Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 00:20:48.392896 containerd[1820]: time="2025-09-05T00:20:48.392874826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 5 00:20:48.392896 containerd[1820]: time="2025-09-05T00:20:48.392882212Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 00:20:48.392896 containerd[1820]: time="2025-09-05T00:20:48.392889147Z" level=info msg="NRI interface is disabled by configuration." Sep 5 00:20:48.392896 containerd[1820]: time="2025-09-05T00:20:48.392895187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 5 00:20:48.393078 containerd[1820]: time="2025-09-05T00:20:48.393054642Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:20:48.393156 containerd[1820]: time="2025-09-05T00:20:48.393083587Z" level=info msg="Connect containerd service" Sep 5 00:20:48.393156 containerd[1820]: time="2025-09-05T00:20:48.393100409Z" level=info msg="using legacy CRI server" Sep 5 00:20:48.393156 containerd[1820]: time="2025-09-05T00:20:48.393104566Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:20:48.393203 containerd[1820]: 
time="2025-09-05T00:20:48.393178531Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:20:48.393490 containerd[1820]: time="2025-09-05T00:20:48.393480592Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:20:48.393609 containerd[1820]: time="2025-09-05T00:20:48.393590454Z" level=info msg="Start subscribing containerd event" Sep 5 00:20:48.393630 containerd[1820]: time="2025-09-05T00:20:48.393622403Z" level=info msg="Start recovering state" Sep 5 00:20:48.393646 containerd[1820]: time="2025-09-05T00:20:48.393624909Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:20:48.393666 containerd[1820]: time="2025-09-05T00:20:48.393659768Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:20:48.393682 containerd[1820]: time="2025-09-05T00:20:48.393662040Z" level=info msg="Start event monitor" Sep 5 00:20:48.393682 containerd[1820]: time="2025-09-05T00:20:48.393674236Z" level=info msg="Start snapshots syncer" Sep 5 00:20:48.393724 containerd[1820]: time="2025-09-05T00:20:48.393679698Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:20:48.393724 containerd[1820]: time="2025-09-05T00:20:48.393687551Z" level=info msg="Start streaming server" Sep 5 00:20:48.393757 containerd[1820]: time="2025-09-05T00:20:48.393728892Z" level=info msg="containerd successfully booted in 0.031713s" Sep 5 00:20:48.399805 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:20:48.428808 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:20:48.437382 systemd[1]: Started serial-getty@ttyS1.service - Serial Getty on ttyS1. Sep 5 00:20:48.446681 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:20:48.526857 tar[1817]: linux-amd64/README.md Sep 5 00:20:48.542627 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:20:48.862255 coreos-metadata[1776]: Sep 05 00:20:48.862 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Sep 5 00:20:49.028504 kernel: EXT4-fs (sdb9): resized filesystem to 116605649 Sep 5 00:20:49.059680 extend-filesystems[1796]: Filesystem at /dev/sdb9 is mounted on /; on-line resizing required Sep 5 00:20:49.059680 extend-filesystems[1796]: old_desc_blocks = 1, new_desc_blocks = 56 Sep 5 00:20:49.059680 extend-filesystems[1796]: The filesystem on /dev/sdb9 is now 116605649 (4k) blocks long. Sep 5 00:20:49.100526 extend-filesystems[1782]: Resized filesystem in /dev/sdb9 Sep 5 00:20:49.060225 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:20:49.060355 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:20:49.382558 systemd-networkd[1727]: bond0: Gained IPv6LL Sep 5 00:20:49.382917 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 5 00:20:49.958739 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 5 00:20:49.958850 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 5 00:20:49.960022 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:20:49.971031 systemd[1]: Reached target network-online.target - Network is Online. 
Sep 5 00:20:49.996661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:20:50.007181 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:20:50.026392 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:20:50.603592 kernel: mlx5_core 0000:01:00.0: lag map: port 1:1 port 2:2 Sep 5 00:20:50.603732 kernel: mlx5_core 0000:01:00.0: shared_fdb:0 mode:queue_affinity Sep 5 00:20:50.797337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:20:50.808986 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:20:51.315150 kubelet[1912]: E0905 00:20:51.315080 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:20:51.316297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:20:51.316377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:20:51.316595 systemd[1]: kubelet.service: Consumed 625ms CPU time, 276.8M memory peak. Sep 5 00:20:52.175121 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 00:20:52.192822 systemd[1]: Started sshd@0-139.178.90.135:22-147.75.109.163:54534.service - OpenSSH per-connection server daemon (147.75.109.163:54534). Sep 5 00:20:52.274695 sshd[1930]: Accepted publickey for core from 147.75.109.163 port 54534 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:20:52.276294 sshd-session[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:20:52.283793 systemd-logind[1802]: New session 1 of user core. Sep 5 00:20:52.284677 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:20:52.302821 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:20:52.317232 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:20:52.342860 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:20:52.360388 (systemd)[1934]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:20:52.362369 systemd-logind[1802]: New session c1 of user core. Sep 5 00:20:52.488946 systemd[1934]: Queued start job for default target default.target. Sep 5 00:20:52.501143 systemd[1934]: Created slice app.slice - User Application Slice. Sep 5 00:20:52.501176 systemd[1934]: Reached target paths.target - Paths. Sep 5 00:20:52.501198 systemd[1934]: Reached target timers.target - Timers. Sep 5 00:20:52.501849 systemd[1934]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:20:52.507318 systemd[1934]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:20:52.507347 systemd[1934]: Reached target sockets.target - Sockets. Sep 5 00:20:52.507370 systemd[1934]: Reached target basic.target - Basic System. Sep 5 00:20:52.507392 systemd[1934]: Reached target default.target - Main User Target. Sep 5 00:20:52.507407 systemd[1934]: Startup finished in 140ms. Sep 5 00:20:52.507458 systemd[1]: Started user@500.service - User Manager for UID 500. 
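The kubelet exit above happens simply because /var/lib/kubelet/config.yaml does not exist yet; on a node like this it is normally written later (typically by kubeadm when the node is initialized or joined), so systemd restarting the unit until that file appears is expected rather than a fault. As a hedged sketch only of what a minimal KubeletConfiguration at that path looks like (field values are illustrative assumptions, not what will actually be written on this host):

    # /var/lib/kubelet/config.yaml (illustrative sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd              # matches SystemdCgroup:true in the runc options dumped above
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt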
Sep 5 00:20:52.518603 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:20:52.519593 coreos-metadata[1776]: Sep 05 00:20:52.519 INFO Fetch successful Sep 5 00:20:52.570535 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 00:20:52.584380 systemd[1]: Starting packet-phone-home.service - Report Success to Packet... Sep 5 00:20:52.594177 systemd[1]: Started sshd@1-139.178.90.135:22-147.75.109.163:57330.service - OpenSSH per-connection server daemon (147.75.109.163:57330). Sep 5 00:20:52.635801 sshd[1951]: Accepted publickey for core from 147.75.109.163 port 57330 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:20:52.636638 sshd-session[1951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:20:52.640124 systemd-logind[1802]: New session 2 of user core. Sep 5 00:20:52.652732 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:20:52.726769 sshd[1953]: Connection closed by 147.75.109.163 port 57330 Sep 5 00:20:52.727562 sshd-session[1951]: pam_unix(sshd:session): session closed for user core Sep 5 00:20:52.753116 systemd[1]: sshd@1-139.178.90.135:22-147.75.109.163:57330.service: Deactivated successfully. Sep 5 00:20:52.757494 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:20:52.761095 systemd-logind[1802]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:20:52.777000 systemd[1]: Started sshd@2-139.178.90.135:22-147.75.109.163:57334.service - OpenSSH per-connection server daemon (147.75.109.163:57334). Sep 5 00:20:52.792606 systemd-logind[1802]: Removed session 2. Sep 5 00:20:52.832411 sshd[1958]: Accepted publickey for core from 147.75.109.163 port 57334 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:20:52.833256 sshd-session[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:20:52.836624 systemd-logind[1802]: New session 3 of user core. Sep 5 00:20:52.855815 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 00:20:52.921459 sshd[1961]: Connection closed by 147.75.109.163 port 57334 Sep 5 00:20:52.921617 sshd-session[1958]: pam_unix(sshd:session): session closed for user core Sep 5 00:20:52.923012 systemd[1]: sshd@2-139.178.90.135:22-147.75.109.163:57334.service: Deactivated successfully. Sep 5 00:20:52.923918 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:20:52.924685 systemd-logind[1802]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:20:52.925337 systemd-logind[1802]: Removed session 3. Sep 5 00:20:53.015150 systemd[1]: Finished packet-phone-home.service - Report Success to Packet. Sep 5 00:20:53.458438 login[1887]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 5 00:20:53.461411 systemd-logind[1802]: New session 4 of user core. Sep 5 00:20:53.462449 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:20:53.464980 login[1886]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 5 00:20:53.467301 systemd-logind[1802]: New session 5 of user core. Sep 5 00:20:53.468031 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 5 00:20:53.826744 coreos-metadata[1878]: Sep 05 00:20:53.826 INFO Fetch successful Sep 5 00:20:53.865065 unknown[1878]: wrote ssh authorized keys file for user: core Sep 5 00:20:53.896426 update-ssh-keys[1993]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:20:53.896820 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 00:20:53.897471 systemd[1]: Finished sshkeys.service. Sep 5 00:20:53.898646 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:20:53.898835 systemd[1]: Startup finished in 2.703s (kernel) + 25.041s (initrd) + 10.064s (userspace) = 37.809s. Sep 5 00:20:54.332023 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. Sep 5 00:21:01.568977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:21:01.580712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:21:01.822285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:21:01.824321 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:21:01.845948 kubelet[2005]: E0905 00:21:01.845914 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:21:01.848026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:21:01.848114 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:21:01.848292 systemd[1]: kubelet.service: Consumed 131ms CPU time, 118.4M memory peak. Sep 5 00:21:02.943123 systemd[1]: Started sshd@3-139.178.90.135:22-147.75.109.163:43324.service - OpenSSH per-connection server daemon (147.75.109.163:43324). Sep 5 00:21:02.974890 sshd[2022]: Accepted publickey for core from 147.75.109.163 port 43324 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:21:02.975576 sshd-session[2022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:02.978420 systemd-logind[1802]: New session 6 of user core. Sep 5 00:21:02.990700 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 00:21:03.043096 sshd[2024]: Connection closed by 147.75.109.163 port 43324 Sep 5 00:21:03.043252 sshd-session[2022]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:03.057760 systemd[1]: sshd@3-139.178.90.135:22-147.75.109.163:43324.service: Deactivated successfully. Sep 5 00:21:03.058616 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:21:03.059398 systemd-logind[1802]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:21:03.060110 systemd[1]: Started sshd@4-139.178.90.135:22-147.75.109.163:43330.service - OpenSSH per-connection server daemon (147.75.109.163:43330). Sep 5 00:21:03.060732 systemd-logind[1802]: Removed session 6. Sep 5 00:21:03.096651 sshd[2029]: Accepted publickey for core from 147.75.109.163 port 43330 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:21:03.097580 sshd-session[2029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:03.101431 systemd-logind[1802]: New session 7 of user core. 
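The coreos-metadata-sshkeys@core unit above takes the SSH keys from the same Packet metadata document fetched a few lines earlier and rewrites /home/core/.ssh/authorized_keys for the core user. If that content ever needs to be checked by hand, the metadata document can be queried directly from the endpoint already shown in this log (illustrative command; the ssh_keys field name is assumed from the Packet metadata format and may differ):

    curl -s https://metadata.packet.net/metadata | jq '.ssh_keys'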
Sep 5 00:21:03.114738 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:21:03.169348 sshd[2032]: Connection closed by 147.75.109.163 port 43330 Sep 5 00:21:03.170133 sshd-session[2029]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:03.193706 systemd[1]: sshd@4-139.178.90.135:22-147.75.109.163:43330.service: Deactivated successfully. Sep 5 00:21:03.197961 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:21:03.200195 systemd-logind[1802]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:21:03.220422 systemd[1]: Started sshd@5-139.178.90.135:22-147.75.109.163:43346.service - OpenSSH per-connection server daemon (147.75.109.163:43346). Sep 5 00:21:03.223631 systemd-logind[1802]: Removed session 7. Sep 5 00:21:03.276536 sshd[2037]: Accepted publickey for core from 147.75.109.163 port 43346 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:21:03.277465 sshd-session[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:03.281400 systemd-logind[1802]: New session 8 of user core. Sep 5 00:21:03.293691 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:21:03.357166 sshd[2040]: Connection closed by 147.75.109.163 port 43346 Sep 5 00:21:03.357981 sshd-session[2037]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:03.382678 systemd[1]: sshd@5-139.178.90.135:22-147.75.109.163:43346.service: Deactivated successfully. Sep 5 00:21:03.386744 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:21:03.388904 systemd-logind[1802]: Session 8 logged out. Waiting for processes to exit. Sep 5 00:21:03.405801 systemd[1]: Started sshd@6-139.178.90.135:22-147.75.109.163:43358.service - OpenSSH per-connection server daemon (147.75.109.163:43358). Sep 5 00:21:03.406566 systemd-logind[1802]: Removed session 8. Sep 5 00:21:03.435813 sshd[2045]: Accepted publickey for core from 147.75.109.163 port 43358 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:21:03.436552 sshd-session[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:03.439920 systemd-logind[1802]: New session 9 of user core. Sep 5 00:21:03.460763 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:21:03.530671 sudo[2049]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:21:03.530822 sudo[2049]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:21:03.540173 sudo[2049]: pam_unix(sudo:session): session closed for user root Sep 5 00:21:03.541043 sshd[2048]: Connection closed by 147.75.109.163 port 43358 Sep 5 00:21:03.541220 sshd-session[2045]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:03.565446 systemd[1]: sshd@6-139.178.90.135:22-147.75.109.163:43358.service: Deactivated successfully. Sep 5 00:21:03.566641 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:21:03.567797 systemd-logind[1802]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:21:03.568870 systemd[1]: Started sshd@7-139.178.90.135:22-147.75.109.163:43370.service - OpenSSH per-connection server daemon (147.75.109.163:43370). Sep 5 00:21:03.569591 systemd-logind[1802]: Removed session 9. 
Sep 5 00:21:03.615913 sshd[2054]: Accepted publickey for core from 147.75.109.163 port 43370 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:21:03.616524 sshd-session[2054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:03.619157 systemd-logind[1802]: New session 10 of user core. Sep 5 00:21:03.628707 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:21:03.682836 sudo[2059]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:21:03.683393 sudo[2059]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:21:03.691091 sudo[2059]: pam_unix(sudo:session): session closed for user root Sep 5 00:21:03.700035 sudo[2058]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 5 00:21:03.700168 sudo[2058]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:21:03.719824 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 00:21:03.739513 augenrules[2081]: No rules Sep 5 00:21:03.740174 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:21:03.740382 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 00:21:03.741229 sudo[2058]: pam_unix(sudo:session): session closed for user root Sep 5 00:21:03.742279 sshd[2057]: Connection closed by 147.75.109.163 port 43370 Sep 5 00:21:03.742598 sshd-session[2054]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:03.746797 systemd[1]: sshd@7-139.178.90.135:22-147.75.109.163:43370.service: Deactivated successfully. Sep 5 00:21:03.748333 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:21:03.749240 systemd-logind[1802]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:21:03.751003 systemd[1]: Started sshd@8-139.178.90.135:22-147.75.109.163:43372.service - OpenSSH per-connection server daemon (147.75.109.163:43372). Sep 5 00:21:03.752042 systemd-logind[1802]: Removed session 10. Sep 5 00:21:03.797900 sshd[2089]: Accepted publickey for core from 147.75.109.163 port 43372 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:21:03.798490 sshd-session[2089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:21:03.801131 systemd-logind[1802]: New session 11 of user core. Sep 5 00:21:03.817696 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:21:03.875390 sudo[2093]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:21:03.876227 sudo[2093]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:21:04.206769 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:21:04.206871 (dockerd)[2120]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:21:04.469364 dockerd[2120]: time="2025-09-05T00:21:04.469275745Z" level=info msg="Starting up" Sep 5 00:21:04.546605 dockerd[2120]: time="2025-09-05T00:21:04.546586293Z" level=info msg="Loading containers: start." Sep 5 00:21:04.678517 kernel: Initializing XFRM netlink socket Sep 5 00:21:04.693333 systemd-timesyncd[1729]: Network configuration changed, trying to establish connection. 
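The augenrules "No rules" message is a direct consequence of the two sudo commands just above it: the default 80-selinux.rules and 99-default.rules fragments are removed and audit-rules.service is restarted, which regenerates the active ruleset from whatever remains in /etc/audit/rules.d (currently nothing). As an illustrative sketch only of the fragment format that directory takes (paths and keys below are made up, not taken from this host):

    # /etc/audit/rules.d/10-example.rules (hypothetical)
    -w /etc/kubernetes/ -p wa -k kube-config
    -w /var/lib/kubelet/ -p wa -k kubelet-state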
Sep 5 00:21:04.741820 systemd-networkd[1727]: docker0: Link UP Sep 5 00:21:04.768430 dockerd[2120]: time="2025-09-05T00:21:04.768383805Z" level=info msg="Loading containers: done." Sep 5 00:21:04.777890 dockerd[2120]: time="2025-09-05T00:21:04.777843515Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:21:04.777975 dockerd[2120]: time="2025-09-05T00:21:04.777895067Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Sep 5 00:21:04.777975 dockerd[2120]: time="2025-09-05T00:21:04.777951083Z" level=info msg="Daemon has completed initialization" Sep 5 00:21:04.792149 dockerd[2120]: time="2025-09-05T00:21:04.792076495Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:21:04.792177 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:21:05.615745 containerd[1820]: time="2025-09-05T00:21:05.615654974Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 00:21:06.170195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3815841131.mount: Deactivated successfully. Sep 5 00:21:06.941686 containerd[1820]: time="2025-09-05T00:21:06.941633814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:06.941921 containerd[1820]: time="2025-09-05T00:21:06.941734538Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 5 00:21:06.942198 containerd[1820]: time="2025-09-05T00:21:06.942185751Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:06.943802 containerd[1820]: time="2025-09-05T00:21:06.943789297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:06.944426 containerd[1820]: time="2025-09-05T00:21:06.944410748Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 1.328678172s" Sep 5 00:21:06.944509 containerd[1820]: time="2025-09-05T00:21:06.944433993Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 5 00:21:06.944856 containerd[1820]: time="2025-09-05T00:21:06.944845427Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 00:21:07.984785 containerd[1820]: time="2025-09-05T00:21:07.984757407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:07.985009 containerd[1820]: time="2025-09-05T00:21:07.984969844Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 5 00:21:07.985374 
containerd[1820]: time="2025-09-05T00:21:07.985362414Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:07.986923 containerd[1820]: time="2025-09-05T00:21:07.986884185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:07.987561 containerd[1820]: time="2025-09-05T00:21:07.987520994Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.042660594s" Sep 5 00:21:07.987561 containerd[1820]: time="2025-09-05T00:21:07.987536986Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 5 00:21:07.987801 containerd[1820]: time="2025-09-05T00:21:07.987788234Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 00:21:08.834850 containerd[1820]: time="2025-09-05T00:21:08.834823079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:08.834971 containerd[1820]: time="2025-09-05T00:21:08.834948050Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 5 00:21:08.835388 containerd[1820]: time="2025-09-05T00:21:08.835376293Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:08.837004 containerd[1820]: time="2025-09-05T00:21:08.836960320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:08.837652 containerd[1820]: time="2025-09-05T00:21:08.837636096Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 849.830455ms" Sep 5 00:21:08.837698 containerd[1820]: time="2025-09-05T00:21:08.837652217Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 5 00:21:08.838045 containerd[1820]: time="2025-09-05T00:21:08.838033286Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 00:21:09.660439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173504841.mount: Deactivated successfully. 
Sep 5 00:21:09.881702 containerd[1820]: time="2025-09-05T00:21:09.881673499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:09.881917 containerd[1820]: time="2025-09-05T00:21:09.881886980Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 5 00:21:09.882342 containerd[1820]: time="2025-09-05T00:21:09.882330631Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:09.883241 containerd[1820]: time="2025-09-05T00:21:09.883226365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:09.883670 containerd[1820]: time="2025-09-05T00:21:09.883629308Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 1.045580493s" Sep 5 00:21:09.883670 containerd[1820]: time="2025-09-05T00:21:09.883644703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 5 00:21:09.883980 containerd[1820]: time="2025-09-05T00:21:09.883927129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 5 00:21:10.433130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3109667388.mount: Deactivated successfully. 
Sep 5 00:21:11.023754 containerd[1820]: time="2025-09-05T00:21:11.023728847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:11.024012 containerd[1820]: time="2025-09-05T00:21:11.023977313Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 5 00:21:11.024329 containerd[1820]: time="2025-09-05T00:21:11.024318390Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:11.025961 containerd[1820]: time="2025-09-05T00:21:11.025946956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:11.026640 containerd[1820]: time="2025-09-05T00:21:11.026625093Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.142682631s" Sep 5 00:21:11.026686 containerd[1820]: time="2025-09-05T00:21:11.026642653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 5 00:21:11.026986 containerd[1820]: time="2025-09-05T00:21:11.026974315Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:21:11.609758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608007613.mount: Deactivated successfully. 
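Note the two pause versions in play: the pause:3.10 pull started here arrives alongside the other release images, while the CRI config dumped at startup still lists SandboxImage registry.k8s.io/pause:3.8, which is why the control-plane pod sandboxes created further down pull 3.8 as well. The sandbox image is a containerd setting; an illustrative way to pin it (snippet assumed, not read from this host's config) is:

    # /etc/containerd/config.toml (illustrative)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"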
Sep 5 00:21:11.611262 containerd[1820]: time="2025-09-05T00:21:11.611246336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:11.611425 containerd[1820]: time="2025-09-05T00:21:11.611401299Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:21:11.611843 containerd[1820]: time="2025-09-05T00:21:11.611831928Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:11.613099 containerd[1820]: time="2025-09-05T00:21:11.613088130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:11.613637 containerd[1820]: time="2025-09-05T00:21:11.613596081Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 586.604514ms" Sep 5 00:21:11.613637 containerd[1820]: time="2025-09-05T00:21:11.613612287Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:21:11.613927 containerd[1820]: time="2025-09-05T00:21:11.613918498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 00:21:12.019418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 00:21:12.032714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:21:12.240672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82301850.mount: Deactivated successfully. Sep 5 00:21:12.312804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:21:12.315023 (kubelet)[2484]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:21:12.333835 kubelet[2484]: E0905 00:21:12.333816 2484 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:21:12.335730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:21:12.335869 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:21:12.336271 systemd[1]: kubelet.service: Consumed 100ms CPU time, 117.5M memory peak. 
Sep 5 00:21:13.371687 containerd[1820]: time="2025-09-05T00:21:13.371659076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:13.371905 containerd[1820]: time="2025-09-05T00:21:13.371890181Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 5 00:21:13.372291 containerd[1820]: time="2025-09-05T00:21:13.372281097Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:13.374009 containerd[1820]: time="2025-09-05T00:21:13.373997612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:13.375238 containerd[1820]: time="2025-09-05T00:21:13.375195304Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.761262993s" Sep 5 00:21:13.375238 containerd[1820]: time="2025-09-05T00:21:13.375213092Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 5 00:21:15.268847 systemd-timesyncd[1729]: Timed out waiting for reply from [2605:6400:488d:e1b2:84ba:ceab:2099:353]:123 (2.flatcar.pool.ntp.org). Sep 5 00:21:15.319523 systemd-timesyncd[1729]: Contacted time server [2600:3c00::f03c:93ff:fe5b:29d1]:123 (2.flatcar.pool.ntp.org). Sep 5 00:21:15.319576 systemd-timesyncd[1729]: Initial clock synchronization to Fri 2025-09-05 00:21:15.365264 UTC. Sep 5 00:21:16.233865 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:21:16.233972 systemd[1]: kubelet.service: Consumed 100ms CPU time, 117.5M memory peak. Sep 5 00:21:16.252777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:21:16.269320 systemd[1]: Reload requested from client PID 2595 ('systemctl') (unit session-11.scope)... Sep 5 00:21:16.269327 systemd[1]: Reloading... Sep 5 00:21:16.314535 zram_generator::config[2641]: No configuration found. Sep 5 00:21:16.385095 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:21:16.468997 systemd[1]: Reloading finished in 199 ms. Sep 5 00:21:16.534498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:21:16.537827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:21:16.538346 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:21:16.538534 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:21:16.538567 systemd[1]: kubelet.service: Consumed 64ms CPU time, 98.2M memory peak. Sep 5 00:21:16.539933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:21:16.791231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
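The clock only settles at this point: systemd-timesyncd kept reporting "Network configuration changed" as bond0 and docker0 came up, timed out against one 2.flatcar.pool.ntp.org member and then completed its initial synchronization against another. The servers it tries are configurable; an illustrative drop-in (file name and server are assumptions) would be:

    # /etc/systemd/timesyncd.conf.d/10-ntp.conf (hypothetical)
    [Time]
    NTP=time.example.org
    FallbackNTP=2.flatcar.pool.ntp.org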
Sep 5 00:21:16.796270 (kubelet)[2710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:21:16.819321 kubelet[2710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:21:16.819321 kubelet[2710]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:21:16.819321 kubelet[2710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:21:16.819594 kubelet[2710]: I0905 00:21:16.819351 2710 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:21:17.604190 kubelet[2710]: I0905 00:21:17.604174 2710 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:21:17.604190 kubelet[2710]: I0905 00:21:17.604188 2710 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:21:17.604307 kubelet[2710]: I0905 00:21:17.604302 2710 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:21:17.628937 kubelet[2710]: E0905 00:21:17.628879 2710 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://139.178.90.135:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:21:17.630745 kubelet[2710]: I0905 00:21:17.630686 2710 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:21:17.638531 kubelet[2710]: E0905 00:21:17.638498 2710 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:21:17.638562 kubelet[2710]: I0905 00:21:17.638533 2710 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:21:17.647521 kubelet[2710]: I0905 00:21:17.647479 2710 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:21:17.647630 kubelet[2710]: I0905 00:21:17.647613 2710 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:21:17.647757 kubelet[2710]: I0905 00:21:17.647630 2710 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-de5468c6d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:21:17.647757 kubelet[2710]: I0905 00:21:17.647733 2710 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:21:17.647757 kubelet[2710]: I0905 00:21:17.647740 2710 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:21:17.647879 kubelet[2710]: I0905 00:21:17.647812 2710 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:21:17.650659 kubelet[2710]: I0905 00:21:17.650627 2710 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:21:17.650659 kubelet[2710]: I0905 00:21:17.650638 2710 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:21:17.650659 kubelet[2710]: I0905 00:21:17.650650 2710 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:21:17.652351 kubelet[2710]: I0905 00:21:17.652293 2710 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:21:17.656351 kubelet[2710]: I0905 00:21:17.656338 2710 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 5 00:21:17.656671 kubelet[2710]: I0905 00:21:17.656662 2710 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:21:17.657151 kubelet[2710]: E0905 00:21:17.657136 2710 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.90.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Sep 5 00:21:17.657219 kubelet[2710]: W0905 00:21:17.657184 2710 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:21:17.657219 kubelet[2710]: E0905 00:21:17.657209 2710 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.90.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-de5468c6d2&limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:21:17.658444 kubelet[2710]: I0905 00:21:17.658436 2710 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:21:17.658536 kubelet[2710]: I0905 00:21:17.658505 2710 server.go:1289] "Started kubelet" Sep 5 00:21:17.660086 kubelet[2710]: I0905 00:21:17.660036 2710 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:21:17.660154 kubelet[2710]: I0905 00:21:17.660108 2710 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:21:17.660310 kubelet[2710]: I0905 00:21:17.660301 2710 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:21:17.661263 kubelet[2710]: I0905 00:21:17.661251 2710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:21:17.661263 kubelet[2710]: I0905 00:21:17.661258 2710 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:21:17.661754 kubelet[2710]: I0905 00:21:17.661729 2710 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:21:17.661802 kubelet[2710]: I0905 00:21:17.661745 2710 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:21:17.663419 kubelet[2710]: I0905 00:21:17.663409 2710 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:21:17.663468 kubelet[2710]: E0905 00:21:17.663424 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:17.663506 kubelet[2710]: I0905 00:21:17.663488 2710 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:21:17.663535 kubelet[2710]: E0905 00:21:17.663513 2710 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:21:17.663570 kubelet[2710]: E0905 00:21:17.663554 2710 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.90.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:21:17.663686 kubelet[2710]: E0905 00:21:17.663671 2710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-de5468c6d2?timeout=10s\": dial tcp 139.178.90.135:6443: connect: connection refused" interval="200ms" Sep 5 00:21:17.663761 kubelet[2710]: I0905 00:21:17.663750 2710 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:21:17.664160 kubelet[2710]: I0905 00:21:17.664151 2710 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:21:17.664160 kubelet[2710]: I0905 00:21:17.664160 2710 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:21:17.665592 kubelet[2710]: E0905 00:21:17.664626 2710 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.90.135:6443/api/v1/namespaces/default/events\": dial tcp 139.178.90.135:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230.2.2-n-de5468c6d2.18623b006df0cdf5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230.2.2-n-de5468c6d2,UID:ci-4230.2.2-n-de5468c6d2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230.2.2-n-de5468c6d2,},FirstTimestamp:2025-09-05 00:21:17.658451445 +0000 UTC m=+0.859971052,LastTimestamp:2025-09-05 00:21:17.658451445 +0000 UTC m=+0.859971052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230.2.2-n-de5468c6d2,}" Sep 5 00:21:17.666073 kubelet[2710]: I0905 00:21:17.666060 2710 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:21:17.670061 kubelet[2710]: I0905 00:21:17.670028 2710 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:21:17.670061 kubelet[2710]: I0905 00:21:17.670034 2710 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:21:17.670061 kubelet[2710]: I0905 00:21:17.670059 2710 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:21:17.671284 kubelet[2710]: I0905 00:21:17.671254 2710 policy_none.go:49] "None policy: Start" Sep 5 00:21:17.671284 kubelet[2710]: I0905 00:21:17.671279 2710 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:21:17.671284 kubelet[2710]: I0905 00:21:17.671284 2710 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:21:17.673448 kubelet[2710]: I0905 00:21:17.673434 2710 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 5 00:21:17.673488 kubelet[2710]: I0905 00:21:17.673451 2710 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:21:17.673488 kubelet[2710]: I0905 00:21:17.673461 2710 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 00:21:17.673488 kubelet[2710]: I0905 00:21:17.673466 2710 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:21:17.673535 kubelet[2710]: E0905 00:21:17.673486 2710 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:21:17.673734 kubelet[2710]: E0905 00:21:17.673721 2710 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://139.178.90.135:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:21:17.674401 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:21:17.696342 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:21:17.698681 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:21:17.707150 kubelet[2710]: E0905 00:21:17.707106 2710 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:21:17.707265 kubelet[2710]: I0905 00:21:17.707225 2710 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:21:17.707265 kubelet[2710]: I0905 00:21:17.707237 2710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:21:17.707392 kubelet[2710]: I0905 00:21:17.707352 2710 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:21:17.707793 kubelet[2710]: E0905 00:21:17.707736 2710 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:21:17.707793 kubelet[2710]: E0905 00:21:17.707790 2710 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:17.784348 systemd[1]: Created slice kubepods-burstable-poda4b85ef0d75a17fc2ca0f1be4afc465c.slice - libcontainer container kubepods-burstable-poda4b85ef0d75a17fc2ca0f1be4afc465c.slice. Sep 5 00:21:17.794372 kubelet[2710]: E0905 00:21:17.794261 2710 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.802441 systemd[1]: Created slice kubepods-burstable-pod21c442b3900034d87a8adef8b17c3f3f.slice - libcontainer container kubepods-burstable-pod21c442b3900034d87a8adef8b17c3f3f.slice. 
Sep 5 00:21:17.810947 kubelet[2710]: I0905 00:21:17.810865 2710 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.811630 kubelet[2710]: E0905 00:21:17.811531 2710 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.90.135:6443/api/v1/nodes\": dial tcp 139.178.90.135:6443: connect: connection refused" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.823194 kubelet[2710]: E0905 00:21:17.823157 2710 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.824739 systemd[1]: Created slice kubepods-burstable-podf233d75707dfe26cdab42e1e9d91c4d1.slice - libcontainer container kubepods-burstable-podf233d75707dfe26cdab42e1e9d91c4d1.slice. Sep 5 00:21:17.825551 kubelet[2710]: E0905 00:21:17.825515 2710 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.864978 kubelet[2710]: I0905 00:21:17.864727 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f233d75707dfe26cdab42e1e9d91c4d1-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-de5468c6d2\" (UID: \"f233d75707dfe26cdab42e1e9d91c4d1\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.864978 kubelet[2710]: I0905 00:21:17.864860 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4b85ef0d75a17fc2ca0f1be4afc465c-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-de5468c6d2\" (UID: \"a4b85ef0d75a17fc2ca0f1be4afc465c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.864978 kubelet[2710]: I0905 00:21:17.864955 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4b85ef0d75a17fc2ca0f1be4afc465c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-de5468c6d2\" (UID: \"a4b85ef0d75a17fc2ca0f1be4afc465c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.865387 kubelet[2710]: I0905 00:21:17.865043 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: \"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.865387 kubelet[2710]: I0905 00:21:17.865108 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4b85ef0d75a17fc2ca0f1be4afc465c-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-de5468c6d2\" (UID: \"a4b85ef0d75a17fc2ca0f1be4afc465c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.865387 kubelet[2710]: I0905 00:21:17.865187 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: 
\"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.865387 kubelet[2710]: E0905 00:21:17.865229 2710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-de5468c6d2?timeout=10s\": dial tcp 139.178.90.135:6443: connect: connection refused" interval="400ms" Sep 5 00:21:17.865387 kubelet[2710]: I0905 00:21:17.865254 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: \"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.865828 kubelet[2710]: I0905 00:21:17.865328 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: \"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:17.865828 kubelet[2710]: I0905 00:21:17.865381 2710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: \"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:18.015052 kubelet[2710]: I0905 00:21:18.014952 2710 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:18.015685 kubelet[2710]: E0905 00:21:18.015587 2710 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.90.135:6443/api/v1/nodes\": dial tcp 139.178.90.135:6443: connect: connection refused" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:18.096207 containerd[1820]: time="2025-09-05T00:21:18.096081301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-de5468c6d2,Uid:a4b85ef0d75a17fc2ca0f1be4afc465c,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:18.124712 containerd[1820]: time="2025-09-05T00:21:18.124506550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-de5468c6d2,Uid:21c442b3900034d87a8adef8b17c3f3f,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:18.126043 containerd[1820]: time="2025-09-05T00:21:18.126014075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-de5468c6d2,Uid:f233d75707dfe26cdab42e1e9d91c4d1,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:18.267096 kubelet[2710]: E0905 00:21:18.267037 2710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.90.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230.2.2-n-de5468c6d2?timeout=10s\": dial tcp 139.178.90.135:6443: connect: connection refused" interval="800ms" Sep 5 00:21:18.417141 kubelet[2710]: I0905 00:21:18.417049 2710 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:18.417332 kubelet[2710]: E0905 00:21:18.417286 2710 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://139.178.90.135:6443/api/v1/nodes\": dial tcp 139.178.90.135:6443: connect: connection refused" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:18.504145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406644954.mount: Deactivated successfully. Sep 5 00:21:18.504770 containerd[1820]: time="2025-09-05T00:21:18.504755180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:21:18.504921 containerd[1820]: time="2025-09-05T00:21:18.504902092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 00:21:18.507225 containerd[1820]: time="2025-09-05T00:21:18.507181358Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:21:18.507827 containerd[1820]: time="2025-09-05T00:21:18.507780810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:21:18.507982 containerd[1820]: time="2025-09-05T00:21:18.507942128Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:21:18.508010 containerd[1820]: time="2025-09-05T00:21:18.507989951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:21:18.508335 containerd[1820]: time="2025-09-05T00:21:18.508301814Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:21:18.510428 containerd[1820]: time="2025-09-05T00:21:18.510413526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:21:18.510916 containerd[1820]: time="2025-09-05T00:21:18.510890273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 414.588213ms" Sep 5 00:21:18.511216 containerd[1820]: time="2025-09-05T00:21:18.511203066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 386.49778ms" Sep 5 00:21:18.512549 containerd[1820]: time="2025-09-05T00:21:18.512537279Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 386.449258ms" Sep 5 
00:21:18.515708 kubelet[2710]: E0905 00:21:18.515662 2710 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://139.178.90.135:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:21:18.598238 containerd[1820]: time="2025-09-05T00:21:18.598158211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:18.598429 containerd[1820]: time="2025-09-05T00:21:18.598415235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:18.598429 containerd[1820]: time="2025-09-05T00:21:18.598425643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:18.598513 containerd[1820]: time="2025-09-05T00:21:18.598287612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:18.598547 containerd[1820]: time="2025-09-05T00:21:18.598510079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:18.598547 containerd[1820]: time="2025-09-05T00:21:18.598514554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:18.598547 containerd[1820]: time="2025-09-05T00:21:18.598507267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:18.598547 containerd[1820]: time="2025-09-05T00:21:18.598527993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:18.598547 containerd[1820]: time="2025-09-05T00:21:18.598537438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:18.598650 containerd[1820]: time="2025-09-05T00:21:18.598549807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:18.598650 containerd[1820]: time="2025-09-05T00:21:18.598582238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:18.598650 containerd[1820]: time="2025-09-05T00:21:18.598596039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:18.629694 systemd[1]: Started cri-containerd-5788ab0e0e5ebc9c49b97d70c4eb578445e72dac5697c038ba2ddd9c4fe0a4a3.scope - libcontainer container 5788ab0e0e5ebc9c49b97d70c4eb578445e72dac5697c038ba2ddd9c4fe0a4a3. Sep 5 00:21:18.630591 systemd[1]: Started cri-containerd-9a6c0a30cdbf33be1f6e22031e8ba19c7aec93914724cab7aba72bf040a9264c.scope - libcontainer container 9a6c0a30cdbf33be1f6e22031e8ba19c7aec93914724cab7aba72bf040a9264c. 
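
The entries above show the kubelet retrying its node Lease against an API server it cannot reach yet (connection refused on 139.178.90.135:6443), with the retry interval backing off from 400ms to 800ms while the static control-plane pod sandboxes are created locally. As a rough illustration only, and not the kubelet's own code, the Go sketch below reads that Lease with client-go once the API server is up; the namespace and Lease name are taken from the log, the kubeconfig path is an assumption.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path assumed; adjust for the host's actual kubelet credentials.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Namespace and Lease name as they appear in the "Failed to ensure lease exists" entries.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").
            Get(context.Background(), "ci-4230.2.2-n-de5468c6d2", metav1.GetOptions{})
        if err != nil {
            // While the API server is still down this fails the same way the kubelet logs above do.
            fmt.Println("lease not available:", err)
            return
        }
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("lease held by:", *lease.Spec.HolderIdentity)
        }
    }
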
Sep 5 00:21:18.631199 kubelet[2710]: E0905 00:21:18.631181 2710 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://139.178.90.135:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230.2.2-n-de5468c6d2&limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:21:18.631376 systemd[1]: Started cri-containerd-cfa48ae0a6ed45541ed76c2213f3d635169d96d26275abd23e15211279fafcb2.scope - libcontainer container cfa48ae0a6ed45541ed76c2213f3d635169d96d26275abd23e15211279fafcb2. Sep 5 00:21:18.654387 containerd[1820]: time="2025-09-05T00:21:18.654366564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230.2.2-n-de5468c6d2,Uid:f233d75707dfe26cdab42e1e9d91c4d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5788ab0e0e5ebc9c49b97d70c4eb578445e72dac5697c038ba2ddd9c4fe0a4a3\"" Sep 5 00:21:18.656987 containerd[1820]: time="2025-09-05T00:21:18.656971826Z" level=info msg="CreateContainer within sandbox \"5788ab0e0e5ebc9c49b97d70c4eb578445e72dac5697c038ba2ddd9c4fe0a4a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:21:18.660947 containerd[1820]: time="2025-09-05T00:21:18.660926482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230.2.2-n-de5468c6d2,Uid:21c442b3900034d87a8adef8b17c3f3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a6c0a30cdbf33be1f6e22031e8ba19c7aec93914724cab7aba72bf040a9264c\"" Sep 5 00:21:18.661249 containerd[1820]: time="2025-09-05T00:21:18.661238443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230.2.2-n-de5468c6d2,Uid:a4b85ef0d75a17fc2ca0f1be4afc465c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfa48ae0a6ed45541ed76c2213f3d635169d96d26275abd23e15211279fafcb2\"" Sep 5 00:21:18.662387 containerd[1820]: time="2025-09-05T00:21:18.662369021Z" level=info msg="CreateContainer within sandbox \"5788ab0e0e5ebc9c49b97d70c4eb578445e72dac5697c038ba2ddd9c4fe0a4a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5af9df15706107cfbf6146becb533199ad33c600886e2c4a6c9f601c4b4b5ac6\"" Sep 5 00:21:18.662628 containerd[1820]: time="2025-09-05T00:21:18.662592308Z" level=info msg="StartContainer for \"5af9df15706107cfbf6146becb533199ad33c600886e2c4a6c9f601c4b4b5ac6\"" Sep 5 00:21:18.662669 containerd[1820]: time="2025-09-05T00:21:18.662653319Z" level=info msg="CreateContainer within sandbox \"9a6c0a30cdbf33be1f6e22031e8ba19c7aec93914724cab7aba72bf040a9264c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:21:18.663252 containerd[1820]: time="2025-09-05T00:21:18.663239643Z" level=info msg="CreateContainer within sandbox \"cfa48ae0a6ed45541ed76c2213f3d635169d96d26275abd23e15211279fafcb2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:21:18.667914 containerd[1820]: time="2025-09-05T00:21:18.667828895Z" level=info msg="CreateContainer within sandbox \"9a6c0a30cdbf33be1f6e22031e8ba19c7aec93914724cab7aba72bf040a9264c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8873f14b5d6c26d89df74f59baf216d2378affec4d20c47a36e9115795cef4ba\"" Sep 5 00:21:18.668107 containerd[1820]: time="2025-09-05T00:21:18.668092868Z" level=info msg="StartContainer for \"8873f14b5d6c26d89df74f59baf216d2378affec4d20c47a36e9115795cef4ba\"" Sep 5 00:21:18.669116 containerd[1820]: 
time="2025-09-05T00:21:18.669098394Z" level=info msg="CreateContainer within sandbox \"cfa48ae0a6ed45541ed76c2213f3d635169d96d26275abd23e15211279fafcb2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"468e84a0d67130541e48f6b54964201b1cc943f917914df65e47725a204c7506\"" Sep 5 00:21:18.669315 containerd[1820]: time="2025-09-05T00:21:18.669304665Z" level=info msg="StartContainer for \"468e84a0d67130541e48f6b54964201b1cc943f917914df65e47725a204c7506\"" Sep 5 00:21:18.687677 systemd[1]: Started cri-containerd-5af9df15706107cfbf6146becb533199ad33c600886e2c4a6c9f601c4b4b5ac6.scope - libcontainer container 5af9df15706107cfbf6146becb533199ad33c600886e2c4a6c9f601c4b4b5ac6. Sep 5 00:21:18.689852 systemd[1]: Started cri-containerd-468e84a0d67130541e48f6b54964201b1cc943f917914df65e47725a204c7506.scope - libcontainer container 468e84a0d67130541e48f6b54964201b1cc943f917914df65e47725a204c7506. Sep 5 00:21:18.690553 systemd[1]: Started cri-containerd-8873f14b5d6c26d89df74f59baf216d2378affec4d20c47a36e9115795cef4ba.scope - libcontainer container 8873f14b5d6c26d89df74f59baf216d2378affec4d20c47a36e9115795cef4ba. Sep 5 00:21:18.699373 kubelet[2710]: E0905 00:21:18.699353 2710 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://139.178.90.135:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.90.135:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:21:18.711016 containerd[1820]: time="2025-09-05T00:21:18.710994911Z" level=info msg="StartContainer for \"5af9df15706107cfbf6146becb533199ad33c600886e2c4a6c9f601c4b4b5ac6\" returns successfully" Sep 5 00:21:18.712715 containerd[1820]: time="2025-09-05T00:21:18.712669077Z" level=info msg="StartContainer for \"468e84a0d67130541e48f6b54964201b1cc943f917914df65e47725a204c7506\" returns successfully" Sep 5 00:21:18.714779 containerd[1820]: time="2025-09-05T00:21:18.714752162Z" level=info msg="StartContainer for \"8873f14b5d6c26d89df74f59baf216d2378affec4d20c47a36e9115795cef4ba\" returns successfully" Sep 5 00:21:19.219484 kubelet[2710]: I0905 00:21:19.219460 2710 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:19.318488 kubelet[2710]: E0905 00:21:19.318465 2710 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:19.415017 kubelet[2710]: I0905 00:21:19.414974 2710 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:19.415017 kubelet[2710]: E0905 00:21:19.415018 2710 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230.2.2-n-de5468c6d2\": node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:19.419506 kubelet[2710]: E0905 00:21:19.419492 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:19.519822 kubelet[2710]: E0905 00:21:19.519778 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:19.620388 kubelet[2710]: E0905 00:21:19.620347 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:19.678676 kubelet[2710]: E0905 00:21:19.678659 
2710 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:19.678983 kubelet[2710]: E0905 00:21:19.678973 2710 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:19.679483 kubelet[2710]: E0905 00:21:19.679475 2710 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:19.720523 kubelet[2710]: E0905 00:21:19.720428 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:19.821597 kubelet[2710]: E0905 00:21:19.821341 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:19.922487 kubelet[2710]: E0905 00:21:19.922369 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.023166 kubelet[2710]: E0905 00:21:20.023097 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.123950 kubelet[2710]: E0905 00:21:20.123752 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.224697 kubelet[2710]: E0905 00:21:20.224668 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.325847 kubelet[2710]: E0905 00:21:20.325804 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.426069 kubelet[2710]: E0905 00:21:20.425904 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.526854 kubelet[2710]: E0905 00:21:20.526804 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.628085 kubelet[2710]: E0905 00:21:20.627979 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.685919 kubelet[2710]: E0905 00:21:20.685427 2710 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:20.685919 kubelet[2710]: E0905 00:21:20.685680 2710 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:20.685919 kubelet[2710]: E0905 00:21:20.685836 2710 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:20.728287 kubelet[2710]: E0905 00:21:20.728202 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.828634 kubelet[2710]: E0905 00:21:20.828534 2710 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:20.929625 kubelet[2710]: E0905 00:21:20.929539 2710 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:21.062825 kubelet[2710]: I0905 00:21:21.062768 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:21.077311 kubelet[2710]: I0905 00:21:21.077256 2710 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:21.077562 kubelet[2710]: I0905 00:21:21.077511 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:21.082842 kubelet[2710]: I0905 00:21:21.082768 2710 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:21.083034 kubelet[2710]: I0905 00:21:21.082917 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:21.089217 kubelet[2710]: I0905 00:21:21.089158 2710 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:21.655988 kubelet[2710]: I0905 00:21:21.655841 2710 apiserver.go:52] "Watching apiserver" Sep 5 00:21:21.662285 kubelet[2710]: I0905 00:21:21.662194 2710 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:21:21.684429 kubelet[2710]: I0905 00:21:21.684349 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:21.684923 kubelet[2710]: I0905 00:21:21.684858 2710 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:21.693021 kubelet[2710]: I0905 00:21:21.692935 2710 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:21.693021 kubelet[2710]: I0905 00:21:21.692943 2710 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:21.693331 kubelet[2710]: E0905 00:21:21.693090 2710 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-de5468c6d2\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:21.693331 kubelet[2710]: E0905 00:21:21.693090 2710 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-de5468c6d2\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:21.758009 systemd[1]: Reload requested from client PID 3045 ('systemctl') (unit session-11.scope)... Sep 5 00:21:21.758017 systemd[1]: Reloading... Sep 5 00:21:21.805456 zram_generator::config[3091]: No configuration found. 
Sep 5 00:21:21.881525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:21:21.973429 systemd[1]: Reloading finished in 215 ms. Sep 5 00:21:21.994901 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:21:22.000636 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:21:22.000762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:21:22.000790 systemd[1]: kubelet.service: Consumed 1.216s CPU time, 141.6M memory peak. Sep 5 00:21:22.012427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:21:22.302453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:21:22.304729 (kubelet)[3155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:21:22.324174 kubelet[3155]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:21:22.324174 kubelet[3155]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:21:22.324174 kubelet[3155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:21:22.324501 kubelet[3155]: I0905 00:21:22.324203 3155 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:21:22.328104 kubelet[3155]: I0905 00:21:22.328065 3155 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:21:22.328104 kubelet[3155]: I0905 00:21:22.328078 3155 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:21:22.328201 kubelet[3155]: I0905 00:21:22.328195 3155 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:21:22.328939 kubelet[3155]: I0905 00:21:22.328903 3155 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 00:21:22.331410 kubelet[3155]: I0905 00:21:22.331374 3155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:21:22.332941 kubelet[3155]: E0905 00:21:22.332928 3155 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:21:22.332976 kubelet[3155]: I0905 00:21:22.332942 3155 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:21:22.340399 kubelet[3155]: I0905 00:21:22.340360 3155 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:21:22.340560 kubelet[3155]: I0905 00:21:22.340517 3155 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:21:22.340647 kubelet[3155]: I0905 00:21:22.340532 3155 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230.2.2-n-de5468c6d2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:21:22.340647 kubelet[3155]: I0905 00:21:22.340620 3155 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:21:22.340647 kubelet[3155]: I0905 00:21:22.340625 3155 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:21:22.340740 kubelet[3155]: I0905 00:21:22.340652 3155 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:21:22.340764 kubelet[3155]: I0905 00:21:22.340757 3155 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:21:22.340788 kubelet[3155]: I0905 00:21:22.340764 3155 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:21:22.340788 kubelet[3155]: I0905 00:21:22.340777 3155 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:21:22.340788 kubelet[3155]: I0905 00:21:22.340786 3155 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:21:22.341294 kubelet[3155]: I0905 00:21:22.341283 3155 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 5 00:21:22.341614 kubelet[3155]: I0905 00:21:22.341578 3155 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:21:22.342799 kubelet[3155]: I0905 00:21:22.342759 3155 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:21:22.342799 kubelet[3155]: I0905 00:21:22.342780 3155 server.go:1289] "Started kubelet" Sep 5 00:21:22.342896 kubelet[3155]: I0905 00:21:22.342857 3155 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:21:22.342896 kubelet[3155]: I0905 
00:21:22.342872 3155 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:21:22.343073 kubelet[3155]: I0905 00:21:22.343062 3155 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:21:22.344770 kubelet[3155]: I0905 00:21:22.344758 3155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:21:22.344892 kubelet[3155]: I0905 00:21:22.344879 3155 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:21:22.344988 kubelet[3155]: E0905 00:21:22.344974 3155 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230.2.2-n-de5468c6d2\" not found" Sep 5 00:21:22.345116 kubelet[3155]: I0905 00:21:22.345106 3155 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:21:22.345260 kubelet[3155]: I0905 00:21:22.345251 3155 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:21:22.345292 kubelet[3155]: I0905 00:21:22.345254 3155 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:21:22.345321 kubelet[3155]: E0905 00:21:22.345310 3155 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:21:22.345348 kubelet[3155]: I0905 00:21:22.345329 3155 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:21:22.345500 kubelet[3155]: I0905 00:21:22.345492 3155 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:21:22.345570 kubelet[3155]: I0905 00:21:22.345559 3155 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:21:22.346832 kubelet[3155]: I0905 00:21:22.346824 3155 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:21:22.350965 kubelet[3155]: I0905 00:21:22.350940 3155 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:21:22.351540 kubelet[3155]: I0905 00:21:22.351531 3155 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:21:22.351590 kubelet[3155]: I0905 00:21:22.351545 3155 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:21:22.351590 kubelet[3155]: I0905 00:21:22.351558 3155 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
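
The restarted kubelet above registers /etc/kubernetes/manifests as its static pod path; pods defined there are run directly by the kubelet, and the "Creating a mirror pod for static pod" / "already exists" entries elsewhere in this log are the kubelet publishing read-only mirror pods for them to the API server. A minimal client-go sketch, assuming the node name from the log and a kubeconfig path that may differ on this host, lists the kube-system pods bound to this node and shows which carry the well-known static pod annotations.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // path assumed
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{
            FieldSelector: "spec.nodeName=ci-4230.2.2-n-de5468c6d2", // node name from the log
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            src := p.Annotations["kubernetes.io/config.source"]     // "file" for static pods
            _, mirror := p.Annotations["kubernetes.io/config.mirror"] // set on the published mirror pod
            fmt.Printf("%-55s source=%-4s mirror=%v\n", p.Name, src, mirror)
        }
    }
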
Sep 5 00:21:22.351590 kubelet[3155]: I0905 00:21:22.351564 3155 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:21:22.351640 kubelet[3155]: E0905 00:21:22.351596 3155 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:21:22.360986 kubelet[3155]: I0905 00:21:22.360944 3155 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:21:22.360986 kubelet[3155]: I0905 00:21:22.360954 3155 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:21:22.360986 kubelet[3155]: I0905 00:21:22.360965 3155 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:21:22.361101 kubelet[3155]: I0905 00:21:22.361040 3155 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:21:22.361101 kubelet[3155]: I0905 00:21:22.361046 3155 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:21:22.361101 kubelet[3155]: I0905 00:21:22.361058 3155 policy_none.go:49] "None policy: Start" Sep 5 00:21:22.361101 kubelet[3155]: I0905 00:21:22.361063 3155 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:21:22.361101 kubelet[3155]: I0905 00:21:22.361069 3155 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:21:22.361174 kubelet[3155]: I0905 00:21:22.361121 3155 state_mem.go:75] "Updated machine memory state" Sep 5 00:21:22.363267 kubelet[3155]: E0905 00:21:22.363230 3155 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:21:22.363316 kubelet[3155]: I0905 00:21:22.363311 3155 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:21:22.363337 kubelet[3155]: I0905 00:21:22.363318 3155 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:21:22.363396 kubelet[3155]: I0905 00:21:22.363389 3155 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:21:22.363685 kubelet[3155]: E0905 00:21:22.363673 3155 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:21:22.453508 kubelet[3155]: I0905 00:21:22.453401 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.453508 kubelet[3155]: I0905 00:21:22.453506 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.453893 kubelet[3155]: I0905 00:21:22.453613 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.461713 kubelet[3155]: I0905 00:21:22.461662 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:22.461713 kubelet[3155]: I0905 00:21:22.461693 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:22.462012 kubelet[3155]: I0905 00:21:22.461747 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:22.462012 kubelet[3155]: E0905 00:21:22.461785 3155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-de5468c6d2\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.462012 kubelet[3155]: E0905 00:21:22.461802 3155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-de5468c6d2\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.462012 kubelet[3155]: E0905 00:21:22.461876 3155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.468528 kubelet[3155]: I0905 00:21:22.468478 3155 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.477226 kubelet[3155]: I0905 00:21:22.477177 3155 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.477484 kubelet[3155]: I0905 00:21:22.477325 3155 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.647005 kubelet[3155]: I0905 00:21:22.646744 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a4b85ef0d75a17fc2ca0f1be4afc465c-ca-certs\") pod \"kube-apiserver-ci-4230.2.2-n-de5468c6d2\" (UID: \"a4b85ef0d75a17fc2ca0f1be4afc465c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.647005 kubelet[3155]: I0905 00:21:22.646840 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a4b85ef0d75a17fc2ca0f1be4afc465c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230.2.2-n-de5468c6d2\" (UID: \"a4b85ef0d75a17fc2ca0f1be4afc465c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.647005 kubelet[3155]: I0905 00:21:22.646919 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-ca-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: \"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.647005 kubelet[3155]: I0905 00:21:22.646975 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-k8s-certs\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: \"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.647579 kubelet[3155]: I0905 00:21:22.647090 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-kubeconfig\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: \"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.647579 kubelet[3155]: I0905 00:21:22.647197 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f233d75707dfe26cdab42e1e9d91c4d1-kubeconfig\") pod \"kube-scheduler-ci-4230.2.2-n-de5468c6d2\" (UID: \"f233d75707dfe26cdab42e1e9d91c4d1\") " pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.647579 kubelet[3155]: I0905 00:21:22.647252 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a4b85ef0d75a17fc2ca0f1be4afc465c-k8s-certs\") pod \"kube-apiserver-ci-4230.2.2-n-de5468c6d2\" (UID: \"a4b85ef0d75a17fc2ca0f1be4afc465c\") " pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.647579 kubelet[3155]: I0905 00:21:22.647302 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-flexvolume-dir\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: \"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.647579 kubelet[3155]: I0905 00:21:22.647371 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21c442b3900034d87a8adef8b17c3f3f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" (UID: \"21c442b3900034d87a8adef8b17c3f3f\") " pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:22.783694 sudo[3206]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 5 00:21:22.784627 sudo[3206]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 5 00:21:23.165705 sudo[3206]: pam_unix(sudo:session): session closed for user root Sep 5 00:21:23.341472 kubelet[3155]: I0905 00:21:23.341426 3155 apiserver.go:52] "Watching apiserver" Sep 5 00:21:23.345904 kubelet[3155]: I0905 00:21:23.345865 3155 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:21:23.356001 kubelet[3155]: I0905 00:21:23.355967 3155 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:23.356001 kubelet[3155]: I0905 00:21:23.355975 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:23.356045 kubelet[3155]: I0905 00:21:23.356023 3155 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:23.376426 kubelet[3155]: I0905 00:21:23.376379 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:23.376426 kubelet[3155]: E0905 00:21:23.376411 3155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230.2.2-n-de5468c6d2\" already exists" pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:23.380460 kubelet[3155]: I0905 00:21:23.378093 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:23.380460 kubelet[3155]: E0905 00:21:23.378148 3155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230.2.2-n-de5468c6d2\" already exists" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:23.380460 kubelet[3155]: I0905 00:21:23.378205 3155 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Sep 5 00:21:23.380460 kubelet[3155]: E0905 00:21:23.378241 3155 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230.2.2-n-de5468c6d2\" already exists" pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" Sep 5 00:21:23.380460 kubelet[3155]: I0905 00:21:23.378433 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230.2.2-n-de5468c6d2" podStartSLOduration=2.37842343 podStartE2EDuration="2.37842343s" podCreationTimestamp="2025-09-05 00:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:21:23.376809566 +0000 UTC m=+1.069747095" watchObservedRunningTime="2025-09-05 00:21:23.37842343 +0000 UTC m=+1.071360952" Sep 5 00:21:23.386894 kubelet[3155]: I0905 00:21:23.386833 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230.2.2-n-de5468c6d2" podStartSLOduration=2.386825124 podStartE2EDuration="2.386825124s" podCreationTimestamp="2025-09-05 00:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:21:23.383624041 +0000 UTC m=+1.076561565" watchObservedRunningTime="2025-09-05 00:21:23.386825124 +0000 UTC m=+1.079762648" Sep 5 00:21:23.390935 kubelet[3155]: I0905 00:21:23.390850 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230.2.2-n-de5468c6d2" podStartSLOduration=2.390842105 podStartE2EDuration="2.390842105s" podCreationTimestamp="2025-09-05 00:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:21:23.386906465 +0000 UTC m=+1.079843991" watchObservedRunningTime="2025-09-05 
00:21:23.390842105 +0000 UTC m=+1.083779627" Sep 5 00:21:24.510870 sudo[2093]: pam_unix(sudo:session): session closed for user root Sep 5 00:21:24.511529 sshd[2092]: Connection closed by 147.75.109.163 port 43372 Sep 5 00:21:24.511668 sshd-session[2089]: pam_unix(sshd:session): session closed for user core Sep 5 00:21:24.513508 systemd[1]: sshd@8-139.178.90.135:22-147.75.109.163:43372.service: Deactivated successfully. Sep 5 00:21:24.514432 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:21:24.514535 systemd[1]: session-11.scope: Consumed 4.319s CPU time, 268.8M memory peak. Sep 5 00:21:24.515314 systemd-logind[1802]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:21:24.515968 systemd-logind[1802]: Removed session 11. Sep 5 00:21:27.604018 kubelet[3155]: I0905 00:21:27.603948 3155 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:21:27.605209 kubelet[3155]: I0905 00:21:27.605021 3155 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:21:27.605341 containerd[1820]: time="2025-09-05T00:21:27.604603978Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:21:28.322530 systemd[1]: Created slice kubepods-besteffort-podce86648f_8487_4545_9ad0_46c485a88a1a.slice - libcontainer container kubepods-besteffort-podce86648f_8487_4545_9ad0_46c485a88a1a.slice. Sep 5 00:21:28.334472 systemd[1]: Created slice kubepods-burstable-pod53955307_da67_4152_8b87_4ba980242bb4.slice - libcontainer container kubepods-burstable-pod53955307_da67_4152_8b87_4ba980242bb4.slice. Sep 5 00:21:28.382473 kubelet[3155]: I0905 00:21:28.382412 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-hostproc\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.382722 kubelet[3155]: I0905 00:21:28.382493 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cilium-cgroup\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.382722 kubelet[3155]: I0905 00:21:28.382549 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-hubble-tls\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.382722 kubelet[3155]: I0905 00:21:28.382596 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsz8\" (UniqueName: \"kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-kube-api-access-vxsz8\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.382722 kubelet[3155]: I0905 00:21:28.382645 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cilium-run\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.383164 
kubelet[3155]: I0905 00:21:28.382729 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-lib-modules\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.383164 kubelet[3155]: I0905 00:21:28.382799 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53955307-da67-4152-8b87-4ba980242bb4-clustermesh-secrets\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.383164 kubelet[3155]: I0905 00:21:28.382834 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53955307-da67-4152-8b87-4ba980242bb4-cilium-config-path\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.383164 kubelet[3155]: I0905 00:21:28.382872 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-host-proc-sys-net\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.383164 kubelet[3155]: I0905 00:21:28.382910 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cni-path\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.383164 kubelet[3155]: I0905 00:21:28.382942 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-xtables-lock\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.383887 kubelet[3155]: I0905 00:21:28.382974 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-host-proc-sys-kernel\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.383887 kubelet[3155]: I0905 00:21:28.383056 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce86648f-8487-4545-9ad0-46c485a88a1a-kube-proxy\") pod \"kube-proxy-jcdw4\" (UID: \"ce86648f-8487-4545-9ad0-46c485a88a1a\") " pod="kube-system/kube-proxy-jcdw4" Sep 5 00:21:28.383887 kubelet[3155]: I0905 00:21:28.383091 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce86648f-8487-4545-9ad0-46c485a88a1a-lib-modules\") pod \"kube-proxy-jcdw4\" (UID: \"ce86648f-8487-4545-9ad0-46c485a88a1a\") " pod="kube-system/kube-proxy-jcdw4" Sep 5 00:21:28.383887 kubelet[3155]: I0905 00:21:28.383123 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rblh\" (UniqueName: 
\"kubernetes.io/projected/ce86648f-8487-4545-9ad0-46c485a88a1a-kube-api-access-2rblh\") pod \"kube-proxy-jcdw4\" (UID: \"ce86648f-8487-4545-9ad0-46c485a88a1a\") " pod="kube-system/kube-proxy-jcdw4" Sep 5 00:21:28.383887 kubelet[3155]: I0905 00:21:28.383174 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-bpf-maps\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.384468 kubelet[3155]: I0905 00:21:28.383218 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-etc-cni-netd\") pod \"cilium-khx4l\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " pod="kube-system/cilium-khx4l" Sep 5 00:21:28.384468 kubelet[3155]: I0905 00:21:28.383826 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce86648f-8487-4545-9ad0-46c485a88a1a-xtables-lock\") pod \"kube-proxy-jcdw4\" (UID: \"ce86648f-8487-4545-9ad0-46c485a88a1a\") " pod="kube-system/kube-proxy-jcdw4" Sep 5 00:21:28.498691 kubelet[3155]: E0905 00:21:28.498617 3155 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 5 00:21:28.498691 kubelet[3155]: E0905 00:21:28.498682 3155 projected.go:194] Error preparing data for projected volume kube-api-access-vxsz8 for pod kube-system/cilium-khx4l: configmap "kube-root-ca.crt" not found Sep 5 00:21:28.498971 kubelet[3155]: E0905 00:21:28.498840 3155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-kube-api-access-vxsz8 podName:53955307-da67-4152-8b87-4ba980242bb4 nodeName:}" failed. No retries permitted until 2025-09-05 00:21:28.998780584 +0000 UTC m=+6.691718139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vxsz8" (UniqueName: "kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-kube-api-access-vxsz8") pod "cilium-khx4l" (UID: "53955307-da67-4152-8b87-4ba980242bb4") : configmap "kube-root-ca.crt" not found Sep 5 00:21:28.499425 kubelet[3155]: E0905 00:21:28.499388 3155 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 5 00:21:28.499708 kubelet[3155]: E0905 00:21:28.499433 3155 projected.go:194] Error preparing data for projected volume kube-api-access-2rblh for pod kube-system/kube-proxy-jcdw4: configmap "kube-root-ca.crt" not found Sep 5 00:21:28.499708 kubelet[3155]: E0905 00:21:28.499560 3155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce86648f-8487-4545-9ad0-46c485a88a1a-kube-api-access-2rblh podName:ce86648f-8487-4545-9ad0-46c485a88a1a nodeName:}" failed. No retries permitted until 2025-09-05 00:21:28.999524448 +0000 UTC m=+6.692462028 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2rblh" (UniqueName: "kubernetes.io/projected/ce86648f-8487-4545-9ad0-46c485a88a1a-kube-api-access-2rblh") pod "kube-proxy-jcdw4" (UID: "ce86648f-8487-4545-9ad0-46c485a88a1a") : configmap "kube-root-ca.crt" not found Sep 5 00:21:28.828597 systemd[1]: Created slice kubepods-besteffort-pod8a9111e4_e1d3_46d0_8ff9_31803e71658b.slice - libcontainer container kubepods-besteffort-pod8a9111e4_e1d3_46d0_8ff9_31803e71658b.slice. Sep 5 00:21:28.889738 kubelet[3155]: I0905 00:21:28.889658 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a9111e4-e1d3-46d0-8ff9-31803e71658b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jqxc8\" (UID: \"8a9111e4-e1d3-46d0-8ff9-31803e71658b\") " pod="kube-system/cilium-operator-6c4d7847fc-jqxc8" Sep 5 00:21:28.890616 kubelet[3155]: I0905 00:21:28.889835 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkcn\" (UniqueName: \"kubernetes.io/projected/8a9111e4-e1d3-46d0-8ff9-31803e71658b-kube-api-access-zfkcn\") pod \"cilium-operator-6c4d7847fc-jqxc8\" (UID: \"8a9111e4-e1d3-46d0-8ff9-31803e71658b\") " pod="kube-system/cilium-operator-6c4d7847fc-jqxc8" Sep 5 00:21:29.132280 containerd[1820]: time="2025-09-05T00:21:29.132185749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jqxc8,Uid:8a9111e4-e1d3-46d0-8ff9-31803e71658b,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:29.142903 containerd[1820]: time="2025-09-05T00:21:29.142827813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:29.142903 containerd[1820]: time="2025-09-05T00:21:29.142862479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:29.142903 containerd[1820]: time="2025-09-05T00:21:29.142870265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:29.143056 containerd[1820]: time="2025-09-05T00:21:29.142917854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:29.168682 systemd[1]: Started cri-containerd-593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36.scope - libcontainer container 593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36. 
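
The MountVolume.SetUp failures above are the kubelet waiting for the kube-root-ca.crt ConfigMap that backs projected service-account volumes; kube-controller-manager publishes it into each namespace once the control plane is healthy, and the kubelet retries the mount after the logged 500ms. A small client-go sketch, with the kubeconfig path assumed, checks whether that ConfigMap has been published yet:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // path assumed
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").
            Get(context.Background(), "kube-root-ca.crt", metav1.GetOptions{})
        if err != nil {
            // Matches the `configmap "kube-root-ca.crt" not found` errors in the entries above.
            fmt.Println("not published yet:", err)
            return
        }
        fmt.Println("ca.crt bytes:", len(cm.Data["ca.crt"]))
    }
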
Sep 5 00:21:29.223163 containerd[1820]: time="2025-09-05T00:21:29.223100294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jqxc8,Uid:8a9111e4-e1d3-46d0-8ff9-31803e71658b,Namespace:kube-system,Attempt:0,} returns sandbox id \"593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36\"" Sep 5 00:21:29.224499 containerd[1820]: time="2025-09-05T00:21:29.224478524Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 5 00:21:29.233889 containerd[1820]: time="2025-09-05T00:21:29.233824079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcdw4,Uid:ce86648f-8487-4545-9ad0-46c485a88a1a,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:29.236232 containerd[1820]: time="2025-09-05T00:21:29.236217682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khx4l,Uid:53955307-da67-4152-8b87-4ba980242bb4,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:29.245122 containerd[1820]: time="2025-09-05T00:21:29.245077398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:29.245122 containerd[1820]: time="2025-09-05T00:21:29.245110532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:29.245122 containerd[1820]: time="2025-09-05T00:21:29.245121833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:29.245244 containerd[1820]: time="2025-09-05T00:21:29.245169884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:29.245503 containerd[1820]: time="2025-09-05T00:21:29.245472259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:29.245503 containerd[1820]: time="2025-09-05T00:21:29.245497456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:29.245539 containerd[1820]: time="2025-09-05T00:21:29.245504481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:29.245558 containerd[1820]: time="2025-09-05T00:21:29.245540625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:29.268696 systemd[1]: Started cri-containerd-5393cd0b5b4ff373950a6f3ff5b210bd10d844ebd4f87fe7150d4bc6cdd8c881.scope - libcontainer container 5393cd0b5b4ff373950a6f3ff5b210bd10d844ebd4f87fe7150d4bc6cdd8c881. Sep 5 00:21:29.269460 systemd[1]: Started cri-containerd-c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0.scope - libcontainer container c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0. 
Sep 5 00:21:29.281921 containerd[1820]: time="2025-09-05T00:21:29.281892727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcdw4,Uid:ce86648f-8487-4545-9ad0-46c485a88a1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5393cd0b5b4ff373950a6f3ff5b210bd10d844ebd4f87fe7150d4bc6cdd8c881\"" Sep 5 00:21:29.282032 containerd[1820]: time="2025-09-05T00:21:29.282010100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khx4l,Uid:53955307-da67-4152-8b87-4ba980242bb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\"" Sep 5 00:21:29.284006 containerd[1820]: time="2025-09-05T00:21:29.283988545Z" level=info msg="CreateContainer within sandbox \"5393cd0b5b4ff373950a6f3ff5b210bd10d844ebd4f87fe7150d4bc6cdd8c881\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:21:29.289633 containerd[1820]: time="2025-09-05T00:21:29.289617816Z" level=info msg="CreateContainer within sandbox \"5393cd0b5b4ff373950a6f3ff5b210bd10d844ebd4f87fe7150d4bc6cdd8c881\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"89a6051f0eeda40b2ed98c62d32a35288c9b1b0c2a418536ba2af09adf841e12\"" Sep 5 00:21:29.289916 containerd[1820]: time="2025-09-05T00:21:29.289871924Z" level=info msg="StartContainer for \"89a6051f0eeda40b2ed98c62d32a35288c9b1b0c2a418536ba2af09adf841e12\"" Sep 5 00:21:29.318002 systemd[1]: Started cri-containerd-89a6051f0eeda40b2ed98c62d32a35288c9b1b0c2a418536ba2af09adf841e12.scope - libcontainer container 89a6051f0eeda40b2ed98c62d32a35288c9b1b0c2a418536ba2af09adf841e12. Sep 5 00:21:29.370984 containerd[1820]: time="2025-09-05T00:21:29.370944924Z" level=info msg="StartContainer for \"89a6051f0eeda40b2ed98c62d32a35288c9b1b0c2a418536ba2af09adf841e12\" returns successfully" Sep 5 00:21:30.672614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount590121276.mount: Deactivated successfully. 
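
The containerd entries above follow the CRI sequence the kubelet drives for every pod: RunPodSandbox returns a sandbox ID, CreateContainer places a container inside that sandbox, and StartContainer launches it. The sketch below is not part of this system; assuming containerd's default socket path, it lists the resulting sandboxes and containers over the same CRI gRPC API that the kubelet uses.

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Socket path assumed: the kubelet's --container-runtime-endpoint on this host.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        sandboxes, err := rt.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            panic(err)
        }
        for _, s := range sandboxes.Items {
            fmt.Println("sandbox  ", s.Id[:12], s.Metadata.Namespace+"/"+s.Metadata.Name)
        }

        containers, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range containers.Containers {
            fmt.Println("container", c.Id[:12], c.Metadata.Name, c.State)
        }
    }
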
Sep 5 00:21:30.908631 containerd[1820]: time="2025-09-05T00:21:30.908581432Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:30.908841 containerd[1820]: time="2025-09-05T00:21:30.908803430Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 5 00:21:30.909141 containerd[1820]: time="2025-09-05T00:21:30.909106530Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:30.910256 containerd[1820]: time="2025-09-05T00:21:30.910244287Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.685739791s" Sep 5 00:21:30.910283 containerd[1820]: time="2025-09-05T00:21:30.910260589Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 5 00:21:30.910791 containerd[1820]: time="2025-09-05T00:21:30.910780127Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 5 00:21:30.911955 containerd[1820]: time="2025-09-05T00:21:30.911942424Z" level=info msg="CreateContainer within sandbox \"593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 5 00:21:30.916745 containerd[1820]: time="2025-09-05T00:21:30.916698348Z" level=info msg="CreateContainer within sandbox \"593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\"" Sep 5 00:21:30.916981 containerd[1820]: time="2025-09-05T00:21:30.916967791Z" level=info msg="StartContainer for \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\"" Sep 5 00:21:30.940729 systemd[1]: Started cri-containerd-5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd.scope - libcontainer container 5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd. 
Sep 5 00:21:30.961538 containerd[1820]: time="2025-09-05T00:21:30.961486958Z" level=info msg="StartContainer for \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\" returns successfully" Sep 5 00:21:31.399079 kubelet[3155]: I0905 00:21:31.398896 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jcdw4" podStartSLOduration=3.398845944 podStartE2EDuration="3.398845944s" podCreationTimestamp="2025-09-05 00:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:21:30.394338126 +0000 UTC m=+8.087275725" watchObservedRunningTime="2025-09-05 00:21:31.398845944 +0000 UTC m=+9.091783554" Sep 5 00:21:31.400077 kubelet[3155]: I0905 00:21:31.399510 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jqxc8" podStartSLOduration=1.713014581 podStartE2EDuration="3.399485969s" podCreationTimestamp="2025-09-05 00:21:28 +0000 UTC" firstStartedPulling="2025-09-05 00:21:29.22421364 +0000 UTC m=+6.917151172" lastFinishedPulling="2025-09-05 00:21:30.910685037 +0000 UTC m=+8.603622560" observedRunningTime="2025-09-05 00:21:31.399135814 +0000 UTC m=+9.092073428" watchObservedRunningTime="2025-09-05 00:21:31.399485969 +0000 UTC m=+9.092423587" Sep 5 00:21:33.091497 update_engine[1807]: I20250905 00:21:33.091462 1807 update_attempter.cc:509] Updating boot flags... Sep 5 00:21:33.120455 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3675) Sep 5 00:21:33.148462 kernel: BTRFS warning: duplicate device /dev/sdb3 devid 1 generation 38 scanned by (udev-worker) (3678) Sep 5 00:21:33.832756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3363504224.mount: Deactivated successfully. 
Sep 5 00:21:34.642895 containerd[1820]: time="2025-09-05T00:21:34.642858524Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:34.643080 containerd[1820]: time="2025-09-05T00:21:34.643037067Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 5 00:21:34.643418 containerd[1820]: time="2025-09-05T00:21:34.643379057Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:21:34.644334 containerd[1820]: time="2025-09-05T00:21:34.644293050Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 3.733497762s" Sep 5 00:21:34.644334 containerd[1820]: time="2025-09-05T00:21:34.644307707Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 5 00:21:34.645705 containerd[1820]: time="2025-09-05T00:21:34.645693961Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 00:21:34.649695 containerd[1820]: time="2025-09-05T00:21:34.649653299Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\"" Sep 5 00:21:34.649879 containerd[1820]: time="2025-09-05T00:21:34.649865607Z" level=info msg="StartContainer for \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\"" Sep 5 00:21:34.684874 systemd[1]: Started cri-containerd-26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47.scope - libcontainer container 26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47. Sep 5 00:21:34.732473 containerd[1820]: time="2025-09-05T00:21:34.732408532Z" level=info msg="StartContainer for \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\" returns successfully" Sep 5 00:21:34.742094 systemd[1]: cri-containerd-26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47.scope: Deactivated successfully. Sep 5 00:21:35.654668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47-rootfs.mount: Deactivated successfully. 
Sep 5 00:21:36.053351 containerd[1820]: time="2025-09-05T00:21:36.053278483Z" level=info msg="shim disconnected" id=26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47 namespace=k8s.io Sep 5 00:21:36.053351 containerd[1820]: time="2025-09-05T00:21:36.053315857Z" level=warning msg="cleaning up after shim disconnected" id=26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47 namespace=k8s.io Sep 5 00:21:36.053351 containerd[1820]: time="2025-09-05T00:21:36.053323681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:21:36.060829 containerd[1820]: time="2025-09-05T00:21:36.060804478Z" level=warning msg="cleanup warnings time=\"2025-09-05T00:21:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 5 00:21:36.404953 containerd[1820]: time="2025-09-05T00:21:36.404849964Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 00:21:36.410200 containerd[1820]: time="2025-09-05T00:21:36.410178432Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\"" Sep 5 00:21:36.410438 containerd[1820]: time="2025-09-05T00:21:36.410426273Z" level=info msg="StartContainer for \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\"" Sep 5 00:21:36.439599 systemd[1]: Started cri-containerd-1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892.scope - libcontainer container 1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892. Sep 5 00:21:36.454523 containerd[1820]: time="2025-09-05T00:21:36.454496644Z" level=info msg="StartContainer for \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\" returns successfully" Sep 5 00:21:36.463181 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:21:36.463503 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:21:36.463625 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:21:36.486441 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:21:36.487213 systemd[1]: cri-containerd-1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892.scope: Deactivated successfully. Sep 5 00:21:36.510861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:21:36.512490 containerd[1820]: time="2025-09-05T00:21:36.512464951Z" level=info msg="shim disconnected" id=1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892 namespace=k8s.io Sep 5 00:21:36.512540 containerd[1820]: time="2025-09-05T00:21:36.512491003Z" level=warning msg="cleaning up after shim disconnected" id=1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892 namespace=k8s.io Sep 5 00:21:36.512540 containerd[1820]: time="2025-09-05T00:21:36.512496081Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:21:36.650276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892-rootfs.mount: Deactivated successfully. 
Sep 5 00:21:37.408170 containerd[1820]: time="2025-09-05T00:21:37.408147526Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 00:21:37.415631 containerd[1820]: time="2025-09-05T00:21:37.415566379Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\"" Sep 5 00:21:37.416003 containerd[1820]: time="2025-09-05T00:21:37.415963435Z" level=info msg="StartContainer for \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\"" Sep 5 00:21:37.436844 systemd[1]: Started cri-containerd-8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c.scope - libcontainer container 8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c. Sep 5 00:21:37.463372 containerd[1820]: time="2025-09-05T00:21:37.463350998Z" level=info msg="StartContainer for \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\" returns successfully" Sep 5 00:21:37.464173 systemd[1]: cri-containerd-8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c.scope: Deactivated successfully. Sep 5 00:21:37.474765 containerd[1820]: time="2025-09-05T00:21:37.474732330Z" level=info msg="shim disconnected" id=8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c namespace=k8s.io Sep 5 00:21:37.474765 containerd[1820]: time="2025-09-05T00:21:37.474761481Z" level=warning msg="cleaning up after shim disconnected" id=8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c namespace=k8s.io Sep 5 00:21:37.474765 containerd[1820]: time="2025-09-05T00:21:37.474768294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:21:37.654223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c-rootfs.mount: Deactivated successfully. Sep 5 00:21:38.409547 containerd[1820]: time="2025-09-05T00:21:38.409522100Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 00:21:38.413889 containerd[1820]: time="2025-09-05T00:21:38.413864965Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\"" Sep 5 00:21:38.414167 containerd[1820]: time="2025-09-05T00:21:38.414154539Z" level=info msg="StartContainer for \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\"" Sep 5 00:21:38.433739 systemd[1]: Started cri-containerd-7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199.scope - libcontainer container 7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199. Sep 5 00:21:38.446863 systemd[1]: cri-containerd-7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199.scope: Deactivated successfully. 
Sep 5 00:21:38.447201 containerd[1820]: time="2025-09-05T00:21:38.447176062Z" level=info msg="StartContainer for \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\" returns successfully" Sep 5 00:21:38.473895 containerd[1820]: time="2025-09-05T00:21:38.473860990Z" level=info msg="shim disconnected" id=7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199 namespace=k8s.io Sep 5 00:21:38.473895 containerd[1820]: time="2025-09-05T00:21:38.473892714Z" level=warning msg="cleaning up after shim disconnected" id=7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199 namespace=k8s.io Sep 5 00:21:38.474027 containerd[1820]: time="2025-09-05T00:21:38.473900658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:21:38.651457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199-rootfs.mount: Deactivated successfully. Sep 5 00:21:39.411561 containerd[1820]: time="2025-09-05T00:21:39.411513922Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 00:21:39.417166 containerd[1820]: time="2025-09-05T00:21:39.417087430Z" level=info msg="CreateContainer within sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\"" Sep 5 00:21:39.417543 containerd[1820]: time="2025-09-05T00:21:39.417529137Z" level=info msg="StartContainer for \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\"" Sep 5 00:21:39.444007 systemd[1]: Started cri-containerd-605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110.scope - libcontainer container 605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110. Sep 5 00:21:39.491661 containerd[1820]: time="2025-09-05T00:21:39.491602074Z" level=info msg="StartContainer for \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\" returns successfully" Sep 5 00:21:39.553783 kubelet[3155]: I0905 00:21:39.553754 3155 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:21:39.574196 systemd[1]: Created slice kubepods-burstable-pod397293ee_d011_43d6_b0bc_5bda3a20fb57.slice - libcontainer container kubepods-burstable-pod397293ee_d011_43d6_b0bc_5bda3a20fb57.slice. Sep 5 00:21:39.577477 systemd[1]: Created slice kubepods-burstable-pod4d2c3fba_8ff0_4697_b6a8_739e6feb1676.slice - libcontainer container kubepods-burstable-pod4d2c3fba_8ff0_4697_b6a8_739e6feb1676.slice. 
Sep 5 00:21:39.661470 kubelet[3155]: I0905 00:21:39.661421 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d2c3fba-8ff0-4697-b6a8-739e6feb1676-config-volume\") pod \"coredns-674b8bbfcf-pm4w7\" (UID: \"4d2c3fba-8ff0-4697-b6a8-739e6feb1676\") " pod="kube-system/coredns-674b8bbfcf-pm4w7" Sep 5 00:21:39.661470 kubelet[3155]: I0905 00:21:39.661460 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b62gd\" (UniqueName: \"kubernetes.io/projected/397293ee-d011-43d6-b0bc-5bda3a20fb57-kube-api-access-b62gd\") pod \"coredns-674b8bbfcf-lct45\" (UID: \"397293ee-d011-43d6-b0bc-5bda3a20fb57\") " pod="kube-system/coredns-674b8bbfcf-lct45" Sep 5 00:21:39.661584 kubelet[3155]: I0905 00:21:39.661477 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78sb\" (UniqueName: \"kubernetes.io/projected/4d2c3fba-8ff0-4697-b6a8-739e6feb1676-kube-api-access-f78sb\") pod \"coredns-674b8bbfcf-pm4w7\" (UID: \"4d2c3fba-8ff0-4697-b6a8-739e6feb1676\") " pod="kube-system/coredns-674b8bbfcf-pm4w7" Sep 5 00:21:39.661584 kubelet[3155]: I0905 00:21:39.661487 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/397293ee-d011-43d6-b0bc-5bda3a20fb57-config-volume\") pod \"coredns-674b8bbfcf-lct45\" (UID: \"397293ee-d011-43d6-b0bc-5bda3a20fb57\") " pod="kube-system/coredns-674b8bbfcf-lct45" Sep 5 00:21:39.877950 containerd[1820]: time="2025-09-05T00:21:39.877868514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lct45,Uid:397293ee-d011-43d6-b0bc-5bda3a20fb57,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:39.879309 containerd[1820]: time="2025-09-05T00:21:39.879296479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pm4w7,Uid:4d2c3fba-8ff0-4697-b6a8-739e6feb1676,Namespace:kube-system,Attempt:0,}" Sep 5 00:21:40.422161 kubelet[3155]: I0905 00:21:40.422129 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-khx4l" podStartSLOduration=7.059962383 podStartE2EDuration="12.422110512s" podCreationTimestamp="2025-09-05 00:21:28 +0000 UTC" firstStartedPulling="2025-09-05 00:21:29.282543622 +0000 UTC m=+6.975481147" lastFinishedPulling="2025-09-05 00:21:34.644691752 +0000 UTC m=+12.337629276" observedRunningTime="2025-09-05 00:21:40.42179298 +0000 UTC m=+18.114730506" watchObservedRunningTime="2025-09-05 00:21:40.422110512 +0000 UTC m=+18.115048034" Sep 5 00:21:41.308721 systemd-networkd[1727]: cilium_host: Link UP Sep 5 00:21:41.308864 systemd-networkd[1727]: cilium_net: Link UP Sep 5 00:21:41.309017 systemd-networkd[1727]: cilium_net: Gained carrier Sep 5 00:21:41.309164 systemd-networkd[1727]: cilium_host: Gained carrier Sep 5 00:21:41.357003 systemd-networkd[1727]: cilium_vxlan: Link UP Sep 5 00:21:41.357008 systemd-networkd[1727]: cilium_vxlan: Gained carrier Sep 5 00:21:41.493455 kernel: NET: Registered PF_ALG protocol family Sep 5 00:21:41.806537 systemd-networkd[1727]: cilium_host: Gained IPv6LL Sep 5 00:21:41.933131 systemd-networkd[1727]: lxc_health: Link UP Sep 5 00:21:41.945036 systemd-networkd[1727]: lxc_health: Gained carrier Sep 5 00:21:42.310664 systemd-networkd[1727]: cilium_net: Gained IPv6LL Sep 5 00:21:42.435518 kernel: eth0: renamed from tmp5dbc3 Sep 5 00:21:42.453456 kernel: 
eth0: renamed from tmp6a0ca Sep 5 00:21:42.459744 systemd-networkd[1727]: lxccfe6d3c5dffb: Link UP Sep 5 00:21:42.460064 systemd-networkd[1727]: lxc4d42aad61c03: Link UP Sep 5 00:21:42.460328 systemd-networkd[1727]: lxccfe6d3c5dffb: Gained carrier Sep 5 00:21:42.460563 systemd-networkd[1727]: lxc4d42aad61c03: Gained carrier Sep 5 00:21:42.694590 systemd-networkd[1727]: cilium_vxlan: Gained IPv6LL Sep 5 00:21:43.718736 systemd-networkd[1727]: lxccfe6d3c5dffb: Gained IPv6LL Sep 5 00:21:43.846657 systemd-networkd[1727]: lxc_health: Gained IPv6LL Sep 5 00:21:44.358610 systemd-networkd[1727]: lxc4d42aad61c03: Gained IPv6LL Sep 5 00:21:44.781322 containerd[1820]: time="2025-09-05T00:21:44.781109978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:44.781322 containerd[1820]: time="2025-09-05T00:21:44.781312741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:44.781322 containerd[1820]: time="2025-09-05T00:21:44.781320431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:44.781601 containerd[1820]: time="2025-09-05T00:21:44.781371107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:44.782027 containerd[1820]: time="2025-09-05T00:21:44.781967482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:21:44.782027 containerd[1820]: time="2025-09-05T00:21:44.781994238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:21:44.782027 containerd[1820]: time="2025-09-05T00:21:44.782001350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:44.782099 containerd[1820]: time="2025-09-05T00:21:44.782038087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:21:44.805719 systemd[1]: Started cri-containerd-5dbc39cb3d0ac2360825fa30eaca52197b767512a54a947b169cfdeebb87a534.scope - libcontainer container 5dbc39cb3d0ac2360825fa30eaca52197b767512a54a947b169cfdeebb87a534. Sep 5 00:21:44.806475 systemd[1]: Started cri-containerd-6a0ca38174c4b4127dcd72f017ebf1e59754f7c6e17f47db814bd3f6a1d9d933.scope - libcontainer container 6a0ca38174c4b4127dcd72f017ebf1e59754f7c6e17f47db814bd3f6a1d9d933. 
Sep 5 00:21:44.827850 containerd[1820]: time="2025-09-05T00:21:44.827826397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lct45,Uid:397293ee-d011-43d6-b0bc-5bda3a20fb57,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dbc39cb3d0ac2360825fa30eaca52197b767512a54a947b169cfdeebb87a534\"" Sep 5 00:21:44.827945 containerd[1820]: time="2025-09-05T00:21:44.827883546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pm4w7,Uid:4d2c3fba-8ff0-4697-b6a8-739e6feb1676,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a0ca38174c4b4127dcd72f017ebf1e59754f7c6e17f47db814bd3f6a1d9d933\"" Sep 5 00:21:44.829693 containerd[1820]: time="2025-09-05T00:21:44.829678932Z" level=info msg="CreateContainer within sandbox \"6a0ca38174c4b4127dcd72f017ebf1e59754f7c6e17f47db814bd3f6a1d9d933\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:21:44.830082 containerd[1820]: time="2025-09-05T00:21:44.830068037Z" level=info msg="CreateContainer within sandbox \"5dbc39cb3d0ac2360825fa30eaca52197b767512a54a947b169cfdeebb87a534\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:21:44.834297 containerd[1820]: time="2025-09-05T00:21:44.834280453Z" level=info msg="CreateContainer within sandbox \"5dbc39cb3d0ac2360825fa30eaca52197b767512a54a947b169cfdeebb87a534\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d585bb2f9943bd0e662a2ff1aa853365952a3ee0329a80d32ffced1f8c960b02\"" Sep 5 00:21:44.834546 containerd[1820]: time="2025-09-05T00:21:44.834534972Z" level=info msg="StartContainer for \"d585bb2f9943bd0e662a2ff1aa853365952a3ee0329a80d32ffced1f8c960b02\"" Sep 5 00:21:44.834638 containerd[1820]: time="2025-09-05T00:21:44.834625286Z" level=info msg="CreateContainer within sandbox \"6a0ca38174c4b4127dcd72f017ebf1e59754f7c6e17f47db814bd3f6a1d9d933\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d80a028bbc36cc3b237c7c0cbe8b2bb1e4d08541cf2469b90168158548214cd\"" Sep 5 00:21:44.834783 containerd[1820]: time="2025-09-05T00:21:44.834772997Z" level=info msg="StartContainer for \"0d80a028bbc36cc3b237c7c0cbe8b2bb1e4d08541cf2469b90168158548214cd\"" Sep 5 00:21:44.855747 systemd[1]: Started cri-containerd-0d80a028bbc36cc3b237c7c0cbe8b2bb1e4d08541cf2469b90168158548214cd.scope - libcontainer container 0d80a028bbc36cc3b237c7c0cbe8b2bb1e4d08541cf2469b90168158548214cd. Sep 5 00:21:44.856359 systemd[1]: Started cri-containerd-d585bb2f9943bd0e662a2ff1aa853365952a3ee0329a80d32ffced1f8c960b02.scope - libcontainer container d585bb2f9943bd0e662a2ff1aa853365952a3ee0329a80d32ffced1f8c960b02. 
Sep 5 00:21:44.868138 containerd[1820]: time="2025-09-05T00:21:44.868115309Z" level=info msg="StartContainer for \"0d80a028bbc36cc3b237c7c0cbe8b2bb1e4d08541cf2469b90168158548214cd\" returns successfully" Sep 5 00:21:44.869056 containerd[1820]: time="2025-09-05T00:21:44.869040154Z" level=info msg="StartContainer for \"d585bb2f9943bd0e662a2ff1aa853365952a3ee0329a80d32ffced1f8c960b02\" returns successfully" Sep 5 00:21:45.437483 kubelet[3155]: I0905 00:21:45.437441 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pm4w7" podStartSLOduration=17.437429017 podStartE2EDuration="17.437429017s" podCreationTimestamp="2025-09-05 00:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:21:45.437076316 +0000 UTC m=+23.130013839" watchObservedRunningTime="2025-09-05 00:21:45.437429017 +0000 UTC m=+23.130366541" Sep 5 00:21:45.442039 kubelet[3155]: I0905 00:21:45.441996 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lct45" podStartSLOduration=17.441981675 podStartE2EDuration="17.441981675s" podCreationTimestamp="2025-09-05 00:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:21:45.441932077 +0000 UTC m=+23.134869608" watchObservedRunningTime="2025-09-05 00:21:45.441981675 +0000 UTC m=+23.134919200" Sep 5 00:21:51.306643 kubelet[3155]: I0905 00:21:51.306538 3155 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:27:30.485461 systemd[1]: Started sshd@9-139.178.90.135:22-147.75.109.163:36734.service - OpenSSH per-connection server daemon (147.75.109.163:36734). Sep 5 00:27:30.517635 sshd[4793]: Accepted publickey for core from 147.75.109.163 port 36734 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:30.518397 sshd-session[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:30.521974 systemd-logind[1802]: New session 12 of user core. Sep 5 00:27:30.533696 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:27:30.632465 sshd[4795]: Connection closed by 147.75.109.163 port 36734 Sep 5 00:27:30.632632 sshd-session[4793]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:30.634238 systemd[1]: sshd@9-139.178.90.135:22-147.75.109.163:36734.service: Deactivated successfully. Sep 5 00:27:30.635165 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:27:30.635866 systemd-logind[1802]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:27:30.636310 systemd-logind[1802]: Removed session 12. Sep 5 00:27:35.650918 systemd[1]: Started sshd@10-139.178.90.135:22-147.75.109.163:36738.service - OpenSSH per-connection server daemon (147.75.109.163:36738). Sep 5 00:27:35.683025 sshd[4825]: Accepted publickey for core from 147.75.109.163 port 36738 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:35.683758 sshd-session[4825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:35.687008 systemd-logind[1802]: New session 13 of user core. Sep 5 00:27:35.713025 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 5 00:27:35.811403 sshd[4827]: Connection closed by 147.75.109.163 port 36738 Sep 5 00:27:35.811622 sshd-session[4825]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:35.813441 systemd[1]: sshd@10-139.178.90.135:22-147.75.109.163:36738.service: Deactivated successfully. Sep 5 00:27:35.814511 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:27:35.815359 systemd-logind[1802]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:27:35.816158 systemd-logind[1802]: Removed session 13. Sep 5 00:27:40.828399 systemd[1]: Started sshd@11-139.178.90.135:22-147.75.109.163:44772.service - OpenSSH per-connection server daemon (147.75.109.163:44772). Sep 5 00:27:40.861282 sshd[4855]: Accepted publickey for core from 147.75.109.163 port 44772 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:40.864936 sshd-session[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:40.877500 systemd-logind[1802]: New session 14 of user core. Sep 5 00:27:40.894951 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:27:40.988465 sshd[4857]: Connection closed by 147.75.109.163 port 44772 Sep 5 00:27:40.988705 sshd-session[4855]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:40.990343 systemd[1]: sshd@11-139.178.90.135:22-147.75.109.163:44772.service: Deactivated successfully. Sep 5 00:27:40.991292 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:27:40.992051 systemd-logind[1802]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:27:40.992613 systemd-logind[1802]: Removed session 14. Sep 5 00:27:46.016593 systemd[1]: Started sshd@12-139.178.90.135:22-147.75.109.163:44780.service - OpenSSH per-connection server daemon (147.75.109.163:44780). Sep 5 00:27:46.049531 sshd[4883]: Accepted publickey for core from 147.75.109.163 port 44780 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:46.052880 sshd-session[4883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:46.065254 systemd-logind[1802]: New session 15 of user core. Sep 5 00:27:46.079923 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:27:46.176817 sshd[4885]: Connection closed by 147.75.109.163 port 44780 Sep 5 00:27:46.177085 sshd-session[4883]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:46.205104 systemd[1]: sshd@12-139.178.90.135:22-147.75.109.163:44780.service: Deactivated successfully. Sep 5 00:27:46.206631 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:27:46.208003 systemd-logind[1802]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:27:46.209264 systemd[1]: Started sshd@13-139.178.90.135:22-147.75.109.163:44794.service - OpenSSH per-connection server daemon (147.75.109.163:44794). Sep 5 00:27:46.210377 systemd-logind[1802]: Removed session 15. Sep 5 00:27:46.264047 sshd[4910]: Accepted publickey for core from 147.75.109.163 port 44794 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:46.265716 sshd-session[4910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:46.272038 systemd-logind[1802]: New session 16 of user core. Sep 5 00:27:46.294982 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 5 00:27:46.405561 sshd[4914]: Connection closed by 147.75.109.163 port 44794 Sep 5 00:27:46.405745 sshd-session[4910]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:46.421990 systemd[1]: sshd@13-139.178.90.135:22-147.75.109.163:44794.service: Deactivated successfully. Sep 5 00:27:46.422941 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:27:46.423762 systemd-logind[1802]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:27:46.424522 systemd[1]: Started sshd@14-139.178.90.135:22-147.75.109.163:44802.service - OpenSSH per-connection server daemon (147.75.109.163:44802). Sep 5 00:27:46.425021 systemd-logind[1802]: Removed session 16. Sep 5 00:27:46.459791 sshd[4936]: Accepted publickey for core from 147.75.109.163 port 44802 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:46.460725 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:46.463992 systemd-logind[1802]: New session 17 of user core. Sep 5 00:27:46.483593 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:27:46.575782 sshd[4940]: Connection closed by 147.75.109.163 port 44802 Sep 5 00:27:46.575932 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:46.577869 systemd[1]: sshd@14-139.178.90.135:22-147.75.109.163:44802.service: Deactivated successfully. Sep 5 00:27:46.578807 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:27:46.579238 systemd-logind[1802]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:27:46.579724 systemd-logind[1802]: Removed session 17. Sep 5 00:27:51.606785 systemd[1]: Started sshd@15-139.178.90.135:22-147.75.109.163:33844.service - OpenSSH per-connection server daemon (147.75.109.163:33844). Sep 5 00:27:51.636944 sshd[4962]: Accepted publickey for core from 147.75.109.163 port 33844 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:51.637698 sshd-session[4962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:51.640801 systemd-logind[1802]: New session 18 of user core. Sep 5 00:27:51.654737 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:27:51.743119 sshd[4964]: Connection closed by 147.75.109.163 port 33844 Sep 5 00:27:51.743333 sshd-session[4962]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:51.771004 systemd[1]: sshd@15-139.178.90.135:22-147.75.109.163:33844.service: Deactivated successfully. Sep 5 00:27:51.771793 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:27:51.772453 systemd-logind[1802]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:27:51.773100 systemd[1]: Started sshd@16-139.178.90.135:22-147.75.109.163:33856.service - OpenSSH per-connection server daemon (147.75.109.163:33856). Sep 5 00:27:51.773478 systemd-logind[1802]: Removed session 18. Sep 5 00:27:51.806523 sshd[4988]: Accepted publickey for core from 147.75.109.163 port 33856 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:51.809853 sshd-session[4988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:51.822628 systemd-logind[1802]: New session 19 of user core. Sep 5 00:27:51.833056 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 5 00:27:52.000930 sshd[4992]: Connection closed by 147.75.109.163 port 33856 Sep 5 00:27:52.001102 sshd-session[4988]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:52.019249 systemd[1]: sshd@16-139.178.90.135:22-147.75.109.163:33856.service: Deactivated successfully. Sep 5 00:27:52.020750 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:27:52.022093 systemd-logind[1802]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:27:52.023474 systemd[1]: Started sshd@17-139.178.90.135:22-147.75.109.163:33872.service - OpenSSH per-connection server daemon (147.75.109.163:33872). Sep 5 00:27:52.024547 systemd-logind[1802]: Removed session 19. Sep 5 00:27:52.073072 sshd[5012]: Accepted publickey for core from 147.75.109.163 port 33872 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:52.074471 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:52.079940 systemd-logind[1802]: New session 20 of user core. Sep 5 00:27:52.094865 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:27:52.729958 sshd[5016]: Connection closed by 147.75.109.163 port 33872 Sep 5 00:27:52.730392 sshd-session[5012]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:52.742219 systemd[1]: sshd@17-139.178.90.135:22-147.75.109.163:33872.service: Deactivated successfully. Sep 5 00:27:52.743455 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:27:52.744301 systemd-logind[1802]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:27:52.745211 systemd[1]: Started sshd@18-139.178.90.135:22-147.75.109.163:33880.service - OpenSSH per-connection server daemon (147.75.109.163:33880). Sep 5 00:27:52.745802 systemd-logind[1802]: Removed session 20. Sep 5 00:27:52.782739 sshd[5045]: Accepted publickey for core from 147.75.109.163 port 33880 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:52.786297 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:52.799420 systemd-logind[1802]: New session 21 of user core. Sep 5 00:27:52.817929 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:27:53.023655 sshd[5050]: Connection closed by 147.75.109.163 port 33880 Sep 5 00:27:53.023843 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:53.036727 systemd[1]: sshd@18-139.178.90.135:22-147.75.109.163:33880.service: Deactivated successfully. Sep 5 00:27:53.037614 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:27:53.038273 systemd-logind[1802]: Session 21 logged out. Waiting for processes to exit. Sep 5 00:27:53.038997 systemd[1]: Started sshd@19-139.178.90.135:22-147.75.109.163:33884.service - OpenSSH per-connection server daemon (147.75.109.163:33884). Sep 5 00:27:53.039433 systemd-logind[1802]: Removed session 21. Sep 5 00:27:53.072837 sshd[5072]: Accepted publickey for core from 147.75.109.163 port 33884 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:53.076118 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:53.089040 systemd-logind[1802]: New session 22 of user core. Sep 5 00:27:53.105886 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 5 00:27:53.240987 sshd[5075]: Connection closed by 147.75.109.163 port 33884 Sep 5 00:27:53.241173 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:53.242723 systemd[1]: sshd@19-139.178.90.135:22-147.75.109.163:33884.service: Deactivated successfully. Sep 5 00:27:53.243646 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:27:53.244343 systemd-logind[1802]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:27:53.244938 systemd-logind[1802]: Removed session 22. Sep 5 00:27:58.267631 systemd[1]: Started sshd@20-139.178.90.135:22-147.75.109.163:33888.service - OpenSSH per-connection server daemon (147.75.109.163:33888). Sep 5 00:27:58.298665 sshd[5101]: Accepted publickey for core from 147.75.109.163 port 33888 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:27:58.301986 sshd-session[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:27:58.314487 systemd-logind[1802]: New session 23 of user core. Sep 5 00:27:58.336683 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:27:58.420332 sshd[5103]: Connection closed by 147.75.109.163 port 33888 Sep 5 00:27:58.420503 sshd-session[5101]: pam_unix(sshd:session): session closed for user core Sep 5 00:27:58.422195 systemd[1]: sshd@20-139.178.90.135:22-147.75.109.163:33888.service: Deactivated successfully. Sep 5 00:27:58.423203 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:27:58.423973 systemd-logind[1802]: Session 23 logged out. Waiting for processes to exit. Sep 5 00:27:58.424611 systemd-logind[1802]: Removed session 23. Sep 5 00:28:03.433389 systemd[1]: Started sshd@21-139.178.90.135:22-147.75.109.163:58294.service - OpenSSH per-connection server daemon (147.75.109.163:58294). Sep 5 00:28:03.467345 sshd[5130]: Accepted publickey for core from 147.75.109.163 port 58294 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:28:03.468086 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:28:03.471183 systemd-logind[1802]: New session 24 of user core. Sep 5 00:28:03.495785 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 00:28:03.586109 sshd[5132]: Connection closed by 147.75.109.163 port 58294 Sep 5 00:28:03.586296 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Sep 5 00:28:03.605335 systemd[1]: sshd@21-139.178.90.135:22-147.75.109.163:58294.service: Deactivated successfully. Sep 5 00:28:03.606464 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 00:28:03.607389 systemd-logind[1802]: Session 24 logged out. Waiting for processes to exit. Sep 5 00:28:03.608326 systemd[1]: Started sshd@22-139.178.90.135:22-147.75.109.163:58310.service - OpenSSH per-connection server daemon (147.75.109.163:58310). Sep 5 00:28:03.609021 systemd-logind[1802]: Removed session 24. Sep 5 00:28:03.645417 sshd[5155]: Accepted publickey for core from 147.75.109.163 port 58310 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:28:03.646082 sshd-session[5155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:28:03.649040 systemd-logind[1802]: New session 25 of user core. Sep 5 00:28:03.659651 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 5 00:28:05.026546 containerd[1820]: time="2025-09-05T00:28:05.026426832Z" level=info msg="StopContainer for \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\" with timeout 30 (s)" Sep 5 00:28:05.027433 containerd[1820]: time="2025-09-05T00:28:05.027164601Z" level=info msg="Stop container \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\" with signal terminated" Sep 5 00:28:05.045968 systemd[1]: cri-containerd-5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd.scope: Deactivated successfully. Sep 5 00:28:05.068559 containerd[1820]: time="2025-09-05T00:28:05.068524509Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:28:05.073168 containerd[1820]: time="2025-09-05T00:28:05.073145950Z" level=info msg="StopContainer for \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\" with timeout 2 (s)" Sep 5 00:28:05.073283 containerd[1820]: time="2025-09-05T00:28:05.073273822Z" level=info msg="Stop container \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\" with signal terminated" Sep 5 00:28:05.073336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd-rootfs.mount: Deactivated successfully. Sep 5 00:28:05.076391 systemd-networkd[1727]: lxc_health: Link DOWN Sep 5 00:28:05.076394 systemd-networkd[1727]: lxc_health: Lost carrier Sep 5 00:28:05.094044 containerd[1820]: time="2025-09-05T00:28:05.094011833Z" level=info msg="shim disconnected" id=5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd namespace=k8s.io Sep 5 00:28:05.094108 containerd[1820]: time="2025-09-05T00:28:05.094045074Z" level=warning msg="cleaning up after shim disconnected" id=5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd namespace=k8s.io Sep 5 00:28:05.094108 containerd[1820]: time="2025-09-05T00:28:05.094051014Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:28:05.101651 containerd[1820]: time="2025-09-05T00:28:05.101573738Z" level=info msg="StopContainer for \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\" returns successfully" Sep 5 00:28:05.102006 containerd[1820]: time="2025-09-05T00:28:05.101974452Z" level=info msg="StopPodSandbox for \"593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36\"" Sep 5 00:28:05.102040 containerd[1820]: time="2025-09-05T00:28:05.101995256Z" level=info msg="Container to stop \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:28:05.103421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36-shm.mount: Deactivated successfully. Sep 5 00:28:05.105819 systemd[1]: cri-containerd-593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36.scope: Deactivated successfully. Sep 5 00:28:05.107268 systemd[1]: cri-containerd-605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110.scope: Deactivated successfully. Sep 5 00:28:05.107470 systemd[1]: cri-containerd-605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110.scope: Consumed 6.636s CPU time, 167.9M memory peak, 144K read from disk, 13.3M written to disk. 
Sep 5 00:28:05.118996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110-rootfs.mount: Deactivated successfully. Sep 5 00:28:05.120655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36-rootfs.mount: Deactivated successfully. Sep 5 00:28:05.133048 containerd[1820]: time="2025-09-05T00:28:05.132987278Z" level=info msg="shim disconnected" id=605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110 namespace=k8s.io Sep 5 00:28:05.133048 containerd[1820]: time="2025-09-05T00:28:05.133016688Z" level=warning msg="cleaning up after shim disconnected" id=605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110 namespace=k8s.io Sep 5 00:28:05.133048 containerd[1820]: time="2025-09-05T00:28:05.133021683Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:28:05.140554 containerd[1820]: time="2025-09-05T00:28:05.140502402Z" level=info msg="StopContainer for \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\" returns successfully" Sep 5 00:28:05.140858 containerd[1820]: time="2025-09-05T00:28:05.140800745Z" level=info msg="StopPodSandbox for \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\"" Sep 5 00:28:05.140858 containerd[1820]: time="2025-09-05T00:28:05.140825682Z" level=info msg="Container to stop \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:28:05.140858 containerd[1820]: time="2025-09-05T00:28:05.140855196Z" level=info msg="Container to stop \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:28:05.140938 containerd[1820]: time="2025-09-05T00:28:05.140862692Z" level=info msg="Container to stop \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:28:05.140938 containerd[1820]: time="2025-09-05T00:28:05.140871864Z" level=info msg="Container to stop \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:28:05.140938 containerd[1820]: time="2025-09-05T00:28:05.140881828Z" level=info msg="Container to stop \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:28:05.143759 containerd[1820]: time="2025-09-05T00:28:05.143729714Z" level=info msg="shim disconnected" id=593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36 namespace=k8s.io Sep 5 00:28:05.143818 containerd[1820]: time="2025-09-05T00:28:05.143759205Z" level=warning msg="cleaning up after shim disconnected" id=593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36 namespace=k8s.io Sep 5 00:28:05.143818 containerd[1820]: time="2025-09-05T00:28:05.143768495Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:28:05.144310 systemd[1]: cri-containerd-c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0.scope: Deactivated successfully. 
Sep 5 00:28:05.151394 containerd[1820]: time="2025-09-05T00:28:05.151369526Z" level=info msg="TearDown network for sandbox \"593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36\" successfully" Sep 5 00:28:05.151394 containerd[1820]: time="2025-09-05T00:28:05.151390795Z" level=info msg="StopPodSandbox for \"593e50faeb0023c73b4d7815ca2f5a0e84265e8d6cc8cee0df99b6f71eee7f36\" returns successfully" Sep 5 00:28:05.155742 containerd[1820]: time="2025-09-05T00:28:05.155703168Z" level=info msg="shim disconnected" id=c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0 namespace=k8s.io Sep 5 00:28:05.155742 containerd[1820]: time="2025-09-05T00:28:05.155736529Z" level=warning msg="cleaning up after shim disconnected" id=c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0 namespace=k8s.io Sep 5 00:28:05.155742 containerd[1820]: time="2025-09-05T00:28:05.155743723Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:28:05.162188 containerd[1820]: time="2025-09-05T00:28:05.162167325Z" level=info msg="TearDown network for sandbox \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" successfully" Sep 5 00:28:05.162188 containerd[1820]: time="2025-09-05T00:28:05.162184788Z" level=info msg="StopPodSandbox for \"c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0\" returns successfully" Sep 5 00:28:05.245076 kubelet[3155]: I0905 00:28:05.244983 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-hostproc\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.246238 kubelet[3155]: I0905 00:28:05.245101 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-hubble-tls\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.246238 kubelet[3155]: I0905 00:28:05.245153 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cni-path\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.246238 kubelet[3155]: I0905 00:28:05.245133 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-hostproc" (OuterVolumeSpecName: "hostproc") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.246238 kubelet[3155]: I0905 00:28:05.245199 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-etc-cni-netd\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.246238 kubelet[3155]: I0905 00:28:05.245256 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-host-proc-sys-kernel\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.246238 kubelet[3155]: I0905 00:28:05.245264 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cni-path" (OuterVolumeSpecName: "cni-path") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.247016 kubelet[3155]: I0905 00:28:05.245309 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-bpf-maps\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.247016 kubelet[3155]: I0905 00:28:05.245334 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.247016 kubelet[3155]: I0905 00:28:05.245360 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.247016 kubelet[3155]: I0905 00:28:05.245362 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a9111e4-e1d3-46d0-8ff9-31803e71658b-cilium-config-path\") pod \"8a9111e4-e1d3-46d0-8ff9-31803e71658b\" (UID: \"8a9111e4-e1d3-46d0-8ff9-31803e71658b\") " Sep 5 00:28:05.247016 kubelet[3155]: I0905 00:28:05.245497 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.247587 kubelet[3155]: I0905 00:28:05.245532 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cilium-cgroup\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.247587 kubelet[3155]: I0905 00:28:05.245583 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.247587 kubelet[3155]: I0905 00:28:05.245669 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfkcn\" (UniqueName: \"kubernetes.io/projected/8a9111e4-e1d3-46d0-8ff9-31803e71658b-kube-api-access-zfkcn\") pod \"8a9111e4-e1d3-46d0-8ff9-31803e71658b\" (UID: \"8a9111e4-e1d3-46d0-8ff9-31803e71658b\") " Sep 5 00:28:05.247587 kubelet[3155]: I0905 00:28:05.245728 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cilium-run\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.247587 kubelet[3155]: I0905 00:28:05.245797 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-xtables-lock\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.248060 kubelet[3155]: I0905 00:28:05.245838 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.248060 kubelet[3155]: I0905 00:28:05.245881 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxsz8\" (UniqueName: \"kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-kube-api-access-vxsz8\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.248060 kubelet[3155]: I0905 00:28:05.245943 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.248060 kubelet[3155]: I0905 00:28:05.245963 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-lib-modules\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.248060 kubelet[3155]: I0905 00:28:05.246032 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.248573 kubelet[3155]: I0905 00:28:05.246110 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53955307-da67-4152-8b87-4ba980242bb4-cilium-config-path\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.248573 kubelet[3155]: I0905 00:28:05.246217 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53955307-da67-4152-8b87-4ba980242bb4-clustermesh-secrets\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.248573 kubelet[3155]: I0905 00:28:05.246304 3155 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-host-proc-sys-net\") pod \"53955307-da67-4152-8b87-4ba980242bb4\" (UID: \"53955307-da67-4152-8b87-4ba980242bb4\") " Sep 5 00:28:05.248573 kubelet[3155]: I0905 00:28:05.246481 3155 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-lib-modules\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.248573 kubelet[3155]: I0905 00:28:05.246488 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 5 00:28:05.248573 kubelet[3155]: I0905 00:28:05.246548 3155 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-hostproc\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.249246 kubelet[3155]: I0905 00:28:05.246602 3155 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cni-path\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.249246 kubelet[3155]: I0905 00:28:05.246651 3155 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-etc-cni-netd\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.249246 kubelet[3155]: I0905 00:28:05.246698 3155 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-host-proc-sys-kernel\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.249246 kubelet[3155]: I0905 00:28:05.246756 3155 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-bpf-maps\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.249246 kubelet[3155]: I0905 00:28:05.246810 3155 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cilium-cgroup\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.249246 kubelet[3155]: I0905 00:28:05.246857 3155 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-cilium-run\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.249246 kubelet[3155]: I0905 00:28:05.246905 3155 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-xtables-lock\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.252126 kubelet[3155]: I0905 00:28:05.252032 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:28:05.252126 kubelet[3155]: I0905 00:28:05.252055 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9111e4-e1d3-46d0-8ff9-31803e71658b-kube-api-access-zfkcn" (OuterVolumeSpecName: "kube-api-access-zfkcn") pod "8a9111e4-e1d3-46d0-8ff9-31803e71658b" (UID: "8a9111e4-e1d3-46d0-8ff9-31803e71658b"). InnerVolumeSpecName "kube-api-access-zfkcn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:28:05.252475 kubelet[3155]: I0905 00:28:05.252144 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a9111e4-e1d3-46d0-8ff9-31803e71658b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a9111e4-e1d3-46d0-8ff9-31803e71658b" (UID: "8a9111e4-e1d3-46d0-8ff9-31803e71658b"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:28:05.252475 kubelet[3155]: I0905 00:28:05.252164 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-kube-api-access-vxsz8" (OuterVolumeSpecName: "kube-api-access-vxsz8") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "kube-api-access-vxsz8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:28:05.252908 kubelet[3155]: I0905 00:28:05.252794 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53955307-da67-4152-8b87-4ba980242bb4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:28:05.253099 kubelet[3155]: I0905 00:28:05.252950 3155 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53955307-da67-4152-8b87-4ba980242bb4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "53955307-da67-4152-8b87-4ba980242bb4" (UID: "53955307-da67-4152-8b87-4ba980242bb4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:28:05.348435 kubelet[3155]: I0905 00:28:05.348192 3155 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a9111e4-e1d3-46d0-8ff9-31803e71658b-cilium-config-path\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.348435 kubelet[3155]: I0905 00:28:05.348264 3155 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zfkcn\" (UniqueName: \"kubernetes.io/projected/8a9111e4-e1d3-46d0-8ff9-31803e71658b-kube-api-access-zfkcn\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.348435 kubelet[3155]: I0905 00:28:05.348297 3155 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vxsz8\" (UniqueName: \"kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-kube-api-access-vxsz8\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.348435 kubelet[3155]: I0905 00:28:05.348333 3155 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/53955307-da67-4152-8b87-4ba980242bb4-cilium-config-path\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.348435 kubelet[3155]: I0905 00:28:05.348367 3155 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/53955307-da67-4152-8b87-4ba980242bb4-clustermesh-secrets\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.348435 kubelet[3155]: I0905 00:28:05.348395 3155 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/53955307-da67-4152-8b87-4ba980242bb4-host-proc-sys-net\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.348435 kubelet[3155]: I0905 00:28:05.348426 3155 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/53955307-da67-4152-8b87-4ba980242bb4-hubble-tls\") on node \"ci-4230.2.2-n-de5468c6d2\" DevicePath \"\"" Sep 5 00:28:05.453139 kubelet[3155]: I0905 
00:28:05.453056 3155 scope.go:117] "RemoveContainer" containerID="605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110" Sep 5 00:28:05.455802 containerd[1820]: time="2025-09-05T00:28:05.455726198Z" level=info msg="RemoveContainer for \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\"" Sep 5 00:28:05.457957 containerd[1820]: time="2025-09-05T00:28:05.457917696Z" level=info msg="RemoveContainer for \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\" returns successfully" Sep 5 00:28:05.458072 kubelet[3155]: I0905 00:28:05.458041 3155 scope.go:117] "RemoveContainer" containerID="7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199" Sep 5 00:28:05.458483 systemd[1]: Removed slice kubepods-burstable-pod53955307_da67_4152_8b87_4ba980242bb4.slice - libcontainer container kubepods-burstable-pod53955307_da67_4152_8b87_4ba980242bb4.slice. Sep 5 00:28:05.458545 systemd[1]: kubepods-burstable-pod53955307_da67_4152_8b87_4ba980242bb4.slice: Consumed 6.718s CPU time, 168.5M memory peak, 144K read from disk, 13.3M written to disk. Sep 5 00:28:05.458618 containerd[1820]: time="2025-09-05T00:28:05.458494982Z" level=info msg="RemoveContainer for \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\"" Sep 5 00:28:05.459053 systemd[1]: Removed slice kubepods-besteffort-pod8a9111e4_e1d3_46d0_8ff9_31803e71658b.slice - libcontainer container kubepods-besteffort-pod8a9111e4_e1d3_46d0_8ff9_31803e71658b.slice. Sep 5 00:28:05.459603 containerd[1820]: time="2025-09-05T00:28:05.459592760Z" level=info msg="RemoveContainer for \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\" returns successfully" Sep 5 00:28:05.459685 kubelet[3155]: I0905 00:28:05.459675 3155 scope.go:117] "RemoveContainer" containerID="8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c" Sep 5 00:28:05.460071 containerd[1820]: time="2025-09-05T00:28:05.460063317Z" level=info msg="RemoveContainer for \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\"" Sep 5 00:28:05.461097 containerd[1820]: time="2025-09-05T00:28:05.461047765Z" level=info msg="RemoveContainer for \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\" returns successfully" Sep 5 00:28:05.461131 kubelet[3155]: I0905 00:28:05.461116 3155 scope.go:117] "RemoveContainer" containerID="1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892" Sep 5 00:28:05.461495 containerd[1820]: time="2025-09-05T00:28:05.461485319Z" level=info msg="RemoveContainer for \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\"" Sep 5 00:28:05.462490 containerd[1820]: time="2025-09-05T00:28:05.462480207Z" level=info msg="RemoveContainer for \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\" returns successfully" Sep 5 00:28:05.462556 kubelet[3155]: I0905 00:28:05.462548 3155 scope.go:117] "RemoveContainer" containerID="26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47" Sep 5 00:28:05.463698 containerd[1820]: time="2025-09-05T00:28:05.463685828Z" level=info msg="RemoveContainer for \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\"" Sep 5 00:28:05.465047 containerd[1820]: time="2025-09-05T00:28:05.465005526Z" level=info msg="RemoveContainer for \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\" returns successfully" Sep 5 00:28:05.465099 kubelet[3155]: I0905 00:28:05.465088 3155 scope.go:117] "RemoveContainer" containerID="605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110" 
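At this point the kubelet has finished tearing down the old Cilium agent pod and the cilium-operator pod: every volume was unmounted and marked detached, the containers are being removed one by one, and systemd has dropped both kubepods slices, printing per-slice accounting on the way out (about 6.7s of CPU time and a 168.5M memory peak for the agent pod). Those "Consumed ..." lines make a handy teardown-time resource summary. Below is a minimal stdlib Python sketch for pulling them out of a journal dump in this exact format; the regex is an assumption tied to this systemd wording and slice naming, so it may need adjusting on other versions.

import re
import sys

# systemd's per-slice accounting as it appears in this journal, e.g.
# "systemd[1]: kubepods-burstable-pod<uid>.slice: Consumed 6.718s CPU time,
#  168.5M memory peak, 144K read from disk, 13.3M written to disk."
ACCOUNTING = re.compile(
    r"systemd\[1\]: (?P<unit>kubepods-[\w.-]+\.slice): "
    r"Consumed (?P<cpu>[\d.]+)s CPU time, (?P<mem>[\d.]+[KMG]?) memory peak"
)

def main() -> None:
    for line in sys.stdin:
        m = ACCOUNTING.search(line)
        if m:
            print(f"{m.group('unit')}: cpu={m.group('cpu')}s peak_mem={m.group('mem')}")

if __name__ == "__main__":
    main()

Piping a saved journal like this one to stdin prints one summary line per removed pod slice.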
Sep 5 00:28:05.465209 containerd[1820]: time="2025-09-05T00:28:05.465191750Z" level=error msg="ContainerStatus for \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\": not found" Sep 5 00:28:05.465263 kubelet[3155]: E0905 00:28:05.465252 3155 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\": not found" containerID="605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110" Sep 5 00:28:05.465293 kubelet[3155]: I0905 00:28:05.465271 3155 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110"} err="failed to get container status \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\": rpc error: code = NotFound desc = an error occurred when try to find container \"605b3ef196307c6a85934409cad61dd3356267ed7eaf24424c5a35266bb64110\": not found" Sep 5 00:28:05.465314 kubelet[3155]: I0905 00:28:05.465295 3155 scope.go:117] "RemoveContainer" containerID="7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199" Sep 5 00:28:05.465411 containerd[1820]: time="2025-09-05T00:28:05.465388116Z" level=error msg="ContainerStatus for \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\": not found" Sep 5 00:28:05.465473 kubelet[3155]: E0905 00:28:05.465464 3155 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\": not found" containerID="7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199" Sep 5 00:28:05.465529 kubelet[3155]: I0905 00:28:05.465475 3155 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199"} err="failed to get container status \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f0c9e14c3c575ff595d99bc969eeeba67f939da82f2ecd6b7ba1cb390819199\": not found" Sep 5 00:28:05.465529 kubelet[3155]: I0905 00:28:05.465499 3155 scope.go:117] "RemoveContainer" containerID="8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c" Sep 5 00:28:05.465620 containerd[1820]: time="2025-09-05T00:28:05.465600053Z" level=error msg="ContainerStatus for \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\": not found" Sep 5 00:28:05.465692 kubelet[3155]: E0905 00:28:05.465673 3155 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\": not found" containerID="8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c" Sep 5 00:28:05.465742 kubelet[3155]: 
I0905 00:28:05.465697 3155 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c"} err="failed to get container status \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a4043abb7b128c2ff1468436af26af04b3057468dacc40c2a6b83b19650136c\": not found" Sep 5 00:28:05.465742 kubelet[3155]: I0905 00:28:05.465712 3155 scope.go:117] "RemoveContainer" containerID="1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892" Sep 5 00:28:05.465822 containerd[1820]: time="2025-09-05T00:28:05.465806633Z" level=error msg="ContainerStatus for \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\": not found" Sep 5 00:28:05.465868 kubelet[3155]: E0905 00:28:05.465859 3155 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\": not found" containerID="1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892" Sep 5 00:28:05.465898 kubelet[3155]: I0905 00:28:05.465870 3155 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892"} err="failed to get container status \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\": rpc error: code = NotFound desc = an error occurred when try to find container \"1c8676e438733cbed561a9b61f0442e1870aca3746e28f501dcc52a0e6a3a892\": not found" Sep 5 00:28:05.465898 kubelet[3155]: I0905 00:28:05.465877 3155 scope.go:117] "RemoveContainer" containerID="26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47" Sep 5 00:28:05.465968 containerd[1820]: time="2025-09-05T00:28:05.465955117Z" level=error msg="ContainerStatus for \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\": not found" Sep 5 00:28:05.466012 kubelet[3155]: E0905 00:28:05.466005 3155 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\": not found" containerID="26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47" Sep 5 00:28:05.466033 kubelet[3155]: I0905 00:28:05.466016 3155 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47"} err="failed to get container status \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\": rpc error: code = NotFound desc = an error occurred when try to find container \"26c00812ac680716f80e5b63f8cb3b3182cda18065634064f19c55c055682f47\": not found" Sep 5 00:28:05.466033 kubelet[3155]: I0905 00:28:05.466025 3155 scope.go:117] "RemoveContainer" containerID="5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd" Sep 5 00:28:05.466427 containerd[1820]: time="2025-09-05T00:28:05.466416207Z" level=info msg="RemoveContainer for 
\"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\"" Sep 5 00:28:05.467672 containerd[1820]: time="2025-09-05T00:28:05.467613944Z" level=info msg="RemoveContainer for \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\" returns successfully" Sep 5 00:28:05.467711 kubelet[3155]: I0905 00:28:05.467704 3155 scope.go:117] "RemoveContainer" containerID="5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd" Sep 5 00:28:05.467868 containerd[1820]: time="2025-09-05T00:28:05.467832523Z" level=error msg="ContainerStatus for \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\": not found" Sep 5 00:28:05.467994 kubelet[3155]: E0905 00:28:05.467943 3155 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\": not found" containerID="5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd" Sep 5 00:28:05.467994 kubelet[3155]: I0905 00:28:05.467953 3155 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd"} err="failed to get container status \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"5d51016b8f07074b508a573e7e9abc834269ed22c7688f895a42ed33b5b876bd\": not found" Sep 5 00:28:06.047946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0-rootfs.mount: Deactivated successfully. Sep 5 00:28:06.048003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1ec4163c872a6aa95d06388043fb2d8b774abb9cbc74c2b5356763bf1f415e0-shm.mount: Deactivated successfully. Sep 5 00:28:06.048044 systemd[1]: var-lib-kubelet-pods-53955307\x2dda67\x2d4152\x2d8b87\x2d4ba980242bb4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxsz8.mount: Deactivated successfully. Sep 5 00:28:06.048081 systemd[1]: var-lib-kubelet-pods-8a9111e4\x2de1d3\x2d46d0\x2d8ff9\x2d31803e71658b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzfkcn.mount: Deactivated successfully. Sep 5 00:28:06.048123 systemd[1]: var-lib-kubelet-pods-53955307\x2dda67\x2d4152\x2d8b87\x2d4ba980242bb4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 5 00:28:06.048160 systemd[1]: var-lib-kubelet-pods-53955307\x2dda67\x2d4152\x2d8b87\x2d4ba980242bb4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 5 00:28:06.353764 kubelet[3155]: I0905 00:28:06.353687 3155 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53955307-da67-4152-8b87-4ba980242bb4" path="/var/lib/kubelet/pods/53955307-da67-4152-8b87-4ba980242bb4/volumes" Sep 5 00:28:06.354214 kubelet[3155]: I0905 00:28:06.354201 3155 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a9111e4-e1d3-46d0-8ff9-31803e71658b" path="/var/lib/kubelet/pods/8a9111e4-e1d3-46d0-8ff9-31803e71658b/volumes" Sep 5 00:28:06.976478 sshd[5159]: Connection closed by 147.75.109.163 port 58310 Sep 5 00:28:06.977294 sshd-session[5155]: pam_unix(sshd:session): session closed for user core Sep 5 00:28:06.998882 systemd[1]: sshd@22-139.178.90.135:22-147.75.109.163:58310.service: Deactivated successfully. Sep 5 00:28:06.999714 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 00:28:07.000253 systemd-logind[1802]: Session 25 logged out. Waiting for processes to exit. Sep 5 00:28:07.001360 systemd[1]: Started sshd@23-139.178.90.135:22-147.75.109.163:58314.service - OpenSSH per-connection server daemon (147.75.109.163:58314). Sep 5 00:28:07.001911 systemd-logind[1802]: Removed session 25. Sep 5 00:28:07.034268 sshd[5331]: Accepted publickey for core from 147.75.109.163 port 58314 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:28:07.037609 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:28:07.050724 systemd-logind[1802]: New session 26 of user core. Sep 5 00:28:07.073022 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 00:28:07.476502 kubelet[3155]: E0905 00:28:07.476431 3155 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 5 00:28:07.495285 sshd[5335]: Connection closed by 147.75.109.163 port 58314 Sep 5 00:28:07.495545 sshd-session[5331]: pam_unix(sshd:session): session closed for user core Sep 5 00:28:07.509683 systemd[1]: sshd@23-139.178.90.135:22-147.75.109.163:58314.service: Deactivated successfully. Sep 5 00:28:07.510604 systemd[1]: session-26.scope: Deactivated successfully. Sep 5 00:28:07.511262 systemd-logind[1802]: Session 26 logged out. Waiting for processes to exit. Sep 5 00:28:07.512126 systemd[1]: Started sshd@24-139.178.90.135:22-147.75.109.163:58326.service - OpenSSH per-connection server daemon (147.75.109.163:58326). Sep 5 00:28:07.513121 systemd-logind[1802]: Removed session 26. Sep 5 00:28:07.515268 systemd[1]: Created slice kubepods-burstable-pod8f53356a_9771_4297_8a87_2b1178f8cd2a.slice - libcontainer container kubepods-burstable-pod8f53356a_9771_4297_8a87_2b1178f8cd2a.slice. Sep 5 00:28:07.544607 sshd[5357]: Accepted publickey for core from 147.75.109.163 port 58326 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:28:07.545277 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:28:07.548142 systemd-logind[1802]: New session 27 of user core. Sep 5 00:28:07.560644 systemd[1]: Started session-27.scope - Session 27 of User core. 
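With the old agent gone and its replacement (cilium-n9rxm, created above at 00:28:07) not yet running, the kubelet keeps reporting "Container runtime network not ready ... cni plugin not initialized"; at 00:28:10 further below this surfaces as the node's Ready condition going False. A quick way to read that same condition from outside the node, assuming the official kubernetes Python client and a reachable kubeconfig; the node name is the one that appears in this log.

from kubernetes import client, config

def print_ready_condition(node_name: str = "ci-4230.2.2-n-de5468c6d2") -> None:
    """Print the node's Ready condition, i.e. what the kubelet is reporting above."""
    config.load_kube_config()
    node = client.CoreV1Api().read_node(node_name)
    for cond in node.status.conditions:
        if cond.type == "Ready":
            print(f"{node_name}: Ready={cond.status} reason={cond.reason}")
            print(f"  message: {cond.message}")

if __name__ == "__main__":
    print_ready_condition()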
Sep 5 00:28:07.564369 kubelet[3155]: I0905 00:28:07.564318 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-xtables-lock\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564369 kubelet[3155]: I0905 00:28:07.564343 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f53356a-9771-4297-8a87-2b1178f8cd2a-cilium-config-path\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564369 kubelet[3155]: I0905 00:28:07.564361 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-host-proc-sys-kernel\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564492 kubelet[3155]: I0905 00:28:07.564374 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-bpf-maps\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564492 kubelet[3155]: I0905 00:28:07.564387 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-hostproc\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564492 kubelet[3155]: I0905 00:28:07.564399 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-lib-modules\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564492 kubelet[3155]: I0905 00:28:07.564426 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f53356a-9771-4297-8a87-2b1178f8cd2a-clustermesh-secrets\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564492 kubelet[3155]: I0905 00:28:07.564467 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-cilium-cgroup\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564492 kubelet[3155]: I0905 00:28:07.564484 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9jb\" (UniqueName: \"kubernetes.io/projected/8f53356a-9771-4297-8a87-2b1178f8cd2a-kube-api-access-pz9jb\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564637 kubelet[3155]: I0905 00:28:07.564499 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" 
(UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-host-proc-sys-net\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564637 kubelet[3155]: I0905 00:28:07.564512 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-etc-cni-netd\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564637 kubelet[3155]: I0905 00:28:07.564524 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8f53356a-9771-4297-8a87-2b1178f8cd2a-cilium-ipsec-secrets\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564637 kubelet[3155]: I0905 00:28:07.564535 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f53356a-9771-4297-8a87-2b1178f8cd2a-hubble-tls\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564637 kubelet[3155]: I0905 00:28:07.564549 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-cni-path\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.564637 kubelet[3155]: I0905 00:28:07.564561 3155 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f53356a-9771-4297-8a87-2b1178f8cd2a-cilium-run\") pod \"cilium-n9rxm\" (UID: \"8f53356a-9771-4297-8a87-2b1178f8cd2a\") " pod="kube-system/cilium-n9rxm" Sep 5 00:28:07.615374 sshd[5362]: Connection closed by 147.75.109.163 port 58326 Sep 5 00:28:07.616158 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Sep 5 00:28:07.644143 systemd[1]: sshd@24-139.178.90.135:22-147.75.109.163:58326.service: Deactivated successfully. Sep 5 00:28:07.648701 systemd[1]: session-27.scope: Deactivated successfully. Sep 5 00:28:07.652424 systemd-logind[1802]: Session 27 logged out. Waiting for processes to exit. Sep 5 00:28:07.670208 systemd[1]: Started sshd@25-139.178.90.135:22-147.75.109.163:58342.service - OpenSSH per-connection server daemon (147.75.109.163:58342). Sep 5 00:28:07.679605 systemd-logind[1802]: Removed session 27. Sep 5 00:28:07.702930 sshd[5368]: Accepted publickey for core from 147.75.109.163 port 58342 ssh2: RSA SHA256:hqJ+loX/XeI4xbQIkP7jWqpC2HucOOvFKnzWyrl36Us Sep 5 00:28:07.703573 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:28:07.706374 systemd-logind[1802]: New session 28 of user core. Sep 5 00:28:07.723942 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 5 00:28:07.818127 containerd[1820]: time="2025-09-05T00:28:07.818054563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n9rxm,Uid:8f53356a-9771-4297-8a87-2b1178f8cd2a,Namespace:kube-system,Attempt:0,}" Sep 5 00:28:07.832670 containerd[1820]: time="2025-09-05T00:28:07.832611528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:28:07.832670 containerd[1820]: time="2025-09-05T00:28:07.832660340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:28:07.832670 containerd[1820]: time="2025-09-05T00:28:07.832669863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:28:07.832784 containerd[1820]: time="2025-09-05T00:28:07.832711735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:28:07.850739 systemd[1]: Started cri-containerd-62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b.scope - libcontainer container 62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b. Sep 5 00:28:07.861264 containerd[1820]: time="2025-09-05T00:28:07.861242213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n9rxm,Uid:8f53356a-9771-4297-8a87-2b1178f8cd2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\"" Sep 5 00:28:07.863494 containerd[1820]: time="2025-09-05T00:28:07.863478973Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 5 00:28:07.867100 containerd[1820]: time="2025-09-05T00:28:07.867086901Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d6a528bfecc4128b36544d30bba7bd6b0a243cc32938984a24882fe211b52e00\"" Sep 5 00:28:07.867285 containerd[1820]: time="2025-09-05T00:28:07.867273076Z" level=info msg="StartContainer for \"d6a528bfecc4128b36544d30bba7bd6b0a243cc32938984a24882fe211b52e00\"" Sep 5 00:28:07.891940 systemd[1]: Started cri-containerd-d6a528bfecc4128b36544d30bba7bd6b0a243cc32938984a24882fe211b52e00.scope - libcontainer container d6a528bfecc4128b36544d30bba7bd6b0a243cc32938984a24882fe211b52e00. Sep 5 00:28:07.936647 containerd[1820]: time="2025-09-05T00:28:07.936600587Z" level=info msg="StartContainer for \"d6a528bfecc4128b36544d30bba7bd6b0a243cc32938984a24882fe211b52e00\" returns successfully" Sep 5 00:28:07.945003 systemd[1]: cri-containerd-d6a528bfecc4128b36544d30bba7bd6b0a243cc32938984a24882fe211b52e00.scope: Deactivated successfully. 
Sep 5 00:28:07.990892 containerd[1820]: time="2025-09-05T00:28:07.990767207Z" level=info msg="shim disconnected" id=d6a528bfecc4128b36544d30bba7bd6b0a243cc32938984a24882fe211b52e00 namespace=k8s.io Sep 5 00:28:07.990892 containerd[1820]: time="2025-09-05T00:28:07.990879096Z" level=warning msg="cleaning up after shim disconnected" id=d6a528bfecc4128b36544d30bba7bd6b0a243cc32938984a24882fe211b52e00 namespace=k8s.io Sep 5 00:28:07.991372 containerd[1820]: time="2025-09-05T00:28:07.990907692Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:28:08.471317 containerd[1820]: time="2025-09-05T00:28:08.471279791Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 5 00:28:08.475757 containerd[1820]: time="2025-09-05T00:28:08.475712027Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ef5ff33f00bb4cb55727e88adacf93bade6952fbcd0c8d62b505ca0ca77295f0\"" Sep 5 00:28:08.476136 containerd[1820]: time="2025-09-05T00:28:08.476104634Z" level=info msg="StartContainer for \"ef5ff33f00bb4cb55727e88adacf93bade6952fbcd0c8d62b505ca0ca77295f0\"" Sep 5 00:28:08.506642 systemd[1]: Started cri-containerd-ef5ff33f00bb4cb55727e88adacf93bade6952fbcd0c8d62b505ca0ca77295f0.scope - libcontainer container ef5ff33f00bb4cb55727e88adacf93bade6952fbcd0c8d62b505ca0ca77295f0. Sep 5 00:28:08.522739 containerd[1820]: time="2025-09-05T00:28:08.522686568Z" level=info msg="StartContainer for \"ef5ff33f00bb4cb55727e88adacf93bade6952fbcd0c8d62b505ca0ca77295f0\" returns successfully" Sep 5 00:28:08.528606 systemd[1]: cri-containerd-ef5ff33f00bb4cb55727e88adacf93bade6952fbcd0c8d62b505ca0ca77295f0.scope: Deactivated successfully. 
Sep 5 00:28:08.547249 containerd[1820]: time="2025-09-05T00:28:08.547184554Z" level=info msg="shim disconnected" id=ef5ff33f00bb4cb55727e88adacf93bade6952fbcd0c8d62b505ca0ca77295f0 namespace=k8s.io Sep 5 00:28:08.547249 containerd[1820]: time="2025-09-05T00:28:08.547217286Z" level=warning msg="cleaning up after shim disconnected" id=ef5ff33f00bb4cb55727e88adacf93bade6952fbcd0c8d62b505ca0ca77295f0 namespace=k8s.io Sep 5 00:28:08.547249 containerd[1820]: time="2025-09-05T00:28:08.547223267Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:28:08.553107 containerd[1820]: time="2025-09-05T00:28:08.553047062Z" level=warning msg="cleanup warnings time=\"2025-09-05T00:28:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 5 00:28:09.477977 containerd[1820]: time="2025-09-05T00:28:09.477941988Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 5 00:28:09.486599 containerd[1820]: time="2025-09-05T00:28:09.486572434Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b43692881e9377ae68756d05a5eaed66d0f6d2607bbccef43a1665d4e3273e4\"" Sep 5 00:28:09.486869 containerd[1820]: time="2025-09-05T00:28:09.486857379Z" level=info msg="StartContainer for \"8b43692881e9377ae68756d05a5eaed66d0f6d2607bbccef43a1665d4e3273e4\"" Sep 5 00:28:09.515894 systemd[1]: Started cri-containerd-8b43692881e9377ae68756d05a5eaed66d0f6d2607bbccef43a1665d4e3273e4.scope - libcontainer container 8b43692881e9377ae68756d05a5eaed66d0f6d2607bbccef43a1665d4e3273e4. Sep 5 00:28:09.574476 containerd[1820]: time="2025-09-05T00:28:09.574400810Z" level=info msg="StartContainer for \"8b43692881e9377ae68756d05a5eaed66d0f6d2607bbccef43a1665d4e3273e4\" returns successfully" Sep 5 00:28:09.577228 systemd[1]: cri-containerd-8b43692881e9377ae68756d05a5eaed66d0f6d2607bbccef43a1665d4e3273e4.scope: Deactivated successfully. Sep 5 00:28:09.608846 containerd[1820]: time="2025-09-05T00:28:09.608814217Z" level=info msg="shim disconnected" id=8b43692881e9377ae68756d05a5eaed66d0f6d2607bbccef43a1665d4e3273e4 namespace=k8s.io Sep 5 00:28:09.608846 containerd[1820]: time="2025-09-05T00:28:09.608844979Z" level=warning msg="cleaning up after shim disconnected" id=8b43692881e9377ae68756d05a5eaed66d0f6d2607bbccef43a1665d4e3273e4 namespace=k8s.io Sep 5 00:28:09.608958 containerd[1820]: time="2025-09-05T00:28:09.608851912Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:28:09.615633 containerd[1820]: time="2025-09-05T00:28:09.615575649Z" level=warning msg="cleanup warnings time=\"2025-09-05T00:28:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 5 00:28:09.678997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b43692881e9377ae68756d05a5eaed66d0f6d2607bbccef43a1665d4e3273e4-rootfs.mount: Deactivated successfully. 
Sep 5 00:28:10.480850 containerd[1820]: time="2025-09-05T00:28:10.480772780Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 5 00:28:10.488010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3440797162.mount: Deactivated successfully. Sep 5 00:28:10.488636 containerd[1820]: time="2025-09-05T00:28:10.488592578Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c8a5459bcb0cd5fba4d676c2901033e32c9730758cbdd9d4ae4cae0f4901de25\"" Sep 5 00:28:10.489029 containerd[1820]: time="2025-09-05T00:28:10.488983831Z" level=info msg="StartContainer for \"c8a5459bcb0cd5fba4d676c2901033e32c9730758cbdd9d4ae4cae0f4901de25\"" Sep 5 00:28:10.516762 systemd[1]: Started cri-containerd-c8a5459bcb0cd5fba4d676c2901033e32c9730758cbdd9d4ae4cae0f4901de25.scope - libcontainer container c8a5459bcb0cd5fba4d676c2901033e32c9730758cbdd9d4ae4cae0f4901de25. Sep 5 00:28:10.529563 systemd[1]: cri-containerd-c8a5459bcb0cd5fba4d676c2901033e32c9730758cbdd9d4ae4cae0f4901de25.scope: Deactivated successfully. Sep 5 00:28:10.530346 containerd[1820]: time="2025-09-05T00:28:10.530302570Z" level=info msg="StartContainer for \"c8a5459bcb0cd5fba4d676c2901033e32c9730758cbdd9d4ae4cae0f4901de25\" returns successfully" Sep 5 00:28:10.564656 containerd[1820]: time="2025-09-05T00:28:10.564528336Z" level=info msg="shim disconnected" id=c8a5459bcb0cd5fba4d676c2901033e32c9730758cbdd9d4ae4cae0f4901de25 namespace=k8s.io Sep 5 00:28:10.564656 containerd[1820]: time="2025-09-05T00:28:10.564646370Z" level=warning msg="cleaning up after shim disconnected" id=c8a5459bcb0cd5fba4d676c2901033e32c9730758cbdd9d4ae4cae0f4901de25 namespace=k8s.io Sep 5 00:28:10.565299 containerd[1820]: time="2025-09-05T00:28:10.564683770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:28:10.678274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8a5459bcb0cd5fba4d676c2901033e32c9730758cbdd9d4ae4cae0f4901de25-rootfs.mount: Deactivated successfully. 
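Each of the Cilium pod's init containers so far (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) follows the same containerd pattern: CreateContainer returns an id, StartContainer returns successfully, the scope is deactivated almost immediately, and the exiting task leaves the usual "shim disconnected" / "cleaning up dead shim" warnings (plus the occasional exit-status-255 cleanup note). To recover that start ordering from a dump in this format, here is a small stdlib Python sketch that pairs container names with ids; the regexes are assumptions tied to the exact containerd message wording seen above.

import re
import sys

# CreateContainer *response* lines carry the container's human-readable name, e.g.
#   msg="CreateContainer within sandbox \"<sandbox-id>\" for
#        &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"<id>\""
CREATE = re.compile(
    r'time="(?P<ts>[^"]+)" level=info msg="CreateContainer within sandbox .*? '
    r'for &ContainerMetadata\{Name:(?P<name>[^,]+),Attempt:\d+,\} '
    r'returns container id \\"(?P<id>[0-9a-f]{64})\\""'
)
# StartContainer *response* lines: msg="StartContainer for \"<id>\" returns successfully"
START = re.compile(
    r'time="(?P<ts>[^"]+)" level=info '
    r'msg="StartContainer for \\"(?P<id>[0-9a-f]{64})\\" returns successfully"'
)

def main() -> None:
    text = sys.stdin.read()
    names = {m.group("id"): m.group("name") for m in CREATE.finditer(text)}
    for m in sorted(START.finditer(text), key=lambda m: m.group("ts")):
        cid = m.group("id")
        print(f'{m.group("ts")}  started {names.get(cid, "<unknown>")} ({cid[:12]})')

if __name__ == "__main__":
    main()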
Sep 5 00:28:10.698124 kubelet[3155]: I0905 00:28:10.697983 3155 setters.go:618] "Node became not ready" node="ci-4230.2.2-n-de5468c6d2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-05T00:28:10Z","lastTransitionTime":"2025-09-05T00:28:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 5 00:28:11.490387 containerd[1820]: time="2025-09-05T00:28:11.490344707Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 5 00:28:11.498097 containerd[1820]: time="2025-09-05T00:28:11.498077428Z" level=info msg="CreateContainer within sandbox \"62aec63ad351c9d1db62f0653586afe925c2cf3178a86b5e4b684d0a44ad839b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"378084ee5796dfe13158b0944f04ce7121627e4ab78b53be7944182df8dcc10c\"" Sep 5 00:28:11.498537 containerd[1820]: time="2025-09-05T00:28:11.498454410Z" level=info msg="StartContainer for \"378084ee5796dfe13158b0944f04ce7121627e4ab78b53be7944182df8dcc10c\"" Sep 5 00:28:11.531905 systemd[1]: Started cri-containerd-378084ee5796dfe13158b0944f04ce7121627e4ab78b53be7944182df8dcc10c.scope - libcontainer container 378084ee5796dfe13158b0944f04ce7121627e4ab78b53be7944182df8dcc10c. Sep 5 00:28:11.594714 containerd[1820]: time="2025-09-05T00:28:11.594635223Z" level=info msg="StartContainer for \"378084ee5796dfe13158b0944f04ce7121627e4ab78b53be7944182df8dcc10c\" returns successfully" Sep 5 00:28:11.773454 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 5 00:28:12.527252 kubelet[3155]: I0905 00:28:12.527121 3155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n9rxm" podStartSLOduration=5.52707926 podStartE2EDuration="5.52707926s" podCreationTimestamp="2025-09-05 00:28:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:28:12.526173658 +0000 UTC m=+410.219111251" watchObservedRunningTime="2025-09-05 00:28:12.52707926 +0000 UTC m=+410.220016838" Sep 5 00:28:15.101858 systemd-networkd[1727]: lxc_health: Link UP Sep 5 00:28:15.102029 systemd-networkd[1727]: lxc_health: Gained carrier Sep 5 00:28:16.870614 systemd-networkd[1727]: lxc_health: Gained IPv6LL Sep 5 00:28:20.327162 sshd[5375]: Connection closed by 147.75.109.163 port 58342 Sep 5 00:28:20.327370 sshd-session[5368]: pam_unix(sshd:session): session closed for user core Sep 5 00:28:20.329098 systemd[1]: sshd@25-139.178.90.135:22-147.75.109.163:58342.service: Deactivated successfully. Sep 5 00:28:20.330039 systemd[1]: session-28.scope: Deactivated successfully. Sep 5 00:28:20.330774 systemd-logind[1802]: Session 28 logged out. Waiting for processes to exit. Sep 5 00:28:20.331301 systemd-logind[1802]: Removed session 28.
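The pod_startup_latency_tracker entry above records that the new Cilium agent pod (cilium-n9rxm) went from creation at 00:28:07 to observed running at 00:28:12.527 with no image pull recorded (both pulling timestamps are the zero time); the reported podStartSLOduration of 5.52707926s is exactly watchObservedRunningTime minus the second-granular podCreationTimestamp. The kernel seqiv/rfc4106 line in between is most likely the agent instantiating its IPsec cipher (the pod mounts cilium-ipsec-secrets), after which the lxc_health interface comes up and the long-idle SSH session 28 finally closes. A short stdlib check of the duration arithmetic, with the timestamps copied from the entry (creation time is only second-granular in the log, hence the approximation):

from datetime import datetime, timezone

# Timestamps as printed in the pod_startup_latency_tracker entry above.
created = datetime(2025, 9, 5, 0, 28, 7, tzinfo=timezone.utc)
watch_observed_running = datetime(2025, 9, 5, 0, 28, 12, 527079, tzinfo=timezone.utc)

slo = (watch_observed_running - created).total_seconds()
print(f"podStartSLOduration ~ {slo:.6f}s")   # ~5.527079s vs the logged 5.52707926s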